Transfer Learning – ML

When practicing machine learning, training a model can take a long time. Creating a model architecture from scratch, training the model, and then tweaking it demands a massive amount of time and effort. A far more efficient way to train a machine learning model is to start from an architecture that has already been defined, potentially with weights that have already been learned. This is the main idea behind transfer learning: taking a model that has already been trained and repurposing it for a new task.

Before delving into the different ways that transfer learning can be used, let’s take a moment to understand why transfer learning is such a powerful and useful technique.

Solving A Deep Learning Problem

When you are attempting to solve a deep learning problem, like building an image classifier, you have to create a model architecture and then train the model on your data. Training the classifier involves adjusting the weights of the network, a process that can take hours or even days; the training time scales with the size of the dataset and the complexity of the model architecture.

If the model does not achieve the accuracy needed for the task, it will likely need to be tweaked and then retrained. This means more hours of training until an optimal architecture, training length, and dataset partition can be found. When you consider how many variables must be aligned with one another for a classifier to be useful, it makes sense that machine learning engineers are always looking for easier, more efficient ways to train and implement models. Transfer learning was created for this reason.

Once a model has been designed and tested, if it proves useful, it can be saved and reused later for similar problems.
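As a minimal sketch of this save-and-reuse workflow, assuming a Keras setup (the article does not prescribe a framework), a trained model can be written to disk and reloaded later; the tiny placeholder model and the file name reusable_classifier.h5 below are hypothetical:

```python
import tensorflow as tf

# A small placeholder classifier standing in for a model that has already
# been designed, trained, and found useful.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Save the whole model (architecture plus weights) to disk ...
model.save("reusable_classifier.h5")

# ... and reload it later as the starting point for a similar problem.
reused_model = tf.keras.models.load_model("reusable_classifier.h5")
```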

Types Of Transfer Learning

In general, there are two different kinds of transfer learning: developing a model from scratch and using a pre-trained model.

When you develop a model from scratch, you’ll need to create a model architecture capable of interpreting your training data and extracting patterns from it. After the model is trained for the first time, you’ll probably need to make changes to it in order to get the optimal performance out of the model. You can then save the model architecture and use it as a starting point for a model that will be used on a similar task.
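As a rough sketch of reusing the architecture alone, again assuming Keras, the model definition (without its weights) can be exported and rebuilt later as a starting point for a similar task; the small convolutional model here is purely illustrative:

```python
import tensorflow as tf

# Model architecture developed and tuned for the original task (illustrative only).
original = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])

# Export just the architecture (no weights) ...
architecture_json = original.to_json()

# ... and rebuild it later as the starting point for a similar task.
starting_point = tf.keras.models.model_from_json(architecture_json)
starting_point.compile(optimizer="adam", loss="categorical_crossentropy")
```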

In the second approach, using a pre-trained model, you simply select an existing model to use. Many universities and research teams make their models available for general use, so the architecture can be downloaded along with the trained weights.
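For instance, assuming TensorFlow/Keras is available, a published architecture and its ImageNet weights can be pulled down in a couple of lines (VGG16 is just one example of such a released model):

```python
import tensorflow as tf

# Download a published architecture together with its pre-trained ImageNet weights.
pretrained = tf.keras.applications.VGG16(weights="imagenet")
pretrained.summary()  # inspect the downloaded architecture
```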

When conducting transfer learning, the entire model architecture and weights can be used for the task at hand, or just certain portions/layers of the model can be used. Reusing some of the pre-trained layers while training the remaining layers is referred to as fine-tuning.

Fine-Tuning

Fine-tuning a network refers to training only some of the layers in the network. If the new training dataset closely resembles the dataset used to train the original model, many of the existing weights can be kept.

The number of layers that should be unfrozen and retrained should scale with the size of the new dataset. If the new dataset is small, it is better practice to keep the majority of the layers as they are and train just the final few layers; this helps prevent the network from overfitting. Alternatively, the final layers of the pre-trained network can be removed, and new layers added and trained in their place. In contrast, if the new dataset is large, potentially larger than the original, the entire network can be retrained. To use the network as a fixed feature extractor, the majority of the network is kept frozen to extract features while just the final layer is unfrozen and trained.
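A minimal sketch of the fixed-feature-extractor case, assuming Keras and a hypothetical new dataset with 10 classes: the pre-trained layers are frozen, the original classification layers are dropped, and only a small new head is trained.

```python
import tensorflow as tf

NUM_CLASSES = 10  # hypothetical number of classes in the new, smaller dataset

# Load a pre-trained network without its original classification layers.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the pre-trained layers: fixed feature extractor

# Add new final layers for the new task; only these will be trained.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_images, new_labels, epochs=5)  # trains only the new head
```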

When you are fine-tuning a network, remember that the earlier layers of a ConvNet contain the information representing the more generic features of the images, such as edges and colors. In contrast, the later layers hold details that are more specific to the individual classes in the dataset the model was initially trained on. If you are training on a dataset that is quite different from the original dataset, you will probably want to use only the initial layers of the model to extract features and retrain the rest of the model.
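Continuing the Keras sketch above, when the new dataset is reasonably large or similar to the original, the later (class-specific) layers can be unfrozen while the earlier (generic) layers stay fixed; unfreezing exactly the last four layers and using a 10-class head are purely illustrative choices.

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)

# Keep the earlier layers (generic features such as edges and colors) frozen
# and unfreeze only the last few layers so they can adapt to the new classes.
for layer in base.layers[:-4]:
    layer.trainable = False
for layer in base.layers[-4:]:
    layer.trainable = True

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # hypothetical 10 classes
])

# A small learning rate keeps the unfrozen layers from drifting too far
# from the pre-trained weights.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy")
```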

Transfer Learning Examples

The most common applications of transfer learning are probably those that use image data as inputs, typically for prediction/classification tasks. The way Convolutional Neural Networks interpret image data lends itself to reusing parts of models, as the convolutional layers learn features that transfer well across image tasks. One common source of transfer learning models is the ImageNet 1000 task, a massive dataset covering 1,000 different classes of objects. Companies that develop models achieving high performance on this dataset often release them under licenses that let others reuse them. Models that have resulted from this process include the Microsoft ResNet model, the Google Inception model, and the Oxford VGG model group.
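All three of the released models mentioned above can be loaded with their ImageNet weights through, for example, tf.keras.applications (other frameworks and model hubs expose them in a similar way):

```python
import tensorflow as tf

# Published ImageNet models available for reuse (downloaded with their weights).
resnet = tf.keras.applications.ResNet50(weights="imagenet")
inception = tf.keras.applications.InceptionV3(weights="imagenet")
vgg = tf.keras.applications.VGG16(weights="imagenet")
```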
