# 19.5 Transfer Learning in CNNs

Transfer learning is a technique in which a pretrained convolutional neural network (CNN), usually trained on a large dataset such as ImageNet, is reused as the starting point for a new, related task. Instead of training a CNN from scratch, which requires a lot of data and computing resources, transfer learning leverages the knowledge the network has already learned, such as detecting edges, textures, and shapes.
There are two common ways to use transfer learning in CNNs:
**Feature Extraction:** The pretrained CNN's convolutional base is used to extract features from new images. The fully connected layers at the end are replaced with new layers suited to the new task. The pretrained layers' weights are usually frozen (not updated during training), and only the new classifier layers are trained.
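A minimal PyTorch sketch of this idea, assuming a tiny stand-in convolutional base and a hypothetical 5-class target task (in practice the base would be a real pretrained model, e.g. one loaded from `torchvision.models` with ImageNet weights):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained convolutional base.
# In practice you would load e.g. a torchvision ResNet with pretrained weights.
conv_base = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# Freeze the "pretrained" layers: their weights are not updated during training.
for param in conv_base.parameters():
    param.requires_grad = False

# Replace the classifier with a new head for the target task (assumed 5 classes).
model = nn.Sequential(conv_base, nn.Linear(16, 5))

# Only the trainable (new) parameters are handed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

x = torch.randn(2, 3, 32, 32)  # dummy batch of 2 RGB images
logits = model(x)              # shape: (2, 5)
```

Setting `requires_grad = False` on the base means backpropagation never touches those weights, so training cost is dominated by the small new head.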
**Fine-tuning:** After replacing the final layers, some or all of the pretrained convolutional layers are unfrozen and retrained with a smaller learning rate. This allows the model to adapt more specifically to the new dataset while still benefiting from the valuable low-level features learned on the large original dataset.
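The smaller learning rate for the unfrozen base can be expressed with per-parameter-group learning rates. This sketch again uses a hypothetical stand-in base and made-up learning-rate values:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained base and a freshly initialized head.
conv_base = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, 5)
model = nn.Sequential(conv_base, head)

# All layers stay trainable, but the pretrained base gets a much
# smaller learning rate than the new head, so its learned features
# are adjusted gently rather than overwritten.
optimizer = torch.optim.Adam([
    {"params": conv_base.parameters(), "lr": 1e-5},  # gentle updates to base
    {"params": head.parameters(), "lr": 1e-3},       # normal updates to head
])
```

A common practical variant is to first train only the head with the base frozen (feature extraction), and only then unfreeze the base for fine-tuning, so large early gradients from the random head do not destroy the pretrained features.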
Transfer learning is especially useful when you have limited labeled data for your target task, as it helps improve accuracy and speeds up training by starting from a pretrained model.