Working with images has evolved significantly in recent years, primarily due to the transformative impact of Vision Transformers (ViT). Originally a research concept, ViT has quickly transitioned into a practical tool for production tasks. Unlike traditional image models, ViT processes images as sequences by dividing them into patches, similar to tokens in a sentence.
This structural shift has created new opportunities for training image classifiers. In this article, we’ll explore how to fine-tune a ViT model using the Hugging Face Transformers library, covering everything from dataset preparation to the training process.
Understanding Vision Transformer and Its Shift to Image Tasks
Convolutional neural networks (CNNs) have long been the go-to choice for image processing, adept at detecting patterns by examining small image segments layer by layer. While effective for tasks like identifying edges and textures, CNNs require deep networks to comprehend entire images, which can be resource-intensive.
ViT takes a different approach: it divides an image into equal-sized patches, flattens them, and converts each patch into a vector, much like the tokens of a sentence. These vectors are fed into a transformer encoder, which lets the model learn the relationships between patches. A special classification token (CLS) aggregates this information, and its final representation is what the classification head uses.
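For example, the standard ViT-Base/16 configuration splits a 224x224 image into 16x16 patches, which yields 14 x 14 = 196 patch vectors; with the CLS token added, the encoder sees a sequence of 197 tokens.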
Because self-attention connects every patch to every other patch, ViT captures global patterns and context from the very first layer instead of building up a receptive field gradually, making it particularly effective for tasks like satellite image analysis and medical imaging. Hugging Face simplifies ViT fine-tuning by providing pre-trained weights and user-friendly tools.
Preparing the Dataset and Preprocessing Pipeline
Before training, ensure your dataset is in the correct format. Typically, image classification datasets are organized into folders, with each folder name serving as the class label. Hugging Face’s datasets library can load such a dataset with load_dataset("imagefolder", data_dir=your_path).
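As a minimal sketch, assuming your images live under a placeholder directory such as path/to/images with one subfolder per class, loading looks like this:

from datasets import load_dataset

# Each class gets its own subfolder; the folder names become the labels.
dataset = load_dataset("imagefolder", data_dir="path/to/images", split="train")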
Once loaded, preprocessing is handled by AutoImageProcessor, which resizes images, converts them to tensors, and normalizes them with the mean and standard deviation the model was trained with. Most ViT models expect inputs of 224x224 pixels.
Here’s how preprocessing might look. Attaching it with with_transform applies the processor on the fly at training time, which is also why remove_unused_columns=False appears in the training arguments later:

from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

def preprocess(batch):
    # Resize, rescale, and normalize the images; keep the labels alongside the pixel values.
    inputs = processor(images=batch["image"], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

dataset = dataset.with_transform(preprocess)
For training and validation, split the dataset if necessary; Hugging Face’s datasets library provides train_test_split for easy data separation.
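A sketch of the split, assuming the imagefolder-style "label" column and holding out 10% of the images for evaluation (both choices are illustrative):

# Carve out 10% of the images for evaluation; stratify_by_column keeps the class balance.
splits = dataset.train_test_split(test_size=0.1, stratify_by_column="label", seed=42)
train_dataset = splits["train"]
eval_dataset = splits["test"]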
Fine-Tuning the Vision Transformer Model
With your dataset and preprocessing ready, load and fine-tune the model. Hugging Face offers AutoModelForImageClassification, which loads a pre-trained ViT backbone with a classification head. Provide the number of labels and the mapping dictionaries between class names and IDs.
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=num_classes,
    id2label=id2label,
    label2id=label2id,
)
Configure your training arguments, including the learning rate, number of epochs, batch sizes, and evaluation strategy. These settings are passed to the Trainer class, which manages training and evaluation.
from transformers import TrainingArguments, Trainer
training_args = TrainingArguments(
    output_dir="./vit-finetuned",
    per_device_train_batch_size=16,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    num_train_epochs=4,
    learning_rate=3e-5,
    logging_dir="./logs",
    save_total_limit=1,
    remove_unused_columns=False,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=processor,
    compute_metrics=compute_metrics_function,
)
Define a metric function using sklearn to track performance metrics like accuracy or F1-score. Monitoring these metrics helps determine when to adjust parameters or stop training.
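A minimal version might look like the following; the name compute_metrics_function simply matches what was passed to the Trainer above, and accuracy plus macro F1 are one reasonable choice rather than the only one:

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics_function(eval_pred):
    # The Trainer hands over the model logits and true labels for the evaluation split.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1_macro": f1_score(labels, predictions, average="macro"),
    }

With everything wired together, call trainer.train() to start fine-tuning; because evaluation_strategy="epoch" is set, these metrics are reported on the validation split after every epoch.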
Training duration varies based on the dataset and hardware. Smaller datasets like CIFAR-10 can be fine-tuned quickly on consumer-grade GPUs, while larger datasets may require more time.
Post-Training Considerations and Model Use
After training, save the model and processor for future use.
model.save_pretrained("vit-custom")
processor.save_pretrained("vit-custom")
Reload the model for predictions using the pipeline feature; because the image processor was saved to the same directory, the pipeline picks it up automatically:
from transformers import pipeline

image_classifier = pipeline("image-classification", model="vit-custom")
results = image_classifier(image_path)
Review results thoroughly, especially when classes are similar. Evaluate the confusion matrix to identify areas for data improvement or additional training.
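One way to get the confusion matrix is to run the trained model over the validation split with trainer.predict and pass the results to scikit-learn; this sketch assumes the trainer and eval_dataset from the training section are still in scope:

import numpy as np
from sklearn.metrics import confusion_matrix

# Predict over the held-out split and compare predicted classes with the true labels.
predictions = trainer.predict(eval_dataset)
predicted_classes = np.argmax(predictions.predictions, axis=-1)
print(confusion_matrix(predictions.label_ids, predicted_classes))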
If results are unsatisfactory, consider training for additional epochs, using data augmentation, or switching to a different ViT variant based on your computational resources.
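If augmentation seems like the right lever, one option is to swap the plain processor call for a small set of torchvision transforms on the training split only; the specific transforms below are a sketch rather than a prescription, and they reuse the processor's normalization statistics:

from torchvision.transforms import Compose, Normalize, RandomHorizontalFlip, RandomResizedCrop, ToTensor

# Random crops and flips add variety while keeping the 224x224 input format the model expects.
augment = Compose([
    RandomResizedCrop(224, scale=(0.8, 1.0)),
    RandomHorizontalFlip(),
    ToTensor(),
    Normalize(mean=processor.image_mean, std=processor.image_std),
])

def preprocess_train(batch):
    # Apply augmentation only to the training split; validation images stay deterministic.
    return {
        "pixel_values": [augment(image.convert("RGB")) for image in batch["image"]],
        "labels": batch["label"],
    }

train_dataset = train_dataset.with_transform(preprocess_train)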
The fine-tuned ViT model remains versatile: it can be trained further on related tasks, or its embeddings can feed other workflows. Hugging Face’s model hub makes it easy to share the trained model with others.
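Publishing to the Hub takes one call per artifact once you are authenticated (for example with huggingface-cli login); the repository name below is a placeholder:

# Uploads the weights, config, and preprocessing files under your account on the Hub.
model.push_to_hub("your-username/vit-custom")
processor.push_to_hub("your-username/vit-custom")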
Conclusion
Fine-tuning a Vision Transformer with Hugging Face Transformers is now accessible and efficient. With pre-trained weights and supportive tools, adapting ViT for image classification can be achieved within hours. ViT’s unique transformer-based structure often yields superior performance when context is crucial. Whether dealing with small or large datasets, this approach offers a modern solution without the need to start from scratch.