Contrastive knowledge-augmented self-distillation approach for few-shot learning
Lixu Zhang, Mingwen Shao, Sijie Chen, Fukang Liu
Abstract

Few-shot learning aims to train a classifier that can be quickly adapted to new tasks with only a few samples. To address few-shot learning tasks, metric-based meta-learning methods explore appropriate metrics to measure the similarity between the support and the query samples. However, existing methods ignore the similarity relationship between the labeled samples in the support set. Consequently, we propose a contrastive knowledge-augmented self-distillation approach that leverages the similarity relationship among the few labeled samples in the support set and allows the model to focus on more regions of the images. Specifically, we compute the classification probabilities of the query images and of each class prototype, and treat the classification probability of each class prototype as a teacher to guide the classification of the query samples. Meanwhile, we design a contrastive loss to pull the feature vectors of the same class closer together and push those of different classes further apart. In addition, a transformation function is applied so that the model attends to more regions of the images and thus captures the key features. Extensive experiments are conducted on miniImageNet, tieredImageNet, and Caltech-UCSD Birds 200, and the results show that our method can enhance metric-based meta-learning methods and outperforms state-of-the-art methods.
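The sketch below illustrates, under stated assumptions, the main ingredients the abstract describes: class prototypes computed from the support set, distance-based classification probabilities, a supervised contrastive loss over the few labeled support features, and a distillation term in which a teacher distribution derived from the prototype probabilities guides the query predictions. All function names, temperatures, and the exact construction of the teacher distribution are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of prototype classification, contrastive loss, and self-distillation
# for an N-way few-shot episode. Names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def class_prototypes(support_feats, support_labels, n_way):
    # Mean feature vector per class, computed from the few labeled support samples.
    return torch.stack([support_feats[support_labels == c].mean(dim=0)
                        for c in range(n_way)])


def prototype_probs(feats, prototypes, temperature=1.0):
    # Classification probabilities from negative squared Euclidean distance to prototypes.
    logits = -torch.cdist(feats, prototypes) ** 2 / temperature
    return F.softmax(logits, dim=-1)


def supervised_contrastive_loss(feats, labels, temperature=0.1):
    # Pull feature vectors of the same class together, push different classes apart.
    feats = F.normalize(feats, dim=-1)
    sim = feats @ feats.t() / temperature                       # pairwise cosine similarity
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pos_mask.fill_diagonal_(0)                                  # exclude self-pairs
    logits_mask = 1.0 - torch.eye(len(labels), device=feats.device)
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    return -((pos_mask * log_prob).sum(dim=1) / pos_count).mean()


def self_distillation_loss(query_feats, prototypes, teacher_probs, temperature=4.0):
    # KL divergence between the query predictions (student) and a teacher distribution.
    # How the teacher is built from the prototype-level probabilities is an assumption here;
    # the paper specifies the exact construction.
    student_log_probs = torch.log(prototype_probs(query_feats, prototypes, temperature) + 1e-12)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
```

In a typical episode, the three terms would be combined with weighting coefficients into a single training loss; the cross-entropy on the query probabilities provides the main classification signal, while the contrastive and distillation terms act as regularizers over the support and query features.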

© 2023 SPIE and IS&T
Lixu Zhang, Mingwen Shao, Sijie Chen, and Fukang Liu "Contrastive knowledge-augmented self-distillation approach for few-shot learning," Journal of Electronic Imaging 32(5), 053037 (22 October 2023). https://doi.org/10.1117/1.JEI.32.5.053037
Received: 2 March 2023; Accepted: 12 September 2023; Published: 22 October 2023
KEYWORDS: Prototyping, Education and training, Image classification, Eigenvectors, Statistical modeling, Design, Feature extraction