Unsupervised domain adaptation (UDA) person re-identification (re-ID) aims to transfer useful knowledge learned on labeled source-domain datasets to unlabeled target-domain datasets. Most successful UDA re-ID methods alternate between clustering, which generates pseudo labels for feature learning, and fine-tuning on the target domain. However, the interaction between the two steps is offline, so the noisy pseudo labels can severely hinder the classification and retrieval ability of the whole model. To purify these noisy pseudo labels, this paper proposes a framework called Unsupervised Confident Co-promoting (UCC). Specifically, two peer teacher-student co-training networks are trained simultaneously; they refine online the noisy pseudo labels produced by the offline clustering algorithm and supervise each other across iterations. More importantly, we introduce a confidence strategy that substantially improves the reliability of the generated pseudo labels when multiple networks collaboratively guide each other's learning. Combining the two enables our final method to markedly purify noisy pseudo labels for the re-ID task, yield a large performance boost, and generalize to other deep learning domains. Moreover, our method significantly improves common evaluation metrics on the four most widely used re-ID benchmarks compared to state-of-the-art (SOTA) methods, and some of its results are even comparable to supervised learning.
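The following is a minimal sketch of the kind of confidence-filtered pseudo-label refinement described above, written as PyTorch-style code; the function name, thresholding rule, and peer-averaging scheme are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def refine_pseudo_labels(net_a, net_b, images, cluster_labels, threshold=0.8):
    """Hypothetical example: average the two peer networks' predictions and keep
    the offline cluster label only when both peers agree with it confidently."""
    with torch.no_grad():
        probs_a = F.softmax(net_a(images), dim=1)
        probs_b = F.softmax(net_b(images), dim=1)
        probs = 0.5 * (probs_a + probs_b)          # online ensemble of the two peers
        conf, refined = probs.max(dim=1)           # online refined pseudo labels
    # Confidence strategy (assumed form): trust the offline cluster label only
    # when the ensemble prediction matches it with high confidence; otherwise
    # fall back to the online refined label.
    keep_offline = (refined == cluster_labels) & (conf >= threshold)
    final_labels = torch.where(keep_offline, cluster_labels, refined)
    return final_labels, conf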
Network pruning has achieved great success in the compression and acceleration of neural networks on resource-limited devices. Previous pruning algorithms apply filter pruning or channel pruning with a fixed global or local pruning rate. Conventional approaches consider only a global or only a local rate: methods that use a single global rate ignore the individual characteristics of each layer, while methods that use only local rates can lead to fragmented connections between layers. In this paper, we propose a novel method named global and local pruning under knowledge distillation (GLKD), which combines filter pruning and channel pruning and is trained with a mixture of global and local pruning rates. GLKD accelerates the inference of ResNet-110 by 56.2% with a 0.17% accuracy increase on the CIFAR-100 dataset, offering a strong trade-off between accuracy and compression. Additionally, experiments with GLKD on ImageNet using ResNet-56 and ResNet-110 demonstrate its effectiveness on the compressed models. Moreover, knowledge distillation is applied during the pruning step of GLKD and improves the accuracy of the pruned network.
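As a rough illustration of mixing global and local pruning rates and training the pruned network with a distillation loss, the sketch below uses PyTorch; the blending weight alpha, the L1-norm filter ranking, and all function names are assumptions for exposition rather than the GLKD algorithm itself.

import torch
import torch.nn as nn
import torch.nn.functional as F

def layer_keep_ratio(global_rate, local_rate, alpha=0.5):
    """Assumed blending of a global and a per-layer (local) pruning rate."""
    prune_rate = alpha * global_rate + (1.0 - alpha) * local_rate
    return 1.0 - prune_rate

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float):
    """Rank filters by L1 norm and zero out the lowest-ranked ones (soft pruning)."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    norms = conv.weight.data.abs().sum(dim=(1, 2, 3))
    keep_idx = norms.topk(n_keep).indices
    mask = torch.zeros(conv.out_channels, device=conv.weight.device)
    mask[keep_idx] = 1.0
    conv.weight.data *= mask.view(-1, 1, 1, 1)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, beta=0.5):
    """Cross-entropy on hard labels plus KL divergence to the unpruned teacher."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    return beta * ce + (1.0 - beta) * kl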