Learning To Classify Images Without Labels (Paper Explained)

Yannic Kilcher

How do you learn labels without labels? How do you classify images when you don't know what to classify them into? This paper investigates a new combination of representation learning, clustering, and self-labeling in order to group visually similar images together and achieves surprisingly high accuracy on benchmark datasets.
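As a rough illustration of the first stage, the paper mines each image's nearest neighbors in the embedding space of a self-supervised network and later uses them as targets for clustering. Below is a minimal PyTorch sketch of that mining step; the function and parameter names are my own, not the authors', and the real pipeline (pretext task, value of k) follows the paper and code linked below.

```python
import torch

def mine_nearest_neighbors(embeddings: torch.Tensor, k: int = 20) -> torch.Tensor:
    """For each image, return the indices of its k nearest neighbors
    (excluding itself) under cosine similarity in the embedding space.

    `embeddings` is an (N, D) tensor from a frozen self-supervised encoder;
    this brute-force matrix multiplication is a simplification of the
    authors' pipeline, not their exact implementation.
    """
    z = torch.nn.functional.normalize(embeddings, dim=1)  # unit-norm rows
    sim = z @ z.t()                                       # (N, N) cosine similarities
    sim.fill_diagonal_(-1.0)                              # exclude each sample itself
    return sim.topk(k, dim=1).indices                     # (N, k) neighbor indices
```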

OUTLINE:
0:00 Intro & High-level Overview
2:15 Problem Statement
4:50 Why naive Clustering does not work
9:25 Representation Learning
13:40 Nearest-neighbor-based Clustering
28:00 Self-Labeling
32:10 Experiments
38:20 ImageNet Experiments
41:00 Overclustering

Paper: https://arxiv.org/abs/2005.12320
Code: https://github.com/wvangansbeke/Unsup...

Abstract:
Is it possible to automatically classify images without the use of ground-truth annotations? Or when even the classes themselves are not a priori known? These remain important and open questions in computer vision. Several approaches have tried to tackle this problem in an end-to-end fashion. In this paper, we deviate from recent works and advocate a two-step approach where feature learning and clustering are decoupled. First, a self-supervised task from representation learning is employed to obtain semantically meaningful features. Second, we use the obtained features as a prior in a learnable clustering approach. In doing so, we remove the ability for cluster learning to depend on low-level features, which is present in current end-to-end learning approaches. Experimental evaluation shows that we outperform state-of-the-art methods by huge margins, in particular +26.9% on CIFAR10, +21.5% on CIFAR100-20 and +11.7% on STL10 in terms of classification accuracy. Furthermore, results on ImageNet show that our approach is the first to scale well up to 200 randomly selected classes, obtaining 69.3% top-1 and 85.5% top-5 accuracy, and marking a difference of less than 7.5% with fully-supervised methods. Finally, we applied our approach to all 1000 classes on ImageNet, and found the results to be very encouraging. The code will be made publicly available.
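The "learnable clustering approach" in the abstract is the paper's SCAN objective: a sample and its mined neighbors are pushed toward the same soft cluster assignment, while an entropy term over the mean assignment keeps the network from collapsing everything into one cluster. A hedged PyTorch sketch follows; the function name, the entropy weighting, and the numerical constants are my assumptions, so refer to the linked code for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def scan_clustering_loss(logits: torch.Tensor,
                         neighbor_logits: torch.Tensor,
                         entropy_weight: float = 5.0) -> torch.Tensor:
    """Consistency + entropy loss over cluster logits.

    `logits` are the cluster-head outputs for a batch of anchor images,
    `neighbor_logits` the outputs for one mined neighbor per anchor.
    The value of `entropy_weight` here is an assumed hyperparameter.
    """
    p = F.softmax(logits, dim=1)              # (B, C) cluster probabilities
    p_nb = F.softmax(neighbor_logits, dim=1)  # (B, C) neighbor probabilities

    # Consistency: an anchor and its neighbor should land in the same
    # cluster, measured by the dot product of their probability vectors.
    consistency = -torch.log((p * p_nb).sum(dim=1) + 1e-8).mean()

    # Entropy of the mean assignment: keeping it high spreads samples
    # over all clusters and prevents a single-cluster collapse.
    mean_p = p.mean(dim=0)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum()

    return consistency - entropy_weight * entropy
```

After this clustering stage, the self-labeling step covered in the video fine-tunes the network on its own most confident predictions, which is what lifts the final benchmark numbers.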

Authors: Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, Luc Van Gool

Links:
YouTube: https://www.youtube.com/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yann...
Minds: https://www.minds.com/ykilcher
