Yann LeCun - Self-Supervised Learning: The Dark Matter of Intelligence (FAIR Blog Post Explained)

Follow
Yannic Kilcher

#selfsupervisedlearning #yannlecun #facebookai

Deep Learning systems can achieve remarkable, even superhuman performance through supervised learning on large, labeled datasets. However, there are two problems: First, collecting ever more labeled data is expensive in both time and money. Second, these deep neural networks perform well on the task they were trained for, but cannot easily generalize to other, related tasks, or they need large amounts of data to do so. In this blog post, Yann LeCun and Ishan Misra of Facebook AI Research (FAIR) describe the current state of Self-Supervised Learning (SSL) and argue that it is the next step in the development of AI, one that uses fewer labels and can transfer knowledge faster than current systems. As a promising direction, they suggest building non-contrastive latent-variable predictive models, like VAEs, but ones that also provide high-quality latent representations for downstream tasks.
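To make the "predict hidden parts from observed parts" idea concrete, here is a minimal, purely illustrative PyTorch sketch of a masked-prediction objective. The model, masking rate, and random stand-in data are my own placeholder assumptions, not the method from the blog post:

import torch
import torch.nn as nn

dim = 32
model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(16, dim)               # stand-in for unlabeled data
    mask = torch.rand_like(x) < 0.25       # hide roughly 25% of each input
    x_masked = x.masked_fill(mask, 0.0)    # keep only the observed parts
    pred = model(x_masked)                 # predict the full input
    loss = ((pred - x)[mask] ** 2).mean()  # score only the hidden parts
    opt.zero_grad()
    loss.backward()
    opt.step()

No labels are used anywhere: the training signal comes entirely from the hidden parts of the input itself.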

OUTLINE:
0:00 Intro & Overview
1:15 Supervised Learning, Self-Supervised Learning, and Common Sense
7:35 Predicting Hidden Parts from Observed Parts
17:50 Self-Supervised Learning for Language vs Vision
26:50 Energy-Based Models
30:15 Joint-Embedding Models
35:45 Contrastive Methods
43:45 Latent-Variable Predictive Models and GANs
55:00 Summary & Conclusion

Paper (Blog Post):   / selfsupervisedlearningthedarkmattero...  
My Video on BYOL:    • BYOL: Bootstrap Your Own Latent: A Ne...  

ERRATA:
The difference between loss and energy: Energy is for inference, loss is for training.
The R(z) term is a regularizer that restricts the capacity of the latent variable (see the short formula sketch below the errata). I think I said both of those things, but never together.
The way I explain why BERT is contrastive is wrong. I haven't figured out the right explanation just yet, though :)
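To read the first two errata points together, the regularized latent-variable energy can be written roughly as follows (my own hedged notation, not copied verbatim from the blog post):

F(x, y) = \min_z [ E(x, y, z) + R(z) ]

Inference minimizes this energy over the latent variable z (and possibly over y), while training minimizes a separate loss functional built on top of F; R(z) is the regularizer that limits the information capacity of z.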

Video approved by Antonio.

Abstract:
We believe that self-supervised learning (SSL) is one of the most promising ways to build such background knowledge and approximate a form of common sense in AI systems.

Authors: Yann LeCun, Ishan Misra

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnineyannick
YouTube:    / yannickilcher  
Twitter:   / ykilcher  
Discord:   / discord  
BitChute: https://www.bitchute.com/channel/yann...
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn:   / yannickilcher488534136  
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon:   / yannickilcher  
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
