Latent space is how neural networks store information. In this video, we discuss Autoencoders and Variational Autoencoders, and how we can explore, interpret, and manipulate images by looking at their latent space representations. The CelebA dataset and the DFC-VAE model (https://arxiv.org/abs/1610.00291) show some pretty interesting results that I found super enlightening about this mysterious topic of Machine Learning.
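To give a flavor of the latent space arithmetic covered in the video: the core trick is to subtract the latent codes of two images that differ in one attribute, then add that difference to a third image's code before decoding. The sketch below is a minimal illustration with stand-in encoder stubs and hypothetical filenames, not the actual DFC-VAE from the video.

```python
import numpy as np

# Hypothetical stand-in for a trained VAE encoder: a real encoder would
# map an image to its latent vector; here we just return random vectors
# of the right shape to illustrate the arithmetic.
rng = np.random.default_rng(0)
latent_dim = 128

def encode(image_path):
    return rng.standard_normal(latent_dim)

z_smiling = encode("smiling_face.png")   # hypothetical input images
z_neutral = encode("neutral_face.png")
z_target  = encode("another_face.png")

# "Smile direction": the latent-space direction that adds a smile.
smile_direction = z_smiling - z_neutral

# Adding that direction to another face's code should, after decoding
# with the VAE's decoder, produce a smiling version of that face.
z_edited = z_target + smile_direction
print(z_edited.shape)  # (128,)
```

The same idea powers attribute editing on CelebA (glasses, hair color, smiles): any attribute that varies consistently across the dataset tends to correspond to a roughly linear direction in the latent space.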
To support the channel and access the code and slides used in this video, consider joining us on Patreon. Members get access to code, scripts, slides, animations, and illustrations for most of the videos on my channel!
Patreon / neuralbreakdownwithavb
Follow on Twitter: @neural_avb
Timestamps
0:00: Intro
0:48: Intuition
2:11: Autoencoders
2:50: Nearest Neighbor Search
4:30: VAE and Generative AI
5:41: Latent Space Arithmetic
8:05: Finding patterns and trends