Inside the LLM: Visualizing the Embeddings Layer of Mistral-7B and Gemma-2B

Chris Hay

We take a deep look inside the AI and examine how the embeddings layer of a Large Language Model such as Mistral-7B or Gemma-2B actually works.

You will learn how tokens and embeddings work, and even extract the embeddings layer from Gemma and Mistral and load it into your own simple model, which we will then use to visualize the embeddings.
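As a rough sketch of that extraction step, the snippet below uses the Hugging Face transformers library to pull the input-embedding matrix out of Mistral-7B. The model id mistralai/Mistral-7B-v0.1 and all variable names are illustrative assumptions, not code taken from the video, and loading a 7B checkpoint this way needs plenty of RAM:

# Minimal sketch: extract the input-embedding matrix from Mistral-7B.
# Assumes the Hugging Face `transformers` library and access to the
# mistralai/Mistral-7B-v0.1 checkpoint (the exact model id is an assumption).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# The embedding layer maps each token id to a dense vector.
embedding_layer = model.get_input_embeddings()      # nn.Embedding(vocab_size, hidden_dim)
embedding_matrix = embedding_layer.weight.detach()  # shape: (vocab_size, hidden_dim)
print(embedding_matrix.shape)                       # e.g. (32000, 4096) for Mistral-7B

# Look up the embedding vector for a single token.
token_ids = tokenizer("hello", add_special_tokens=False)["input_ids"]
vector = embedding_matrix[token_ids[0]]
print(vector[:8])                                   # first few dimensions of the vector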

You will see how an AI clusters terms together: not just grouping similar words, but also building connections between concepts such as colors, hotel chains, and programming terms.
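To give a feel for that clustering, the sketch below reuses the tokenizer and embedding_matrix from the previous snippet, compares a handful of words with cosine similarity, and projects them to 2-D with PCA for plotting. The word list and the use of scikit-learn and matplotlib are assumptions for illustration, not necessarily what the video does:

# Minimal sketch: compare token embeddings with cosine similarity and project
# them to 2-D for plotting. Builds on `tokenizer` and `embedding_matrix`
# from the previous snippet; the word list is purely illustrative.
import torch
import torch.nn.functional as F
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

words = ["red", "blue", "green", "python", "java", "hilton", "marriott"]

def word_vector(word):
    # Average the embeddings of a word's tokens (some words split into several tokens).
    ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    return embedding_matrix[ids].float().mean(dim=0)

vectors = torch.stack([word_vector(w) for w in words])

# Cosine similarity between "red" and every other word.
sims = F.cosine_similarity(vectors[0:1], vectors)
for w, s in zip(words, sims.tolist()):
    print(f"{w:>10}: {s:.3f}")

# Project to 2-D with PCA and scatter-plot the words.
points = PCA(n_components=2).fit_transform(vectors.numpy())
plt.scatter(points[:, 0], points[:, 1])
for (x, y), w in zip(points, words):
    plt.annotate(w, (x, y))
plt.show()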

If you really want to understand how an LLM works, or even build your own, then the first layer of a Generative AI model is the best place to start.

GitHub

https://github.com/chrishayuk/embeddings
