Join me in this in-depth, 55-minute walkthrough as I unravel the complexities of Retrieval-Augmented Generation (RAG) using fully open-source tools. In this video, I’ll guide you step-by-step through the entire process of setting up a local ChromaDB instance using Docker, configuring the powerful mxbai-embed-large model for embedding, and leveraging your NVIDIA 3090 rig for efficient indexing and retrieval.
Discover how to maintain privacy while working with cutting-edge technology—all on your local machine! This video is a crucial part of my ongoing RAG series, where I dive deep into the practical aspects of building robust AI solutions with tools like llama3.1:70b, ChromaDB, and more.
Whether you're an AI enthusiast, data scientist, or just curious about RAG, this video is packed with valuable insights to elevate your understanding and skills.
Tools Used:
ChromaDB: Local Vector Search Engine
Ollama Models: mxbai-embed-large, llama3.1:70b
Hardware: NVIDIA 3090
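The indexing and retrieval flow covered in the video can be sketched in a few lines of Python. This is a minimal illustration, not the code from the repo: it assumes ChromaDB is already running locally in Docker (e.g. `docker run -p 8000:8000 chromadb/chroma`), that Ollama is serving mxbai-embed-large (`ollama pull mxbai-embed-large`), and that the collection name, file path, and chunking helper are all placeholders of my own.

```python
def chunk_text(text: str, max_words: int = 50) -> list[str]:
    """Split text into word-bounded chunks small enough to embed."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

if __name__ == "__main__":
    # Third-party clients; both assume their servers are reachable locally.
    import chromadb
    import ollama

    # Connect to the Dockerized ChromaDB instance on its default port.
    client = chromadb.HttpClient(host="localhost", port=8000)
    collection = client.get_or_create_collection("docs")  # illustrative name

    # Index: embed each chunk with mxbai-embed-large and store it.
    for i, chunk in enumerate(chunk_text(open("notes.txt").read())):
        emb = ollama.embeddings(model="mxbai-embed-large", prompt=chunk)["embedding"]
        collection.add(ids=[f"chunk-{i}"], embeddings=[emb], documents=[chunk])

    # Retrieve: embed the question the same way, then vector-search.
    q = ollama.embeddings(model="mxbai-embed-large", prompt="What is RAG?")["embedding"]
    print(collection.query(query_embeddings=[q], n_results=2)["documents"])
```

The retrieved chunks would then be stuffed into a prompt for llama3.1:70b to generate the final answer; everything stays on your own hardware.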
GitHub Repo:
https://github.com/Teachings/ragtools
Subscribe and stay tuned for the next videos in this RAG series, where we’ll cover advanced topics and real-world applications. Don’t miss out on mastering the future of AI-driven content generation!