
Making Long Context LLMs Usable with Context Caching

Prompt Engineering

Google's Gemini API now supports context caching, which makes long-context LLMs more practical to use: instead of resending a large context (a long document, codebase, or transcript) with every request, you upload and cache it once, then reuse it across requests, cutting both processing time and cost. This video explains how to use the caching feature, its impact on performance, and implementation details with examples.
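
For a rough sense of the workflow before watching, here is a minimal sketch against the google-generativeai Python SDK; the model name, file path, display name, and TTL are illustrative placeholders, not values from the video:

```python
# A minimal sketch of Gemini context caching, assuming the
# google-generativeai Python SDK (pip install google-generativeai).
import datetime

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")

# Upload the long context you want to reuse across many requests.
document = genai.upload_file(path="long_document.txt")  # hypothetical file

# Create the cache. Caching requires a pinned model version and enforces
# a minimum cached token count; you pay a per-token storage fee while the
# TTL is active, and cached input tokens are billed at a reduced rate.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    display_name="long-doc-cache",
    system_instruction="Answer questions using only the provided document.",
    contents=[document],
    ttl=datetime.timedelta(minutes=30),
)

# Bind a model to the cached context and query it as usual; only the new
# prompt tokens are processed at the full rate.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("Summarize the key points of the document.")
print(response.text)
```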

LINKS:
Context Caching: https://tinyurl.com/4263z4da
Vertex AI: https://tinyurl.com/yex8ua5h
Notebook: https://tinyurl.com/2et8spkf
Pricing: https://ai.google.dev/pricing

RAG Beyond Basics Course:
https://promptssite.thinkific.com/c...

Let's Connect:
Discord: / discord
☕ Buy me a Coffee: https://kofi.com/promptengineering
Patreon: / promptengineering
Consulting: https://calendly.com/engineerprompt/c...
Business Contact: [email protected]
Become a Member: http://tinyurl.com/y5h28s6h

Preconfigured localGPT VM: https://bit.ly/localGPT (use code PromptEngineering for 50% off).

Sign up for the localGPT newsletter:
https://tally.so/r/3y9bb0


TIMESTAMPS
00:00 Introduction to Google's Context Caching
00:48 How Context Caching Works
01:00 Setting Up Your Cache
03:07 Cost and Storage Considerations
04:46 Example Implementation
08:57 Creating and Using the Cache
11:06 Managing Cache Metadata
12:53 Conclusion and Future Prospects
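
As a quick reference for the cache-management segment (11:06), here is a short sketch of the list/update/delete calls in the same SDK, reusing the `cache` object from the sketch above; the extended TTL is an illustrative value:

```python
# A sketch of cache-management calls in the google-generativeai SDK.
import datetime

from google.generativeai import caching

# List all caches on the account and inspect their metadata.
for c in caching.CachedContent.list():
    print(c.name, c.display_name, c.expire_time)

# Extend the cache's lifetime; storage is billed for as long as it lives.
cache.update(ttl=datetime.timedelta(hours=2))

# Delete the cache early to stop storage charges.
cache.delete()
```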

All Interesting Videos:
Everything LangChain: • LangChain

Everything LLM: • Large Language Models

Everything Midjourney: • MidJourney Tutorials

AI Image Generation: • AI Image Generation Tutorials
