
LangChain’s Harrison Chase on Building the Orchestration Layer for AI Agents | Training Data

Sequoia Capital

Last year, AutoGPT and Baby AGI captured our imaginations, and agents quickly became the buzzword of the day…and then things went quiet. AutoGPT and Baby AGI may have marked a peak in the hype cycle, but this year has seen a wave of agentic breakouts on the product side, from Klarna’s customer support AI to Cognition’s Devin.

Harrison Chase of LangChain is focused on enabling the orchestration layer for agents. In this conversation, he explains what’s changed that’s allowing agents to improve performance and find traction.

Harrison shares what he’s optimistic about, where he sees promise for agents vs. what he thinks will be trained into models themselves, and discusses novel kinds of UX that he imagines might transform how we experience agents in the future.

(01:21) What are agents?
(05:00) What is LangChain’s role in the agent ecosystem?
(11:13) What is a cognitive architecture?
(13:20) Is bespoke and hard-coded the way the world is going, or a stopgap?
(18:48) Focus on what makes your beer taste better
(20:37) So what?
(22:20) Where are agents getting traction?
(25:35) Reflection, chain of thought, other techniques?
(30:42) UX can influence the effectiveness of the architecture
(35:30) What’s out of scope?
(38:04) Fine tuning vs prompting?
(42:17) Existing observability tools for LLMs vs needing a new architecture/approach
(45:38) Lightning round

Hosted by: Sonya Huang and Pat Grady, Sequoia Capital

Read the Transcript: https://seq.vc/TDHC
Read the Inference Essay: https://seq.vc/TDHCIA