Lance Martin - Architecting and Testing Controllable Agents

Track: Workshops
Room: GGB A

Evals & LLM Ops: LLM-powered autonomous agents combine (1) tool calling, (2) memory, and (3) planning to perform tasks autonomously. While they hold tremendous promise, agent reliability has been a barrier to large-scale deployment and productionisation. We’ll cover ways to design and build reliable agents using LangGraph, which can support diverse self-corrective applications such as RAG and code generation. Just as critically, we’ll cover ways to use LangSmith to test your agents, examining both the agent’s final response and its tool-use trajectories. Collectively, we’ll talk about three types of testing loops you can incorporate into your agent design process: at run time, pre-production, and in production monitoring.
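The self-corrective pattern the workshop builds in LangGraph can be sketched in plain Python. This is a minimal illustration of the idea, not LangGraph's actual API: `generate` and `grade` are hypothetical stand-ins for an LLM call and a checking step (e.g., code execution or an LLM-as-judge), and `trajectory_matches` illustrates the trajectory-style evaluation the abstract mentions.

```python
def generate(question, feedback=None):
    # Stand-in for an LLM call; incorporates feedback on retries.
    answer = f"answer to {question!r}"
    if feedback:
        answer += f" (revised after: {feedback})"
    return answer

def grade(answer):
    # Stand-in for a validation step (code execution, relevance check,
    # or LLM-as-judge). Here: only revised answers pass, to force a retry.
    return "revised" in answer

def self_corrective_agent(question, max_retries=3):
    # Core loop: generate -> grade -> retry with feedback until the
    # answer passes or the retry budget is exhausted.
    feedback = None
    for attempt in range(max_retries):
        answer = generate(question, feedback)
        if grade(answer):
            return answer, attempt
        feedback = "answer failed the check"
    return answer, max_retries

def trajectory_matches(expected_tools, actual_tools):
    # Trajectory-style evaluation: compare the sequence of tool calls the
    # agent made against a reference sequence (exact-match variant).
    return expected_tools == actual_tools

answer, retries = self_corrective_agent("What is RAG?")
```

In LangGraph the same loop would be expressed as graph nodes with a conditional edge routing back to the generation node on failure; the point here is only the control flow being tested.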

Lance Martin

Lance has been at LangChain for a year, working across its open-source Python projects, including LangGraph. He has two LangChain video series focused on RAG and LangSmith evaluation, and various popular videos on LangGraph (e.g., local agents for RAG, code generation). Prior to LangChain, he spent several years in technical leadership positions focused on applied AI in the self-driving industry (Ike and Uber ATG), and holds a PhD from Stanford.

Lance Martin, AI Engineer

Buy Tickets

We have now sold out of Early Bird tickets; General Admission has also sold out.
Please join us online for the free livestream.
