Join Arize AI and AWS for a hands-on advanced workshop
on building and evaluating AI agents in production. In this two-hour
session, you’ll go beyond simple agent demos and learn how to
design, instrument, and optimize complex agentic workflows using AWS
Strands Agents and the Arize AX observability platform. Participants
will build a complete end-to-end pipeline:

- Agent Development: Create a Strands agent with multiple tools powered by Amazon Bedrock
- Instrumentation: Add tracing and observability using OpenTelemetry and Arize AX
- Evaluation: Automate quality assessments with LLM-as-a-Judge techniques
- Optimization: Experiment with prompts and regression datasets to improve performance
- Monitoring: Deploy production-ready monitoring, dashboards, and alerts

Expect to walk away with a deeper
understanding of how to manage nondeterministic AI workflows, detect
hidden failures, optimize agent decisions, and monitor costs and
performance at scale.

Who should attend: This workshop is designed for
AI engineers, ML practitioners, and developers with experience in LLMs
who want to advance their skills in building production-ready agentic
systems.

Details:
- Date/Time: Doors open at 4:30 PM; content runs 5:00 – 7:00 PM
- Format: Lecture + hands-on labs (bring your laptop)
- Level: Advanced
- Food & beverages provided

Agenda Overview:
- 4:00 PM – Doors open, food & beverages
- 4:30 PM – Welcome & introduction to agentic AI systems
- 5:00 PM – Hands-on lab: Build a Strands agent on Amazon Bedrock
- 5:30 PM – Add tracing & observability with Arize AX
- 6:00 PM – Automate evaluations and optimize prompts
- 6:30 PM – Monitoring, dashboards, and production best practices
- 7:00 PM – Wrap-up, resources, and next steps

We have limited capacity; please apply only if you plan to join us in person.