We think people are over-complicating AI application development.
With the right abstractions, there's no need to reinvent software engineering for AI applications — existing tools like Pydantic can form the foundation of the AI engineering stack.
The key challenge today is identifying patterns that allow you to build maintainable, AI-powered components within larger software systems.
In this talk, Samuel Colvin, creator of Pydantic, will present a blueprint for the most critical components of new AI applications.
Based on our experience building AI functionality into our commercial platform, Pydantic Logfire, this blueprint includes:
Data Validation, Reflection and Self-Correction: The critical role of enforcing strict data contracts at the API level and validating inputs to and outputs from AI models to ensure reliability. Simple and obvious though this might seem, it's easy to get wrong, and many people do (see the first sketch below).
Schema Generation: Tool calls are critical to leveraging LLMs, but you shouldn't be hand-writing JSON Schema to define tools; instead, Pydantic should generate the schema from the same source of truth used for data validation (see the second sketch below).
Evaluations and Iterative Improvement: The importance of continuous evaluation and iterative refinement to improve AI models and applications over time.
Observability: Implementing observability to monitor AI systems, detect issues early, and maintain robust performance (see the final sketch below).
We'll motivate the importance of each of these points and, perhaps surprisingly, demonstrate through concrete examples and code snippets how straightforward they can be to implement using familiar tools like Pydantic.
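
To give a flavour of the first point, here is a minimal sketch of a validation-and-retry loop, assuming Pydantic v2. The `call_llm` function, the `SupportTicket` model, and the prompt wording are hypothetical placeholders; the Pydantic calls (`model_validate_json` and `ValidationError.errors()`) are the real validation API.

```python
import json

from pydantic import BaseModel, ValidationError


class SupportTicket(BaseModel):
    """The shape we require the model's output to take."""

    summary: str
    severity: int  # e.g. 1 (low) to 5 (critical)
    tags: list[str]


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call that returns a JSON string."""
    raise NotImplementedError


def extract_ticket(text: str, max_retries: int = 2) -> SupportTicket:
    prompt = f'Extract a support ticket as JSON from:\n{text}'
    for _ in range(max_retries + 1):
        raw = call_llm(prompt)
        try:
            # enforce the data contract at the boundary between the LLM and our code
            return SupportTicket.model_validate_json(raw)
        except ValidationError as exc:
            # reflection and self-correction: feed the validation errors back
            # to the model and ask it to fix its own output
            prompt = (
                'Your previous response was invalid:\n'
                f'{json.dumps(exc.errors(), default=str)}\n'
                f'Return corrected JSON for:\n{text}'
            )
    raise RuntimeError('LLM did not produce valid output after retries')
```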
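
Schema generation then falls out of the same class. `model_json_schema()` is standard Pydantic v2; the surrounding tool-definition dict follows the OpenAI-style function-calling format purely as an illustration, and `SearchOrders` is again a made-up example.

```python
from pydantic import BaseModel, Field


class SearchOrders(BaseModel):
    """Search a customer's recent orders."""

    customer_id: int = Field(description='Internal customer identifier')
    query: str = Field(description='Free-text search over order contents')
    limit: int = Field(default=10, ge=1, le=100)


# one source of truth: the same class validates the arguments the LLM sends back...
args = SearchOrders.model_validate_json('{"customer_id": 42, "query": "headphones"}')

# ...and generates the JSON Schema used to describe the tool to the LLM,
# so the schema can never drift out of sync with the validation logic
tool_definition = {
    'type': 'function',
    'function': {
        'name': 'search_orders',
        'description': SearchOrders.__doc__,
        'parameters': SearchOrders.model_json_schema(),
    },
}
```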
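
Finally, a sketch of what lightweight observability might look like with Pydantic Logfire. `logfire.configure()`, `logfire.span()` and `logfire.info()` are the core Logfire calls; the span name, the attributes, and the reuse of `extract_ticket` from the first sketch are illustrative.

```python
import logfire

logfire.configure()  # assumes a Logfire project and credentials are already set up


def extract_ticket_instrumented(text: str) -> SupportTicket:
    # wrap the whole extraction (including retries) in a span so latency,
    # retry behaviour and failures are visible in one place
    with logfire.span('extract_ticket', text_length=len(text)):
        ticket = extract_ticket(text)  # the retry loop from the first sketch
        logfire.info('ticket extracted', severity=ticket.severity)
        return ticket
```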