{"conference":"AI Engineer Europe 2026","dates":"April 8-10, 2026","location":"London, UK","website":"https://ai.engineer/europe","totalSpeakers":162,"speakers":[{"name":"Adrian Bertagnoli","role":"Founding Engineer","company":"Callosum","linkedin":"https://www.linkedin.com/in/adrian-bertagnoli-bb3467178/","photoUrl":"https://ai.engineer/speakers/europe/adrian-bertagnoli.jpg","sessions":[{"title":"Scaling the Next Paradigm of Heterogeneous Intelligence","description":"To date, the dominant trajectory of AI progress has been defined by a homogeneous paradigm: capturing performance gains by scaling uniformity at both the architectural level (monolithic models) and the hardware level (identical GPU clusters). But it is becoming increasingly clear that real-world intelligence demands multi-agent systems, where specialised components must collaborate to solve long-horizon, multi-turn, and multi-task problems. In this talk, we outline how a new paradigm - Heterogeneous Intelligence - is the essential scaling paradigm for this emerging era. We present early findings demonstrating how heterogeneity across workflows, model architectures, and hardware can be exploited as a source of advantage rather than treated as complexity to be minimised. For example, heterogeneous optimisation yields simultaneous improvements in cost, speed, and task performance such as enabling automated payment workflows to execute reliably at reduced compute cost or achieving higher accuracy in deep context reasoning over long documents by specialising sub-tasks across purpose-matched chips and models. 
We conclude with what we believe to be promising frontiers in this direction, including how new systems of intelligence will be discovered through iterative, symbiotic hardware-algorithm co-evolution.","day":"April 10","time":"12:40-1:00pm","room":"Abbey","type":"talk","track":"GPUs & LLM Infrastructure","speakers":["Adrian Bertagnoli"]}]},{"name":"Adrien Grondin","role":"iOS Developer","company":"Locally AI","companyDescription":"Run AI models locally on your iPhone, iPad, and Mac","twitter":"https://x.com/adrgrondin","sessions":[{"title":"Running Gemma 4 On-Device: 40 Tokens/s on iPhone with MLX","description":"Hallway Track: Attendees propose lightning talks, peers vote, and the best talks get to go onstage as impromptu lightning talks. Check the AIE attendee Slack for signup and voting instructions.","day":"April 10","time":"2:30-2:40pm","room":"Moore","type":"lightning","track":"Hallway Track","speakers":["Adrien Grondin"]}]},{"name":"AIE Welcome","role":"AIE team","company":"AI Engineer","companyDescription":"AI Engineer conference","twitter":"https://x.com/swyx","photoUrl":"https://ai.engineer/speakers/europe/swyx.jpg","sessions":[{"title":"Opening Address","day":"April 9","time":"9:00-9:10am","room":"Keynote","type":"keynote","speakers":["AIE Welcome"]}]},{"name":"Alessandro Cappelli","role":"Co-founder & Chief Customer Officer","company":"Adaptive ML","linkedin":"https://www.linkedin.com/in/alessandro-cappelli-aa8060172","photoUrl":"https://ai.engineer/speakers/europe/alessandro-cappelli.jpg","sessions":[{"title":"Scaling Reinforcement Learning: Lessons from Trillion-Token Deployments at Fortune 500s","description":"Building production-ready agents with Reinforcement Learning (RL) requires significantly more than a standard RFT (Rejection Fine-Tuning) API. 
To move beyond the \"Assistant\" hype and unlock true agentic impact, enterprises must adopt a holistic RLOps approach.\nIn this session, Alessandro Cappelli (Co-founder & Chief Customer Officer, Adaptive ML) explains what is needed to deploy Enterprise AI at scale. \n* Synthetic Data Generation Strategies: How to bootstrap specialized models using your own proprietary datasets to create high-fidelity training signals.\n* Self-Play & Mock Environments: Building the \"digital playgrounds\" where agents can fail safely and learn exponentially.\n* Aligned AI Judges: Moving past the \"human-in-the-loop\" bottleneck by deploying specialized models to maintain quality and alignment at scale.\nAttendees will learn how a unified RLOps pipeline bridges the gap towards business value, turning generalist models into specialized, high-performance assets for complex organizations.","day":"April 10","time":"11:40am-12:00pm","room":"Abbey","type":"talk","track":"GPUs & LLM Infrastructure","speakers":["Alessandro Cappelli"]}]},{"name":"Alex Cheema","role":"Co-Founder","company":"EXO Labs","companyDescription":"Distributed AI inference across consumer devices","linkedin":"https://www.linkedin.com/in/alex-cheema","github":"https://github.com/alexcheema","photoUrl":"https://ai.engineer/speakers/europe/alex-cheema.jpg","sessions":[{"title":"Frontier AI at Home (literally)","description":"We'll walk you through running frontier AI locally at home. It's not what you think.","day":"April 8","time":"1:15pm-3:15pm","room":"Moore","type":"workshop","speakers":["Alex Cheema"]},{"title":"Frontier AI at Home (literally)","description":"Hallway Track: Attendees propose lightning talks, peers vote, and the best talks get to go onstage as impromptu lightning talks. 
Check the AIE attendee Slack for signup and voting instructions.","day":"April 10","time":"2:40-2:50pm","room":"Moore","type":"lightning","track":"Hallway Track","speakers":["Alex Cheema"]}]},{"name":"Amy Boyd","role":"Microsoft Foundry Developer Relations Lead","company":"Microsoft","companyDescription":"Cloud and AI platform","twitter":"https://x.com/AmyKateNicho","linkedin":"https://www.linkedin.com/in/amykatenicho/","photoUrl":"https://ai.engineer/speakers/europe/amy-boyd.jpg","sessions":[{"title":"Mind the Gap (In your Agent Observability)","day":"April 8","time":"9:00-10:20am","room":"Moore","type":"workshop","speakers":["Amy Boyd","Nitya Narasimhan"]}]},{"name":"Anant Dole","role":"Head of AI","company":"Take Take Take","linkedin":"https://www.linkedin.com/in/anantdole/","photoUrl":"https://ai.engineer/speakers/europe/anant-dole.jpg","sessions":[{"title":"Building a Chess Coach","description":"Take Take Take is the chess app founded by World Champion Magnus Carlsen. Our goal is to turn play into progress by helping players better understand their games. In this talk, we will share how we built Game Review, a feature that explains what happened in your games and surfaces meaningful playing stats after every game. The challenge is that LLMs can communicate clearly, but they cannot reliably explain chess on their own. We built a system that combines chess engine analysis with tactical and positional detectors that extract chess concepts, while the LLM turns those signals into clear, useful coaching. We will cover the pipeline behind Game Review, including Stockfish evaluations, concept extraction, prompt design, and how we use chess-aware skills and evals to iterate on the system. 
We will share a short live demo and practical patterns for consumer AI teams building AI features.","day":"April 9","time":"12:20-12:40pm","room":"Westminster","type":"talk","track":"Harness Engineering","speakers":["Anant Dole","Asbjørn Steinskog"]}]},{"name":"Andreas Kollegger","role":"Director of Applied AI Research","company":"Neo4j","companyDescription":"Graph database platform","twitter":"https://x.com/akollegger","linkedin":"https://www.linkedin.com/in/akollegger","github":"https://github.com/akollegger","photoUrl":"https://ai.engineer/speakers/europe/andreas-kollegger.jpg","sessions":[{"title":"Context Graphs for Explainable, Decision-Aware AI Agents","description":"AI agents can follow prompts and use tools, but often lack the institutional context needed to explain why a decision is made. That reasoning (policies, precedents, and past outcomes) is usually scattered across systems and human memory.\nContext graphs capture this missing layer by modeling decision traces over time, including causality and context. By giving agents access to just enough historical and organizational knowledge, context graphs enable more explainable, consistent, and auditable decisions.","day":"April 9","time":"2:50-3:10pm","room":"St. 
James","type":"talk","track":"Context Engineering","speakers":["Andreas Kollegger","Zaid Zaim"]}]},{"name":"Andrew Wilson","role":"Applied AI","company":"Anthropic","companyDescription":"AI safety company, makers of Claude","linkedin":"https://www.linkedin.com/in/anddwilson/","photoUrl":"https://ai.engineer/speakers/europe/andrew-wilson.jpg","sessions":[{"title":"How to Build Agents That Run for Hours (Without Losing the Plot)","description":"We'll cover: why self-evaluation is a trap and adversarial evaluator agents work better; why context compaction doesn't cure coherence drift but structured handoffs do; how to decompose work into testable sprint contracts; how to grade subjective output with rubrics an LLM can actually apply; and how to read traces as your primary debugging loop. Plus the question nobody asks: which parts of your harness should you delete when the next model drops?","day":"April 8","time":"9:00-10:20am","room":"St. James","type":"workshop","speakers":["Ash Prabaker","Andrew Wilson"]}]},{"name":"Angelos Perivolaropoulos","role":"Research Engineer, Speech-to-Text","company":"ElevenLabs","companyDescription":"AI voice technology platform","linkedin":"https://www.linkedin.com/in/angelos-perivolaropoulos/","github":"https://github.com/angelos-p","photoUrl":"https://ai.engineer/speakers/europe/angelos-perivolaropoulos.jpg","sessions":[{"title":"Training an LLM from Scratch, Locally","day":"April 8","time":"3:45pm-5:45pm","room":"Abbey","type":"workshop","speakers":["Angelos Perivolaropoulos"]}]},{"name":"Angus J. McLean","role":"AI Director","company":"Oliver","linkedin":"https://uk.linkedin.com/in/angusjmclean","photoUrl":"https://ai.engineer/speakers/europe/angus-j-mclean.jpg","sessions":[{"title":"Bounded Autonomy: Between Free Will and Determinism","description":"In the world of AI engineering, it always feels like the ground is moving underneath your feet. 
Every few weeks, a new concept, tool, or framework is released, forcing you to rethink your entire stack. Everything around us has changed, and yet, to most of us, it all feels strangely familiar. Increasingly we are seeing a move away from higher level orchestration, and a return to underlying principles, not just of LLMs, but of the operating system itself. Because, in many ways, there are only two things we can really change: the prompt and the context. As systems become more intelligent, the way we design them starts to shift. As human and machine converge, philosophy starts to matter more than control or implementation.","day":"April 10","time":"12:20-12:40pm","room":"Moore","type":"talk","track":"Generative Media","speakers":["Angus J. McLean"]}]},{"name":"Ara Khan","role":"Founding Engineer","company":"Cline","twitter":"https://x.com/arafatkatze","linkedin":"https://www.linkedin.com/in/arafatkatze/","github":"https://github.com/arafatkatze","photoUrl":"https://ai.engineer/speakers/europe/ara-khan.jpg","sessions":[{"title":"Evals Are Broken, Use Them Anyway","description":"This talk explains why I went from \"evals are useless\" to using them as a core part of my agent improvement loop. I share practical heuristics for interpreting, running, and creating evals, and why doing them anyway is better than pure \"vibes\".","day":"April 9","time":"2:30-2:50pm","room":"Moore","type":"talk","track":"Evals & Observability","speakers":["Ara Khan"]},{"title":"Don't Build Slop (4 Levels of AI Agent Maturity)","description":"Everyone and their grandmother is building an agent, most of them badly. This talk breaks the problem into four levels, from framework prototyping to cloud-native agent fleets that scale to millions of tasks. 
Each level is harder and one step closer to something that actually works in production.","day":"April 10","time":"3:10-3:30pm","room":"Abbey","type":"talk","track":"GPUs & LLM Infrastructure","speakers":["Ara Khan"]}]},{"name":"Armin Ronacher","role":"Founder","company":"Earendil","companyDescription":"Application monitoring platform; creator of Flask","twitter":"https://x.com/mitsuhiko","linkedin":"https://www.linkedin.com/in/arminronacher/","github":"https://github.com/mitsuhiko","photoUrl":"https://ai.engineer/speakers/europe/armin-ronacher.jpg","sessions":[{"title":"The Friction Is Your Judgment","day":"April 10","time":"10:10-10:30am","room":"Keynote","type":"keynote","speakers":["Armin Ronacher","Cristina Poncela Cubeiro"]}]},{"name":"Asbjørn Steinskog","role":"Lead AI Developer","company":"Take Take Take","linkedin":"https://www.linkedin.com/in/asbj%C3%B8rn-ottesen-steinskog-a8000241/","photoUrl":"https://ai.engineer/speakers/europe/asbjorn-steinskog.jpg","sessions":[{"title":"Building a Chess Coach","description":"Take Take Take is the chess app founded by World Champion Magnus Carlsen. Our goal is to turn play into progress by helping players better understand their games. In this talk, we will share how we built Game Review, a feature that explains what happened in your games and surfaces meaningful playing stats after every game. The challenge is that LLMs can communicate clearly, but they cannot reliably explain chess on their own. We built a system that combines chess engine analysis with tactical and positional detectors that extract chess concepts, while the LLM turns those signals into clear, useful coaching. We will cover the pipeline behind Game Review, including Stockfish evaluations, concept extraction, prompt design, and how we use chess-aware skills and evals to iterate on the system. 
We will share a short live demo and practical patterns for consumer AI teams building AI features.","day":"April 9","time":"12:20-12:40pm","room":"Westminster","type":"talk","track":"Harness Engineering","speakers":["Anant Dole","Asbjørn Steinskog"]}]},{"name":"Ash Prabaker","role":"Member of Technical Staff","company":"Anthropic","companyDescription":"AI safety company, makers of Claude","twitter":"https://x.com/AshPrabaker","linkedin":"https://www.linkedin.com/in/ash-prabaker/","github":"https://github.com/ashprabaker","photoUrl":"https://ai.engineer/speakers/europe/ash-prabaker.jpg","sessions":[{"title":"How to Build Agents That Run for Hours (Without Losing the Plot)","description":"We'll cover: why self-evaluation is a trap and adversarial evaluator agents work better; why context compaction doesn't cure coherence drift but structured handoffs do; how to decompose work into testable sprint contracts; how to grade subjective output with rubrics an LLM can actually apply; and how to read traces as your primary debugging loop. Plus the question nobody asks: which parts of your harness should you delete when the next model drops?","day":"April 8","time":"9:00-10:20am","room":"St. James","type":"workshop","speakers":["Ash Prabaker","Andrew Wilson"]}]},{"name":"Ben Burtenshaw","role":"ML Engineer","company":"Hugging Face","twitter":"https://x.com/ben_burtenshaw","linkedin":"https://www.linkedin.com/in/ben-burtenshaw/","github":"https://github.com/burtenshaw","photoUrl":"https://ai.engineer/speakers/europe/ben-burtenshaw.jpg","sessions":[{"title":"Your Coding Agent Should Do AI System Engineering","description":"We gave Claude Code, Codex, and Gemini CLI the ability to build CUDA kernels, fine-tune models, and run full ML experiments. The results were real speedups and performance boosts. An agent-written RMSNorm kernel hit 1.88x speedups on H100s. 
Fine-tuned Qwen3-0.6B hit 35% on LiveCodeBench.\n\nHugging Face teams are now running 1,000+ ML experiments daily without writing training scripts.\n\nThis talk is a live walkthrough of where Hugging Face has integrated agent skills into the ML stack, so that agentic coders can go deeper than ever. \n\nI'll demo the full loop: giving an agent a task it can't do, watching it fail, loading a skill, and watching it produce a kernel that beats PyTorch's native implementation. I'll share benchmarks across six models on kernel writing, show where agents still fail badly, and give you a concrete playbook for using skills to tackle the hardest systems problems in AI engineering today.\n\nI'll briefly highlight the practical tools that we're using to build skills and release agents on the Hub.\n\nBut the real takeaway isn't the tools. It's what this means for your career as an engineer, and/or the engineering team you lead. CUDA programming, ML training pipelines, RL alignment: these were deep specializations that took years to develop. Agent skills compress that timeline from years to hours. \n\nLeave with: the open-source skill files, the upskill CLI, and a framework for knowing when to trust (and when to verify) your agent on hard systems tasks.","day":"April 10","time":"2:30-2:50pm","room":"Fleming","type":"talk","track":"Coding Agents","speakers":["Ben Burtenshaw"]}]},{"name":"Ben Hylak","role":"CTO + Co-Founder","company":"Raindrop.ai","companyDescription":"AI monitoring platform for agent observability","twitter":"https://x.com/benhylak","linkedin":"https://www.linkedin.com/in/benhylak/","photoUrl":"https://ai.engineer/speakers/europe/ben-hylak.jpg","sessions":[{"title":"Everything You Need To Know About Agent Observability","description":"Let your AI agents proactively report their own failures. This workshop introduces Raindrop Self Diagnostics, showing how agents can call home, classify failures, filter noise, and alert teams when critical issues arise. 
Attendees will learn how to instrument self-reporting, understand the default diagnostic categories, and use agent-reported signals to improve production observability.","day":"April 8","time":"3:45pm-5:45pm","room":"Wordsworth","type":"workshop","speakers":["Danny Gollapalli","Ben Hylak"]}]},{"name":"Bertrand Charpentier","role":"Co-Founder, President & Chief Scientist","company":"Pruna AI","linkedin":"https://www.linkedin.com/in/bertrand-charpentier-76995ab6/","github":"https://github.com/sharpenb","photoUrl":"https://ai.engineer/speakers/europe/bertrand-charpentier.jpg","sessions":[{"title":"What is state-of-the-art AI model?","day":"April 10","time":"12:00-12:20pm","room":"Moore","type":"talk","track":"Generative Media","speakers":["Bertrand Charpentier"]}]},{"name":"Bilge Yücel","role":"Senior DevRel Engineer","company":"deepset GmbH","companyDescription":"NLP and AI platform","twitter":"https://x.com/bilgeycl","linkedin":"https://www.linkedin.com/in/bilge-yucel/","photoUrl":"https://ai.engineer/speakers/europe/bilge-yucel.jpg","sessions":[{"title":"What Breaks When You Build AI Under Sovereignty Constraints","description":"Regulatory and jurisdictional constraints are no longer an edge case in AI system design; they now shape architectural decisions as much as model quality does. From European efforts like “Eurostack” to sovereign cloud offerings by hyperscalers, sovereignty is becoming a practical engineering constraint, pushing teams to design systems that operate within defined boundaries.\nWhat changes when your AI system can’t send data outside a region, rely on external APIs, or depend on infrastructure you don’t control? More importantly, what breaks?\nThis talk explores sovereign AI as a system design problem, focusing on the hidden assumptions in modern AI architectures that fail under real-world constraints. 
Many production systems rely on external dependencies, from embedding APIs to evaluation tools, that make them difficult to audit, reproduce, or control.\nWe’ll examine what breaks in these architectures and how sovereignty requirements reshape core design decisions: where models run, how data flows, and how systems remain observable, auditable, and replaceable.\nTo make this concrete, we’ll walk through a reference architecture using an open, modular orchestration approach (with Haystack as an example), and show how to:\n* design pipelines that run across cloud, on-prem, and hybrid environments\n* swap models without redesigning the system\n* keep sensitive data local while integrating external capabilities when allowed\n* maintain full visibility into data flow and system behavior\nThe focus is on building systems that remain flexible under constraints with replaceable components, explicit data flows, and control staying within your boundary.","day":"April 9","time":"12:40-1:00pm","room":"St. James","type":"talk","track":"Context Engineering","speakers":["Bilge Yücel"]}]},{"name":"Brandon Walsenuk","company":"Unblocked","sessions":[{"title":"Stop babysitting your agents: building a context engine for mergeable code","description":"AI coding agents are fast, capable, and context-blind. They generate code that compiles but misses your team's conventions, ignores past decisions, and breaks patterns — because they don't understand how your system actually works. The fix most teams try (more MCP servers, more rules files, team skills, bigger context windows) gives agents access to information without giving them understanding.\n\nThis talk shows what changes when you close that gap with a context engine. I'll reference a real coding task run without and with organizational context: the first took 2.5 hours and 20.9M tokens with multiple rounds of human correction, the other took 25 minutes and 10.8M tokens and produced mergeable code on the first pass. Same agent, same model. 
Includes a short demo showing the moment organizational context changes the agent's approach.\n\nYou'll see the three problems a context engine solves — quality, efficiency, and autonomy — and the hard lessons we learned getting it wrong before we got it right.","day":"April 9","time":"10:30-10:48am","room":"Wordsworth","type":"expo_session","track":"Expo Sessions (Wordsworth)","speakers":["Brandon Walsenuk"]},{"title":"Stop babysitting your agents: building a context engine for mergeable code","description":"AI coding agents are fast, capable, and context-blind. They generate code that compiles but misses your team's conventions, ignores past decisions, and breaks patterns — because they don't understand how your system actually works. The fix most teams try (more MCP servers, more rules files, team skills, bigger context windows) gives agents access to information without giving them understanding.\nThis talk shows what changes when you close that gap with a context engine. I'll reference a real coding task run without and with organizational context: the first took 2.5 hours and 20.9M tokens with multiple rounds of human correction, the other took 25 minutes and 10.8M tokens and produced mergeable code on the first pass. Same agent, same model. 
Includes a short demo showing the moment organizational context changes the agent's approach.\nYou'll see the three problems a context engine solves — quality, efficiency, and autonomy — and the hard lessons we learned getting it wrong before we got it right.","day":"April 10","time":"1:25-1:43pm","room":"Wesley","type":"expo_session","track":"Expo Sessions (Wesley)","speakers":["Brandon Walsenuk"]}]},{"name":"Brendon Dillon","role":"Research Scientist","company":"Google DeepMind","companyDescription":"AI research lab by Google","sessions":[{"title":"Text Diffusion","day":"April 9","time":"3:10-3:30pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Brendon Dillon"]}]},{"name":"Brian Scanlan","role":"Principal Systems Engineer","company":"Intercom","companyDescription":"Customer messaging platform","twitter":"https://x.com/brian_scanlan","linkedin":"https://www.linkedin.com/in/scanlanb/","photoUrl":"https://ai.engineer/speakers/europe/brian-scanlan.jpg","sessions":[{"title":"How Building with AI Can Double the Throughput of Your Engineering Team","description":"In 2025, Intercom took on an ambitious goal to double the throughput of their engineering team by going beyond building fancy demos, and instead taking advantage of AI tooling to get real features in their large existing SaaS codebase into the hands of paying customers. While this transformation is still a work in progress, Intercom's pace of innovation has always been a competitive strength. 
In this talk, Brian will share lessons learned from building with AI agents – what has and hasn't worked when scaling towards 2x productivity.","day":"April 10","time":"2:50-3:10pm","room":"Westminster","type":"talk","track":"AI Architects","speakers":["Brian Scanlan"]}]},{"name":"Cassidy Hardin","role":"Developer Relations Engineer","company":"Google DeepMind","companyDescription":"AI research lab by Google","sessions":[{"title":"Open Models at Google DeepMind","day":"April 10","time":"11:15-11:40am","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Cassidy Hardin"]},{"title":"Q&A (continuation of Open Models at Google DeepMind session)","day":"April 10","time":"11:40am-12:00pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Cassidy Hardin"]}]},{"name":"Chintan Parikh","role":"Senior Product Manager","company":"Google DeepMind","companyDescription":"AI research lab by Google","linkedin":"https://www.linkedin.com/in/chintansparikh","github":"https://github.com/chintanparikh","photoUrl":"https://ai.engineer/speakers/europe/chintan-parikh.jpg","sessions":[{"title":"Accelerating AI on Edge","day":"April 10","time":"2:30-2:50pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Chintan Parikh","Weiyi Wang"]}]},{"name":"Chris Lovejoy","role":"Founder","company":"Notius Labs","twitter":"https://x.com/ChrisLovejoy_","linkedin":"https://www.linkedin.com/in/dr-christopher-lovejoy/","github":"https://github.com/chris-lovejoy","photoUrl":"https://ai.engineer/speakers/europe/chris-lovejoy.jpg","sessions":[{"title":"The Domain-Native AI Organization: How to Leverage Domain Expertise","description":"Vertical AI is a multi-trillion-dollar opportunity. But you can't win by grabbing the latest LLMs off-the-shelf: you need to embed domain expertise into your organisation, and use it to build a domain-native application. 
Most teams get this wrong: they either don't hire the right domain experts or don't leverage them correctly to build a differentiated product.\n\nIn this talk, I'll share a framework for building a domain-native AI organisation, drawing on case studies from healthcare, productivity, and legal. I'll share:\n* three models for embedding domain expertise (the Oracle, the Evaluator, and the Architect) - and how to choose which fits your stage and use case\n* who to hire and how to evolve their role over time\n* the most common org-building failure modes (and how to avoid them)","day":"April 10","time":"12:20-12:40pm","room":"Westminster","type":"talk","track":"AI Architects","speakers":["Chris Lovejoy"]}]},{"name":"Chris Parsons","role":"AI Consultant & CTO","company":"Cherrypick","twitter":"https://x.com/chrismdp","linkedin":"https://www.linkedin.com/in/chrisparsons/","github":"https://github.com/chrismdp","photoUrl":"https://ai.engineer/speakers/europe/chris-parsons.jpg","sessions":[{"title":"Ralph Loops: Build Dumb AI Loops That Ship","description":"Dumb loops beat clever workflows. Most teams building with AI agents reach for multi-agent orchestration, planning graphs, and elaborate tool chains. Then they spend months debugging them. A single loop that processes one ticket at a time, evaluates its own output, and improves on the next run will outperform all of it.\n\nIn this hands-on workshop you will build three things. First, a working Ralph Loop that processes real tickets end-to-end. Second, a synthetic feedback loop so you can test and iterate locally without waiting on production data. Third, a self-improving cycle where the loop's output quality gets better with every run without you touching the prompt.\n\nWe will also dig into what this means for teams: when one loop can clear a backlog that used to need five people, the interesting questions stop being technical.\n\nBring your laptop. 
You will leave with working loops you can take back to your team.","day":"April 8","time":"1:15pm-3:15pm","room":"Abbey","type":"workshop","speakers":["Chris Parsons"]}]},{"name":"Connor Adams","role":"AI & Software Engineer","company":"Independent","twitter":"https://x.com/ConnorAds","linkedin":"https://www.linkedin.com/in/connoradams","github":"https://github.com/connorads","sessions":[{"title":"remobi.app: Don't change your terminal workflow for mobile. Swipe between agents, unblock when stuck.","description":"Hallway Track: Attendees propose lightning talks, peers vote, and the best talks get to go onstage as impromptu lightning talks. Check the AIE attendee Slack for signup and voting instructions.","day":"April 10","time":"3:30-3:40pm","room":"Moore","type":"lightning","track":"Hallway Track","speakers":["Connor Adams"]}]},{"name":"Cormac Brick","role":"Principal Engineer, On Device Machine Learning","company":"Google","companyDescription":"Google AI research lab","linkedin":"https://www.linkedin.com/in/cbrick/","photoUrl":"https://ai.engineer/speakers/europe/cormac-brick.jpg","sessions":[{"title":"TLMs: Tiny LLMs and Agents on Edge Devices with LiteRT-LM","day":"April 8","time":"10:40am-12:00pm","room":"Moore","type":"workshop","speakers":["Cormac Brick"]},{"title":"TLMs: Tiny LLMs and Agents on Edge Devices with LiteRT-LM","day":"April 9","time":"12:40-1:00pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Cormac Brick"]}]},{"name":"Cristina Poncela Cubeiro","role":"Software Engineer","company":"Earendil","companyDescription":"Public Benefit Corporation building Lefos, an AI email assistant","linkedin":"https://www.linkedin.com/in/cristinaponcela/","photoUrl":"https://ai.engineer/speakers/europe/cristina-poncela-cubeiro.jpg","sessions":[{"title":"The Friction Is Your Judgment","day":"April 10","time":"10:10-10:30am","room":"Keynote","type":"keynote","speakers":["Armin Ronacher","Cristina Poncela Cubeiro"]}]},{"name":"Daniel 
Szoke","role":"Software Engineer","company":"Sentry","sessions":[{"title":"Why Rust is the Ideal Language for Vibe-Coding","description":"What language would you use to vibe-code a new app? If you ask ChatGPT this very question, it will probably enthusiastically suggest TypeScript and possibly Python; if Rust is mentioned, there might be a note saying to use it \"if you enjoy suffering beautifully\" (yes, I actually got this response).\n\nIn fact, Rust is the ideal programming language for vibe-coding. The language itself deterministically enforces rules to ensure correctness (e.g. memory safety and safe concurrency) and best practice. When agents code in other languages, often all that prevents them from shipping broken code is some easily-forgotten agent instructions; possibly another independent, but fallible, review agent; and in the worst-case scenario, nothing but hope. When violating a rule in Rust, agents reliably receive error messages that explain exactly what is wrong and point them in the right direction. Rust's tooling-enforced invariants provide a natural agentic feedback loop, lowering reliance on model skill and instead introducing guardrails for your code.","day":"April 9","time":"3:45-4:03pm","room":"Wordsworth","type":"expo_session","track":"Expo Sessions (Wordsworth)","speakers":["Daniel Szoke"]}]},{"name":"Danielle An","role":"Principal Engineer / GenAI architect","company":"Meta","companyDescription":"Social technology company","linkedin":"https://www.linkedin.com/in/danielle-an-07063217/","photoUrl":"https://ai.engineer/speakers/europe/danielle-an.jpg","sessions":[{"title":"Think You Can Build a Game with AI? Think Again! The New Games Are Just Being Invented!","description":"With the recent development of AI, either you or your friend probably vibe coded a game using Gemini, on Three.js. But that is old news now. If everyone can do that, what is next? 
The next massive hit, the one that millions of people across the world will play, is just about to be born. Wanna know more? Come see this talk!","day":"April 10","time":"12:40-1:00pm","room":"Moore","type":"talk","track":"Generative Media","speakers":["Danielle An","David Hoe"]}]},{"name":"Danilo Campos","role":"Robopsychologist","company":"PostHog","twitter":"https://x.com/daniloc","linkedin":"https://www.linkedin.com/in/danilocampos/","photoUrl":"https://ai.engineer/speakers/europe/danilo-campos.jpg","sessions":[{"title":"LLM codegen fails and how to stop 'em","description":"Let's talk about the most common pitfalls of generating code with autonomous LLM agents, and the strategies we use to erase them altogether. This is battle-tested wisdom from a system operating at scale, earning constant accolades. But you can absolutely match our performance, as long as you've got the playbook.\n\nHere's the story: The PostHog wizard helps over 5,000 users monthly to integrate our tools into their project code, giving them the best possible start on their journey into product engineering. From a brand perspective, this is a high-risk, high-reward exercise: if our LLM code generation is bad or stupid, *we* look bad AND stupid. But if it works, we look like miracle workers who protect our customers' time.\n\nGood news: it works really well. Why? Because we've isolated multiple categories of failure modes for LLM tools, and designed strategies over hundreds of carefully-observed iterations to mitigate them. 
I'll walk you through the failure modes, and the path to maximal correctness.","day":"April 9","time":"11:40am-12:00pm","room":"Moore","type":"talk","track":"Evals & Observability","speakers":["Danilo Campos"]}]},{"name":"Danny Gollapalli","role":"Backend Engineer","company":"Raindrop.ai","companyDescription":"AI monitoring platform for agent observability","linkedin":"https://linkedin.com/in/joseph-daniel-gollapalli-a371a4138","photoUrl":"https://ai.engineer/speakers/europe/danny-gollapalli.jpg","sessions":[{"title":"Everything You Need To Know About Agent Observability","description":"Let your AI agents proactively report their own failures. This workshop introduces Raindrop Self Diagnostics, showing how agents can call home, classify failures, filter noise, and alert teams when critical issues arise. Attendees will learn how to instrument self-reporting, understand the default diagnostic categories, and use agent-reported signals to improve production observability.","day":"April 8","time":"3:45pm-5:45pm","room":"Wordsworth","type":"workshop","speakers":["Danny Gollapalli","Ben Hylak"]}]},{"name":"David Gomes","role":"Engineer","company":"Cursor","companyDescription":"AI-powered code editor","twitter":"https://x.com/davidgomes","linkedin":"https://www.linkedin.com/in/davidrfgomes/","github":"https://github.com/dmgomes","photoUrl":"https://ai.engineer/speakers/europe/david-gomes.jpg","sessions":[{"title":"Replacing 12K LoC with a 200 LoC Skill","description":"TODO","day":"April 10","time":"11:15-11:40am","room":"Fleming","type":"track_keynote","track":"Coding Agents","speakers":["David Gomes"]}]},{"name":"David Hoe","role":"GenAI Product Design & Prototyping Lead","company":"Meta","companyDescription":"Social technology company","sessions":[{"title":"Think You Can Build a Game with AI? Think Again! 
The New Games Are Just Being Invented!","description":"With the recent development of AI, either you or your friend probably vibe coded a game using Gemini and Three.js. But that is old news now. If everyone can do that, what is next? The next massive hit, the one that millions of people across the world will play, is just about to be born. Wanna know more? Come see this talk!","day":"April 10","time":"12:40-1:00pm","room":"Moore","type":"talk","track":"Generative Media","speakers":["Danielle An","David Hoe"]}]},{"name":"David Soria Parra","role":"Creator of MCP","company":"Anthropic","companyDescription":"AI safety company; creator of MCP","twitter":"https://x.com/dsp_","linkedin":"https://www.linkedin.com/in/david-soria-parra-4a78b3a/","github":"https://github.com/dsp","photoUrl":"https://ai.engineer/speakers/europe/david-soria-parra.jpg","sessions":[{"title":"The Future of MCP","description":"In this keynote, I will lay out what I believe will be true for agents in 2026 and how MCP plays a part in this. Let's take a look at what connectivity for agents might look like.","day":"April 10","time":"9:20-9:40am","room":"Keynote","type":"keynote","speakers":["David Soria Parra"]}]},{"name":"Eoin Mulgrew","role":"Head of Digital Transformation","company":"10 Downing Street","linkedin":"https://www.linkedin.com/in/eoinmulgrew/","photoUrl":"https://ai.engineer/speakers/europe/eoin-mulgrew.jpg","sessions":[{"title":"Rewiring the State","description":"In No10 we're building what should be one of the most elite technical teams of any central government. \n\nWe are taking engineers and devs from AI labs, big tech, YC founders, etc., 
and deploying them to rewire the UK state.\n\nThis session will include demos and act as a call to arms for people to join us.","day":"April 10","time":"11:40am-12:00pm","room":"Westminster","type":"talk","track":"AI Architects","speakers":["Eoin Mulgrew"]}]},{"name":"Eric Allam","role":"CTO","company":"Trigger.dev","twitter":"https://x.com/maverickdotdev","linkedin":"https://www.linkedin.com/in/eric-allam/","github":"https://github.com/ericallam","photoUrl":"https://ai.engineer/speakers/europe/eric-allam.jpg","sessions":[{"title":"Two Roads to Durable Agents: Replay vs. Snapshot","description":"AI agents that run for hours, wait on humans, and call dozens of tools need durability. Today the default answer is application-level replay: record every step, replay the log on recovery, require your code to be deterministic. It works, but it constrains how you write agents and can't capture everything a running process actually holds in memory.\nThere's a second approach that almost nobody in the AI engineering world is using yet: OS-level snapshot/restore. Freeze the entire process. Free all resources. Restore it exactly where it left off, with no replay log, no determinism requirements, and no step boundaries in your code.\nIn this talk I'll compare both approaches honestly, showing where each one wins and where each one breaks, then demo a live agent on Trigger.dev that checkpoints mid-execution, suspends at zero cost while waiting on a human, and resumes from snapshot. 
You'll leave with a clear framework for choosing the right durability model for your agents.","day":"April 10","time":"10:50-11:08am","room":"Wesley","type":"expo_session","track":"Expo Sessions (Wesley)","speakers":["Eric Allam"]}]},{"name":"Eric Zakariasson","role":"Engineer","company":"Cursor","companyDescription":"AI-powered code editor","twitter":"https://x.com/ericzakariasson","linkedin":"https://www.linkedin.com/in/ericzakariasson/","github":"https://github.com/ericzakariasson","photoUrl":"https://ai.engineer/speakers/europe/eric-zakariasson.jpg","sessions":[{"title":"Building your own software factory","description":"Most of us are pair-programming with one agent and stopping there. There's a lot more on the table. This workshop is about going from one agent to many. We'll start with codebase setup, the foundational work that makes agents effective on their own. Then we'll scale up to running agents in parallel, kicking off async work that keeps going while you context-switch to something else, and setting up automations for the things you're still doing by hand.","day":"April 8","time":"10:40am-12:00pm","room":"St. James","type":"workshop","speakers":["Eric Zakariasson"]}]},{"name":"Filip Makraduli","role":"Founding ML Engineer and Dev Rel","company":"Superlinked","linkedin":"https://www.linkedin.com/in/filipmakraduli/","github":"https://github.com/fm1320","photoUrl":"https://ai.engineer/speakers/europe/filip-makraduli.jpg","sessions":[{"title":"The Embedding Infrastructure Nobody Built (So We Did)","description":"I wrote an article about making embedding inference fast. Flash Attention, memory hierarchy, all the usual advice. Then I posted it, and the internet told me I was wrong: attention is 6% of the workload, my optimization advice was backwards, and I'd never actually profiled the thing I was writing about.\n\nHere's the thing: we were building an embedding inference engine when this happened. 
And the corrections didn't just fix my article — they exposed gaps in everything we'd seen in the ecosystem. TEI gives you one model per container, so five models means five deployments. Infinity crashes under load. Everyone assumes you know exactly which model you want before you start the server. Nobody handles what happens when you're running three models and memory fills up.\n\nSo we built what we actually needed: an inference engine where you load models at query time, not deploy time. Where you can hot-swap between dozens of models on one GPU without restarting anything. Where memory pressure triggers LRU eviction instead of an OOM crash. Where trying a new embedding model doesn't mean a rebuild. This talk is the story of how internet feedback reshaped both an article and a product. I'll show you the profiler traces that proved the commenters right, the infrastructure gaps we found when we looked harder, and how we filled them. Small-model inference is the infrastructure nobody built. 
So we did.","day":"April 10","time":"12:00-12:20pm","room":"Abbey","type":"talk","track":"GPUs & LLM Infrastructure","speakers":["Filip Makraduli"]}]},{"name":"Florina Muntenescu","role":"Developer Relations Manager","company":"Google DeepMind","companyDescription":"AI research lab by Google","twitter":"https://x.com/FMuntenescu","linkedin":"https://www.linkedin.com/in/florina-muntenescu-314b8921","github":"https://github.com/florina-muntenescu","photoUrl":"https://ai.engineer/speakers/europe/florina-muntenescu.jpg","sessions":[{"title":"AI on Android: Ask me Anything","day":"April 9","time":"12:20pm-12:40pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Florina Muntenescu","Oli Gaymond"]}]},{"name":"Frédéric Barthelet","role":"CTO","company":"Alpic","twitter":"https://x.com/bartheletf","linkedin":"https://www.linkedin.com/in/frederic-barthelet/","github":"https://github.com/fredericbarthelet","photoUrl":"https://ai.engineer/speakers/europe/frederic-barthelet.jpg","sessions":[{"title":"Why MCP and ChatGPT Apps Use Double Iframes (And What That Means for Your App)","description":"Always wanted to understand why both ChatGPT Apps and MCP Apps require a double iframe architecture? This talk is for you.\n\nWe'll dive into the whys of this sandboxing architecture.\n\nWe'll review the existing security policy options and sandbox attributes you can tune today to unlock features like nested iframes or external API calls.\n\nWe'll then dive into what the additional allow policy attribute enables (camera, microphone, geolocation, clipboard, payment) that isn't yet exposed in the spec. We'll focus on use cases they'd unlock (voice apps, location-aware widgets, checkout flows).\n\nWe'll also cover what could go wrong: why allow-scripts + allow-same-origin is the classic sandbox escape, and what happens if hosts open permissions too liberally.","day":"April 10","time":"12:20-12:40pm","room":"St. 
James","type":"talk","track":"MCP","speakers":["Frédéric Barthelet"]}]},{"name":"Fryderyk Wiatrowski","role":"CEO","company":"Viktor","companyDescription":"AI coworker platform with persistent Slack-native agents","twitter":"https://x.com/fawiatrowski","photoUrl":"https://ai.engineer/speakers/europe/fryderyk-wiatrowski.jpg","sessions":[{"title":"Viktor — AI Coworker That Lives in Slack","day":"April 9","time":"3:10-3:30pm","room":"Fleming","type":"talk","track":"Claws & Personal Agents","speakers":["Fryderyk Wiatrowski"]}]},{"name":"Garrett Galow","role":"Product","company":"WorkOS","companyDescription":"Enterprise identity and access management","linkedin":"https://www.linkedin.com/in/garrett-galow/","photoUrl":"https://ai.engineer/speakers/europe/garrett-galow.jpg","sessions":[{"title":"One Login to Rule Them All: Cross-App Access for MCP","description":"Imagine connecting your coding agent to a dozen services you use every day. That's a dozen OAuth consent screens, a dozen token lifecycles, a dozen chances for something to break. If we already have Single Sign-On, why are users signing in so many times?\n\n\nCross-App Access solves this by leveraging the three-way trust between the MCP client, the MCP server, and the organization's Identity Provider. The IdP brokers a token exchange from the user's initial login. One sign-in, access to all of their applications.\n\n\nI'll demo the Identity Assertion Authorization Grant flow end to end, showing how a single SSO login turns into access tokens across every MCP server. 
I'll also cover what this pattern unlocks for agent identity beyond MCP.","day":"April 10","time":"11:15-11:40am","room":"Abbey","type":"track_keynote","track":"GPUs & LLM Infra","speakers":["Garrett Galow"]}]},{"name":"Gergely Orosz","role":"Moderator","company":"The Pragmatic Engineer","companyDescription":"#1 technology newsletter on Substack","twitter":"https://x.com/GergelyOrosz","linkedin":"https://www.linkedin.com/in/gergelyorosz/","photoUrl":"https://ai.engineer/speakers/europe/gergely-orosz.jpg","sessions":[{"title":"Fireside Chat with Gergely Orosz and Linear's Tuomas Artman","day":"April 10","time":"4:30-5:00pm","room":"Keynote","type":"keynote","speakers":["Gergely Orosz","Tuomas Artman"]},{"title":"Software Engineering + AI = ?","day":"April 9","time":"4:30-5:00pm","room":"Keynote","type":"keynote","speakers":["Gergely Orosz","swyx"]}]},{"name":"Giran Moodley","role":"Member of Technical Staff, Field Engineering","company":"Braintrust","photoUrl":"https://ai.engineer/speakers/europe/giran-moodley.jpg","sessions":[{"title":"Shipping complex AI applications | Braintrust & Trainline","description":"Getting a prototype working is straightforward. Making it reliable in production, especially with multi-step agents, tool use, and real users is the hard part. 
In this hands-on workshop, you'll work through the core parts of building production-grade AI applications with Braintrust.","day":"April 8","time":"1:15pm-3:15pm","room":"Westminster","type":"workshop","speakers":["Giran Moodley","Mayan Soni","Oussama Hafferssas"]}]},{"name":"Google DeepMind team","company":"Google DeepMind","companyDescription":"AI research lab by Google","sessions":[{"title":"Open Office Hours with DeepMind","day":"April 10","time":"2:50-3:30pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Google DeepMind team"]}]},{"name":"Guillaume Vernade","role":"Developer Relations Engineer","company":"Google DeepMind","companyDescription":"AI research lab by Google","twitter":"https://x.com/Giom_V","linkedin":"https://www.linkedin.com/in/guillaumevernade","github":"https://github.com/Giom-V","photoUrl":"https://ai.engineer/speakers/europe/guillaume-vernade.jpg","sessions":[{"title":"Let's go Bananas with GenMedia","day":"April 8","time":"9:00-10:20am","room":"Rutherford","type":"workshop","speakers":["Guillaume Vernade"]}]},{"name":"Gus Martins","role":"Product Manager","company":"Google DeepMind","companyDescription":"AI research lab by Google","twitter":"https://x.com/gusthema","linkedin":"https://www.linkedin.com/in/gus-martins-64ab5891","github":"https://github.com/gusthema","photoUrl":"https://ai.engineer/speakers/europe/gus-martins.jpg","sessions":[{"title":"Sovereign Escape Velocity: AI Ownership with Open Models","day":"April 9","time":"2:30-2:50pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Ian Ballantyne","Gus Martins"]}]},{"name":"Hervé Bredin","role":"Chief Science Officer","company":"pyannoteAI","twitter":"https://x.com/hbredin","linkedin":"https://www.linkedin.com/in/herve-bredin/","github":"https://github.com/hbredin","photoUrl":"https://ai.engineer/speakers/europe/herve-bredin.jpg","sessions":[{"title":"Beyond Transcription: Building Voice AI That Actually Understands 
Conversations","description":"Most Voice AI pipelines stop at transcription. That's the wrong place to stop. Without knowing who's speaking, when they spoke, and how turns relate to each other, even the best STT model produces output that downstream LLMs can't reliably use.\nThis talk covers what's missing in most Voice AI stacks: speaker diarization, conversation structure, and context-aware processing. Hervé will show why raw transcripts fail in real-world scenarios (multi-speaker calls, interruptions, overlapping speech), and what it takes to build a pipeline that performs at production quality.","day":"April 9","time":"11:15-11:40am","room":"Abbey","type":"track_keynote","track":"Voice & Vision","speakers":["Hervé Bredin"]}]},{"name":"Hugo Santos","role":"CEO","company":"Namespace","companyDescription":"Agent-ready CI/CD infrastructure","linkedin":"https://www.linkedin.com/in/hugomgsantos/","photoUrl":"https://ai.engineer/speakers/europe/hugo-santos.jpg","sessions":[{"title":"CI/CD Is Dead, Agents Need Continuous Compute and Computers","description":"What happens when thousands of agents try to edit source code at once? Merge chaos, slow builds, and stacks of pull requests for engineers to review. While agentic software development has never been so promising, traditional CI/CD solutions threaten to constrain the agentic software transformation. In this session, we’ll unpack what happens when autonomous coding agents continuously open PRs, modify infrastructure, and trigger workflows across hundreds of repos: traditional CI/CD systems, tuned for infrequent human changes, become the latency and cost bottleneck in the SDLC. 
We’ll discuss concrete failure modes, including runner saturation, cache thrash, cold Docker builds, test explosion, and opaque flakiness, and show why treating CI/CD as a high-performance system (with specialized hardware, incremental execution, and fine-grained observability) is now a core AI infra problem, not a DevOps afterthought.\n\nUsing Namespace as a case study, we’ll go deep on how to architect an agent-ready pipeline layer: intelligent execution over GitHub Actions, remote caching and Turbo-style Docker builds, Git-aware incrementality, and workflow analytics that tie time and spend directly to specific jobs, repos, and agents. We’ll also cover operational requirements including ephemeral, high-performance clusters; private registries optimized for build workloads; and interactive debugging for both human and agent-authored changes—and how these design choices emerged from running microservices-scale infra at Google and beyond. Attendees will leave with a concrete blueprint for turning CI/CD into a throughput- and reliability-optimized substrate that can safely sustain 5–10x more changes from humans and agents without blowing up latency or cloud budgets.","day":"April 10","time":"12:40-1:00pm","room":"Westminster","type":"talk","track":"AI Architects","speakers":["Hugo Santos","Madison Faulkner"]}]},{"name":"Ian Ballantyne","role":"Developer Advocate","company":"Google DeepMind","companyDescription":"AI research lab by Google","linkedin":"https://linkedin.com/in/ianballantyne","github":"https://github.com/irbg","photoUrl":"https://ai.engineer/speakers/europe/ian-ballantyne.jpg","sessions":[{"title":"Sovereign Escape Velocity: AI Ownership with Open Models","day":"April 9","time":"2:30-2:50pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Ian Ballantyne","Gus Martins"]},{"title":"Agentic Panel with KP Sawhney","day":"April 10","time":"12:40-1:00pm","room":"Rutherford","type":"talk","track":"Google 
DeepMind/Gemini","speakers":["KP Sawhney","Ian Ballantyne"]}]},{"name":"Ibragim Badertdinov","role":"Research Engineer","company":"Nebius","twitter":"https://x.com/ibragim_bad","linkedin":"https://www.linkedin.com/in/ibragim-badertdinov/","github":"https://github.com/ibragim-bad","photoUrl":"https://ai.engineer/speakers/europe/ibragim-badertdinov.jpg","sessions":[{"title":"SWE-rebench: Lessons from Evaluating Coding Agents on Real Software Engineering Tasks","description":"Coding agents are evolving fast, and simple vibe checks are no longer enough to evaluate them. In this talk, I will share how we built the SWE-rebench leaderboard, which we update every month with fresh, real-world software engineering tasks and use to evaluate 30+ leading open and closed models. I will also share practical lessons on how to build your own benchmark, along with common ways models can cheat during evaluation.","day":"April 9","time":"3:10-3:30pm","room":"Moore","type":"talk","track":"Evals & Observability","speakers":["Ibragim Badertdinov"]}]},{"name":"Ido Salomon","role":"Creator of AgentCraft, Creator of MCP-UI","company":"MCP Apps","twitter":"https://x.com/idosal1","linkedin":"https://www.linkedin.com/in/ido-salomon/","github":"https://github.com/idosal","photoUrl":"https://ai.engineer/speakers/europe/ido-salomon.jpg","sessions":[{"title":"AgentCraft: Putting the Orc in Agent Orchestration","description":"As we run more agents in parallel, it becomes clear: we are the bottleneck. Luckily, the skills we need for effective multi-agent orchestration aren’t entirely new; they’ve just been hiding in unexpected places. Through AgentCraft, the game-inspired agent orchestrator, I’ll explore how we can raise the ceiling of human-agent collaboration without burning out in the process.","day":"April 10","time":"9:40-9:50am","room":"Keynote","type":"keynote","speakers":["Ido Salomon"]},{"title":"MCP Apps - Extending the frontier","description":"Session title and abstract to be finalized by 
participating speaker","day":"April 10","time":"12:00-12:20pm","room":"St. James","type":"talk","track":"MCP","speakers":["Liad Yosef","Ido Salomon"]}]},{"name":"Igor Karpovich","role":"Senior Principal Engineer","company":"Skyscanner","twitter":"https://x.com/ikarpovich","linkedin":"https://www.linkedin.com/in/ikarpovich/","photoUrl":"https://ai.engineer/speakers/europe/igor-karpovich.jpg","sessions":[{"title":"Leadership Lunch","description":"Leadership Addon and Max attendees: Moderated by swyx. swyx will facilitate the discussion with Igor Karpovich on Skyscanner's journey making agentic development work at scale, followed by peer-to-peer chats with leadership attendees.","day":"April 9","time":"1:00-2:00pm","room":"Keynote","type":"keynote","track":"Leadership Lunch","speakers":["swyx","Igor Karpovich"]}]},{"name":"Isaac Robinson","role":"Machine Learning Engineer","company":"Roboflow","companyDescription":"Computer vision platform for building and deploying models","linkedin":"https://www.linkedin.com/in/robinsonish/","photoUrl":"https://ai.engineer/speakers/europe/isaac-robinson.jpg","sessions":[{"title":"How Transformers Finally Ate Vision","description":"Transformers have dominated language for years. Vision transformers were introduced in 2020. But only now are transformers beating CNNs in vision. What gives? We'll discuss why transformer-based architectures are, only now, beating prior approaches in vision. We'll use RF-DETR, introduced in 2025 as the first realtime instance segmentation transformer to beat CNNs like ResNet and YOLO at object detection and instance segmentation, as an example. 
Learn what's coming for VLMs, \"physical AI,\" and VLAs.","day":"April 9","time":"12:00-12:20pm","room":"Abbey","type":"talk","track":"Voice & Vision","speakers":["Isaac Robinson"]}]},{"name":"Jack Wang","role":"GenAI Lead","company":"Accenture","companyDescription":"Global consulting firm","linkedin":"https://www.linkedin.com/in/jackxwang","photoUrl":"https://ai.engineer/speakers/europe/jack-wang.jpg","sessions":[{"title":"Most Enterprise Agentic Projects Are Doomed — Here’s Why","description":"Explore real-life blood and tears from agentic delivery experience at large enterprises, where most normal people live and breathe, outside of Silicon Valley.","day":"April 10","time":"12:00-12:20pm","room":"Westminster","type":"talk","track":"AI Architects","speakers":["Jess Grogan-Avignon","Jack Wang"]}]},{"name":"Jacob Lauritzen","role":"CTO","company":"Legora","companyDescription":"AI for legal","linkedin":"https://www.linkedin.com/in/jacob-lauritzen/","github":"https://github.com/Jacse","photoUrl":"https://ai.engineer/speakers/europe/jacob-lauritzen.jpg","sessions":[{"title":"Agents need more than a chat","day":"April 10","time":"5:10-5:30pm","room":"Keynote","type":"keynote","speakers":["Jacob Lauritzen"]}]},{"name":"Jess Grogan-Avignon","role":"Agentic Transformation Lead","company":"Accenture","companyDescription":"Global consulting firm","linkedin":"https://www.linkedin.com/in/jessicaannbiggs","photoUrl":"https://ai.engineer/speakers/europe/jess-grogan-avignon.jpg","sessions":[{"title":"Most Enterprise Agentic Projects Are Doomed — Here’s Why","description":"Explore real-life blood and tears from agentic delivery experience at large enterprises, where most normal people live and breathe, outside of Silicon Valley.","day":"April 10","time":"12:00-12:20pm","room":"Westminster","type":"talk","track":"AI Architects","speakers":["Jess Grogan-Avignon","Jack Wang"]}]},{"name":"Johan Lajili","role":"Member of Engineering, 
Full-Stack","company":"Poolside","companyDescription":"AGI for the enterprise, starting with software agents","twitter":"https://x.com/Johan_Lajili","linkedin":"https://www.linkedin.com/in/johanlajili","github":"https://github.com/johanlajili","sessions":[{"title":"Your agent is blindfolded. How giving it (good) eyes multiplies performance and trust","description":"Hallway Track: Attendees propose lightning talks, peers vote, and the best talks get to go onstage as impromptu lightning talks. Check the AIE attendee Slack for signup and voting instructions.","day":"April 10","time":"2:50-3:00pm","room":"Moore","type":"lightning","track":"Hallway Track","speakers":["Johan Lajili"]}]},{"name":"Jonas Templestein","role":"CEO","company":"Iterate","companyDescription":"AI agent platform for automating business processes","twitter":"https://x.com/jonas","linkedin":"https://www.linkedin.com/in/jonashuckestein","github":"https://github.com/jonastemplestein","photoUrl":"https://ai.engineer/speakers/europe/jonas-templestein.jpg","sessions":[{"title":"Make your own event-sourced agent harness using stream processors","description":"https://github.com/iterate/ai-engineer-workshop\n\nThis workshop is in two parts:\n\nFirst, we'll show you for 10 minutes how to make a basic agent harness on top of our humble durable-streams-ish API. The only abstraction we'll need is a simple \"stream processor\", which consists of:\n1) Some state (e.g. \"message history\" and \"is an llm request in progress\")\n\n2) A reducer that takes events and sometimes updates the state\n\n3) A hook / reactor that causes side effects (such as LLM requests) after an event is added to the stream\n\nThen we'll give you an hour to build the coolest possible thing with this. 
You can implement from scratch:\n- an agent that calls tools via codemode\n\n- an agent with multiple LLM calls in progress at once\n\n- agent-to-agent message capability\n\n- an agent that responds to Slack messages\n\n- an agent that is deployed as a serverless function and \"wakes up\" when new events arrive\n\nAnd much more!\n\nIt's kind of a wild abstraction that we started hacking on last week. Hitting all the buzzwords:\n- event sourced\n\n- serverless\n\n- agent harness\n\nCheck\n\nSecond, you'll get to implement an agent harness of your dreams. You can:\n\n- Implement code mode from scratch\n\n- Experiment with safety\n\nIt's a bit like making pi extensions, without trying to run them all in a single process.\n\nIn this workshop we'll show you how to build an AI agent like Claude or pi or Codex from scratch on top of this very simple abstraction.","day":"April 8","time":"10:40am-12:00pm","room":"Shelley","type":"workshop","speakers":["Misha Kaletsky","Jonas Templestein"]}]},{"name":"Joshua Snyder","role":"Team Lead","company":"PostHog","twitter":"https://x.com/joshsny","linkedin":"https://www.linkedin.com/in/joshsny/","github":"https://github.com/joshsny","photoUrl":"https://ai.engineer/speakers/europe/joshua-snyder.jpg","sessions":[{"title":"Self Driving Products: Engineering the Pipeline from Product Signals to Pull Requests","description":"Every product generates a firehose of signals: users rage-clicking through broken flows, error spikes at 2am, experiments that quietly tank a metric, a customer complaining in Slack. Today, a human has to notice, triage and eventually write the fix. We're building a system at PostHog that automatically collapses that entire chain.\n\nI'll cover how we ingest and normalize signals across very different sources: session replays, error tracking, analytics, logs, experiments, and third-party tools like Slack. How we convert noisy, unstructured signals into well-scoped coding tasks with enough context for an agent to act on. 
How we orchestrate agents against real codebases, run them in secure sandboxed environments, and decide what's worth shipping.","day":"April 9","time":"2:50-3:10pm","room":"Moore","type":"talk","track":"Evals & Observability","speakers":["Joshua Snyder"]}]},{"name":"Karan Sampath","role":"Member of Technical Staff","company":"Anthropic","twitter":"https://x.com/karan_sampath","linkedin":"https://www.linkedin.com/in/karansampath/","github":"https://github.com/karansampath","photoUrl":"https://ai.engineer/speakers/europe/karan-sampath.jpg","sessions":[{"title":"Bringing MCPs to the Enterprise","description":"MCPs are often flaky, face multiple security vulnerabilities, and are generally hard to scale. Most enterprises struggle to use more than a single-digit number of MCPs due to issues with security, observability, and access control. In this talk, we'll explore the approaches and learnings we at Anthropic have been taking to solve this, and make MCPs more enterprise-ready.","day":"April 10","time":"3:10-3:30pm","room":"St. 
James","type":"talk","track":"MCP","speakers":["Karan Sampath"]}]},{"name":"Kitze","role":"AIE Top Speaker","company":"Sizzy.co","companyDescription":"Browser for developers","twitter":"https://x.com/thekitze","linkedin":"https://www.linkedin.com/in/kitaborovskis/","photoUrl":"https://ai.engineer/speakers/europe/kitze.jpg","sessions":[{"title":"Kitze - Keynote","day":"April 9","time":"5:00-5:20pm","room":"Keynote","type":"keynote","speakers":["Kitze"]}]},{"name":"KP Sawhney","role":"Research Scientist","company":"Google DeepMind","companyDescription":"AI research lab by Google","linkedin":"https://linkedin.com/in/kyle-sawhney","github":"https://github.com/KPSawhney","photoUrl":"https://ai.engineer/speakers/europe/kp-sawhney.jpg","sessions":[{"title":"Agentic Panel with KP Sawhney","day":"April 10","time":"12:40-1:00pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["KP Sawhney","Ian Ballantyne"]}]},{"name":"Laurie Voss","role":"Head of DX","company":"Arize","companyDescription":"AI observability platform","twitter":"https://x.com/seldo","linkedin":"https://www.linkedin.com/in/seldo/","github":"https://github.com/seldo","photoUrl":"https://ai.engineer/speakers/europe/laurie-voss.jpg","sessions":[{"title":"Ship Real Agents: Hands-On Evals for Agentic Applications","description":"You shipped an AI agent. It works great on your test queries. But how do you know it works on your users' queries? How do you know a prompt change that fixes one thing didn't break three others? And when a new model drops next quarter, how do you know whether to upgrade? If your answer to any of these is \"I run it a few times and check,\" this workshop is for you.\n\nIn this hands-on session, you'll build a complete evaluation pipeline for an AI agent from scratch. 
Starting with a financial analysis chatbot built on the Claude Agent SDK, you'll instrument it with tracing, run it against diverse test cases, and then do the thing most tutorials skip: actually read your data. You'll categorize failures by root cause, define success criteria, and use that analysis to decide what to measure before writing a single eval. Then you'll build three types of evaluators: fast deterministic code checks, built-in LLM-as-a-judge evals like faithfulness, and a custom rubric you design yourself. You'll validate your judges against your own human labels, because an eval you haven't tested is just a fancy way of being wrong at scale.\n\nThe workshop doesn't stop at scoring. You'll save failure cases as reusable datasets, improve your agent's prompts based on what the evals told you, and run controlled experiments to prove the improvement with numbers instead of gut feel. Along the way, you'll learn practical frameworks: the impact hierarchy for prioritizing fixes, the Swiss Cheese model for layering defenses, and the eval-iterate cycle that turns a prototype into a production system. You'll leave with a working notebook, a free Phoenix Cloud account with your traces and eval results, and a repeatable process you can apply to any AI application the day you get home.","day":"April 8","time":"3:45pm-5:45pm","room":"Westminster","type":"workshop","speakers":["Laurie Voss"]}]},{"name":"Lawrence Jones","role":"Founding Engineer","company":"Incident.io","companyDescription":"Incident management platform","twitter":"https://x.com/lawrjones","linkedin":"https://www.linkedin.com/in/lawrence2jones/","photoUrl":"https://ai.engineer/speakers/europe/lawrence-jones.jpg","sessions":[{"title":"Lawrence Jones - Fighting AI with AI","description":"At incident.io we're building AI SRE: a system that investigates production incidents autonomously, digging through logs, metrics, traces and code changes to tell you what's gone wrong and how to fix it. 
It's one of the most complex AI products out there: a deeply nested tree of agents, extremely ambiguous problems, integrating with nondeterministic telemetry systems. When something breaks, you can't just look at a prompt and its output — you need to trace through the entire chain to find where reasoning went sideways.\n\nOur answer to this complexity has been to fight AI with AI — building internal tools where AI agents help us understand, debug, and improve our own AI systems. This talk walks through the specific tools and workflows we've built:\n\nEvals that actually work: a CLI and red-green runbook that turns \"this interaction went wrong\" into a proven fix, structured so AI agents can follow the process end-to-end\n\nFilesystem downloads: serialising complex agent traces as markdown that AI agents can read and reason about, turning hour-long debugging sessions into 5-minute conversations\n\nAnalysis at scale: a pipeline where 25 parallel AI agents each analyse an investigation, then cluster results to surface systemic patterns across a customer's incidents\n\nAI-powered feedback loops: using AI agents to dogfood our own tools, submitting structured feedback that feeds directly into what we build next\n\nNone of this requires exotic infrastructure. The patterns are straightforward: give AI agents access to the same debugging information your engineers use, but in a format they can read. Write runbooks they can follow. Build pipelines where AI does the repetitive analysis and surfaces the patterns for humans to act on.\n\nIf you're building AI products and finding the complexity is outpacing your ability to debug and improve them, this talk will give you concrete strategies to close that gap.","day":"April 10","time":"12:20-12:40pm","room":"Fleming","type":"talk","track":"Coding Agents","speakers":["Lawrence Jones"]}]},{"name":"Leonie Monigatti","role":"Sr. 
Developer Advocate","company":"Elastic","companyDescription":"Search and observability platform","twitter":"https://x.com/helloiamleonie","linkedin":"https://www.linkedin.com/in/804250ab/","photoUrl":"https://ai.engineer/speakers/europe/leonie-monigatti.jpg","sessions":[{"title":"Agentic Search for Context Engineering","description":"1. From RAG to Agentic Search How retrieval evolved from RAG to agentic RAG, and why agents building their own context changes everything.2. Choosing the Right Search Interface: Shell tools, dedicated database tools, file search. When each works, when each hits a ceiling, and how to combine them. 3. Building Effective Database Retrieval Tools Many search tools are usually built-in. Database tools are powerful when customized. Practical guide on how to build effective database retrieval tools, including tool scoping, descriptions, output engineering, and error handling.4. Wrap-up + Q&A","day":"April 8","time":"9:00-10:20am","room":"Wordsworth","type":"workshop","speakers":["Leonie Monigatti"]}]},{"name":"Liad Yosef","role":"Co-creator","company":"MCP Apps","twitter":"https://x.com/liadyosef","linkedin":"https://www.linkedin.com/in/liadyosef/","github":"https://github.com/liady","photoUrl":"https://ai.engineer/speakers/europe/liad-yosef.jpg","sessions":[{"title":"MCP Apps - Extending the frontier","description":"Session title and abstract to be finalized by participating speaker","day":"April 10","time":"12:00-12:20pm","room":"St. 
James","type":"talk","track":"MCP","speakers":["Liad Yosef","Ido Salomon"]}]},{"name":"Liam Hampton","role":"Senior Cloud Advocate","company":"Microsoft","twitter":"https://x.com/liamchampton","linkedin":"https://www.linkedin.com/in/liam-conroy-hampton/","github":"https://github.com/liamchampton","photoUrl":"https://ai.engineer/speakers/europe/liam-hampton.jpg","sessions":[{"title":"Cooking with Agents in VS Code","description":"This is a demo driven talk that explores how AI agents are shaping modern developer workflows inside VS Code. This session introduces attendees to local, remote, and worktree based agents, explaining how each agent path works, the trade offs involved, and when to use each of them effectively. Attendees will leave with a clear mental model for agent based development in VS Code, an understanding of emerging agent patterns","day":"April 10","time":"3:10-3:30pm","room":"Fleming","type":"talk","track":"Coding Agents","speakers":["Liam Hampton"]}]},{"name":"Liam McGarrigle","role":"n8n Ambassador","company":"n8n","companyDescription":"Workflow automation platform","linkedin":"https://www.linkedin.com/in/liam-mcgarrigle-37571b291","github":"https://github.com/liamdmcgarrigle","photoUrl":"https://ai.engineer/speakers/europe/liam-mcgarrigle.jpg","sessions":[{"title":"Building Your Own Secure AI Workflows: Human-in-the-Loop Automation with n8n","description":"Session title and abstract to be finalized by participating speaker","day":"April 8","time":"10:40am-12:00pm","room":"Wordsworth","type":"workshop","speakers":["Liam McGarrigle"]}]},{"name":"Lou Bichard","role":"Field CTO","company":"Ona","twitter":"https://x.com/loujaybee","linkedin":"https://www.linkedin.com/in/loujaybee","photoUrl":"https://ai.engineer/speakers/europe/lou-bichard.jpg","sessions":[{"title":"The Missing Primitive for Agent Swarms","description":"Everyone's building swarms. 
Few are asking what infrastructure they actually need.\n\nMost agent frameworks treat multi-agent coordination as an application-level concern - spawn threads, manage state, hope for the best. This works in demos. It falls apart when you need ten agents exploring a codebase in parallel, each with their own environment, tools, and context.\n\nThe problem is that we're missing a platform primitive: the ability to spawn isolated agent execution contexts on demand, with their own identity, resource limits, and coordination semantics. Without this, every swarm implementation reinvents the same brittle scaffolding.\n\nIn this talk, I'll cover:\n- Why current approaches (threads, worktrees, container pools) break down at scale\n- What a proper swarm primitive looks like: identity, isolation, lifecycle, coordination\n- How we're building this at Ona and what we've learned shipping it to enterprises\n- The UX implications: how users interact with fleets of agents, not just one\n\nSwarms aren't a feature. They're an infrastructure category. 
It's time we built them that way.","day":"April 10","time":"12:20-12:40pm","room":"Abbey","type":"talk","track":"GPUs & LLM Infrastructure","speakers":["Lou Bichard"]}]},{"name":"Louis Knight-Webb","role":"CEO","company":"Vibe Kanban","twitter":"https://x.com/tokengobbler","linkedin":"https://www.linkedin.com/in/knightwebb/","github":"https://github.com/stunningpixels","photoUrl":"https://ai.engineer/speakers/europe/louis-knight-webb.jpg","sessions":[{"title":"Software Engineering Is Becoming Plan and Review","description":"AI eats the middle, software engineers are spending all their time planning and reviewing the work of AI.\n\nIf all humans are going to do is plan and review the work of AI, the biggest lever you have to ship more is to speed up planning and review.\n\nAnd some examples of how teams and individuals are adapting:\n- What tools are people spending their time in\n- How much time are teams spending reviewing code, how has this changed since AI\n- What are different approaches to planning work\n- Is agile and scrum dead? Are most product teams moving faster","day":"April 10","time":"2:30-2:50pm","room":"Westminster","type":"talk","track":"AI Architects","speakers":["Louis Knight-Webb"]}]},{"name":"Louis-François Bouchard","role":"Founder, Educator, AI Engineer","company":"Towards AI","companyDescription":"AI education and research community","twitter":"https://x.com/Whats_AI","linkedin":"https://www.linkedin.com/in/whats-ai/","github":"https://github.com/louisfb01","photoUrl":"https://ai.engineer/speakers/europe/louis-francois-bouchard.jpg","sessions":[{"title":"Build Your Own Deep Research Agent + Technical Writer","description":"Deep research is one of the best ways to learn how to build real AI systems because it forces you to combine reasoning, planning, autonomy, tools, grounding, and feedback loops in a single end-to-end workflow. 
In this hands-on workshop, you will build an MCP-powered deep research agent that can plan a research strategy, search the web, analyze YouTube videos, gather grounded evidence, filter for relevance and trustworthiness, and synthesize its findings into a cited research artifact. Rather than treating research as just another chatbot interaction, we will frame it as a goal-directed research loop: one that can search, inspect, pivot, and progressively refine its understanding of a topic.\n\nFrom there, we will connect that research artifact to a lightweight technical writing workflow that turns raw findings into polished, non-sloppy technical multimodal content. This second part of the system is deliberately more constrained: you will see how research and writing require very different architectures, why exploratory work benefits from agentic behavior, and why writing quality often improves with tighter workflows, review loops, and explicit guidance. Along the way, we will show how to choose between prompts, workflows, and agents depending on the task, and how to keep the overall system practical rather than over-engineered.\n\nWe will also cover observability and evaluation so the system is not only impressive in a demo, but measurable and improvable in practice. Most importantly, the workshop is grounded in experience: it distills what we learned over the past year building and using this research-and-writing pipeline internally. Attendees will leave with their own deep research agent connected to a reliable technical writing workflow, and an understanding of the engineering tradeoffs behind both.","day":"April 8","time":"1:15pm-3:15pm","room":"St. 
James","type":"workshop","speakers":["Louis-François Bouchard","Paul Iusztin","Samridhi Vaid"]}]},{"name":"Luke Alvoeiro","role":"Product/Tech Lead","company":"Factory","linkedin":"https://www.linkedin.com/in/lukealvoeiro","github":"https://github.com/lukealvoeiro","photoUrl":"https://ai.engineer/speakers/europe/luke-alvoeiro.jpg","sessions":[{"title":"Factory Missions - Multi-Agent Systems That Ship for Days","description":"Everyone's building multi-agent systems, but nobody agrees on how. This talk proposes a taxonomy of five frontier multi-agent strategies and shows what happens when you compose them into a single architecture. Drawing from production data at Factory, we walk through a three-role system (orchestrator, workers, validators) that uses validation contracts, structured agent handoffs, and adversarial verification. We cover the case for serial over parallel execution, why model selection per role is a compounding advantage, and how to design systems that get better with each model generation instead of being made obsolete by them.","day":"April 10","time":"12:40-1:00pm","room":"Fleming","type":"talk","track":"Coding Agents","speakers":["Matan Grinberg","Luke Alvoeiro"]}]},{"name":"Luke Harries","role":"Growth Lead","company":"ElevenLabs","companyDescription":"AI voice technology platform","linkedin":"https://www.linkedin.com/in/luke-harries","photoUrl":"https://ai.engineer/speakers/europe/luke-harries.jpg","sessions":[{"title":"Give Your Chat Agent a Voice","day":"April 9","time":"12:40-1:00pm","room":"Abbey","type":"talk","track":"Voice & Vision","speakers":["Luke Harries"]}]},{"name":"Madison Faulkner","role":"Partner","company":"NEA","companyDescription":"Venture capital firm","twitter":"https://x.com/madsfaulkner","linkedin":"https://www.linkedin.com/in/madisonhfaulkner/","photoUrl":"https://ai.engineer/speakers/europe/madison-faulkner.jpg","sessions":[{"title":"CI/CD Is Dead, Agents Need Continuous Compute and Computers","description":"What 
happens when thousands of agents try to edit source code at once? Merge chaos, slow builds, and stacks of pull requests for engineers to review. While agentic software development has never been so promising, traditional CI/CD solutions threaten to constrain the agentic software transformation. In this session, we’ll unpack what happens when autonomous coding agents continuously open PRs, modify infrastructure, and trigger workflows across hundreds of repos: traditional CI/CD systems, tuned for infrequent human changes, become the latency and cost bottleneck in the SDLC. We’ll discuss concrete failure modes, including runner saturation, cache thrash, cold Docker builds, test explosion, and opaque flakiness, and show why treating CI/CD as a high-performance system (with specialized hardware, incremental execution, and fine-grained observability) is now a core AI infra problem, not a DevOps afterthought.\n\nUsing Namespace as a case study, we’ll go deep on how to architect an agent-ready pipeline layer: intelligent execution over GitHub Actions, remote caching and Turbo-style Docker builds, Git-aware incrementality, and workflow analytics that tie time and spend directly to specific jobs, repos, and agents. We’ll also cover operational requirements including ephemeral, high-performance clusters; private registries optimized for build workloads; and interactive debugging for both human and agent-authored changes—and how these design choices emerged from running microservices-scale infra at Google and beyond. 
Attendees will leave with a concrete blueprint for turning CI/CD into a throughput- and reliability-optimized substrate that can safely sustain 5–10x more changes from humans and agents without blowing up latency or cloud budgets.","day":"April 10","time":"12:40-1:00pm","room":"Westminster","type":"talk","track":"AI Architects","speakers":["Hugo Santos","Madison Faulkner"]}]},{"name":"Maggie Appleton","role":"Design Engineer","company":"GitHub Next","companyDescription":"GitHub's research and prototyping team","twitter":"https://x.com/Mappletons","photoUrl":"https://ai.engineer/speakers/europe/maggie-appleton.jpg","sessions":[{"title":"One Developer, Two Dozen Agents, Zero Alignment: Why we Need Collaborative AI Engineering","description":"Agentic engineering so far has been a solo story: one developer and a dozen agents moving at warp speed. But speed without thoughtful planning and team alignment is just wasting tokens. When everyone on a team is directing agents alone in their personal CLI tools with no shared context, you get duplicate work, conflicting changes, poorly-designed solutions, surprise features nobody else agreed to build, and everyone pulling in different directions.\n\nSerious software still requires serious collaboration. You need multiple perspectives and types of expertise to build great things. We need agentic environments where people can plan together, think critically together, and share the same context. In this talk I'll demo how we've tackled these design problems in Ace, a multiplayer agent environment from GitHub Next that uses real-time collaboration, proactive agents, and sandboxed micro VMs for rapid prototyping and exploration.","day":"April 9","time":"11:15-11:40am","room":"St. 
James","type":"track_keynote","track":"Context Engineering","speakers":["Maggie Appleton"]}]},{"name":"Malte Ubl","role":"CTO","company":"Vercel","companyDescription":"Frontend cloud platform","twitter":"https://x.com/cramforce","linkedin":"https://www.linkedin.com/in/malteubl/","photoUrl":"https://ai.engineer/speakers/europe/malte-ubl.jpg","sessions":[{"title":"The New Application Layer","description":"AI engineering is the legitimate successor to web development and the mainstream discipline that will define the next decade. Drawing on Vercel's own experience, Malte explores what it means to build infrastructure and applications in a world where agents are both the builders and users of software. In a future where the major AI labs commoditize, the real value will sit with the engineers building on top. The application layer is where the innovation happens, and AI engineers are the ones who will shape it.","day":"April 9","time":"9:10-9:30am","room":"Keynote","type":"keynote","speakers":["Malte Ubl"]}]},{"name":"Marc Klingen","role":"Co-founder","company":"Langfuse (part of Clickhouse)","twitter":"https://x.com/marcklingen","linkedin":"https://www.linkedin.com/in/marcklingen/","photoUrl":"https://ai.engineer/speakers/europe/marc-klingen.jpg","sessions":[{"title":"Skill issue: Lessons from skilling up coding agents to use Langfuse","description":"We built a Langfuse Skill, a reusable capability that enables coding agents to implement Langfuse tracing, prompt management and more autonomously.\nIn this talk, we share our journey from a simple baseline all the way to attempting a full auto-research/improvement loop, powered by the very traces the Skill generates.\nThe core insight: you can't improve what you don't trace and the Langfuse Skill is the proof of concept. 
We share honest learnings from what worked, what didn't, and what it reveals about the future of Skills as a foundation for scalable, self-improving agents.","day":"April 9","time":"12:20-12:40pm","room":"Moore","type":"talk","track":"Evals & Observability","speakers":["Marc Klingen"]}]},{"name":"Mardu Swanepoel","role":"Head of AI Engineering","company":"Flinn AI","linkedin":"https://www.linkedin.com/in/mardu-swanepoel-000/","photoUrl":"https://ai.engineer/speakers/europe/mardu-swanepoel.jpg","sessions":[{"title":"What the Best Agents Share","description":"What do AI agents from Harvey, Cursor, Manus, and Anthropic have in common despite serving drastically different domains? They all incorporate 4 core patterns that support user trust and adoption, whether in high-stakes environments like law and engineering or everyday workflows.\n\nIn this session we'll explore these four core patterns to better understand what each entails, when to apply each, and how they drive impact. We'll examine specific examples from Harvey, Cursor, Manus, and Claude Cowork to see how each company applies these patterns and what tangible outcomes they deliver for usage and adoption.","day":"April 9","time":"12:00-12:10pm","room":"Westminster","type":"lightning","track":"Harness Engineering","speakers":["Mardu Swanepoel"]}]},{"name":"Mario Zechner","role":"Baker","company":"Pi","twitter":"https://x.com/badlogicgames","linkedin":"https://www.linkedin.com/in/mariozechner/","github":"https://github.com/badlogic","photoUrl":"https://ai.engineer/speakers/europe/mario-zechner.jpg","sessions":[{"title":"Building pi in a World of Slop","description":"All I wanted was a shitty coding agent that is truly mine. And I’d have loved to just tell you why and how I built pi. But then Peter decided to make it the agentic core of OpenClaw. And now pi is collateral. So yes, this is a talk about pi. 
But it is also a talk about how agents are destroying OSS, how I deal with that, and a plea to slow the fuck down.","day":"April 10","time":"9:50-10:10am","room":"Keynote","type":"keynote","speakers":["Mario Zechner"]}]},{"name":"Marlene Mhangami","role":"Senior Developer Advocate","company":"Microsoft","twitter":"https://x.com/marlene_zw","linkedin":"https://www.linkedin.com/in/marlenemhangami/","github":"https://github.com/marlenemhangami","photoUrl":"https://ai.engineer/speakers/europe/marlene-mhangami.jpg","sessions":[{"title":"Beyond Code Coverage: Functionality Testing with Playwright","description":"AI has gotten faster at writing application code, and to keep up, many developers have let AI write their tests as well. Those tests might pass from a code coverage perspective, but the final test is whether or not the application functions as expected. Now more than ever, AI-generated projects require end-to-end testing to verify that applications work beyond the self-affirming unit tests created by LLMs. In this talk, we'll learn how to build tests with Playwright. We'll discuss what Playwright is, how to set it up in agentic workflows, and walk through some examples of how developers are using it locally and in CI. We'll also look at best practices for using Playwright and accessing it through the new MCP server.","day":"April 10","time":"2:50-3:10pm","room":"Abbey","type":"talk","track":"GPUs & LLM Infrastructure","speakers":["Marlene Mhangami"]}]},{"name":"Matan Grinberg","role":"CEO","company":"Factory","twitter":"https://x.com/matangrinberg","linkedin":"https://www.linkedin.com/in/matan-grinberg","github":"https://github.com/matangrinberg","photoUrl":"https://ai.engineer/speakers/europe/matan-grinberg.jpg","sessions":[{"title":"Factory Missions - Multi-Agent Systems That Ship for Days","description":"Everyone's building multi-agent systems, but nobody agrees on how. 
This talk proposes a taxonomy of five frontier multi-agent strategies and shows what happens when you compose them into a single architecture. Drawing from production data at Factory, we walk through a three-role system (orchestrator, workers, validators) that uses validation contracts, structured agent handoffs, and adversarial verification. We cover the case for serial over parallel execution, why model selection per role is a compounding advantage, and how to design systems that get better with each model generation instead of being made obsolete by them.","day":"April 10","time":"12:40-1:00pm","room":"Fleming","type":"talk","track":"Coding Agents","speakers":["Matan Grinberg","Luke Alvoeiro"]}]},{"name":"Matt Carey","role":"Senior Systems Engineer","company":"Cloudflare","twitter":"https://x.com/mattzcarey","linkedin":"https://www.linkedin.com/in/mattzcarey/","github":"https://github.com/mattzcarey","photoUrl":"https://ai.engineer/speakers/europe/matt-carey.jpg","sessions":[{"title":"Every API Is a Tool for Agents","description":"The best MCP server is the one you didn't have to build.\n\nAt Cloudflare we have a lot of products. Our REST OpenAPI spec is over 2.3 million tokens. When teams started building MCP servers, they did what everyone does: cherry-picked important endpoints for their product, wrote some tool definitions and shipped a separate service that covered a small fraction of their API.\n\nThis was driven by a fundamental context limit of the end users' agent. And tools use a bunch of context just to describe themselves. MCP felt like a Mega Context Problem (and a separate service to maintain).\n\nI think we got it all wrong.\n\nThe context limit is not an MCP problem. It's an agent problem. Tools should probably be discovered on demand and clients are coming around to this. But maybe we can also do it on the server?\n\nCLIs get this for free, self-discoverable and documented by design. 
APIs just need a little help.\n\nThis talk will cover some of the techniques we've been exploring at Cloudflare, such as codemode and tool search, to make complete APIs accessible to agents through MCP.\n\nI'll also cover some of the work we are doing with the MCP Typescript SDK to make stateless servers the default.","day":"April 10","time":"11:15-11:40am","room":"St. James","type":"track_keynote","track":"MCP","speakers":["Matt Carey"]}]},{"name":"Matt Pocock","role":"Author","company":"AI Hero","twitter":"https://x.com/mattpocockuk","linkedin":"https://www.linkedin.com/in/mapocock/","photoUrl":"https://ai.engineer/speakers/europe/matt-pocock.jpg","sessions":[{"title":"AI Coding For Real Engineers","description":"A hands-on workshop covering the full lifecycle of AI-assisted development, from turning ambiguous requirements into agent-ready plans to running autonomous coding agents that ship production features.\n\nYou'll learn to stress-test vague briefs into structured PRDs, slice work into thin \"tracer bullet\" vertical slices, and run an AI agent with TDD. You'll watch it select tasks, write tests, implement code, and commit. You'll then refine your prompts based on where it struggles, graduate to fully autonomous (AFK) runs, and learn to design codebases that maximize agent effectiveness.\n\nYou'll walk away knowing how to:\n\n- Turn ambiguous requirements into agent-ready issues\n\n- Slice work into vertical tracer bullets an agent can grab independently\n\n- Run AI agents human-in-the-loop and autonomously with TDD\n\n- Design codebase architectures that AI agents love to work in\n\nFor: Engineers ready to move beyond chat-based AI assistance and build a real workflow for shipping features with autonomous coding agents.","day":"April 8","time":"3:45pm-5:45pm","room":"St. 
James","type":"workshop","speakers":["Matt Pocock"]},{"title":"It Ain't Broke: Why Software Fundamentals Matter More Than Ever","description":"AI coding tools are overhyped and powerful at the same time. Used well, they're extraordinary. Used badly, they'll bury you in spaghetti code faster than any human team could. The difference isn't the tool. It's the process. After 18 months of teaching developers to build with AI agents, Matt Pocock has watched the same patterns emerge: the devs who succeed aren't the ones who delegate everything or nothing. They're the ones who fall back on engineering fundamentals. In this talk, he shares the iterative process his students use to ship high-quality applications with AI agent swarms, and why the principles that make it work (ubiquitous language, vertical slices, TDD, deep modules) are decades-old ideas that didn't break. They got more important.","day":"April 9","time":"5:20-5:40pm","room":"Keynote","type":"keynote","speakers":["Matt Pocock"]}]},{"name":"Matthias Luebken","role":"Founder","company":"TAVON.ai","companyDescription":"OpenClaw coding agent","twitter":"https://x.com/luebken","linkedin":"https://www.linkedin.com/in/luebken/","github":"https://github.com/luebken","photoUrl":"https://ai.engineer/speakers/europe/matthias-luebken.jpg","sessions":[{"title":"A Piece of PI – Embedding The OpenClaw Coding Agent In Your Product","description":"When people use OpenClaw, they're amazed. It auto-discovers new capabilities, explores available data sources, stitches components together, and dynamically builds new solutions. It feels like the system is learning. It feels magical.\n\nAt its core, OpenClaw is powered by pi.dev: a deliberately simple coding agent built on a small set of powerful primitives. 
PI's \"radical extensibility\" turns out to be a strong architectural fit for the kinds of composable, evolving use cases OpenClaw is designed to support.\n\nIn this talk, we'll take a closer look at what's actually happening under the hood at the agent layer. This session is aimed at a technically curious audience — especially those who want to look beyond the surface and consider working with OpenClaw seriously.","day":"April 10","time":"11:40am-12:00pm","room":"Fleming","type":"talk","track":"Coding Agents","speakers":["Matthias Luebken"]}]},{"name":"Maxime Labonne","role":"Head of Post-Training","company":"Liquid AI","companyDescription":"AI research company building capable and efficient foundation models","twitter":"https://x.com/maximelabonne","linkedin":"https://www.linkedin.com/in/maxime-labonne/","github":"https://github.com/mlabonne","photoUrl":"https://ai.engineer/speakers/europe/maxime-labonne.jpg","sessions":[{"title":"Everything I Learned Training Frontier Small Models","description":"A new class of small models is emerging with the ability to reliably follow instructions and call tools while running on-device under 1 GB of memory. In this talk, we'll break down how to post-train frontier small models using the LFM2.5 recipe: on-policy preference alignment, agentic reinforcement learning, and curriculum training with iterative model merging. We'll cover training challenges unique to the 1B scale, like doom loops, capability interference, and how to fix them. The goal is to give you a concrete playbook to fine-tune and deploy small models for your own use cases, from structured data extraction to multi-turn tool use.","day":"April 9","time":"12:00-12:20pm","room":"Moore","type":"talk","track":"Evals & Observability","speakers":["Maxime Labonne"]}]},{"name":"Mayan Soni","company":"Trainline","sessions":[{"title":"Shipping complex AI applications | Braintrust & Trainline","description":"Getting a prototype working is straightforward. 
Making it reliable in production, especially with multi-step agents, tool use, and real users, is the hard part. In this hands-on workshop, you'll work through the core parts of building production-grade AI applications with Braintrust.","day":"April 8","time":"1:15pm-3:15pm","room":"Westminster","type":"workshop","speakers":["Giran Moodley","Mayan Soni","Oussama Hafferssas"]}]},{"name":"Mayank Pant","role":"SA Global Revenue Specialist","company":"Stripe","sessions":[{"title":"Mastering AI Pricing: Flexible & Agile Monetization for your AI solutions","description":"Monetizing AI is hard. Rising GPU and inference costs are squeezing margins, and traditional SaaS pricing simply does not work for the unpredictable compute demands of new-age AI companies. With models constantly shifting across credits, tokens, and seats, a new challenge emerges: how do we charge for AI without stalling growth? This talk presents a framework for solving the dual problems of aligning charge metrics with true customer value and balancing predictable revenue with rapid adoption. Through real-world examples, we'll explore how to build guardrails that protect your margins and see how Stripe’s world-class usage-based billing solution helps AI companies launch quickly and monetize with ultimate agility. 
Whether you're launching your first AI product or revamping your current model, you'll learn how to make your pricing strategy both profitable and adaptable.","day":"April 10","time":"1:45-2:03pm","room":"Wesley","type":"expo_session","track":"Expo Sessions (Wesley)","speakers":["Mayank Pant"]}]},{"name":"Mehedi Hassan","role":"Product Engineer","company":"Granola","twitter":"https://x.com/mehedih_","linkedin":"https://www.linkedin.com/in/meh-hassan/","github":"https://github.com/MehediH","photoUrl":"https://ai.engineer/speakers/europe/mehedi-hassan.jpg","sessions":[{"title":"You can't just one shot it","day":"April 9","time":"2:50-3:10pm","room":"Abbey","type":"talk","track":"Voice & Vision","speakers":["Mehedi Hassan"]}]},{"name":"Merve Noyan","role":"Machine Learning Engineer","company":"Hugging Face","companyDescription":"AI model hub and platform","twitter":"https://x.com/mervenoyann","linkedin":"https://www.linkedin.com/in/merve-noyan-28b1a113a/","github":"https://github.com/merveenoyan","photoUrl":"https://ai.engineer/speakers/europe/merve-noyan.jpg","sessions":[{"title":"Open-Source Agents Ecosystem","description":"In this session I will talk about anything and everything open-source around open agents and tools. We will not only touch on agentic coding, but also vision language agents, building gaming environments and agent fine-tuning!","day":"April 9","time":"2:50-3:10pm","room":"Fleming","type":"talk","track":"Claws & Personal Agents","speakers":["Merve Noyan"]}]},{"name":"Michael Aaron","role":"Software Engineer","company":"Google DeepMind","companyDescription":"AI research lab by Google","photoUrl":"https://ai.engineer/speakers/europe/michael-aaron.jpg","sessions":[{"title":"Agentic Evaluations at Scale — For Everybody","description":"AI evaluations today are broken. 
They're decentralized, opaque, and in the hands of a few.\nBut AI should benefit all of humanity, and therefore evals should reflect the diversity of work that AI needs to undertake - and be transparent and reproducible so we can trust them.\nJoin us as we talk about how we're trying to solve these issues and the problems we're facing.","day":"April 9","time":"12:00-12:20pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Nicholas Kang","Michael Aaron"]}]},{"name":"Michael Arnaldi","role":"CEO & Creator of Effect","company":"Effectful Technologies Inc","companyDescription":"Effect TypeScript framework","twitter":"https://x.com/MichaelArnaldi","linkedin":"https://www.linkedin.com/in/michael-arnaldi-52858114a/","github":"https://github.com/mikearnaldi","photoUrl":"https://ai.engineer/speakers/europe/michael-arnaldi.jpg","sessions":[{"title":"Vibe Engineering Effect Apps","description":"The workshop will focus on practical coding with LLMs when using Effect","day":"April 8","time":"1:15pm-3:15pm","room":"Shelley","type":"workshop","speakers":["Michael Arnaldi"]}]},{"name":"Michael Hablich","role":"Product Manager","company":"Google","companyDescription":"Chrome DevTools","twitter":"https://x.com/MHablich","linkedin":"https://www.linkedin.com/in/michael-hablich/","photoUrl":"https://ai.engineer/speakers/europe/michael-hablich.jpg","sessions":[{"title":"Building Agent Interfaces: Lessons from Chrome DevTools (MCP) for Agents","description":"Last year, my team shipped Chrome DevTools MCP—and immediately learned we'd built it wrong.\n\nOur first version had one giant \"debug_webpage\" tool that tried to do everything. Agents couldn't compose behaviors. They failed silently when parts worked but others didn't. We had to rethink our entire architecture mid-project, decomposing that single tool into 26 focused, composable tools (click, screenshot, evaluate_script, get_network_requests, etc.).\n\nThat wasn't our only mistake. 
Our error messages were written for humans: \"Unable to navigate back in currently selected page.\" An agent reads that and... what? Does it retry? Give up? We rewrote them three times before agents could self-recover: \"Cannot navigate back, no previous page in history.\" Explicit. Actionable. Machine-parseable.\n\nThen came production reality. Real web pages make hundreds of network requests. We returned them all. Agents hit context limits and failed silently. We added pagination we hadn't planned for, learned token costs the hard way, and realized that token efficiency isn't an optimization—it's a core requirement.\n\nEvery agent developer hits these problems. The architecture patterns that work for human APIs break for agents.\n\nIf you're building MCP servers, REST APIs, or any interface agents will use, you'll face similar challenges:\n\n- **Architecture decisions**: Monolithic vs. composable tools, and when granularity becomes overhead\n- **Error recovery**: How to write error messages that enable agents to self-heal without human intervention\n- **Token efficiency**: Real costs, pagination strategies, and when to truncate vs. summarize\n- **Testing without user research**: How we learned from telemetry, failure patterns, and developer proxies\n- **MCP protocol choices**: Why we chose MCP, what it enabled, and where it constrained us\n\nThis talk shares specific implementation patterns from Chrome DevTools MCP—including the mistakes. I'll show code examples, error message transformations, and architecture decisions from our actual production system. Whether you're building MCP servers, or any agent-facing API, these patterns apply.","day":"April 10","time":"11:40am-12:00pm","room":"St. 
James","type":"talk","track":"MCP","speakers":["Michael Hablich"]}]},{"name":"Michael Richman","role":"Founder","company":"Cmd+Ctrl","twitter":"https://x.com/mrwoofster","linkedin":"https://www.linkedin.com/in/michael-richman-b7807b2/","github":"https://github.com/mrwoof","photoUrl":"https://ai.engineer/speakers/europe/michael-richman.jpg","sessions":[{"title":"Let’s Talk About FOMAT – Fear of Missing Agent Time","description":"You know FOMO – you also know FOMAT, you just didn’t have a name for it. Fear of Missing Agent Time. That nagging suspicion that the agent you kicked off 30 minutes ago is sitting there blocked, waiting for your input, while you’re off grabbing coffee.\n\nWe want our agents working and unblocked around the clock, but today we’re still chained to our desks to keep them humming along.\n\nThis talk introduces Cmd+Ctrl – a system that lets you unblock, monitor, and launch coding sessions from your phone, from your watch, from anywhere.\n\nLive demo included. Cmd+Ctrl is coding tool agnostic – it works with Claude Code, Cursor, Codex, Gemini CLI, GitHub Copilot, OpenCode, and more.","day":"April 10","time":"2:50-3:10pm","room":"Fleming","type":"talk","track":"Coding Agents","speakers":["Michael Richman"]}]},{"name":"Mike Christensen","role":"Staff Software Engineer","company":"Ably","twitter":"https://x.com/christensencode","linkedin":"https://www.linkedin.com/in/mikescottchristensen/","photoUrl":"https://ai.engineer/speakers/europe/mike-christensen.jpg","sessions":[{"title":"Why Your AI UX Is Broken (and It's Not the Model's Fault)","description":"AI interfaces are moving fast. The single-shot prompt box is already feeling dated. The products pulling ahead let users steer responses mid-stream, interrupt and redirect when the agent goes off track, pick up conversations across devices, send follow-up messages without waiting for the current response to finish, and hand off seamlessly between AI and human support. These aren't speculative features. 
They're already shipping in the best AI products, and users are starting to expect them everywhere.\n\nBuilding these experiences is hard, and not for the reasons you might think. The problem is not model intelligence or agent capabilities. It is that most AI sessions are ephemeral, tied to a single connection, device, or agent instance.\n\nMost AI apps stream responses over HTTP or SSE, a one-way pipe from server to client tied to a single connection. That works for streaming a single response to a single client on a single device. But the moment you want to interrupt a response, resume after a disconnect, or sync across devices, you are fighting the transport at every step. A user switches tabs, refreshes the page, or hits a network blip, and the in-progress response disappears.\n\nTeams end up building their own fragile plumbing - message buffering, replay logic, and state recovery - instead of shipping their actual product.\n\nAs teams push the limits of AI UX, a pattern is starting to emerge. The most advanced AI products treat the session itself as a durable, shared resource, independent of any single connection, device, or agent instance. Connections break. Devices come and go. The session persists, and anyone who joins catches up automatically.\n\nThis is what durable execution did for backend workflows. A similar shift is now happening at the experience layer.\n\nThis talk explores the UX capabilities defining the next generation of AI products: resumable streaming, live steering and interruption, multi-device continuity, concurrent interactions, and human-in-the-loop handoff. 
I will show how leading AI products use durable sessions to enable these experiences - with demos built on real code.\n\nYou will walk away with a clear picture of the UX bar being set right now, and practical patterns for building these experiences, regardless of which model, framework, or infrastructure you use.","day":"April 9","time":"2:50-3:10pm","room":"Westminster","type":"talk","track":"Harness Engineering","speakers":["Mike Christensen"]}]},{"name":"Mike Spitz","role":"CTO","company":"PFF","twitter":"https://x.com/mikespitz_uk","linkedin":"https://www.linkedin.com/in/mike-spitz-89741243/","photoUrl":"https://ai.engineer/speakers/europe/mike-spitz.jpg","sessions":[{"title":"Agents Don't Do Standups: Building the Post-Engineer Engineering Org","description":"For decades our engineering organisations have run on the same rituals designed for engineers. Standups, sprint planning/refinement, retros.\n\nWhat happens when humans are no longer the ones a company needs to cater for, but agents?\n\nWe ran a simple thought exercise at PFF: what happens when we stop asking \"how do we help engineers output more?\" and start asking \"how can we make the agents faster?\". Over the last few months, we dismantled our entire development lifecycle — CI/CD, code review, architecture, deployment — and rebuilt it around agent-first principles.\n\nOur results speak for themselves: 10x developer output, deploy cycles compressed from 5 days to multiple shipments per day, and customer satisfaction scores of 79%.\n\nThis talk is a field report from the post-engineer engineering org. I'll cover the framework we used to decide what to hand to agents, the guardrail architecture that replaced human review gates (layered agent evaluation, agent-on-agent review, and tiered human-in-the-loop for high-risk only), and the uncomfortable truth about what happened to team size.\n\nScrum didn't survive. Neither did most of the processes you're running today. 
I'll share exactly what replaced them, and the concrete metrics that prove it works. This is a framework that should inspire conversation on what needs to change at companies struggling to understand how to adopt AI in the new engineering world.\n\nYou'll leave with an opinionated, practical playbook: how to evaluate your own pipelines for agent-first redesign, what guardrail patterns actually work in production, and why the biggest mistake you can make right now is moving too slowly.","day":"April 10","time":"3:10-3:30pm","room":"Westminster","type":"talk","track":"AI Architects","speakers":["Mike Spitz"]}]},{"name":"Misha Kaletsky","role":"Founding Engineer","company":"Iterate","companyDescription":"AI agent platform for automating business processes","twitter":"https://x.com/mmkalmmkal","linkedin":"https://www.linkedin.com/in/mkaletsky/","github":"https://github.com/mmkal","photoUrl":"https://ai.engineer/speakers/europe/misha-kaletsky.jpg","sessions":[{"title":"Make your own event-sourced agent harness using stream processors","description":"https://github.com/iterate/ai-engineer-workshop\n\nThis workshop is in two parts:\n\nFirst, we'll take 10 minutes to show you how to make a basic agent harness on top of our humble durable streams-ish API. The only abstraction we'll need is a simple \"stream processor\", which consists of:\n1) Some state (e.g. \"message history\" and \"is an LLM request in progress\")\n\n2) A reducer that takes events and sometimes updates the state\n\n3) A hook / reactor that causes side effects (such as LLM requests) after an event is added to the stream\n\nThen we'll give you an hour to build the coolest possible thing with this. 
You can implement from scratch:\n- an agent that calls tools via codemode\n\n- an agent with multiple LLM calls in progress at once\n\n- agent-to-agent message capability\n\n- an agent that responds to Slack messages\n\n- an agent that is deployed as a serverless function and \"wakes up\" when new events arrive\n\nAnd much more!\n\nIt's kind of a wild abstraction that we started hacking on last week. Hitting all the buzzwords:\n- event sourced\n\n- serverless\n\n- agent harness\n\nCheck\n\nSecond, you'll get to implement an agent harness of your dreams. You can:\n\n- Implement code mode from scratch\n\n- Experiment with safety\n\nIt's a bit like making pi extensions, without trying to run them all in a single process.\n\nIn this workshop we'll show you how to build an AI agent like Claude or pi or Codex from scratch on top of this very simple abstraction.","day":"April 8","time":"10:40am-12:00pm","room":"Shelley","type":"workshop","speakers":["Misha Kaletsky","Jonas Templestein"]}]},{"name":"Neil Zeghidour","role":"CEO","company":"Gradium AI","twitter":"https://x.com/neilzegh","linkedin":"https://www.linkedin.com/in/neil-zeghidour-a838aaa7/","github":"https://github.com/neilz","photoUrl":"https://ai.engineer/speakers/europe/neil-zeghidour.jpg","sessions":[{"title":"Neil Zeghidour - Voice AI: when is the \"Her\" moment?","description":"As natural language becomes our primary interface with machines, voice is having a moment, from customer support to NPCs to “voicecoding.” Yet it’s still far from truly human. 
We’ll look back at how voice AI has evolved, the technical hurdles ahead, and where we’re taking the field next.","day":"April 9","time":"2:30-2:50pm","room":"Abbey","type":"talk","track":"Voice & Vision","speakers":["Neil Zeghidour"]}]},{"name":"Nicholas Kang","role":"Product Manager","company":"Google DeepMind","companyDescription":"AI research lab by Google","linkedin":"https://www.linkedin.com/in/nicholaskangjj","photoUrl":"https://ai.engineer/speakers/europe/nicholas-kang.jpg","sessions":[{"title":"Agentic Evaluations at Scale — For Everybody","description":"AI evaluations today are broken. They're decentralized, opaque, and in the hands of a few.\nBut AI should benefit all of humanity, and therefore evals should reflect the diversity of work that AI needs to undertake - and be transparent and reproducible so we can trust them.\nJoin us as we talk about how we're trying to solve these issues and the problems we're facing.","day":"April 9","time":"12:00-12:20pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Nicholas Kang","Michael Aaron"]}]},{"name":"Nick Nisi","role":"Developer Experience Engineer","company":"WorkOS","companyDescription":"Enterprise identity and access management","twitter":"https://x.com/nicknisi","linkedin":"https://linkedin.com/in/nicknisi","github":"https://github.com/nicknisi","photoUrl":"https://ai.engineer/speakers/europe/nick-nisi.jpg","sessions":[{"title":"Skills at Scale","description":"Write once, run in Claude, Codex, Cursor, and your own agents\n\nEvery developer using AI tools has the same problem: they prompt the same way, for the same tasks, over and over. Skills fix this. A skill is a portable unit of agent behavior that teaches any AI tool how to do a specific job. Write one, drop it into your editor, and it just works. Across tools. Across teams.\n\nMost people don't know this primitive exists. 
In this hands-on workshop, you'll write real skills, test them live, and see how one file can power Claude.ai, Claude Code, Cursor, and Codex without changing a line.\n\nThen we'll go deeper. You'll see how the WorkOS CLI uses this same pattern to power 15 framework integrations — each one a skill composed with others, wired into an agent that installs and configures AuthKit in under 60 seconds. That's not a demo. That's production code, shipping today.\n\nWhat you'll do:\n\nWrite 2+ skills for tasks you actually do at work\n\nInstall and test them across AI tools in real time\n\nLearn the craft of good skill writing — specificity, constraints, composability\n\nSee how skills compose and scale inside a real CLI powered by the Claude Agent SDK\n\nWhat you'll leave with:\n\nWorking skills installed in your AI tools, ready to use Monday morning\n\nA repeatable pattern for turning any recurring task into a portable skill\n\nThe mental model for when a skill is enough and when you need a full agent\n\nNo repos to clone. No dependencies to install. Bring a laptop with Claude Code or Claude.ai and something you're tired of doing manually.","day":"April 8","time":"10:40am-12:00pm","room":"Abbey","type":"workshop","speakers":["Nick Nisi","Zack Proser"]}]},{"name":"Nick Taylor","role":"Developer Advocate","company":"Pomerium","companyDescription":"Zero trust access control and identity-aware proxy","twitter":"https://x.com/nickytonline","linkedin":"https://www.linkedin.com/in/nickytonline/","github":"https://github.com/nickytonline","photoUrl":"https://ai.engineer/speakers/europe/nick-taylor.jpg","sessions":[{"title":"Claws Out: Securing and Building with OpenClaw","description":"Running OpenClaw without hardening access to it is a bad idea. 
We'll cover how I secured my OpenClaw instance, McClaw, contributed trusted-proxy auth mode to the OpenClaw project, and how I use it to build tools.\n\nWe're going to build something live during the talk using OpenClaw, the same way I built Clawspace, a browser-based file explorer/editor for your OpenClaw workspace.\n\nfeat(gateway): add trusted-proxy auth mode\ngithub.com/nickytonline/clawspace, a browser-based file explorer/editor for an OpenClaw workspace\ngithub.com/pomerium/pomerium, an open core Identity-Aware Proxy","day":"April 9","time":"12:40-1:00pm","room":"Fleming","type":"talk","track":"Claws & Personal Agents","speakers":["Nick Taylor"]}]},{"name":"Nico Albanese","role":"DX Engineer","company":"Vercel","companyDescription":"Frontend cloud platform","twitter":"https://x.com/nicoalbanese10","linkedin":"https://www.linkedin.com/in/nicoalbanese/","github":"https://github.com/nicoalbanese","photoUrl":"https://ai.engineer/speakers/europe/nico-albanese.jpg","sessions":[{"title":"AI SDK v6","day":"April 8","time":"9:00-10:20am","room":"Shelley","type":"workshop","speakers":["Nico Albanese"]}]},{"name":"Nitya Narasimhan","role":"Senior AI Advocate","company":"Microsoft","companyDescription":"Cloud and AI platform","twitter":"https://x.com/NityaNarasimhan","linkedin":"https://www.linkedin.com/in/nityan/","photoUrl":"https://ai.engineer/speakers/europe/nitya-narasimhan.jpg","sessions":[{"title":"Mind the Gap (In your Agent Observability)","day":"April 8","time":"9:00-10:20am","room":"Moore","type":"workshop","speakers":["Amy Boyd","Nitya Narasimhan"]}]},{"name":"Nuno Campos","role":"CTO & Co-Founder","company":"Witan Labs","twitter":"https://x.com/nfcampos","linkedin":"https://www.linkedin.com/in/nuno-f-campos/","github":"https://github.com/nfcampos","photoUrl":"https://ai.engineer/speakers/europe/nuno-campos.jpg","sessions":[{"title":"Teaching Coding Agents to do Spreadsheets","description":"Full content available here 
https://github.com/witanlabs/research-log","day":"April 9","time":"11:40am-12:00pm","room":"St. James","type":"talk","track":"Context Engineering","speakers":["Nuno Campos"]}]},{"name":"Oli Gaymond","role":"Product Manager","company":"Google DeepMind","companyDescription":"AI research lab by Google","linkedin":"https://linkedin.com/in/ogaymond","photoUrl":"https://ai.engineer/speakers/europe/oli-gaymond.jpg","sessions":[{"title":"AI on Android: Ask me Anything","day":"April 9","time":"12:20pm-12:40pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Florina Muntenescu","Oli Gaymond"]}]},{"name":"Omar Sanseviero","role":"DevX Lead","company":"Google DeepMind","companyDescription":"AI research lab","twitter":"https://x.com/osanseviero","linkedin":"https://www.linkedin.com/in/omarsanseviero/","github":"https://github.com/osanseviero","photoUrl":"https://ai.engineer/speakers/europe/omar-sanseviero.jpg","sessions":[{"title":"Gemma, DeepMind's Family of Open Models","description":"Google DeepMind’s Gemma family is expanding. Join us for a deep dive into the latest models of the Gemma ecosystem. From vibe fine-tuning to Sovereign AI, you'll learn about the latest model capabilities, how to build high-performance applications, and how to get started with open models.","day":"April 10","time":"9:00-9:20am","room":"Keynote","type":"keynote","speakers":["Omar Sanseviero"]}]},{"name":"Onur Solmaz","role":"OpenClaw Maintainer","company":"OpenClaw","companyDescription":"Open-source personal AI assistant platform","twitter":"https://x.com/onusoz","linkedin":"https://www.linkedin.com/in/osolmaz/","github":"https://github.com/osolmaz","photoUrl":"https://ai.engineer/speakers/europe/onur-solmaz.jpg","sessions":[{"title":"Scaling Agents on Kubernetes with acpx and ACP","description":"What happens when you stop using one coding agent at a time and start running dozens in parallel on Kubernetes? 
Onur Solmaz, maintainer of OpenClaw and creator of acpx, walks through the architecture behind massively parallel PR triage and bug fixing using disposable agent pods. He introduces acpx — a headless CLI client for the Agent Client Protocol (ACP) — and shows how it replaces brittle PTY scraping with structured, protocol-driven communication between agents. The talk covers how ACP enables harness interoperability in OpenClaw, letting users drive Claude Code, Codex, and other coding agents through a single unified interface — whether from a terminal, Telegram, or Discord. Expect a live demo of agentic workflows orchestrated through acpx v0.4's node-based graph system, where deterministic steps drive multiple coding agents through routine engineering tasks at scale.","day":"April 9","time":"2:30-2:50pm","room":"Fleming","type":"talk","track":"Claws & Personal Agents","speakers":["Onur Solmaz"]}]},{"name":"Oussama Hafferssas","role":"Trainline","company":"Trainline","photoUrl":"https://ai.engineer/speakers/europe/oussama-hafferssas.jpg","sessions":[{"title":"Shipping complex AI applications | Braintrust & Trainline","description":"Getting a prototype working is straightforward. Making it reliable in production, especially with multi-step agents, tool use, and real users is the hard part. 
In this hands-on workshop, you'll work through the core parts of building production-grade AI applications with Braintrust.","day":"April 8","time":"1:15pm-3:15pm","room":"Westminster","type":"workshop","speakers":["Giran Moodley","Mayan Soni","Oussama Hafferssas"]}]},{"name":"Paige Bailey","role":"AI Developer Relations Lead","company":"Google DeepMind","companyDescription":"AI research lab by Google","twitter":"https://x.com/DynamicWebPaige","linkedin":"https://linkedin.com/in/dynamicwebpaige","github":"https://github.com/dynamicwebpaige","photoUrl":"https://ai.engineer/speakers/europe/paige-bailey.jpg","sessions":[{"title":"Build & deploy AI-powered apps","day":"April 8","time":"3:45pm-5:45pm","room":"Rutherford","type":"workshop","speakers":["Paige Bailey"]},{"title":"Build & deploy AI-powered apps","description":"Got a massive idea but stuck in the \"just talking about it\" phase? Let’s fix that. In this session, we’re cutting the fluff and diving straight into how to actually build and prototype at lightning speed using AI Studio Build and Antigravity - for 100% free.\nWe’ll be breaking down Google DeepMind's AI tech stack so you know exactly what tools to grab. You'll learn when to use heavyweights like Gemini 3.1 Pro or the brand-new Gemma 4 (huge shoutout to it being fully open and Apache 2.0 licensed 🔓), and when to keep things hyper-fast with Gemini 3 Flash and Flash-Lite. Plus, we’ll be messing around with Veo 3.1 Lite for video generation, NanoBanana 2, Lyria 3 for music generation, Genie 3 for world model building, and OpenClaw with Gemini to see what happens when you really push the limits of your prototypes.\n\nTL;DR: basically zero slides, more shipping. I'll be running live demos to show you how to take your side quests from \"just an idea\" to a fully working prototype, show you how to add new features to existing codebases, and we'll save time for a live Q&A to troubleshoot your builds and ideas.\n\nLet's build! 
🚀","day":"April 9","time":"11:40-12:00pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Paige Bailey"]}]},{"name":"Patrick Debois","role":"Creator of DevOps","company":"Tessl","twitter":"https://x.com/patrickdebois","linkedin":"https://www.linkedin.com/in/patrickdebois/","photoUrl":"https://ai.engineer/speakers/europe/patrick-debois.jpg","sessions":[{"title":"Context Is the New Code","description":"We version control code, review it, test it, and observe it in production. We spent two decades building rigorous lifecycles around it.\n\nNow look at how we treat the context that drives AI coding agents: rules files copy-pasted from blog posts, prompts edited by hand, memory nobody audits. We’re in the cowboy coding era of context.\n\nIf context is the primary lever determining what agents produce, it deserves the same engineering rigor we give code. The Context Development Lifecycle (Generate, Evaluate, Distribute, Observe) gives us the stages. The process practices wrap around it: version control, peer review, CI/CD pipelines, and the team workflows to make context a shared engineering responsibility.\n\nThen there’s the bigger picture: the context flywheel. As agents consume context and produce results, every observation feeds back into better context, which produces better results. The teams that get this loop spinning build a compounding advantage that becomes their moat.\n\nThis is not a solved problem. 
It’s a journey we’ve already started, and if the DevOps transition taught us anything, the teams that figure out the lifecycle first will pull ahead fast.","day":"April 10","time":"11:15-11:40am","room":"Westminster","type":"track_keynote","track":"AI Architects","speakers":["Patrick Debois"]}]},{"name":"Patrick Löber","role":"Developer Relations Engineer","company":"Google DeepMind","companyDescription":"AI research lab by Google","twitter":"https://x.com/patloeber","linkedin":"https://linkedin.com/in/patrick-l%C3%B6ber-403022137","github":"https://github.com/patrickloeber","photoUrl":"https://ai.engineer/speakers/europe/patrick-loeber.jpg","sessions":[{"title":"Any-to-Any: Building Native Multimodal Agents","day":"April 9","time":"2:50-3:10pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Patrick Löber"]}]},{"name":"Paul Iusztin","role":"Senior AI Engineer & Educator","company":"Decoding AI","companyDescription":"AI education and consulting","linkedin":"https://www.linkedin.com/in/pauliusztin","github":"https://github.com/iusztinpaul","photoUrl":"https://ai.engineer/speakers/europe/paul-iusztin.jpg","sessions":[{"title":"Build Your Own Deep Research Agent + Technical Writer","description":"Deep research is one of the best ways to learn how to build real AI systems because it forces you to combine reasoning, planning, autonomy, tools, grounding, and feedback loops in a single end-to-end workflow. In this hands-on workshop, you will build an MCP-powered deep research agent that can plan a research strategy, search the web, analyze YouTube videos, gather grounded evidence, filter for relevance and trustworthiness, and synthesize its findings into a cited research artifact. 
Rather than treating research as just another chatbot interaction, we will frame it as a goal-directed research loop: one that can search, inspect, pivot, and progressively refine its understanding of a topic.\n\nFrom there, we will connect that research artifact to a lightweight technical writing workflow that turns raw findings into polished, non-sloppy technical multimodal content. This second part of the system is deliberately more constrained: you will see how research and writing require very different architectures, why exploratory work benefits from agentic behavior, and why writing quality often improves with tighter workflows, review loops, and explicit guidance. Along the way, we will show how to choose between prompts, workflows, and agents depending on the task, and how to keep the overall system practical rather than over-engineered.\n\nWe will also cover observability and evaluation so the system is not only impressive in a demo, but measurable and improvable in practice. Most importantly, the workshop is grounded in experience: it distills what we learned over the past year building and using this research-and-writing pipeline internally. Attendees will leave with their own deep research agent connected to a reliable technical writing workflow, and an understanding of the engineering tradeoffs behind both.","day":"April 8","time":"1:15pm-3:15pm","room":"St. James","type":"workshop","speakers":["Louis-François Bouchard","Paul Iusztin","Samridhi Vaid"]}]},{"name":"Pedro Rodrigues","role":"AI Tooling Engineer","company":"Supabase","companyDescription":"Open source Firebase alternative","twitter":"https://x.com/rodriguespn23","linkedin":"https://www.linkedin.com/in/pedro-neves-rodrigues/","github":"https://github.com/Rodriguespn","photoUrl":"https://ai.engineer/speakers/europe/pedro-rodrigues.jpg","sessions":[{"title":"Skill Issue: How We Used AI to Make Agents Actually Good at Supabase","description":"Writing Agent Skills is easy. 
Writing ones that actually improve agent performance is not.\n\nIn this hands-on workshop, you’ll build, test, and iterate on Agent Skills against real Supabase workflows using a prebuilt environment with MCP, CLI tooling, and an eval harness powered by Braintrust.\n\nYou’ll start by writing a simple Skill and observing how it changes agent behavior. Then we’ll push further: you’ll modify the Skill, introduce bad patterns, and see how performance shifts — sometimes improving, sometimes getting worse, and sometimes doing nothing at all. Along the way, we’ll surface common failure modes, like Skills that aren’t used, misleading instructions, or changes that look good but don’t hold up under evaluation.\n\nThe core loop of the workshop is simple: write a Skill, run evals, inspect results, and iterate. By the end, you’ll have a practical understanding of how to validate Skills, how to avoid common pitfalls, and how to design Skills that actually help agents perform better in real systems.\n\nIf you’re working with agents, this workshop will give you the tools to move beyond guesswork and start measuring what actually works.\n\nAnd if you want to see how these patterns hold up at scale, the follow-up talk on the 9th dives into our eval results and what actually moved the needle in production.","day":"April 8","time":"9:00-10:20am","room":"Abbey","type":"workshop","speakers":["Pedro Rodrigues"]},{"title":"Combine Skills and MCP to Close the Context Gap","description":"Agents don’t fail because they’re weak — they fail because they lack the right context.\n\nEven when working with something as well-known as PostgreSQL, agents regularly produce insecure queries, inefficient patterns, or incorrect migrations. 
Not because they can’t reason, but because their knowledge is outdated, generic, or misaligned with the specific environment they’re operating in.\n\nIn this talk, we explore how Agent Skills and MCP work together to close that gap.\n\nUsing real-world Postgres workflows — from writing secure RLS policies to debugging slow queries and fixing broken migrations — we’ll show what actually breaks when agents operate without structured context, and what changes when you introduce the right abstractions. MCP provides the safe, auditable interface to interact with the system, while Agent Skills inject domain-specific guidance tailored to the actual environment, including Supabase-specific patterns and constraints.\n\nBut the results aren’t as straightforward as “more context = better agents.” We’ll share findings from our internal benchmarks comparing agent performance across different setups, showing that Skills don’t replace MCP — they amplify it — and that poorly designed or untested Skills can have little impact or even degrade performance.\n\nWe’ll also dig into how we structure Skills for real systems (not just isolated tasks), and how we use evals to measure their impact across realistic workflows. You’ll see how different combinations of Skills, MCP, and CLI-based interaction affect agent behavior, reliability, and outcomes.\n\nBecause once agents touch production systems like Postgres, the problem isn’t intelligence — it’s having the right context, delivered in the right way.","day":"April 9","time":"3:10-3:30pm","room":"St. James","type":"talk","track":"Context Engineering","speakers":["Pedro Rodrigues"]}]},{"name":"Peter Gostev","role":"AI Capability Lead","company":"Arena.ai","twitter":"https://x.com/petergostev","linkedin":"https://www.linkedin.com/in/peter-gostev/","photoUrl":"https://ai.engineer/speakers/europe/peter-gostev.jpg","sessions":[{"title":"What Do Models Still Suck At?","description":"What type of real world model responses do users still hate? 
We get to see millions of users' prompts - and we let users 'dislike both' on the Arena. We'll show you trends and examples of the tasks that LLMs still suck at despite the relentless hillclimbing.","day":"April 10","time":"5:30-5:50pm","room":"Keynote","type":"keynote","speakers":["Peter Gostev"]}]},{"name":"Peter Steinberger","role":"OpenClaw","company":"OpenAI","companyDescription":"AI research and deployment company","twitter":"https://x.com/steipete","linkedin":"https://www.linkedin.com/in/steipete/","photoUrl":"https://ai.engineer/speakers/europe/peter-steinberger.jpg","sessions":[{"title":"OpenClaw update","day":"April 9","time":"10:10-10:30am","room":"Keynote","type":"keynote","speakers":["Peter Steinberger"]},{"title":"OpenClaw AMA","day":"April 9","time":"11:15-11:40am","room":"Fleming","type":"track_keynote","track":"Claws & Personal Agents","speakers":["Peter Steinberger","swyx"]}]},{"name":"Peter Werry","role":"Engineer","company":"Unblocked","companyDescription":"AI context layer for developer productivity","photoUrl":"https://ai.engineer/speakers/europe/peter-werry.jpg","sessions":[{"title":"Mergeable by default: Building the context engine to save time and tokens","description":"Agents can generate code. The hard part is generating code that's right for your system, team conventions, and past decisions. That's a context problem that naive RAG, MCP servers, and bigger context windows don't solve. Without the right context, that code costs you twice: once in tokens, again in long review cycles.\n\nThis session is a practitioner's guide to building a context engine: the reasoning layer that brings together your organizational context and delivers only what the agent needs for the task at hand. I'll walk through the challenges that matter: reasoning across conflicting sources, maintaining permissions, and personalizing results based on who's asking and what they're working on. 
Along the way, we'll go deep on specific components with live demos and technical breakdowns.\n\nDrawn from real lessons building this in production, including what we got wrong.","day":"April 8","time":"1:15pm-3:15pm","room":"Wordsworth","type":"workshop","speakers":["Peter Werry"]}]},{"name":"Phil Hetzel","role":"Head of Solutions Engineering","company":"Braintrust","companyDescription":"AI evaluation and observability platform","photoUrl":"https://ai.engineer/speakers/europe/phil-hetzel.jpg","sessions":[{"title":"Why building eval platforms is hard","description":"An eval platform is not just a test runner. You are building shared definitions of \"good,\" reliable data pipelines, labelling workflows, versioning, and trust in results across many teams and model changes. This session breaks down the hidden complexity, the common failure modes, and the design principles that make evals credible and usable in day-to-day engineering.","day":"April 9","time":"11:15-11:40am","room":"Moore","type":"track_keynote","track":"Evals & Observability","speakers":["Phil Hetzel"]},{"title":"The maturity phases of running evals","description":"Most teams start with ad hoc spot-checks and end up needing a repeatable system that prevents regressions. This session walks through the maturity curve from manual examples, to automated test sets, to continuous evaluation tied to releases and real user traffic. You will leave with a clear model for what to do next at each stage, and what to avoid as you scale.","day":"April 9","time":"1:05-1:23pm","room":"Wordsworth","type":"expo_session","track":"Expo Sessions (Wordsworth)","speakers":["Phil Hetzel"]},{"title":"How agent o11y differs from traditional o11y","description":"Traditional observability tells you if a system is up and where it failed, but not whether it's actually delivering value to users. The best agent observability tools create a \"flywheel\" that connects observability and evals into a continuous improvement loop. 
This talk will show how agent observability drives not just uptime, but real agent quality, and how both technical and non-technical teams can power the flywheel together.","day":"April 9","time":"3:45-4:03pm","room":"Shelley","type":"expo_session","track":"Expo Sessions (Shelley)","speakers":["Phil Hetzel"]},{"title":"Does GenAI \"belong\" to data scientists?","description":"GenAI work sits at the intersection of modelling, product, and software engineering, and many orgs are still arguing about ownership. This session explores what data scientists uniquely contribute, what has shifted toward application engineering, and how teams can split responsibilities without creating bottlenecks. The goal is a practical operating model that ships features faster while improving quality and safety.","day":"April 10","time":"10:30-10:48am","room":"Shelley","type":"expo_session","track":"Expo Sessions (Shelley)","speakers":["Phil Hetzel"]}]},{"name":"Philipp Schmid","role":"Staff Engineer DevX","company":"Google DeepMind","companyDescription":"AI research lab by Google","twitter":"https://x.com/_philschmid","linkedin":"https://www.linkedin.com/in/philipp-schmid-a6a2bb196/","github":"https://github.com/philschmid","photoUrl":"https://ai.engineer/speakers/europe/philipp-schmid.jpg","sessions":[{"title":"Building Conversational Agents","day":"April 8","time":"1:15pm-3:15pm","room":"Rutherford","type":"workshop","speakers":["Thor Schaeff","Philipp Schmid"]},{"title":"Why (Senior) Engineers Struggle to Build AI Agents","day":"April 9","time":"12:10-12:20pm","room":"Westminster","type":"lightning","track":"Harness Engineering","speakers":["Philipp Schmid"]}]},{"name":"Prince Canuma","role":"ML Research Engineer","company":"Arcee.ai","twitter":"https://x.com/Prince_Canuma","linkedin":"https://pl.linkedin.com/in/prince-canuma","github":"https://github.com/Blaizzy","photoUrl":"https://ai.engineer/speakers/europe/prince-canuma.jpg","sessions":[{"title":"MLX Genmedia","day":"April 
10","time":"11:40am-12:00pm","room":"Moore","type":"talk","track":"Generative Media","speakers":["Prince Canuma"]}]},{"name":"Priscila Andre de Oliveira","role":"Senior Software Engineer","company":"Sentry","sessions":[{"title":"Comprehend First, Code Later: The AI Skill I Rely On Daily","description":"Literally everyone is vibe coding. It's about letting AI write, commit, and ship code you never even read. Perfect for prototypes and side projects - no argument there. But what happens when you're working in a million-line codebase where you need to understand before you change? Quality code still matters.\n\nIt is widely stated in the software development community that developers spend 70–80% of their time reading and understanding existing code, not writing new code. We now have an incredibly smart tool - so why not use it for exactly that?\n\nSo I went straight to the data - 239 of my own messages from daily work at Sentry. What I found flipped the narrative: my #1 use of AI wasn't generation. It was comprehension. 
Whether navigating unfamiliar code or reconstructing past decisions from commit history - AI became the teammate who never gets tired of my questions.\n\nIn this talk, I'll show you the loop that actually makes me productive in a large, complex codebase: understand first, then code.","day":"April 10","time":"1:45-2:03pm","room":"Shelley","type":"expo_session","track":"Expo Sessions (Shelley)","speakers":["Priscila Andre de Oliveira"]}]},{"name":"Radek Sienkiewicz","role":"Owner","company":"VelvetShark.com","twitter":"https://x.com/velvet_shark","linkedin":"https://www.linkedin.com/in/radeksienkiewicz/","github":"https://github.com/velvetshark","photoUrl":"https://ai.engineer/speakers/europe/radek-sienkiewicz.jpg","sessions":[{"title":"I Gave an AI Agent the Keys to My Life (Here's What Happened)","description":"My AI agent reads my email, backs up its own memory and configuration at 2am, monitors its own health, and drafts replies to business proposals. I didn't plan any of this. It happened one permission at a time.\n\nThis talk is an honest walkthrough of what it actually looks like to live with a personal AI agent running around the clock. Not a demo of what's possible but a story of what happened.\n\nI'll cover:\n\n- The permission creep timeline: from \"read my files\" to \"handle my communications\" over several months\n- What the agent does at 3am: the real cron ecosystem, what I wake up to, what it produces overnight\n- How it monitors itself, recovers from failures, updates itself\n- The trust that emerged: what it can do without asking, what requires permission, how that boundary gets enforced\n- Why I gave my agent a personality file that tells it to disagree with me (this helps a lot!)\n\nThis talk is for anyone building or thinking about building personal AI agents. No theory, no frameworks. Just real lessons from months of running one in production against my actual life. 
It's still (and always) a work in progress.","day":"April 9","time":"12:00-12:20pm","room":"Fleming","type":"talk","track":"Claws & Personal Agents","speakers":["Radek Sienkiewicz"]}]},{"name":"Rafael Levi","role":"Developer Relations","company":"Bright Data","sessions":[{"title":"Your Agent's Biggest Lie: \"I Searched the Web\"","description":"AI agents get blocked by anti-bot systems, served fake pages, and hit with CAPTCHAs - but report back as if everything worked fine. I'll walk through why this happens and demo the same agent on the same sites with and without Bright Data's Web MCP.\n\nTalking points:\n- Cloudflare AI Labyrinth actively feeds fake content to AI agents\n- Cloudflare blocks AI crawlers by default on 20% of the web\n- Columbia/Tow Center: 60%+ citation failure across 8 AI search engines\n- AWS built an IETF protocol (Web Bot Auth) to address the problem\n- Live demo: same agent, same sites - blocked vs clean data\n- How Bright Data's Web MCP handles anti-bot, CAPTCHAs, and JS rendering transparently","day":"April 9","time":"10:50-11:08am","room":"Shelley","type":"expo_session","track":"Expo Sessions (Shelley)","speakers":["Rafael Levi"]},{"title":"From MCP to Scale: Pipelines That Build Themselves","description":"An agent with Bright Data's MCP can do more than fetch data - it can build and maintain entire data pipelines. I'll show an agent that uses MCP to explore a target site, understand its structure, then autonomously writes a production script using Bright Data's APIs for scale processing. The site changes? The agent updates the pipeline. 
No human scraper maintenance.\n\nTalking points:\n- The agent as pipeline builder: MCP to explore and understand, API scripts for scale execution\n- Pre-recorded demo: agent explores site via MCP, detects structure, handles JS rendering\n- Agent writes production code using Bright Data's APIs for batch processing at scale\n- Agent maintains the pipeline - when the site changes, the agent fixes it\n- Bright Data as the complete toolkit the agent needs: MCP for intelligence, APIs for execution","day":"April 9","time":"1:25-1:43pm","room":"Shelley","type":"expo_session","track":"Expo Sessions (Shelley)","speakers":["Rafael Levi"]},{"title":"Your Agent's Biggest Lie: \"I Searched the Web\"","description":"AI agents get blocked by anti-bot systems, served fake pages, and hit with CAPTCHAs - but report back as if everything worked fine. I'll walk through why this happens and demo the same agent on the same sites with and without Bright Data's Web MCP.\n\nTalking points:\n- Cloudflare AI Labyrinth actively feeds fake content to AI agents\n- Cloudflare blocks AI crawlers by default on 20% of the web\n- Columbia/Tow Center: 60%+ citation failure across 8 AI search engines\n- AWS built an IETF protocol (Web Bot Auth) to address the problem\n- Live demo: same agent, same sites - blocked vs clean data\n- How Bright Data's Web MCP handles anti-bot, CAPTCHAs, and JS rendering transparently","day":"April 10","time":"3:45-4:03pm","room":"Wesley","type":"expo_session","track":"Expo Sessions (Wesley)","speakers":["Rafael Levi"]}]},{"name":"Raia Hadsell","role":"VP of Research","company":"Google DeepMind","companyDescription":"Frontier AI research lab","linkedin":"https://uk.linkedin.com/in/raia-hadsell-35400266","github":"https://github.com/raiah","photoUrl":"https://ai.engineer/speakers/europe/raia-hadsell.jpg","sessions":[{"title":"Frontier AI and the Future of Intelligence","day":"April 9","time":"9:30-9:50am","room":"Keynote","type":"keynote","speakers":["Raia 
Hadsell"]}]},{"name":"Raj Navakoti","role":"Staff Software Engineer","company":"IKEA","companyDescription":"Global home furnishings retailer","linkedin":"https://www.linkedin.com/in/raj-navakoti-529880b1/","photoUrl":"https://ai.engineer/speakers/europe/raj-navakoti.jpg","sessions":[{"title":"Build Your First Demand-Driven Context Base: Let AI Agents Tell You What They Need","description":"London's black cab drivers don't learn \"The Knowledge\" by reading a map. An examiner gives them a destination, they get lost, they discover which streets they didn't know, they go learn those streets. Each run fills gaps the previous one revealed — until they know the city. Nobody told them what to learn. The journeys told them.\n\nThat's the opposite of what every enterprise team is doing with AI agents right now. We curate tribal knowledge into skills.md files and structured knowledge bases, three people argue over a Miro board about what the agent needs to know, someone screenshots it into Confluence, and we wonder why the agent still can't reason about anything domain-specific. The industry has great tools for context management — RAG, vector databases, prompt engineering. But the harder unsolved problem is context discovery: how do you even know what to curate? Without solving it, your enterprise agent is just an expensive autocomplete.\n\nIn this hands-on workshop, you'll do what the cab drivers do — and what we do at IKEA Digital — give your agent destinations instead of maps. 
You'll build a Demand-Driven Context (DDC) base from scratch, where real problems drive what gets curated, not top-down guessing.\n\nThe exercise is simple but the insight is profound:\n\nYou'll get a realistic enterprise problem (we provide problem cards)\nYou'll give it to an AI agent with zero domain context\nThe agent will fail — and generate an information checklist of exactly what's missing\nYou'll fill the gaps using reference material we provide\nThe agent will try again — and succeed\nThat moment — when the agent goes from confidently wrong to correctly reasoned — is when DDC clicks. You didn't document everything. You documented exactly what one problem demanded. Now multiply that by 30 problems and you have a better knowledge base than months of top-down curation.\n\nWhat you'll build:\n\nA working knowledge base repo with structured domain entities, a sandbox for problem exploration, and a repeatable process for growing the knowledge base problem-by-problem. Everything in Markdown — human-readable, machine-parseable, Git-friendly. You'll also see how to use Claude Code sub-agents to separate concerns — a curator agent that identifies what context is missing, a solver agent that uses the curated context to reason, and role-specific agents (architect, engineer, product owner) that share the same knowledge base but operate with different reasoning boundaries. 
No custom tooling required — just CLAUDE.md files, sub-agent task delegation, and the knowledge repo structure doing the heavy lifting.\n\nWhat you'll experience:\n\nThe \"flip\" moment — agents telling you what they need instead of you guessing what to give them\nHow learning paths emerge from problems rather than being designed upfront\nHow different problems reveal overlapping context needs — showing you where to invest curation effort\nThe TDD parallel — DDC is to knowledge bases what TDD is to code\nWhat you'll see from real production use:\n\nWe've been running DDC at IKEA Digital against real vendor integration problems, architecture decisions, and system design tasks. The workshop includes a live walkthrough of actual DDC sessions — real enterprise problems, the information checklists agents generated, how context was curated, and the before/after of agent output quality. You'll see real numbers: knowledge base growth curves, context reuse across problems, and how agent accuracy improves from problem 1 to problem N.\n\nWhat you'll leave with:\n\nA working DDC knowledge base you can extend with your own domain\nA repeatable process for demand-driven curation\nA template repo with structure, formats, and agent guidance\nEvidence from real production use that demand-driven curation produces less volume but more signal than top-down documentation\nThe conviction that 30 real problems beat 6 months of documentation\nNo specific programming language required. Bring a laptop with Claude Code, Cursor, or any LLM-powered coding tool. 
All content is in Markdown — this is about knowledge, not code.\n\nWhether you're an engineer building enterprise agents, an architect designing knowledge systems, or a team lead who's tried and failed to curate domain knowledge for LLMs — this workshop gives you a framework you can apply the week you get home from London.","day":"April 8","time":"3:45pm-5:45pm","room":"Shelley","type":"workshop","speakers":["Raj Navakoti"]}]},{"name":"RL Nabors","role":"Developer Experience Engineer","company":"Dressed for Space","twitter":"https://x.com/nearestnabors","linkedin":"https://www.linkedin.com/in/nearestnabors","github":"https://github.com/rachelnabors","photoUrl":"https://ai.engineer/speakers/europe/rl-nabors.jpg","sessions":[{"title":"Your Agent Is an Infinite Canvas","description":"Right now, most agents interact with users in text/markdown. Chat has been heralded as the one-size-fits-all UI of the future. But we’ve had an interface like this before that was never adopted by the mainstream: the terminal.\n\nProgrammers can type out their intentions, but “point and grunt” interfaces win with end users. \n\nFortunately, we solved rich interactive UI thirty years ago with CSS, HTML, and JavaScript. These technologies are well-rounded and feature-complete, and thanks to MCP Apps and WebMCP, your agent just inherited the entire web platform as its rendering surface.\n\nMCP Apps embed full HTML, CSS, and JavaScript inside agent interfaces, not as a hack, but as a standard. WebMCP, a W3C community incubation led by the Chrome and Edge teams, is bringing structured tool calling directly into the browser. Together, they turn your agent from a text-in-text-out pipe into an infinite canvas: interactive forms, data visualizations, image galleries, approval workflows. Anything the web can render, your agent can now surface.\n\nThis talk demos the full stack live. 
You'll watch me browse, search, and read a 23-year-old web comic archive entirely through an agent: no browser tabs, no screen scraping, no DOM wrangling. \n\nWhat you'll walk away with:\n\n- How MCP Apps and WebMCP actually work (iframes, postMessage, and a W3C spec you should be tracking)\n- When to return text vs. structured data vs. rendered UI from your MCP server\n- How to make existing web content agent-native without a rewrite\n- Why the web platform is the most powerful and underused primitive in the MCP ecosystem","day":"April 10","time":"2:50-3:10pm","room":"St. James","type":"talk","track":"MCP","speakers":["RL Nabors"]}]},{"name":"Ruben Casas","role":"Staff Engineer","company":"Postman","companyDescription":"API development platform","twitter":"https://x.com/Infoxicador","linkedin":"https://www.linkedin.com/in/ruben-casas-17100383/","photoUrl":"https://ai.engineer/speakers/europe/ruben-casas.jpg","sessions":[{"title":"Beyond Components: Designing Generative UI for MCP Apps","description":"Most early interactions with large language models were limited to plain text. As agent systems evolved, chat interfaces gained richer interactions from tool-driven components to fully embedded third-party MCP applications.\n\nWhile these advances significantly improve usability, they still rely on a familiar paradigm: invoking pre-built, static UI components. Meanwhile, modern code-oriented models have become remarkably capable of generating high-quality frontend code in real time. 
Together, these trends unlock a new design space: generative user interfaces that are created dynamically, personalised per interaction, and still predictable, performant, and trustworthy.\n\nIn this talk, we’ll explore how MCP applications enable this shift, using concrete examples and live demos to map the emerging spectrum of generative UI patterns, including:\n\n* Static UI templates exposed through MCP tools and resources\n* Declarative and configuration-driven interfaces assembled at runtime\n* Fully generative interfaces authored on the fly by LLMs enabled by the “reverse agent” and sampling\n\nWe’ll discuss the trade-offs of each approach, when they make sense in production systems, and how MCP Apps helps balance flexibility with control. Attendees will leave with a practical framework for thinking about generative UI in MCP and a clearer sense of how to move beyond static components without sacrificing reliability or user trust.","day":"April 10","time":"12:40-1:00pm","room":"St. 
James","type":"talk","track":"MCP","speakers":["Ruben Casas"]}]},{"name":"Ryan Lopopolo","role":"Member of Technical Staff","company":"OpenAI","companyDescription":"AI research and deployment company","twitter":"https://x.com/_lopopolo","linkedin":"https://www.linkedin.com/in/ryanlopopolo/","github":"https://github.com/lopopolo","photoUrl":"https://ai.engineer/speakers/europe/ryan-lopopolo.jpg","sessions":[{"title":"Harness Engineering: How to Build Software When Humans Steer and Agents Execute","day":"April 9","time":"9:50-10:10am","room":"Keynote","type":"keynote","speakers":["Ryan Lopopolo"]},{"title":"OpenAI Symphony & Harness Engineering AMA","day":"April 9","time":"11:15-11:40am","room":"Westminster","type":"track_keynote","track":"Harness Engineering","speakers":["Ryan Lopopolo","Vibhu Sapra"]}]},{"name":"Sally Ann O'Malley","role":"Principal Software Engineer","company":"Red Hat","linkedin":"https://www.linkedin.com/in/sally-ann-omalley/","photoUrl":"https://ai.engineer/speakers/europe/sally-ann-omalley.jpg","sessions":[{"title":"Lobster Trap: OpenClaw in Containers from Local to K8s and Back","description":"You’ve spent hours building the right AI agent setup: the right model, the right tools and skills, AGENTS.md with your team’s conventions, and the workflows that actually work. Then your teammates want the same thing. Do you hand them a pile of markdown, config files, and YAML, or do you give them a working baseline?\n\nIn this demo, I’ll take OpenClaw from a stock local Podman setup to a deployment with a curated agent bundle, then lift that same baseline into Kubernetes. Along the way, I’ll show a pragmatic secret story: Podman secrets for local development, Vault for cluster deployments, and the same agent baseline moving between both.\n\nThe payoff is simple: your best engineer’s setup just became your team standard. 
Containers make that setup reproducible, Kubernetes makes it distributable, and “back to local” means the baseline is a starting point, not a cage.\n\nEverything runs on a laptop with local containers and a Kind cluster, and you’ll leave with a reproducible tutorial repo.","day":"April 9","time":"12:20-12:40pm","room":"Fleming","type":"talk","track":"Claws & Personal Agents","speakers":["Sally Ann O'Malley"]}]},{"name":"Sally-Ann Delucia","role":"Director, Product","company":"Arize","companyDescription":"AI observability platform","linkedin":"https://www.linkedin.com/in/sallyann-delucia-59a381172/","photoUrl":"https://ai.engineer/speakers/europe/sally-ann-delucia.jpg","sessions":[{"title":"Hierarchical Memory: Context Management in Agents","description":"An emerging pattern is clear: the best agent memory systems aren't built top-down; they emerge bottom-up from composable primitives. Whether it's grep piping to sort, table previews pointing to full spans, or databases feeding file systems, the winning architecture is always a hierarchy of tools that agents can chain together. The Unix philosophy – small, focused tools that do one thing well and compose infinitely – turns out to be exactly what LLMs need to make 200k tokens feel like 200 trillion. Fifty years later, the lessons still hold: make memory feel infinite by making it hierarchical, and make it hierarchical by making it composable. In this presentation, we’ll outline new findings/data on AI memory from working on thousands of deployed agents.","day":"April 9","time":"12:00-12:20pm","room":"St. 
James","type":"talk","track":"Context Engineering","speakers":["Sally-Ann Delucia"]}]},{"name":"Sam Morrow","role":"Senior Software Engineer","company":"GitHub","linkedin":"https://www.linkedin.com/in/sammorrow","github":"https://github.com/SamMorrowDrums","photoUrl":"https://ai.engineer/speakers/europe/sam-morrow.jpg","sessions":[{"title":"Lessons from Scaling GitHub's Remote MCP Server","description":"GitHub operates one of the most heavily-utilised MCP servers in the ecosystem, with over 4 million downloads of the stdio server alone. \n\nDiscover the architectural decisions, technical challenges and lessons learned while building and scaling a remote MCP server on production infrastructure.\n\nThe session walks through the journey from initial implementation to horizontal scaling, covering the specific challenges of condensing a platform as expansive as GitHub into a coherent MCP interface. Attendees will learn practical strategies for managing tool overload, optimizing context usage, implementing distributed session storage, and maintaining observability without compromising user privacy.\n\nWhether building a first remote server or optimizing an existing implementation, attendees will gain concrete patterns, anti-patterns, and architectural guidance from real production experience.\n\nKey Takeaways\n\t•\tArchitecture patterns for stateless, horizontally scalable remote MCP servers\n\t•\tPractical approaches to tool proliferation and context window constraints\n\t•\tWhy a focus on auth, security and privacy is essential to success","day":"April 10","time":"2:30-2:50pm","room":"St. 
James","type":"talk","track":"MCP","speakers":["Sam Morrow"]}]},{"name":"Samridhi Vaid","role":"AI Engineer","company":"Towards AI Inc","linkedin":"https://www.linkedin.com/in/samridhivaid/","github":"https://github.com/SamridhiVaid","photoUrl":"https://ai.engineer/speakers/europe/samridhi-vaid.jpg","sessions":[{"title":"Build Your Own Deep Research Agent + Technical Writer","description":"Deep research is one of the best ways to learn how to build real AI systems because it forces you to combine reasoning, planning, autonomy, tools, grounding, and feedback loops in a single end-to-end workflow. In this hands-on workshop, you will build an MCP-powered deep research agent that can plan a research strategy, search the web, analyze YouTube videos, gather grounded evidence, filter for relevance and trustworthiness, and synthesize its findings into a cited research artifact. Rather than treating research as just another chatbot interaction, we will frame it as a goal-directed research loop: one that can search, inspect, pivot, and progressively refine its understanding of a topic.\n\nFrom there, we will connect that research artifact to a lightweight technical writing workflow that turns raw findings into polished, non-sloppy technical multimodal content. This second part of the system is deliberately more constrained: you will see how research and writing require very different architectures, why exploratory work benefits from agentic behavior, and why writing quality often improves with tighter workflows, review loops, and explicit guidance. Along the way, we will show how to choose between prompts, workflows, and agents depending on the task, and how to keep the overall system practical rather than over-engineered.\n\nWe will also cover observability and evaluation so the system is not only impressive in a demo, but measurable and improvable in practice. 
Most importantly, the workshop is grounded in experience: it distills what we learned over the past year building and using this research-and-writing pipeline internally. Attendees will leave with their own deep research agent, connected to a reliable technical writing workflow, and an understanding of the engineering tradeoffs behind both.","day":"April 8","time":"1:15pm-3:15pm","room":"St. James","type":"workshop","speakers":["Louis-François Bouchard","Paul Iusztin","Samridhi Vaid"]}]},{"name":"Samuel Colvin","role":"CEO","company":"Pydantic","twitter":"https://x.com/samuelcolvin","linkedin":"https://www.linkedin.com/in/samuel-colvin/","github":"https://github.com/samuelcolvin","photoUrl":"https://ai.engineer/speakers/europe/samuel-colvin.jpg","sessions":[{"title":"Playground in Prod - Optimising Agents in Production Environments","description":"Deploying an agent is just the beginning. The real challenge is making it better once it's live — without redeploying, without downtime, and ideally without a human in the loop at all.\n\nIn this talk I'll introduce two complementary approaches to optimising agents in production, both built on Pydantic AI and Logfire:\n\n**Managed Variables** — a new Pydantic AI feature (built on the OpenFeature standard) that lets you externalise key parameters of your agent (system prompts, model configuration, tool descriptions, thresholds) and update them instantly from the Logfire UI. Change a system prompt, swap a model, or adjust a temperature parameter and see the effect on the next request — no redeploy, no restart. This turns production into a playground where you can iterate on agent behaviour in seconds based on real traffic and real feedback.\n\n**Autonomous optimisation with GEPA** — once you can update agent parameters without redeploying, the natural next step is to let an optimiser do it for you. 
I'll show how GEPA (Genetic-Pareto reflective text evolution) can be wired into Logfire's managed variables to create a closed loop: observe agent performance via Logfire traces, reflect on failures, evolve better prompts, and push improvements live — all without human intervention.\n\nTogether these form a practical workflow: start by manually tuning your agent in production using managed variables, build up evaluation datasets from real traces, then hand the optimisation loop to GEPA to continuously improve performance.\n\n## Outline\n\n1. **The problem with agent deployment today** — why \"deploy and forget\" doesn't work for agents, and why traditional CI/CD is too slow for prompt iteration.\n2. **Managed Variables in Logfire** — live demo showing how to externalise a system prompt, update it from the Logfire dashboard, and see the change take effect immediately on a running agent.\n3. **From manual tuning to automated optimisation** — using Logfire's observability data (traces, evaluations, failure modes) to build the feedback signal GEPA needs.\n4. **GEPA + Managed Variables** — closing the loop: GEPA reflects on agent traces, evolves better prompts, and pushes them live via managed variables. Live demo of an agent that gets measurably better over time without any code changes.\n5. **Practical considerations** — guardrails, rollback, A/B testing between prompt variants, and when to trust autonomous optimisation vs. 
keeping a human in the loop.\n\n## Audience Takeaways\n\n- How to set up agents so key parameters can be changed without redeployment\n- A practical workflow for iterating on agent behaviour using production data\n- How reflective prompt evolution (GEPA) works and when to use it\n- How to combine Pydantic AI, Logfire, and GEPA into a continuous improvement loop for production agents","day":"April 8","time":"10:40am-12:00pm","room":"Westminster","type":"workshop","speakers":["Samuel Colvin"]}]},{"name":"Samuel Humeau","role":"AI Scientist","company":"Mistral","companyDescription":"Open-weight AI models","twitter":"https://x.com/DrSamuelBHume","linkedin":"https://www.linkedin.com/in/samuelhumeau/","photoUrl":"https://ai.engineer/speakers/europe/samuel-humeau.jpg","sessions":[{"title":"Mainstream TTS model architecture","description":"This talk is directed at builders who want insight into how TTS models operate, how they stream audio, and how they clone voices. It will discuss the dominant architectural pattern in TTS in 2026, taking Mistral's latest released TTS as an example.","day":"April 9","time":"11:40am-12:00pm","room":"Abbey","type":"talk","track":"Voice & Vision","speakers":["Samuel Humeau"]}]},{"name":"Sander Dieleman","role":"Research Scientist","company":"Google DeepMind","companyDescription":"AI research lab by Google","twitter":"https://x.com/sanderdieleman","linkedin":"https://www.linkedin.com/in/sanderdieleman","github":"https://github.com/benanne","photoUrl":"https://ai.engineer/speakers/europe/sander-dieleman.jpg","sessions":[{"title":"What goes into building generative image and video models at scale","day":"April 10","time":"12:00-12:40pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Sander Dieleman"]}]},{"name":"Sandipan Bhaumik","role":"Data & AI Tech 
Lead","company":"Databricks","linkedin":"https://www.linkedin.com/in/sandipanbhaumik","photoUrl":"https://ai.engineer/speakers/europe/sandipan-bhaumik.jpg","sessions":[{"title":"The Production AI Playbook: Deploying Agents at Enterprise Scale","description":"Every AI engineer knows the demo-to-production gap. Few cross it systematically. This workshop reveals a playbook for taking agents from \"it works!\" to production systems generating measurable value. You'll build three artifacts: an evaluation framework for your use case, a production readiness assessment, and a 90-day deployment plan. We'll cover the Production AI Program's five pillars separating POCs from production, hands-on evaluation design using your actual project, and multi-agent orchestration patterns. No vendor pitches—just frameworks from deploying agents in financial services, healthcare, and enterprise environments where failure costs millions.","day":"April 9","time":"3:10-3:30pm","room":"Westminster","type":"talk","track":"Harness Engineering","speakers":["Sandipan Bhaumik"]}]},{"name":"Sarah Chieng","role":"Head of Developer Experience","company":"Cerebras","twitter":"https://x.com/sarahchieng","linkedin":"https://www.linkedin.com/in/sarah-chieng-888595139/","photoUrl":"https://ai.engineer/speakers/europe/sarah-chieng.jpg","sessions":[{"title":"Fast Models Need Slow Developers","description":"In the past few years, we've developed a series of 'bad habits' as a consequence of slow AI code generation. We write huge prompts, generate massive code diffs, or create 10 parallel sessions because each response takes so long.\n\nCodex Spark represents a genuine paradigm shift in AI capabilities and use cases: 1,200+ tokens/second enables real-time collaboration that requires a new approach to work. 
This talk is a practical playbook for working in this new regime, where the AI moves faster than we can keep up.","day":"April 10","time":"12:00-12:20pm","room":"Fleming","type":"talk","track":"Coding Agents","speakers":["Sarah Chieng"]}]},{"name":"Shivam Verma","role":"Staff Machine Learning Engineer","company":"Spotify","companyDescription":"Music streaming platform","twitter":"https://x.com/kaffeinated","linkedin":"https://www.linkedin.com/in/shivam13verma","photoUrl":"https://ai.engineer/speakers/europe/shivam-verma.jpg","sessions":[{"title":"Personalization in the Era of LLMs","description":"Streaming platforms serve hundreds of millions of users across a catalog of 100M+ items that changes daily. Classical recommender systems rank well but can't explain, converse, or generalize across surfaces. LLMs can — but they don't know your catalog, and they definitely don't know your user.\n\nIn this talk, I'll walk through how we bridge that gap at Spotify: teaching open-weight LLMs to be catalog-aware and user-aware without full fine-tuning. I'll cover three building blocks: (1) learned user representations that transfer across search and recommendation; (2) Semantic IDs — discrete token sequences that let generative models reason over catalog entities the way they reason over words; and (3) parameter-efficient conditioning methods that inject user context into frozen LLMs — from single-token to multi-token projections that give the model a richer \"user prompt\" to attend over.\n\nI'll share what actually worked, what didn't, and the engineering tradeoffs of serving personalized LLMs at web scale.","day":"April 9","time":"12:20-12:40pm","room":"St. 
James","type":"talk","track":"Context Engineering","speakers":["Shivam Verma"]}]},{"name":"Stephan Steinfurt","role":"Consultant","company":"TNG Technology Consulting","companyDescription":"Values-based IT consulting partnership","twitter":"https://x.com/StSteinfurt","linkedin":"https://www.linkedin.com/in/stephan-steinfurt-968315115","sessions":[{"title":"Daily chess puzzle explanations on YouTube: Our agent analyzes and describes chess puzzles in an accessible way - arrows included!","description":"Hallway Track: Attendees propose lightning talks, peers vote, and the best talks get to go onstage as impromptu lightning talks. Check the AIE attendee Slack for signup and voting instructions.","day":"April 10","time":"3:40-3:50pm","room":"Moore","type":"lightning","track":"Hallway Track","speakers":["Stephan Steinfurt"]}]},{"name":"Stephen Batifol","role":"Developer Advocate","company":"Black Forest Labs","twitter":"https://x.com/stephenbtl","linkedin":"https://www.linkedin.com/in/stephen-batifol/","github":"https://github.com/stephen37","photoUrl":"https://ai.engineer/speakers/europe/stephen-batifol.jpg","sessions":[{"title":"Black Forest Labs: FLUX, Open Research, and the Future of Visual AI","description":"Black Forest Labs started with FLUX.1 - the open-weights image model that changed what people could do - and hasn't slowed down since. FLUX.1 Kontext, FLUX.2, FLUX.2 Klein. Each release pushed the frontier further, each one shipped in the open. This talk traces that arc, pulls back the curtain on the research happening behind the scenes, and gets into where BFL is actually going: building the standard for visual intelligence. Not better image generators - models that understand and simulate the physical world. Multimodal by design, everything in, everything out. The foundation for generative UIs, simulation, and agents that reason over the physical world. BFL's moat is velocity. 
They own the full stack from research to API, ship faster than anyone, and do it in the open. This talk is about how - and what comes next.","day":"April 10","time":"11:15-11:40am","room":"Moore","type":"track_keynote","track":"Generative Media","speakers":["Stephen Batifol"]}]},{"name":"Stephen Chin","role":"VP of Developer Relations","company":"Neo4j","companyDescription":"Graph database platform","twitter":"https://x.com/steveonjava","linkedin":"https://linkedin.com/in/steveonjava","photoUrl":"https://ai.engineer/speakers/europe/stephen-chin.jpg","sessions":[{"title":"Connecting the Dots with Context Graphs","description":"AI systems need more than intelligence; they need context that persists. Without it, even strong models can misinterpret information, lose decision rationale, or repeat the same mistakes. Context Graphs have emerged as a practical pattern for agentic AI: a living graph that captures not only what was retrieved or known, but how context led to actions through tool calls, constraints, policies, and outcomes, stitched across entities and time so precedent becomes searchable.\n\nThis talk explores context engineering as the discipline of designing that context layer, and shows how context graphs complement retrieval by enabling multi-hop, structured context assembly (building on GraphRAG-style hierarchical summaries) while improving explainability and evaluation. Attendees will leave with a practical understanding of how to build context pipelines that combine contextual retrieval with persistent memory and provenance, and why context graphs are becoming central to trustworthy, enterprise-ready AI systems.","day":"April 9","time":"2:30-2:50pm","room":"St. 
James","type":"talk","track":"Context Engineering","speakers":["Stephen Chin"]}]},{"name":"Stephen Parkinson","role":"Co-Founder","company":"Always Further","companyDescription":"Infrastructure for reliable, secure, and efficient AI agents","twitter":"https://x.com/SCPARKINSON","linkedin":"https://www.linkedin.com/in/scparkinson","github":"https://github.com/scp7","sessions":[{"title":"Nono.sh: Run Claude in dangerous mode","description":"Hallway Track: Attendees propose lightning talks, peers vote, and the best talks get to go onstage as impromptu lightning talks. Check the AIE attendee Slack for signup and voting instructions.","day":"April 10","time":"3:20-3:30pm","room":"Moore","type":"lightning","track":"Hallway Track","speakers":["Stephen Parkinson"]}]},{"name":"Steve Kaliski","role":"Principal Software Engineer","company":"Stripe","sessions":[{"title":"Building safe Payment Infrastructure for the autonomous economy","description":"Agents are evolving from calling free APIs to executing real transactions, creating a new challenge: how do we let software spend money autonomously without catastrophic risk? This talk presents Stripe's approach to solving the dual problems of secure credential transmission and making businesses discoverable to agents. Through live code examples, we'll explore how to build guardrails that make autonomous spend safe and examine what infrastructure is needed as agents purchasing becomes a core capability. 
Whether you're building agent frameworks or enabling your business to work with agents, you'll learn how to make agent transactions both powerful and safe.","day":"April 9","time":"4:05-4:23pm","room":"Wordsworth","type":"expo_session","track":"Expo Sessions (Wordsworth)","speakers":["Steve Kaliski"]}]},{"name":"Steve Ruiz","role":"CEO","company":"tldraw","companyDescription":"Infinite canvas SDK","twitter":"https://x.com/steveruizok","linkedin":"https://www.linkedin.com/in/steve-ruiz-61a150239/","photoUrl":"https://ai.engineer/speakers/europe/steve-ruiz.jpg","sessions":[{"title":"Agents on the Canvas in tldraw","description":"At tldraw, we've been bringing agents to our infinite canvas. In December 2025, we ran a one-month experiment named Fairydraw where users could work with three fairies—virtual collaborators who work with you and your human collaborators, coordinating together on large tasks. We'll share what we learned.","day":"April 9","time":"11:40am-12:00pm","room":"Westminster","type":"talk","track":"Harness Engineering","speakers":["Steve Ruiz"]}]},{"name":"Sunil Pai","role":"Principal Systems Engineer","company":"Cloudflare","companyDescription":"Web infrastructure and security","twitter":"https://x.com/threepointone","linkedin":"https://www.linkedin.com/in/sunil-pai-a47732253/","photoUrl":"https://ai.engineer/speakers/europe/sunil-pai.jpg","sessions":[{"title":"Sunil Pai - Keynote","day":"April 9","time":"5:40-6:00pm","room":"Keynote","type":"keynote","speakers":["Sunil Pai"]}]},{"name":"swyx","role":"Founder","company":"AI Engineer","companyDescription":"AI Engineer conference and community","twitter":"https://x.com/swyx","github":"https://github.com/swyxio","photoUrl":"https://ai.engineer/speakers/europe/swyx.jpg","sessions":[{"title":"Running AI Engineer with AI","day":"April 10","time":"5:50-6:00pm","room":"Keynote","type":"keynote","speakers":["swyx"]},{"title":"OpenClaw AMA","day":"April 
9","time":"11:15-11:40am","room":"Fleming","type":"track_keynote","track":"Claws & Personal Agents","speakers":["Peter Steinberger","swyx"]},{"title":"Leadership Lunch","description":"Leadership Addon and Max attendees: Moderated by swyx. swyx will facilitate the discussion with Igor Karpovich on Skyscanner's journey making agentic development work at scale, followed by peer-to-peer chats with leadership attendees.","day":"April 9","time":"1:00-2:00pm","room":"Keynote","type":"keynote","track":"Leadership Lunch","speakers":["swyx","Igor Karpovich"]},{"title":"Software Engineering + AI = ?","day":"April 9","time":"4:30-5:00pm","room":"Keynote","type":"keynote","speakers":["Gergely Orosz","swyx"]},{"title":"AMA + Feedback + Brainstorm: Improving AI Engineer","day":"April 10","time":"4:05-4:23pm","room":"Moore","type":"expo_session","track":"Expo Sessions (Moore)","speakers":["swyx"]}]},{"name":"Talha Sheikh","role":"AI Software Engineer","company":"Checkout.com","companyDescription":"Global payments platform","linkedin":"https://www.linkedin.com/in/talha-sheikh-007","sessions":[{"title":"Your coding agent doesn't always follow your rules. An agent harness makes sure it does, in real-time, every time.","description":"Hallway Track: Attendees propose lightning talks, peers vote, and the best talks get to go onstage as impromptu lightning talks. 
Check the AIE attendee Slack for signup and voting instructions.","day":"April 10","time":"3:10-3:20pm","room":"Moore","type":"lightning","track":"Hallway Track","speakers":["Talha Sheikh"]}]},{"name":"Tara Agyemang","role":"Developer Relations Engineer","company":"Google DeepMind","companyDescription":"AI research lab by Google","twitter":"https://x.com/tara_ojo","linkedin":"https://uk.linkedin.com/in/taraojo","github":"https://github.com/taraojo","photoUrl":"https://ai.engineer/speakers/europe/tara-agyemang.jpg","sessions":[{"title":"The agent-ready web: Simplify user actions with WebMCP","day":"April 9","time":"11:15-11:40am","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Tara Agyemang"]}]},{"name":"Tejas Kumar","role":"Developer Advocate (AI)","company":"IBM","companyDescription":"Keynote speaker, web developer, author, and host of the ConTejas Code podcast","twitter":"https://x.com/tejask","linkedin":"https://www.linkedin.com/in/tejasq/","github":"https://github.com/TejasQ","photoUrl":"https://ai.engineer/speakers/europe/tejas-kumar.jpg","sessions":[{"title":"Harnesses in AI: A Deep Dive","day":"April 9","time":"2:30-2:50pm","room":"Westminster","type":"talk","track":"Harness Engineering","speakers":["Tejas Kumar"]}]},{"name":"Thor Schaeff","role":"Developer Experience","company":"Google DeepMind","companyDescription":"AI research lab by Google","twitter":"https://x.com/thorwebdev","linkedin":"https://www.linkedin.com/in/thorwebdev","github":"https://github.com/thorwebdev","photoUrl":"https://ai.engineer/speakers/europe/thor-schaeff.jpg","sessions":[{"title":"Building Conversational Agents","day":"April 8","time":"1:15pm-3:15pm","room":"Rutherford","type":"workshop","speakers":["Thor Schaeff","Philipp Schmid"]},{"title":"What's new in AI Audio?","description":"A look at the latest updates on AI Audio from the Google DeepMind team. 
We'll be looking at the latest capabilities in audio understanding, speech generation, real-time multimodal agents, and music generation.","day":"April 9","time":"3:10-3:30pm","room":"Abbey","type":"talk","track":"Voice & Vision","speakers":["Thor Schaeff"]}]},{"name":"Tuomas Artman","role":"CTO","company":"Linear","companyDescription":"Project management tool","twitter":"https://x.com/artman","linkedin":"https://www.linkedin.com/in/tuomasartman/","photoUrl":"https://ai.engineer/speakers/europe/tuomas-artman.jpg","sessions":[{"title":"Fireside Chat with Gergely Orosz and Linear's Tuomas Artman","day":"April 10","time":"4:30-5:00pm","room":"Keynote","type":"keynote","speakers":["Gergely Orosz","Tuomas Artman"]}]},{"name":"Vaibhav Srivastav","role":"Developer Experience","company":"OpenAI","companyDescription":"AI research and deployment company","twitter":"https://x.com/reach_vb","linkedin":"https://www.linkedin.com/in/vaibhavs10","github":"https://github.com/vb-openai","photoUrl":"https://ai.engineer/speakers/europe/vaibhav-srivastav.jpg","sessions":[{"title":"Codex and Subagents","day":"April 8","time":"9:00-10:20am","room":"Westminster","type":"workshop","speakers":["Vaibhav Srivastav"]}]},{"name":"Vibhu Sapra","role":"Moderator","company":"Independent Researcher","twitter":"https://x.com/vibhuuuus","linkedin":"https://www.linkedin.com/in/vibhusapra/","photoUrl":"https://ai.engineer/speakers/europe/vibhu-sapra.jpg","sessions":[{"title":"OpenAI Symphony & Harness Engineering AMA","day":"April 9","time":"11:15-11:40am","room":"Westminster","type":"track_keynote","track":"Harness Engineering","speakers":["Ryan Lopopolo","Vibhu Sapra"]}]},{"name":"Vincent Chen","role":"Founding Research Fellow","company":"Snorkel AI","twitter":"https://x.com/vincentsunnchen","linkedin":"https://www.linkedin.com/in/vincentsunnchen","github":"https://github.com/vincentschen","photoUrl":"https://ai.engineer/speakers/europe/vincent-chen.jpg","sessions":[{"title":"The Art & Science of 
Benchmarking Agents","description":"Our ability to measure AI has been outpaced by our ability to develop it, and this eval gap is one of the most important problems in AI. We need more enduring benchmarks to close this gap, and consequently advance entire new vectors of capabilities for the field. In this talk, I'll share our learnings evaluating agents, drawing from experience working with nearly all global frontier labs and leading academics. We'll discuss the science (i.e., mechanics that make benchmarks rigorous and effective) and art (i.e., intangibles driving ambitious and enduring benchmarks) of building great benchmarks. I'll close by sharing some of the learnings from Open Benchmarks Grants (a $3M initiative in partnership with Hugging Face, Together AI, Prime Intellect, Factory, and others) and highlighting some of the projects we're most excited about funding.","day":"April 9","time":"12:40-1:00pm","room":"Moore","type":"talk","track":"Evals & Observability","speakers":["Vincent Chen"]}]},{"name":"Vincent Koc","role":"AI Research Engineer, Evals, DevRel","company":"Comet ML - OpenClaw","twitter":"https://x.com/vincent_koc","photoUrl":"https://ai.engineer/speakers/europe/vincent-koc.jpg","sessions":[{"title":"Dark Factory: OpenClaw Ships Faster Than You Can Read the Diff","description":"When your open-source project hits 320k+ stars and hundreds of daily commits, you stop managing contributions and start engineering the factory itself. 
I'll share how OpenClaw's team rebuilt around a plugin architecture, modular security boundaries, and a development model where agents and humans ship side by side, and why the real lesson for anyone building AI infrastructure is that process scales harder than code.","day":"April 9","time":"11:40am-12:00pm","room":"Fleming","type":"talk","track":"Claws & Personal Agents","speakers":["Vincent Koc"]},{"title":"Malleable Evals: Why Are We Still Evaluating Adaptive Systems with Static Tests?","day":"April 10","time":"2:05-2:23pm","room":"Wordsworth","type":"expo_session","track":"Expo Sessions (Wordsworth)","speakers":["Vincent Koc"]}]},{"name":"Weiyi Wang","role":"Software Engineer","company":"Google DeepMind","companyDescription":"AI research lab by Google","linkedin":"https://www.linkedin.com/in/weiyiwang1993","sessions":[{"title":"Accelerating AI on Edge","day":"April 10","time":"2:30-2:50pm","room":"Rutherford","type":"talk","track":"Google DeepMind/Gemini","speakers":["Chintan Parikh","Weiyi Wang"]}]},{"name":"William Tarr","role":"Deputy Director","company":"Ministry of Justice","linkedin":"https://www.linkedin.com/in/willtarr/","photoUrl":"https://ai.engineer/speakers/europe/william-tarr.jpg","sessions":[{"title":"Building the Justice AI Unit: Shipping Production AI Inside Government","description":"Building the Justice AI Unit (https://ai.justice.gov.uk/) — forward-deployed engineers, entrepreneurial model, shipping production AI every day, but inside government.","day":"April 9","time":"12:40-1:00pm","room":"Westminster","type":"talk","track":"Harness Engineering","speakers":["William Tarr"]}]},{"name":"Wolfram Ravenwolf","role":"AI Evangelist","company":"Weights & Biases / CoreWeave","companyDescription":"AI developer tools and cloud infrastructure","twitter":"https://x.com/WolframRvnwlf","linkedin":"https://www.linkedin.com/in/wolframravenwolf","github":"https://github.com/WolframRavenwolf","sessions":[{"title":"WolfBench: Why one score is not enough & why 
thinking can make agents dumber","description":"Hallway Track: Attendees propose lightning talks, peers vote, and the best talks get to go onstage as impromptu lightning talks. Check the AIE attendee Slack for signup and voting instructions.","day":"April 10","time":"3:00-3:10pm","room":"Moore","type":"lightning","track":"Hallway Track","speakers":["Wolfram Ravenwolf"]}]},{"name":"Zack Proser","role":"Full-stack Developer, Developer Education","company":"WorkOS","companyDescription":"Enterprise identity and access management","linkedin":"https://linkedin.com/in/zackproser","github":"https://github.com/zackproser","photoUrl":"https://ai.engineer/speakers/europe/zack-proser.jpg","sessions":[{"title":"Skills at Scale","description":"Write once, run in Claude, Codex, Cursor, and your own agents\n\nEvery developer using AI tools has the same problem: they prompt the same way, for the same tasks, over and over. Skills fix this. A skill is a portable unit of agent behavior that teaches any AI tool how to do a specific job. Write one, drop it into your editor, and it just works. Across tools. Across teams.\n\nMost people don't know this primitive exists. In this hands-on workshop, you'll write real skills, test them live, and see how one file can power Claude.ai, Claude Code, Cursor, and Codex without changing a line.\n\nThen we'll go deeper. You'll see how the WorkOS CLI uses this same pattern to power 15 framework integrations — each one a skill composed with others, wired into an agent that installs and configures AuthKit in under 60 seconds. That's not a demo. 
That's production code, shipping today.\n\nWhat you'll do:\n\nWrite 2+ skills for tasks you actually do at work\n\nInstall and test them across AI tools in real time\n\nLearn the craft of good skill writing — specificity, constraints, composability\n\nSee how skills compose and scale inside a real CLI powered by the Claude Agent SDK\n\nWhat you'll leave with:\n\nWorking skills installed in your AI tools, ready to use Monday morning\n\nA repeatable pattern for turning any recurring task into a portable skill\n\nThe mental model for when a skill is enough and when you need a full agent\n\nNo repos to clone. No dependencies to install. Bring a laptop with Claude Code or Claude.ai and something you're tired of doing manually.","day":"April 8","time":"10:40am-12:00pm","room":"Abbey","type":"workshop","speakers":["Nick Nisi","Zack Proser"]}]},{"name":"Zaid Zaim","role":"Developer Advocate","company":"Neo4j","twitter":"https://x.com/ZaidZaim2k","linkedin":"https://www.linkedin.com/in/zaidzaim/","github":"https://github.com/ZaidZaim","photoUrl":"https://ai.engineer/speakers/europe/zaid-zaim.jpg","sessions":[{"title":"Context Graphs for Explainable, Decision-Aware AI Agents","description":"AI agents can follow prompts and use tools, but often lack the institutional context needed to explain why a decision is made. That reasoning (policies, precedents, and past outcomes) is usually scattered across systems and human memory.\nContext graphs capture this missing layer by modeling decision traces over time, including causality and context. By giving agents access to just enough historical and organizational knowledge, context graphs enable more explainable, consistent, and auditable decisions.","day":"April 9","time":"2:50-3:10pm","room":"St. 
James","type":"talk","track":"Context Engineering","speakers":["Andreas Kollegger","Zaid Zaim"]}]},{"name":"Ziv Ilan","role":"AI Labs","company":"NVIDIA","companyDescription":"GPU computing and AI platform","linkedin":"https://www.linkedin.com/in/ziv-ilan-deci/","photoUrl":"https://ai.engineer/speakers/europe/ziv-ilan.jpg","sessions":[{"title":"You Might Not Need 50 Diffusion Steps","day":"April 10","time":"2:30-2:50pm","room":"Abbey","type":"talk","track":"GPUs & LLM Infrastructure","speakers":["Ziv Ilan"]}]}]}