Maxime Labonne - Everything you need to know about Finetuning and Merging LLMs

Video Available!

Fine-tuning LLMs is a fundamental technique for companies to customize models for their specific needs. In this talk, we will cover when fine-tuning is appropriate, popular libraries for efficient fine-tuning, and key techniques. We will explore both supervised fine-tuning (LoRA, QLoRA) and preference alignment (PPO, DPO, KTO) methods.
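To make the scope of the talk concrete, here is a minimal sketch of LoRA-based supervised fine-tuning using the Hugging Face PEFT and TRL libraries (two of the popular tools the abstract alludes to). The model, dataset, and hyperparameters below are illustrative assumptions, not material from the talk itself.

```python
# Minimal LoRA SFT sketch with PEFT + TRL (illustrative values only)
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Any small instruction dataset works here; this one is a common demo choice
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

peft_config = LoraConfig(
    r=16,                                 # rank of the low-rank adapter matrices
    lora_alpha=32,                        # scaling applied to the adapter update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",            # hypothetical base checkpoint for the demo
    train_dataset=dataset,
    peft_config=peft_config,              # only the LoRA adapters are trained
    args=SFTConfig(output_dir="lora-sft-demo"),
)
trainer.train()
```

Swapping the base model for a 4-bit quantized checkpoint turns this into QLoRA; preference alignment methods such as DPO follow a similar pattern with TRL's dedicated trainers.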

Maxime Labonne

Maxime Labonne is a Senior Staff Machine Learning Scientist at Liquid AI, serving as the head of post-training. He holds a Ph.D. in Machine Learning from the Polytechnic Institute of Paris and is recognized as a Google Developer Expert in AI/ML. An active blogger, he has made significant contributions to the open-source community, including the LLM Course on GitHub, tools such as LLM AutoEval, and several state-of-the-art models like NeuralBeagle and Phixtral. He is the author of the best-selling book “Hands-On Graph Neural Networks Using Python,” published by Packt. Connect with him on X and LinkedIn @maximelabonne.


Buy Tickets

Early Bird tickets have sold out, and General Admission has now sold out as well.
Please join us online for the free livestream.
