Harassment, Hate Speech, Pig Butchering Scams, and Underage Users. These are just some of the possible categories in the (very) long tail of harm on Tinder. How can we possibly train, serve, and maintain models for all of these at global, real-time scale? We build on pre-trained models and an increasingly mature open-source ecosystem. In this talk, we'll cover how we've dramatically accelerated our modeling pipeline with (1) human-AI hybrid dataset generation for different harm vectors, (2) automated parameter-efficient fine-tuning of open-source large language and multimodal models for violation detection, and (3) serving fine-tuned adapters efficiently in real time and at scale using LoRAX and cascade classification.
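For a flavor of step (2), here is a minimal sketch of what parameter-efficient fine-tuning with LoRA adapters can look like, using Hugging Face transformers and peft. This is illustrative only, not code from the talk: the base model name, label set, and dataset file are placeholder assumptions.

```python
# Illustrative sketch (not from the talk): parameter-efficient fine-tuning of an
# open-source LLM for one violation-detection task with transformers + peft.
# The base model, labels, and dataset file below are assumptions.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # assumed open-source base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # decoder-only models often lack a pad token

model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# LoRA: freeze the base weights and train small low-rank adapter matrices, so each
# harm vector gets its own lightweight adapter instead of a full model copy.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base parameters

# Hypothetical JSONL of {"text": ..., "label": 0/1} examples for one harm vector.
dataset = load_dataset("json", data_files="harassment_train.jsonl")["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapters/harassment",
                           per_device_train_batch_size=8,
                           learning_rate=2e-4,
                           num_train_epochs=2),
    train_dataset=dataset,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
model.save_pretrained("adapters/harassment")  # saves only the adapter weights
```

And for step (3), a rough sketch of cascade classification in front of a LoRAX deployment that hosts many such adapters on a single base model. The endpoint URL, adapter ID, prompt, and the cheap first-stage score are all assumptions for the example.

```python
# Illustrative sketch of cascade classification at serving time, assuming a LoRAX
# deployment (pip install lorax-client). Endpoint, adapter ID, and the cheap
# first-stage score are placeholder assumptions.
from lorax import Client

client = Client("http://lorax.internal:8080")  # assumed internal LoRAX endpoint

def classify_message(text: str, cheap_score: float, threshold: float = 0.2) -> str:
    """Cascade: a cheap first-stage model filters obviously benign messages;
    only uncertain or high-risk ones are escalated to the fine-tuned adapter."""
    if cheap_score < threshold:
        return "clear"  # first stage is confident this is benign

    prompt = f"Does the following message contain harassment? Answer yes or no.\n\n{text}"
    response = client.generate(
        prompt,
        adapter_id="adapters/harassment",  # hypothetical adapter identifier
        max_new_tokens=3,
    )
    return "violation" if "yes" in response.generated_text.lower() else "clear"
```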
"Vibhor Kumar is a computer scientist, amateur neuroscientist, and armchair philosopher of science. He likes working at the intersection of the theoretical and applied. His work has been involved in mapping fly brains, catching financial fraud, and generating assets of various types using AI.
He is currently a software engineer in Trust and Safety at Tinder, a hands-on advisor to AI startups including Togethr.ai, a contributor to open-source AI projects, and an angel investor."
"Vibhor Kumar is a computer scientist, amateur neuroscientist, and armchair philosopher of science. He likes working at the intersection of the theoretical and applied. His work has been involved in mapping fly brains, catching financial fraud, and generating assets of various types using AI.
He is currently a software engineer in Trust and Safety at Tinder, a hands-on advisor to AI startups including Togethr.ai, a contributor to open-source AI projects, and an angel investor."
Early Bird and General Admission tickets have now sold out.
Please join us online for the free livestream.