2024 Conference Programme

Subpage Hero

Subpage Hero

      

Loading

How to evaluate if your GenAI models are safe

09 Oct 2024
AI, Machine Learning & Advanced Analytics Theatre
How to evaluate if your GenAI models are safe

We will talk with you about red teaming, and evaluation of safety and biases of LLMs. We're investing in this area and want to share our approach with model-builders, app-builders and industry regulators. AI technology is moving past the Proof-of-Concept stage, and today we better understand its power and limitations. From regulators and research institutes to enterprises and startups, all our clients are paying closer attention to pre-release and in-production model evaluation. While there is a lot of great research today, many gaps still exist: some focus on the extremely basic topics, others explore about futuristic risks. We want to walk you through our risk-based methodology for evaluating safety and biases that is in the middle of these extremes. Here's what you can expect from this session: - A bit of theory: why it matters, safety methods overview, and safety evaluation techniques. - A DIY guide: how to create your own safety policy evaluation, assess fairness & biases, and when and how to use red-teaming. - What to assess: major risk categories, user and model intents and scenarios.

Speakers
Ilya Kochik, VP of Strategic Initiatives - Toloka AI

Sponsors

Keynote Theatre Sponsor

AI, Machine Learning & Advanced Analytics Theatre Sponsor


 

VIP Lounge Sponsor

VIP Lunch Sponsors


 

Gold Sponsors

Silver Sponsors

Bronze Sponsors

Exhibitors

Partners

Data & AI Learning Partner

Preferred Learning Partner

Community Partner

AI Insights Partner

Association Partners

Event Partners

Media Partners

Official News Release Distributor Partner

Official Training Partner

Knowledge Partner

frost & sullivan

 

Official Partner Hotel

Held In

Supported By

Singapore MICE Sustainability Certification - BRONZE