


Invited Talks
Keynote
Sandra Wachter
Abstract

"AI is increasingly used to make automated decisions about humans. These decisions include assessing creditworthiness, hiring decisions, and sentencing criminals. Due to the inherent opacity of these systems and their potential discriminatory effects, policy and research efforts around the world are needed to make AI fairer, more transparent, and explainable.

To tackle this issue the EU recently passed the Artificial Intelligence Act – the world’s first comprehensive framework to regulate AI. The Act includes several provisions that require bias testing and monitoring, as well as transparency tools. But is Europe ready for this task?

In this session I will examine several EU legal frameworks and demonstrate how AI weakens legal recourse mechanisms. I will also explain how current technical fixes such as bias tests – which are often developed in the US – are not only insufficient to protect marginalised groups but also clash with legal requirements in Europe.

I will then introduce some of the solutions I have developed to test for bias, explain black-box decisions, and protect privacy, which have been implemented by tech companies such as Google, Amazon, Vodafone and IBM and have fed into public policy recommendations and legal frameworks around the world."

Keynote
Lourdes Agapito · Vittorio Ferrari
Abstract

Synthesia is one of Europe's newest billion-euro startups. Its core technology is script-to-video: realistic AI avatars delivering compelling presentations to the virtual camera. Used by more than 50,000 companies worldwide, including 400 of the Fortune 500, it is computer vision technology that operates in the real world.

Lourdes Agapito and Vittorio Ferrari will discuss the development of this technology from computer vision research papers to a real-world product, and the current and future directions of their research.

Keynote
Sanmi Koyejo
Abstract

Distribution shifts describe the phenomenon where an AI model's deployment performance differs from its training performance. On the one hand, some claim that distribution shifts are ubiquitous in real-world deployments. On the other hand, modern implementations (e.g., foundation models) often claim to be robust to distribution shifts by design. Similarly, phenomena such as “accuracy on the line” promise that standard training produces distribution-shift-robust models. When are these claims valid, and do modern models fail due to distribution shifts? If so, what can be done about it? This talk will outline modern principles and practices for understanding the role of distribution shifts in AI, discuss how the problem has changed, and present recent methods for engaging with distribution shifts, with comprehensive and practical insights. Highlights include a taxonomy of shifts, the role of foundation models, and finetuning. The talk will also briefly discuss how distribution shifts might interact with AI policy and governance.