AI Models Red-Teaming | join seminar with Prof. Przemyslaw Biecek

26.09.2023

XAI (explainable artificial intelligence) techniques offer tools for in-depth analysis of predictive models, including the identification of potential vulnerabilities. This is what Prof. Przemyslaw Biecek will talk about during his seminar on October 11. Join us for a lecture entitled ‘Red-Teaming AI models – how and why to use XAI to analyze predictive models.’

AI models are increasingly used in information systems. But are they the strongest or weakest link in these systems? Browsing the AI Incident Database (https://incidentdatabase.ai/), it is easy to see that vulnerabilities in these models can be exploited in a wide range of potential attacks on IT systems. XAI (explainable machine learning, or explainable artificial intelligence) techniques offer tools for in-depth analysis of predictive models, including the identification of potential model vulnerabilities. During the presentation, Przemyslaw Biecek will present the most popular XAI techniques for model analysis, based on the Explanatory Model Analysis monograph (https://ema.drwhy.ai/), and discuss basic attacks on, and defenses of, predictive models and their explanations.
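One of the techniques covered in the Explanatory Model Analysis monograph is permutation-based variable importance. As a minimal, generic sketch (not the speaker's code; the dataset and model here are illustrative, using scikit-learn rather than the DrWhy.AI toolkit), it can look like this:

```python
# Minimal sketch of permutation-based variable importance, one XAI
# technique from Explanatory Model Analysis. Dataset/model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Shuffle each feature on held-out data and measure the drop in accuracy;
# large drops reveal the features the model relies on most -- from a
# red-teaming perspective, a map of the model's attack surface.
result = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
top_features = result.importances_mean.argsort()[::-1][:3]
```

Such an importance profile is a starting point for red-teaming: features the model depends on heavily are natural targets for adversarial perturbation.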

Attend in person at NASK or online (via YouTube); in either case, please register in advance at this link.

Przemysław Biecek is a professor at the Faculty of Mathematics and Information Science at Warsaw University of Technology (MiNI@Warsaw University of Technology). He graduated in mathematical statistics at PAM@WUST and in software engineering at CSM@WUST. He pursues his scientific mission by developing methods and tools for responsible machine learning, trustworthy artificial intelligence and reliable software engineering. His interests include predictive modeling of large and complex data, data visualization and model interpretability. His main research project is DrWhy.AI – tools and methods for exploring and explaining predictive models.