“Forget killer robots—bias is the real AI danger. (…) The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased,” says John Giannandrea, head of Google’s artificial intelligence division. What does this mean, how does this threat manifest itself, and what can we do about it? Join the webinar with Dr. Karolina Kulicka on April 10 at 10:00 am. Don’t forget to register!


Despite good ideas and multi-million-dollar budgets, new product development projects often fail. Common causes include incomplete training data, initial assumptions based on stereotypes rather than facts, and tests conducted on too homogeneous a group of people. During the webinar, hosted by Dr. Karolina Kulicka, you will learn how to eliminate the impact of stereotypes on data in machine learning, biometrics, and cybersecurity, as well as:

  • the sources of stereotype-driven errors in AI technology development projects,
  • how to understand and apply the concepts of gender, race, and class in machine learning and AI models,
  • examples of stereotype-induced algorithmic errors and how to prevent them.

The webinar will be held in Polish. Please register here.

Dr. Karolina Kulicka completed her doctoral studies at the State University of New York at Buffalo. She combines experience in the private sector (training, consulting), the public sector (Ministry of Science and Information Technology, the EU, the UN, and others), and the scientific sector (National Center for Research and Development, NASK). Her academic work focuses on gender equality and diversity in organizational management, feminist theory, and stereotypes in technologies based on artificial intelligence and machine learning. Her research has been recognized with the International Peace Scholarship, the Social Impact Fellowship, the UB Humanities Institute Fellowship, and the Outstanding Article Award in Administrative Theory and Praxis, among others.