Ethics & Bias in
Artificial Intelligence

Vienna Deep Learning Meetup, May 7, 2018

The Vienna Deep Learning Meetup and the Centre for Informatics and Society invite you to an evening
of discussion on the topic of Ethics and Bias in AI. As promising as machine learning techniques
are in terms of their potential to do good, these technologies raise a number of ethical questions and
are prone to biases that can subvert their well-intentioned goals.

Vienna Deep Learning Meetup

Vienna's largest monthly event focusing on Deep Learning and related topics in Artificial Intelligence.

The Meetup was launched to discuss the latest achievements in deep learning research. Due to the lively interest of the community, it has grown into a networking event where invited international speakers from academia and the private sector present each month how they successfully use deep learning in their fields.

Centre for Informatics & Society

The Centre for Informatics and Society is a research initiative of the Faculty of Computer Science at the Vienna University of Technology.

Since its inception in 2016, the CIS has explored the tensions between academic research, technological advances, and the resulting consequences and challenges for society. Not least, the CIS represents a contribution of the Faculty of Computer Science to the field of action "Society", as defined in the 2016 Development Plan of the Vienna University of Technology.

Ethics & Bias in AI

Machine learning systems, from simple spam filters and recommender systems to Deep Learning and AI, have already arrived in many parts of society. Which web search results, job offers, product ads and social media posts we see online, even what we pay for food, mobility or insurance - all these decisions are already being made or supported by algorithms, many of which rely on statistical and machine learning methods. As these systems permeate society more and more, we also discover their real-world impact, caused by the inherent biases they carry.

For instance, criminal risk scoring used to determine bail for defendants in US district courts has been found to be biased against black people, and analyses of word embeddings have been shown to reaffirm gender stereotypes because of biased training data. While a general consensus seems to exist that such biases are almost inevitable, proposed solutions range from embracing the bias as a factual representation of an unfair society to mathematical approaches that try to quantify and mitigate bias in machine learning training data and the resulting models.
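
As an illustration of the word-embedding example above, the sketch below shows one common way such bias is made measurable: word vectors are projected onto a "gender direction" (the difference between a gendered word pair), and that component can then be removed again from words that should be neutral. The toy vectors and the bias/debias helpers are purely illustrative assumptions, not the method of any particular study, though the idea follows the widely cited work on debiasing word embeddings; real analyses use pretrained embeddings such as word2vec or GloVe with hundreds of dimensions.

    # Minimal sketch: quantify and (partially) remove a gender bias
    # direction in a toy word embedding. Values are made up for
    # illustration only.
    import numpy as np

    # Hypothetical toy embedding (word -> vector).
    emb = {
        "he":       np.array([ 1.0, 0.1, 0.2]),
        "she":      np.array([-1.0, 0.1, 0.2]),
        "engineer": np.array([ 0.6, 0.8, 0.1]),
        "nurse":    np.array([-0.7, 0.7, 0.2]),
    }

    def unit(v):
        return v / np.linalg.norm(v)

    # Gender direction: normalized difference of a gendered word pair.
    gender_dir = unit(emb["he"] - emb["she"])

    # Bias score: cosine similarity of a word with that direction.
    def bias(word):
        return float(np.dot(unit(emb[word]), gender_dir))

    for w in ("engineer", "nurse"):
        print(f"{w:10s} gender projection: {bias(w):+.2f}")

    # Simple mitigation: subtract the gender component from words
    # that should be gender-neutral ("hard debiasing" in spirit).
    def debias(word):
        v = emb[word]
        return unit(v - np.dot(v, gender_dir) * gender_dir)

    print("engineer after debiasing:",
          float(np.dot(debias("engineer"), gender_dir)))  # ~0.0

In this toy space, "engineer" projects towards the male side and "nurse" towards the female side, and the projection drops to roughly zero after the gender component is removed; whether such purely geometric fixes truly remove the underlying bias remains contested, which is part of the debate this event addresses.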

Besides producing biased results, many machine learning methods and applications already in use today raise complex ethical questions. Should governments use machine learning and AI methods to determine the trustworthiness of their citizens (cf. [3])? Should the use of algorithmic systems that are known to have biases be tolerated to benefit some while disadvantaging others? Is it ethical to develop AI technologies that might soon replace many jobs currently performed by humans? And how do we keep AI and automation technologies from widening society's divides, such as the digital divide or income inequality?

These and many more questions and issues need a broad and multidisciplinary discussion to ensure a fair and overall beneficial future of AI and related technologies. This event aims to provide a platform for debate in the form of two keynotes and a panel discussion with five international experts from numerous scientific fields. 

Panelists

Prof. Moshe Vardi

Moshe Y. Vardi is the Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University. His interests focus on automated reasoning, a branch of Artificial Intelligence with broad applications to computer science, including database theory, computational complexity theory, knowledge in multi-agent systems, computer-aided verification, and teaching logic across the curriculum.

Prof. Peter Purgathofer

Peter Purgathofer is an associate professor at the Vienna University of Technology, Faculty of Informatics, at the Institute of Design and Assessment of Technology, Human-Computer Interaction Group. His research centers on questions of the interplay between design and (software) development, especially the role and place of design in software engineering. He also works in the field of »Informatics and Society«.

Prof. Sarah Spiekermann-Hoff

Sarah Spiekermann teaches and conducts research at the Vienna University of Economics and Business (WU Vienna), where she has headed the Institute of Business Administration and Information Systems since 2009. Her work is concerned with computer ethics in the context of the Internet economy. It aims both at a more ethical reflection on technology and at a better understanding of human expectations of and access to technology.

Prof. Mark Coeckelbergh

Mark Coeckelbergh has been Professor of Philosophy of Media and Technology at the Department of Philosophy, University of Vienna, since 2015. Since 2014 he has also been (part-time) Professor of Technology and Social Responsibility at the Centre for Computing and Social Responsibility, De Montfort University, UK. He is currently President of the Society for Philosophy and Technology (SPT), a member of the steering committee of ETHICOMP, a member of the Technical Expert Committee (TEC) of the Foundation for Responsible Robotics, and a member of the Committee on Embedding Values Into Autonomous Intelligent Systems of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.

Dr. Christof Tschohl

Christof Tschohl was one of the founding initiators of AKVorrat in 2009 and, together with Ewald Scheucher, drafted the successful mass complaint against data retention to the Constitutional Court, acting as its first complainant. He is scientific director of the Research Institute AG & Co KG - Center for Digital Human Rights. Since 2015 he has been chairman of AKVorrat (now epicenter.works). Christof Tschohl is a recognized expert in data protection, teaches European data protection and telecommunications law at the University of Vienna as well as at the University of Hanover, and has regularly contributed to the education and training of Austrian judges since 2007.


Moderation: Markus Mooslechner

Markus Mooslechner is Executive Producer at Terra Mater Factual Studios (TMFS), where he focuses on the development of new television formats. Before joining TMFS, Markus worked as chief editor and presenter of the weekly television science program Newton at the Austrian public broadcaster ORF. Prior to that, he worked as a live reporter and editor for several ORF editorial offices, including the daily news program. He holds a university degree (University of Graz, Austria / Williams College, USA) and has a deep interest in and fascination with all things science. In 2014 he was awarded the Austrian Adult Education Television Prize for his editorial management of the series TM Wissen (ServusTV).

Agenda
  • 18:30 - 19:00     Welcome
  • 19:00 - 19:30     Deep Learning and the Crisis of Trust in Computing, Prof. Moshe Vardi, Rice University
  • 19:30 - 20:00     The Big Data Illusion and its Impact on Flourishing with General AI, Prof. Sarah Spiekermann-Hoff, WU Wien
  • 20:00 - 21:30     Panel Discussion
  • 21:30 - 23:00     Networking, Buffet

Event Registration

If you wish to attend our special event Ethics & Bias in AI, please register on the Vienna Deep Learning Meetup web page.

Hosts

Thomas Lidy

Thomas Lidy has been a researcher in music analysis combined with machine learning at TU Wien since 2004. He is now the Head of Machine Learning at Musimap, a company that uses Deep Learning to analyze styles, moods and emotions in the global music catalog, in order to empower emotion-aware recommender engines.

Jan Schlueter

Jan Schlueter has been pursuing research on deep learning for audio processing since 2010, currently as a postdoctoral researcher at the Austrian Research Institute for Artificial Intelligence (OFAI).

Alexander Schindler

Alexander researches audio-visual aspects of music information. He is a machine learning specialist at the Digital Insight Lab of the AIT Austrian Institute of Technology and a lecturer at TU Wien.

Florian Cech

Florian is a University Assistant at the CIS - Centre for Informatics and Society. His research covers different aspects of the digital transformation, with a focus on critical algorithm studies and related fields.

Venue

Prechtl-Saal - TU Wien
Hauptgebäude (main building), Stiege I, Erdgeschoss (ground floor)
Karlsplatz 13
1040 Wien


Where to find the Prechtl-Saal?

The Prechtl-Saal is one of the largest halls of TU Wien. The long, continuously arched hall is located on the ground floor of the main building, just to the left of the main entrance, and is most easily reached via the main entrance from Resselpark.

Recommended Literature