AI Intelligibility and Public Trust

This theme launched with a workshop at Microsoft Cambridge Research Labs on 4 December 2018, which led to its call for projects; four projects were funded, as described here. The academic lead for this theme is Ewa Luger (U. Edinburgh).

This theme examines how HDI's core concepts can improve on established system design goals such as algorithmic transparency. Legibility would be one clear step forward, but we also explore intelligibility in context.

Why Human Data Interaction?

We are increasingly surrounded by intelligent systems. These systems are driven by algorithms: sets of instructions, or rules, for a computer to follow. It has been said that if every algorithm in the world stopped working at the same time, it would be the end of the world as we know it. Algorithms are part of our everyday lives: in our smartphones, our laptops, our cars, our appliances and toys, and at the systemic level in areas such as banking, airplane scheduling and piloting, trading, and record keeping. Our actions generate the data that keeps these systems in operation.

AI is the latest iteration of algorithmically driven systems. The algorithms we are now imagining behave less like those of old and more like the human brain. With that comes a greater level of complexity, but also a greater level of obscurity. When the context is relatively benign, for example a recommendation of what you might buy next, this isn't such a problem. But what happens when the system decides whether you can access medication, whether you get hired, or who gets elected?

How do these systems reach their judgments? What data do they use? Why did they decide one thing and not another? How can users change the outcome? How should these systems present their decisions? These questions, and others like them, arise again and again. Helping people to understand and change how these systems work is a core concern for Human Data Interaction.
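As a purely illustrative sketch, and not an output of the theme itself, the fragment below shows one simple way a system might present its decision: a hypothetical weighted-sum model for a hiring recommendation that reports how much each input contributed to the outcome, so a person can see what data was used and why the result fell on one side of a threshold. All feature names, weights, and the threshold are invented for illustration.

```python
# Hypothetical example: expose a decision's rationale alongside the decision itself.
# The feature names, weights, and threshold below are invented for illustration only.

FEATURE_WEIGHTS = {
    "years_experience": 0.8,
    "relevant_qualifications": 1.2,
    "assessment_score": 1.5,
}
DECISION_THRESHOLD = 5.0  # hypothetical cut-off for a "recommend" outcome


def explain_decision(applicant: dict) -> dict:
    """Score an applicant with a simple weighted sum and return a per-feature breakdown."""
    contributions = {
        name: weight * applicant.get(name, 0.0)
        for name, weight in FEATURE_WEIGHTS.items()
    }
    total = sum(contributions.values())
    return {
        "decision": "recommend" if total >= DECISION_THRESHOLD else "reject",
        "total_score": total,
        "threshold": DECISION_THRESHOLD,
        # Sorted so the most influential inputs are presented to the user first.
        "contributions": dict(
            sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        ),
    }


if __name__ == "__main__":
    applicant = {"years_experience": 3, "relevant_qualifications": 1, "assessment_score": 2}
    print(explain_decision(applicant))
```

Even a toy breakdown like this hints at what legibility asks of real systems: surfacing which data was used, how it was weighted, and what a person might change to alter the outcome; questions that become far harder, and far more important, for the opaque learned models discussed above.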