The call will open soon. Please check back here and/or follow us on Twitter to find out when it opens. If you would like to talk through your research ideas, please contact the theme lead: Ewa Luger at the University of Edinburgh.
The application of data-driven systems within social and economic spheres “requires the transformation of social problems into technical problems” (Crawford & Whittaker, 2016: 19). This translation is not direct; it requires reframing the problem so that it can be articulated within the constraints of what we might design. Consent, for example, is easily designed into a system as a block of information followed by a checkbox, but we know that this mechanism does not tell us whether the consenting individual was capable of consenting, whether their choice was freely made, or whether they fully understood the implications of sharing their data.
More generally, contemporary data-driven systems are largely not designed with notions of human agency, data legibility or redress at the forefront. This means that there are no ready grounds upon which any algorithmic determination, or perceived harm, might be contested by those affected. An inability to meaningfully contest decisions made by an AI-driven system could reinforce existing power asymmetries or unintentionally create new ones. There are also concerns that the more dominant an organisation, the more able it is to (re)define what constitutes ethical practice. So, how might we reveal power asymmetries by design?
Equally, unlike other technological developments (e.g. the Internet, which allowed users to be curators and creators of content), AI is undemocratic in its design, in that the power to develop and train an AI lies only in the hands of organisations with the datasets, computational power and specialised skills required to build such systems. Whilst there is no doubt that in many cases there is a desire to create value-neutral systems, the reality is that such systems both embed values and fail to reflect social diversity, and are therefore not designed to meet the goals of a plural society or global community. So, how can we ensure that the values we enshrine in our systems are balanced, and how can we make visible the inequities that limit human agency?
Human Data Interaction raises three concepts believed to be important in ensuring that the relationship between humans and data is better managed: Agency, Negotiability and Legibility. These concepts stem from the idea that people should have some control over their data, particularly where that data is used to make decisions that might affect their long-term wellbeing, both as individuals and as a society.
The call: In this call, we are looking for proposals that directly address one or more of the three HDI concepts (Agency, Legibility and Negotiability), with the goal of developing solutions, provocations or experimental explorations that respond to moral problems arising from data-driven systems. This can include (but is not limited to) software development, prototype evaluation, user testing, interface design, arts-based responses, and the empirical development of heuristics and design guidelines.
Tentative deadline: 10th July 2020
Helpful links:
- PROPOSAL FORM FOR ALL THEMES
- Detailed instructions on how to complete your proposal can be found on this page.