What are the implications posed by the Internet of Things for HDI, data privacy, and the role of the human in contemporary data-driven and data-sharing environments? This covers challenges posed in the fields of engineering, privacy, data protection and security, law, governance and policy, and other related academic disciplines.
Projects funded in the theme IoT, System Design, and the Law
We were very pleased to fund two medium-sized projects and one small project.
Trust in Home: Rethinking Interface Design in IoT (THRIDI)
Cigdem Sengul et al, Brunel University
IoT systems in smart homes present several privacy challenges. While the GDPR creates a general duty for data controllers to implement privacy by default and privacy by design, this obligation requires taking the state of the art into account. However, the state of the art in the smart home context is in its infancy, requiring research into building accountability and trust through the appropriate design of user interfaces and access control systems. One of the key issues is that users lack the experience or knowledge to control current IoT systems. For instance, the technology for safeguarding personal data typically requires identity and access control systems. However, for access control to enhance user privacy and trust, authorisations should reflect a user’s personal preferences and interests. Hence, the protection these access control models offer is only effective to the extent that users can express their privacy needs and are aware of the potential risks of permitting data sharing.
This requirement entails that the IoT allows for end-user control, giving users the agency to tweak and personalise the way their data is shared and access is managed. Such a design would need to take into account users’ constraints of resources, time, attention and skills, as well as their priorities in everyday life.
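The kind of preference-driven access control described above can be illustrated with a minimal sketch. All names here (`Preference`, `AccessRequest`, `is_permitted`, the example parties) are illustrative assumptions, not drawn from any real IoT framework; the point is simply that authorisations are derived from user-expressed preferences, with denial as the default.

```python
from dataclasses import dataclass, field

@dataclass
class Preference:
    """A user-expressed sharing preference for one kind of data."""
    device: str                      # e.g. "thermostat"
    data_type: str                   # e.g. "temperature"
    allowed_parties: set = field(default_factory=set)

@dataclass
class AccessRequest:
    """A third party asking to read a device's data."""
    requester: str
    device: str
    data_type: str

def is_permitted(request: AccessRequest, preferences: list) -> bool:
    """Grant access only if a stored preference explicitly allows it
    (deny by default, in the spirit of privacy by default)."""
    return any(
        p.device == request.device
        and p.data_type == request.data_type
        and request.requester in p.allowed_parties
        for p in preferences
    )

# The user has chosen to share thermostat readings with one party only.
prefs = [Preference("thermostat", "temperature", {"energy_provider"})]

print(is_permitted(AccessRequest("energy_provider", "thermostat", "temperature"), prefs))  # True
print(is_permitted(AccessRequest("ad_network", "thermostat", "temperature"), prefs))       # False
```

The deny-by-default rule is the key design choice: data flows only where the user has positively expressed a preference, rather than requiring users to opt out of sharing they never anticipated.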
Governing Philosophies in Technology Policy: Permissionless Innovation vs. the Precautionary Principle
Vian Bakir, Gilad Rosner, Bangor University and Internet of Things Privacy Forum
The public policymaking processes for the governance of IoT and AI technologies are subject to discourses and political activity that valorise market activity and self-regulation over government regulation and intervention. A common refrain in technology policymaking is that too much, or too broad, or too early regulation could injure innovation and the social benefits it entails. Often, the political orientations and governing philosophies underpinning these positions are either not visible or not substantially explored within public debate. These positions are not neutral: they support and are supported by commercial logic – which seeks to minimize regulatory costs and burdens while maximizing product development capacities – and they align with libertarian politics, which sees regulation as an unwanted government interference in the functioning of the market, and generally sees government itself as a detrimental actor. In the United States, these discourses and positions have coalesced under the heading of permissionless innovation, whose ideological proponents promote their position through academia, think tanks, and government activity.
The counterposition to this ideology is the Precautionary Principle, which argues for attention to be paid to future harms and encourages greater public deliberation in regard to governance. This Principle has largely been developed within environmental policy, most famously in the UN’s 1992 Rio Declaration on Environment and Development, as well as Article 191 of the Treaty on the Functioning of the European Union. A key element of this Principle is that a lack of concrete evidence of future harms that could have wide impact should not preclude regulation. A coordinated discursive attack on this Principle is evident in a range of publications, and politically aligned language can be seen, among other places, in comment submissions to American regulators and public agencies. Beyond this attack, there has been very limited academic investigation or political application of the Precautionary Principle to technology policy. This project addresses that gap, robustly exploring the potential merits of the Principle’s application to the governance of IoT and AI technologies. It will explore how American, British and European regulatory cultures align with the Precautionary Principle, and surface the political positions informing its support or disparagement.
Who, then, in law is my neighbour? – Judgment, responsibility, and expectations of the onlife reality.
Petros Terzis, Dr. Martina Hutton, Dr. Emma Nottingham, University of Winchester
The present socio-legal research project attempts to review the common law right to privacy (broadly defined) in light of the challenges posed by IoT systems and devices. In doing so, it aspires to bring agency back into the equation of ethics and responsibility.
This research project has two pillars: the ‘duty of care’ and the ‘standard of care’. Taking its cue from the recent literature on positionality-aware machine learning, the first pillar draws on previous work from legal theory, computer science and philosophy to highlight the importance of sociotechnical systems in human-computer interaction and the unique standpoint and role of individuals (project managers, software developers, UX/UI designers) in the design and development of IoT systems. It attempts to explore the privacy-related trade-offs at the micro-level of the design and development processes of IoT systems, and to scrutinise the range of privacy-related options that software developers, UX/UI designers and project managers have during the various stages of IoT systems development. How do the different agents perceive the concept of privacy? And are their actions (e.g. during the optimisation phase) legally significant vis-à-vis privacy? The ultimate aim of this pillar is to understand the discretionary freedom that micro-workers have vis-à-vis privacy and to compare it with the normative framework of the tort of negligence, a framework that dates back to 1932 and the Donoghue v Stevenson case, where an opaque bottle of ginger beer changed the history of liability for harmful products. Is there any common ground for developing a corresponding duty of care in the realm of the tort of negligence, owed by the developers of IoT devices to end-users?
The second pillar of our project is a critical review of the ‘reasonable expectations of privacy’ test. It discusses the different schools of thought regarding people’s stance towards their privacy by building on a recent Pew Research Center report, as well as on relevant legal research and work from communications and cultural studies. By organising focus groups, we attempt to shape the picture of the ‘reasonable expectations of privacy’ today. We will then situate these findings within the above-mentioned studies, the normative background of privacy as developed by English and ECHR case law, and the conceptualisation of privacy as an issue of power, with the goal of understanding whether the ‘reasonable expectations of privacy’ test is the appropriate regime in light of the challenges posed by IoT systems.
To pursue the aims of the two pillars, we will: 1) conduct interviews with software developers, UX/UI designers, and project managers; and 2) organise focus groups with users of smart devices.