Showcase Projects

On this page we will be reporting on our funded projects as they progress, and as they come to fruition.

Governing Philosophies in Technology Policy: Permissionless Innovation vs. the Precautionary Principle

Our IoT, System Design and the Law theme asks, ‘What are the implications posed by the Internet of Things (IoT) for HDI, data privacy, and the role of the human in contemporary data-driven and data-sharing environments?’

Attending to the hidden ideologies that often underpin technology policy and governance is Gilad Rosner’s project, Governing Philosophies in Technology Policy: Permissionless Innovation vs. the Precautionary Principle.

The project explores how the three HDI tenets of legibility, agency and negotiability are amplified (or not) via two opposing governance and regulatory philosophies (one being ‘Permissionless Innovation’ and the other ‘the Precautionary Principle’).  The project also exposes the relationships, politics, norms and values of the actors and institutions involved in the governance of IoT, AI and emerging technologies.

Gilad Rosner presents the case for the Precautionary Principle

‘If we over-regulate, we lose future social benefits from innovation’ is a common refrain in both professional and governmental discourse on policymaking for technology, particularly in the US.  This phrase seems intuitively reasonable, but it is not neutral.  It pits state regulation against innovation, assuming that the state is too slow to keep up with technological change, that markets are better at regulating themselves than governments, and that markets will generally produce outcomes that benefit society.  These assumptions form the bedrock of what can be termed Permissionless Innovation, ‘the notion that experimentation with new technologies should generally be permitted by default’ (Thierer, 2016).  This philosophical standpoint is realized through active political discourse, and industry positions on IoT governance hold that concrete harms from technological innovation must be demonstrated before regulation is warranted.

Through this philosophy, government regulation (or permission) is presented as an antagonist to innovation, and an inhibitor of freedom and of organic, bottom-up solutions.  The burden therefore falls on those wishing for regulation to show evidence that the technology is harmful.  This, of course, is often not straightforward.

Let’s consider a context where the Precautionary Principle, Permissionless Innovation’s counter-argument, does in fact avoid unintended and unforeseen societal harms.  In the case of environmental change, it is easy to see how ‘when there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation’ (United Nations Rio Declaration on Environment and Development, 1992).  Indeed, it is within the sphere of environmental regulation that the majority of academic and legal discussion of the principle takes place.

But what about the Precautionary Principle’s relevance to IoT, AI and other emerging technologies?  In the case of Cambridge Analytica, 50,000,000 people had their data used in ways that violated the spirit of consent, against consumer expectations, in a fashion many would consider manipulative.  Furthermore, the intended role of online social platforms was hijacked for the unintended use of political campaigning.  Surfacing here are political and market conditions which directly disfavour the legibility of algorithms and data relationships. What philosophy befits this type of problem?

Vian Bakir on fake news and the economy of emotions online

In one of the few papers that explore the alignment of the Precautionary Principle with privacy and data protection, researcher Luiz Costa noted its benefits.  Costa points out how the Precautionary Principle avoids risk-taking without a larger public discussion, thereby involving citizens in a decision-making process that counterbalances asymmetric citizen-government and citizen-industry relations.  Our agency as citizens interacting with IoT systems is currently determined by our capacity to act within technical or business relations.  The policy environment (and its philosophical orientation) is therefore fundamental to our agency.  Here, regulation can be seen as a tool to protect citizens, counteracting their power imbalance with governments and industry.

As Costa also points out, the monetary compensation for damages that covers Permissionless Innovation when it goes wrong is often inadequate for IoT privacy and data protection, where many of the emerging dangers are both non-economic and irreparable.  The stockpiling of emotional data that renders our inner lives transparent, the collection of children’s data from toys (see McStay & Rosner, Emotional AI and Children: Ethics, Parents, Governance, also funded by HDI Network+), our subjection to commercial manipulation via data, the diminishment of private spaces, exacerbated socioeconomic inequality and surveillance capitalism are all emerging dangers of technological innovation.

The slowing down of technological deployment through a precautionary approach gives society the time it needs to negotiate the political and institutional relationships that govern how these products are offered, and to form the norms needed to manage the sensitive and revelatory outcomes of technological innovation. 

Despite being enshrined in Article 191 of the Treaty on the Functioning of the European Union, the Precautionary Principle’s application to privacy, data protection and technology governance is very limited.  By rigorously examining the benefits of a precautionary approach, however, and relating it to the core HDI tenets, this project seeks to improve information policymaking in service of human-centric principles and privacy values.

The project aims to build networks of researchers, professionals, data protection authorities and other stakeholders in order to launch an inquiry into the role and benefits of the Precautionary Principle in the policymaking of emerging technology, and how such an orientation supports the HDI tenets of agency, legibility and negotiability.  Through discussion of the opposing orientations of Permissionless Innovation and the Precautionary Principle, the network will publish a collaboratively reached white paper.  It will also lay the foundation for future grant funding to hold a Transatlantic Symposium on the Precautionary Principle in Technology Policy, and build momentum to continue the network beyond the scope of this project.

Vian Bakir

Vian Bakir is Professor in Journalism and Political Communication at Bangor University, UK, and is a leading international scholar of technology and society with expertise in the impact of the digital age on strategic political communication, dataveillance and disinformation. Her books include: Intelligence Elites and Public Accountability (2018); Torture, Intelligence and Sousveillance in the War on Terror (2016); Sousveillance, Media and Strategic Political Communication (2010) and Communication in the Age of Suspicion (2007). She has been awarded grants on data governance and transparency from UK national research councils (ESRC, EPSRC, AHRC and Innovate UK). She has advised UK national research councils (EPSRC, AHRC) on their major investments into digital citizenship, AI and governance; and the European Commission on its Horizon 2020 work programme on digital disinformation. Her work has helped parliaments understand the impact of dataveillance, microtargeting and disinformation (e.g. Electoral Matters Committee, Victoria, Australia; UK All Party Parliamentary Group on AI; UK House of Lords Select Committee on Democracy & Digital Technologies; UK Parliament Digital, Culture, Media & Sport Committee); trade unions on adapting to data surveillance (National Union of Journalists); and businesses on public perceptions of data use.

Gilad Rosner

Gilad Rosner is a data protection officer, privacy researcher and government advisor. Gilad’s work focuses on data protection, US & EU privacy regimes, digital identity management, and emerging technologies. His research has been used by the Office of the Privacy Commissioner of Canada and the UK House of Commons Science & Technology Committee. He has been a featured expert on the BBC and other news outlets, and his 25-year IT career spans ID technologies, digital media, robotics and telecommunications. Gilad is a member of the UK Cabinet Office Privacy and Consumer Advisory Group, and a member of the Advisory Group of Experts convened to support the forthcoming review of the OECD Privacy Guidelines. He is a Visiting Researcher at the Horizon Digital Economy Research Institute, and has consulted on trust issues for the UK government’s identity assurance program. Gilad was a policy advisor to a Wisconsin State Representative, contributing directly to legislation on law enforcement access to location data, access to digital assets upon death, and the collection of student biometrics. Gilad is founder of the non-profit IoT Privacy Forum, which produces research, guidance, and best practices to help industry and government lower privacy risk and innovate responsibly with connected devices, and he has recently completed pioneering research on the privacy and ethics of using emotional AI with children.

Our Future of Mental Health theme has funded the ExTRA-PPOLATE project, which explores HDI approaches to the creation of an automatic coding tool that can help therapists improve their practice, and importantly, is trusted by stakeholders such as therapists and patients. The multidisciplinary research team includes Mat Rawsthorne and Jacob Andrews (Principal Investigators) from the NIHR Mindtech MedTech Co-operative, Sam Malins (Clinical Psychologist), Dan Hunt (Linguist) and Jeremie Clos (Computer Scientist) from Nottingham University, as well as Tahseen Jilani (Data Analyst) from Health Data Research UK and Yunfei Long (Computer Scientist and natural language processing specialist) from the University of Essex.

Sadly, the fallout from COVID-19 is expected to affect mental health for years to come. Large quantities of high quality therapy are going to be needed. This means carefully assessing therapy sessions’ effectiveness, whilst also thinking carefully about how therapists’ time is spent. Currently, therapy assessment is resource expensive, often requiring a second and more senior therapist in the room. This second therapist could instead be seeing another patient, and their presence can also alter the dynamic between the patient and their therapist. The ExTRA-PPOLATE tool aims to address this problem using machine learning. As an augmented intelligence system, ExTRA-PPOLATE aims to help therapists assess their sessions and support decision making, identifying weak spots and suggesting ways to improve the sessions.  The legibility of ExTRA-PPOLATE’s algorithm is also important to the research team, and the model offers insight to therapists on how it has reached decisions, allowing the therapist to make corrections.

In order to create the ExTRA-PPOLATE tool, natural language processing techniques were applied to therapy transcripts to identify features associated with different psychological processes in therapy. Features included sentence length, emotive words, sentence polarity and readability scores. To train the ExTRA-PPOLATE tool, machine learning was then applied to the numerical features generated by these techniques. ExTRA-PPOLATE can now be used to identify processes that are (or aren’t) happening in sessions, although further work is needed to validate the tool.
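The feature types named above can be sketched in a few lines of Python. The lexicons and formulas here are illustrative assumptions only; the project’s actual lexical resources and trained model are not described in this report, so this is a minimal sketch of the general technique rather than the team’s implementation:

```python
import re

# Toy sentiment lexicons -- hypothetical, for illustration only.
POSITIVE = {"happy", "hopeful", "calm", "glad"}
NEGATIVE = {"sad", "angry", "afraid", "worried"}

def count_syllables(word):
    # Rough vowel-group heuristic, adequate for a readability estimate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def features(sentence):
    """Compute the feature types named in the text: sentence length,
    emotive-word count, polarity, and a Flesch reading-ease score."""
    words = re.findall(r"[A-Za-z']+", sentence)
    n = len(words)
    pos = sum(w.lower() in POSITIVE for w in words)
    neg = sum(w.lower() in NEGATIVE for w in words)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch reading ease, computed for a single sentence.
    flesch = 206.835 - 1.015 * n - 84.6 * (syllables / max(n, 1))
    return {
        "length": n,
        "emotive": pos + neg,
        "polarity": round((pos - neg) / max(n, 1), 2),
        "flesch": round(flesch, 1),
    }

print(features("I feel hopeful but still a little worried today"))
```

Vectors of such features, one per utterance, would then be passed to a supervised classifier trained on transcripts hand-coded for psychological processes.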

The involvement and engagement of therapists, patients and the public have been priorities for the ExTRA-PPOLATE team throughout the project. Two PPIE (Patient and Public Involvement and Engagement) members with lived experience of mental health difficulties have been involved since the project’s inception. Also involved is a PPRG (Patient and Practitioner Reference Group), made up of patients, carers, psychotherapists, therapy trainers and therapy managers. With the help of Matt Burton McFaul from Virtual Health Labs, the ExTRA-PPOLATE team has conducted three interactive online workshops with the PPRG, to help reflect on and reassess the project, specifically looking at issues of transparency and trust. These sessions have been crucial in informing the tool’s development and enabled the implementation of HDI approaches that increase stakeholders’ agency, legibility and negotiability capabilities around personal data, differentiating the project from approaches that are exclusively data-driven.

Key changes to the ExTRA-PPOLATE tool have been informed by the workshops. For example, patients and carers indicated that if therapists were to review each output for accuracy, the resultant codings would be subject only to the therapists’ choices and professional biases, preventing any opportunities to negotiate and discuss interpretations of the output with patients. In addition, therapists thought it impractical to spend extra time reviewing the coding after seeing each patient. Thus, the system is now being approached as a tool to provide an indication of where therapists could improve, without them having to check each automatic coding.

Patients and carers also explained that they would like their own view of the system readout and would like to be able to review the processes occurring in their sessions together with therapists, to provide agency and enable negotiability in how therapists’ practice should be changed.  Finally, patients suggested that a small part of a therapy session could be analysed together with patients, such that the patients had a better idea of what the system was doing, permitting them greater legibility.

The feedback from the workshops thus reveals a design trade-off between the system’s legibility and its usability and practicality for stakeholders. Tensions and frictions can emerge in clinical contexts where strong power dynamics are present, and limited resources may challenge patients’ agency. PPRG members stated that while patients should not be required to invest significant amounts of time to understand how their data is being used, there is still a requirement for them to be enabled to understand this, such that informed consent can be legitimately received and patients have negotiability in choosing whether or not to consent to the use of the system with their data.

Therapist members of the reference group explained that attempts to show the workings of the tool (here provided as short sentences describing the language features used to predict the identified psychological process) do not inform action that any stakeholder can take.  While these explanations increase understanding of how the tool has identified particular processes, they serve no practical purpose.  Other explainability features of the system, including an indicator of which processes should be used more frequently within a psychotherapy session and which less, were seen as more useful by steering group members.

The project team are currently reviewing the output from these workshops to reflect on how best to move forward with the practical design of the system while moving away from traditional data-driven approaches. Our objective is to keep championing legibility, negotiability and agency for patients in future related projects that aim to improve therapeutic interventions and practices. For more on ExTRA-PPOLATE’s interface, from the second of these three workshops, see here.

On the 20th April, there is an (online!) roadshow event, aimed at the groups that make up the PPRG (patients, therapists, therapy trainers, service managers).

To register for the roadshow, email

If you want to know more about the project and have any questions, please feel free to email:





Figure 1. Breathers member with pulse oximeter on his finger, 11th Feb 2020.

The pandemic has imposed many obstacles that have needed innovative workarounds from those involved across our projects, in order to continue carrying out their cutting-edge research. There is one particular project in our second theme, Beyond ‘Smart Cities’, that we focus on for this showcase, because its deep entanglements with COVID-19, though a barrier to the project, also reveal its work to be ever more urgent. The project is BREATHE – IoT in the Wild by Katharine Willis (School of Art, Design and Architecture, University of Plymouth), with RA Marcin Roszkowski, the Breathers support group, the ERDF-funded EPIC eHealth project, the Hi9 start-up, the South West IoT Network and the U. Edinburgh IoT Network.

Taking place in isolated and rural communities in Cornwall, BREATHE aims to test the value of smart technology to people with health problems in those communities, to enable them to breathe more easily.  Participants are given wearable blood oxygen readers, and are able to manage their own health data, facilitated by an IoT test bed network.  Through this approach, the processes of data collection, sharing and assessment are put into the hands of the people the data concerns. 

Fig 2. Wearable pulse oximeter.

This project investigates the challenges of creating an IoT network in a low-connectivity, rural area, and also the potential of such a network for improving health outcomes for those living in more rural and isolated settings.  The pandemic has both hindered and highlighted the need for these investigations.

At the beginning of 2020, BREATHE partnered with the Liskeard and Wadebridge Breathers support groups to cooperatively plan, conduct and assess fieldwork.  Breathers is a patient-run group in which people with Chronic Obstructive Pulmonary Disease (COPD), and similar long-term conditions, can get together to share experiences and take gentle exercise.  Participating members were given pulse oximeters, and were asked to record their blood oxygen readings, alongside keeping a diary of readings and comments on their health. The next stage of the fieldwork had to be put on hold due to the COVID pandemic, as these vulnerable groups were required to shield.

Fig 3. A day’s BREATHE data for one participant.

Since March, the focus has been on technical development: prototyping a LoRaWAN IoT network and linking it to the wearable blood oxygen readers, alongside an interactive speaker that people can talk to in order to get reports about their data.  Moving forward, the plan is to return focus to the community when the pandemic allows, trialling a pilot of this IoT network ‘in the wild’ with the Breathers groups, alongside running a data ethics workshop.
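A defining constraint of LoRaWAN is that uplink payloads are very small (tens of bytes at the slowest data rates), so sensor readings are typically packed into compact binary frames before transmission. The field layout below is a hypothetical illustration of how a pulse-oximeter reading might be packed; it is not the project’s actual format:

```python
import struct

def encode_reading(spo2_percent, pulse_bpm, minutes_since_midnight):
    """Pack one reading into 4 bytes: uint8 SpO2 (%), uint8 pulse (bpm),
    uint16 minute-of-day timestamp, all big-endian."""
    return struct.pack(">BBH", spo2_percent, pulse_bpm, minutes_since_midnight)

def decode_reading(payload):
    """Unpack a 4-byte reading back into named fields."""
    spo2, pulse, minutes = struct.unpack(">BBH", payload)
    return {"spo2": spo2, "pulse": pulse, "minutes": minutes}

payload = encode_reading(96, 72, 615)  # a reading taken at 10:15 am
print(len(payload), decode_reading(payload))
```

Keeping each reading to a few bytes leaves headroom within LoRaWAN’s payload and duty-cycle limits, while still letting a back-end service (and in turn the interactive speaker) reconstruct a day’s readings for a participant.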

Whilst the pandemic has hindered the follow-up trial and community-driven aspect of the fieldwork, it has at the same time put breathing health, and the need for support networks, firmly on many organisations’ agendas, clarifying the necessity of this project.

An additional arm of this project is to assess its replication in isolated communities elsewhere in the UK, through partnership with U. Edinburgh’s IoT test bed network, working in the Highlands and Islands of Scotland.  As with Cornwall, this part of the project has also been encumbered by COVID-19. 

However, as this project progresses in the midst of this pandemic, the ethical challenges it addresses are becoming more urgent.  It matters how people access and share their data, especially highly personal data such as their health data, which can have serious consequences if shared with commercial companies.  Accessing personal data about breathing can also improve self-awareness of the condition and allow data sharing with health professionals to help manage symptoms.

Figure 4. Participant talking to the Breathers group about the project.

Traditionally, testing smart sensor networks ‘in the wild’ has not been widely undertaken.  This project seeks to create a model for data sharing in which people and communities share data to help the individual, the group and, more widely, the treatment of particular health conditions.

On a technical level, the project aims to demonstrate the benefits of IoT networks ‘in the wild’, with a view to creating new products and services, innovating off-grid data sharing for health, informing policy guidance on data ethics, and genuinely benefiting patients and people living with health conditions in rural and coastal communities.

We very much look forward to seeing this project progress, as it sheds light on the potential for improved health for those in rural communities, through applying HDI design principles.

Professor Katharine Willis

Professor Willis is Professor of Smart Cities and Communities and part of the Centre for Health Technology at the University of Plymouth. She leads on the UKRI-funded Centre for Health Technology Pop-up. Over the last two decades she has worked to understand how technology could support communities and contribute to better connections to space and place. Her recent research addresses issues of digital and social inclusion in smart cities, and aims to provide guidance as to how we can use digital connectivity to create smarter neighbourhoods.

One of the first projects to come out of the AI Intelligibility and Public Trust call is Prof. A. McStay’s (Bangor University) and G. Rosner’s (IoT Privacy Forum) report on Emotional AI and Children: Ethics, Parents, Governance, which has already informed and been cited in UNICEF’s white paper Policy Guidance on AI for Children.

A relatively new form of artificial intelligence, emotional AI in children’s toys is expected to become increasingly widespread over the next few years.  By measuring a child’s biometrics (such as heart rate, facial expression and vocal timbre), these ‘intelligent toys’ have the potential to improve learning, detect and assist with developmental health problems, assist with family dynamics, help with behaviour regulation, and diversify entertainment.  However, because these toys are placed at the heart of a vulnerable and delicate stage of life, it is critical that emotional AI and the legal frameworks surrounding it are legible and negotiable. They should provide children and parents with agency every step of the way, in order to prevent the exploitation of children (and their parents) via emotional data.

Using AI to assess and act upon the complex emotional and educational development of a child is no simple task, and with algorithmic complexity comes obscurity.  We must ensure parents are equipped to understand the implications of these systems and the data they harvest.  It is one thing to attend to a child’s emotional state in a playful or educational setting, but what happens when a child’s emotions inform content marketed to that same child? What happens when the turbulent emotional development of childhood is used to profile that person in later life, affecting chances of employment or insurance?  The reductive nature of AI in a parental role is concerning too, as is the commercialisation of parenting and childhood more generally.

Alert to both the positive and negative potentials, this report sets out to explore the socio-technical terms by which emotional AI and related applications should be used in children’s toys. Based on interviews with experts from the emotional AI industry, policy, academia, education and child health, it recognises that there are serious potential harms in introducing emotion and mood detection into children’s products.  The report also pays particular attention to the views of parents, drawing on UK surveys and focus groups (the latter conducted with the help of Dr. Kate Armstrong and the Institute of Imagination in London).  Through these, McStay and Rosner outline the necessary frameworks for emotional AI in toys.  Problems and potential solutions discussed include the issue that current data protection and privacy law is very adult-focused, and not comprehensive enough to address child-focused emotional AI.  The UN’s Convention on the Rights of the Child is recommended as a valuable guide to the governance of children’s emotional AI technologies, and it is suggested that policymakers should consider a ban on using children’s emotion data to market to them or their parents.

In short, emotional AI design must be informed by good governance that reviews and adapts existing laws, and must be built around fairness to children, support for the nuanced roles of parenting, and care when involving AI in our early inner lives.   

Read the full report and findings here.