Showcase Projects

On this page we will be reporting on our funded projects as they progress, and as they come to fruition.

Figure 1. Breathers member with pulse oximeter on his finger, 11th Feb 2020.

The pandemic has imposed many obstacles that have demanded innovative workarounds from those involved across our projects in order to keep their cutting-edge research going. For this showcase we focus on one project from our second theme, Beyond ‘Smart Cities’, because its deep entanglement with COVID-19, though a barrier to the project, also makes its work all the more urgent. The project is BREATHE – IoT in the Wild, led by Katharine Willis (School of Art, Design and Architecture, University of Plymouth) with research assistant Marcin Roszkowski, in partnership with the Breathers support group, the ERDF-funded EPIC eHealth project, the Hi9 start-up, the South West IoT Network and the University of Edinburgh IoT Network.

Taking place in isolated and rural communities in Cornwall, BREATHE aims to test the value of smart technology for people with health problems in those communities, enabling them to breathe more easily. Participants are given wearable blood oxygen readers and are able to manage their own health data, facilitated by an IoT test bed network. Through this approach, the processes of data collection, sharing and assessment are put into the hands of the people the data concerns.
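The project’s own data architecture is not described in this post, but the principle of putting collection, sharing and assessment into participants’ hands can be sketched roughly in code. The following is a minimal, hypothetical illustration in Python: the record type, field names and recipient labels are assumptions made for the sake of the example, not BREATHE’s actual design.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Set

@dataclass
class Reading:
    taken_at: datetime
    spo2_percent: int        # blood oxygen saturation from the wearable reader
    note: str = ""           # diary-style comment recorded alongside the reading

@dataclass
class ParticipantRecord:
    """A participant-held record: the person the data concerns decides who sees it."""
    participant_id: str
    readings: List[Reading] = field(default_factory=list)
    shared_with: Set[str] = field(default_factory=set)   # e.g. {"gp", "breathers_group"}

    def grant_access(self, recipient: str) -> None:
        """The participant explicitly opts a recipient in."""
        self.shared_with.add(recipient)

    def revoke_access(self, recipient: str) -> None:
        """The participant can withdraw sharing at any time."""
        self.shared_with.discard(recipient)

    def export_for(self, recipient: str) -> List[Reading]:
        """Readings are only released to recipients the participant has approved."""
        return list(self.readings) if recipient in self.shared_with else []
```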

Figure 2. Wearable pulse oximeter.

This project investigates the challenges of creating an IoT network in a low-connectivity, rural area, and also the potential of such a network to improve health outcomes for those living in rural and isolated settings. The pandemic has both hindered these investigations and highlighted the need for them.

At the beginning of 2020, BREATHE partnered with the Liskeard and Wadebridge Breathers support groups to cooperatively plan, conduct and assess fieldwork. Breathers is a patient-run group in which people with Chronic Obstructive Pulmonary Disease (COPD), and similar long-term conditions, can get together to share experiences and take gentle exercise. Participating members were given pulse oximeters and asked to record their blood oxygen readings, keeping a diary of the readings alongside comments on their health. The next stage of the fieldwork had to be put on hold due to the COVID-19 pandemic, as these vulnerable groups were required to shield.

Figure 3. A day’s BREATHE data for one participant.

Since March, the focus has been on technical development: prototyping a LoRaWAN IoT network and linking it to the wearable blood oxygen readers, alongside an interactive speaker that people can talk to for reports on their data. Moving forward, the plan is to return the focus to the community when the pandemic allows, piloting this IoT network ‘in the wild’ with the Breathers groups and running a data ethics workshop.
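To give a flavour of what linking the wearables to a LoRaWAN network involves, here is a minimal, hypothetical sketch of how an application behind the network might decode an uplink from a pulse oximeter. The four-byte payload layout, field names and scaling are assumptions made for illustration only; the project’s actual devices and payload format are not described here.

```python
import base64
import struct
from datetime import datetime, timezone

def decode_oximeter_uplink(payload_b64: str) -> dict:
    """Decode a hypothetical 4-byte uplink: SpO2 (%), pulse (bpm),
    battery (%) and a status flag. The layout is illustrative only."""
    raw = base64.b64decode(payload_b64)
    spo2, pulse, battery, status = struct.unpack(">BBBB", raw)
    return {
        "received_at": datetime.now(timezone.utc).isoformat(),
        "spo2_percent": spo2,          # blood oxygen saturation
        "pulse_bpm": pulse,            # heart rate
        "battery_percent": battery,    # device battery level
        "finger_detected": bool(status & 0x01),
    }

# Example: 94% SpO2, 72 bpm, 80% battery, finger on the sensor
example = base64.b64encode(bytes([94, 72, 80, 1])).decode()
print(decode_oximeter_uplink(example))
```

In a deployment along these lines, the decoded readings could be stored against a participant-held record of the kind sketched earlier and surfaced back to the participant through the interactive speaker.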

Whilst the pandemic has hindered the follow-up trial and the community-driven aspects of the fieldwork, it has at the same time put breathing health, and the need for support networks, firmly on many organisations’ agendas, underlining the necessity of this project.

An additional arm of this project is to assess whether it can be replicated in isolated communities elsewhere in the UK, through a partnership with the University of Edinburgh’s IoT test bed network, which works in the Highlands and Islands of Scotland. As with Cornwall, this part of the project has also been encumbered by COVID-19.

However, as this project progresses in the midst of the pandemic, the ethical challenges it addresses are becoming more urgent. It matters how people access and share their data, especially highly personal data such as health data, which can have serious consequences if shared with commercial companies. Access to personal data about breathing can also improve people’s awareness of their own condition and allow them to share data with health professionals to help manage symptoms.

Figure 4. Participant talking to the Breathers group about the project.

Testing smart sensor networks ‘in the wild’ has not traditionally been widely undertaken, and this project seeks to create a model for data sharing in which people and communities share data to help the individual, the group and, more widely, the treatment of particular health conditions.

On a technical level, the project aims to demonstrate the benefits of IoT networks ‘in the wild’, with a view to creating new products and services, innovating in off-grid data sharing for health, informing policy guidance on data ethics, and genuinely benefiting patients and people living with health conditions in rural and coastal communities.

We very much look forward to seeing this project progress, as it sheds light on the potential for improved health for those in rural communities, through applying HDI design principles.

Professor Katharine Willis

Professor Willis is Professor of Smart Cities and Communities and part of the Centre for Health Technology at the University of Plymouth. She leads on the UKRI-funded Centre for Health Technology Pop-up. Over the last two decades she has worked to understand how technology can support communities and contribute to better connections to space and place. Her recent research addresses issues of digital and social inclusion in smart cities and aims to provide guidance on how digital connectivity can be used to create smarter neighbourhoods.

One of the first projects to come out of the AI Intelligibility and Public Trust call is the report by Prof A. McStay (Bangor University) and G. Rosner (IoT Privacy Forum), Emotional AI and Children: Ethics, Parents, Governance, which has already informed, and been cited in, UNICEF’s white paper Policy Guidance on AI for Children.

Emotional AI in children’s toys is a relatively new form of artificial intelligence and is expected to become increasingly widespread in the next few years. By measuring a child’s biometrics (such as heart rate, facial expression and vocal timbre), these ‘intelligent toys’ have the potential to improve learning, detect and assist with developmental health problems, assist with family dynamics, help with behaviour regulation, and diversify entertainment. However, because these systems are placed at the heart of a vulnerable and delicate stage of life, it is critical that emotional AI and the legal frameworks surrounding it are legible and negotiable. They should provide children and parents with agency every step of the way, in order to prevent the exploitation of children (and their parents) via emotional data.

Using AI to assess and act upon the complex emotional and educational development of a child is no simple task, and with algorithmic complexity comes obscurity. We must ensure parents are equipped to understand the implications of these systems and of the data they harvest. It is one thing to attend to a child’s emotional state in a playful or educational setting, but what happens when a child’s emotions inform content marketed to that same child? What happens when the turbulent emotional development of childhood is used to profile that person in later life, affecting their chances of employment or insurance? The reductive nature of AI in a parental role is concerning too, as is the commercialisation of parenting and childhood more generally.

Alert to both the positive and negative potentials, the report sets out to explore the socio-technical terms by which emotional AI and related applications should be used in children’s toys. Drawing on interviews with experts from the emotional AI industry, policy, academia, education and child health, it recognises that introducing emotion and mood detection into children’s products carries serious potential harms. The report also pays particular attention to the views of parents, drawing on UK surveys and focus groups (the latter run with the help of Dr. Kate Armstrong and the Institute of Imagination in London). Through these, McStay and Rosner outline the frameworks needed for emotional AI in toys. Problems and potential solutions discussed include the fact that current data protection and privacy law is very adult-focused and not comprehensive enough to address child-focused emotional AI. The UN’s Convention on the Rights of the Child is recommended as a valuable guide to the governance of children’s emotional AI technologies, and it is suggested that policymakers should consider a ban on using children’s emotion data to market to them or their parents.

In short, emotional AI design must be informed by good governance that reviews and adapts existing laws, and must be built around fairness to children, support for the nuanced roles of parenting, and care when involving AI in our early inner lives.   

Read the full report and findings here.