Surveillance / Resistance

This is the eighth theme of the Network Plus and is led by Matthew Chalmers and Alan Munro (University of Glasgow). It addresses questions of:

How do we design for resistance against data surveillance? Human Data Interaction (HDI) offers a conceptual framework for system design that goes beyond notions of data/algorithmic transparency, focusing on helping people understand what is happening to data about them (legibility), change relevant systems to be in better accord with their wishes (agency), and work with the people using the data so as to improve that processing (negotiability). Work so far in HDI generally assumes that at least one of these three features can be implemented effectively, to meet people's needs and desires.

However, what happens when one has some understanding of what is happening with one's data, but cannot change the system or work positively with the people processing it? Then the collection of one's personal data becomes something more akin to surveillance, in ways that are often driven by contemporary business models, as per the widely discussed issue of 'surveillance capitalism'. This invites the question at the core of this theme: what should we do when legibility, agency and negotiability are not enough? It seems that we could then support systems and processes that allow people to resist or subvert such abusive, illegal or otherwise undesired surveillance, in ways that are not themselves abusive or illegal. In this way, we can build on the existing three tenets of HDI, and make resistance a fourth part of our conceptual framework.

Projects funded by the call

In this call, we were looking for proposals that directly address this question in a practical and demonstrable way, by developing technical solutions, provocations or experimental explorations. We funded four projects:

Countermeasures: Giving children better control over how they’re observed by digital sensors

Angus Main, RCA.

This project focuses on the privacy implications stemming from the increased use of sophisticated digital sensors (e.g. biometric, proximity, and motion sensors) in common devices such as phones, tablets, games consoles and televisions. It aims to address the power imbalance caused by the ability of sensor-enabled devices to observe children's behaviour and activities, without the children necessarily being aware of when and how they are being observed, or how this data is used. It also addresses the knock-on effect this surveillance has in limiting children's voices in relation to software and other products designed for them.

To counteract these issues, we will provide children with methods for gaining agency over the sensors that observe them, along with knowledge of how tech companies use this information in ways that affect their lives without their knowledge. For example, many software companies develop products designed for children using metadata drawn from their technology use, in direct contradiction to the UN Convention on the Rights of the Child (1989), which states that children should be able to voice their own opinions on matters that affect them.

The proposal builds on the project team's experience in researching children's use of technology (e.g. an AHRC/ESRC-funded Japan/UK networking project; Yamada-Rice et al., 2020), and extends the PI's existing research into tools for deceiving digital sensors (Main, 2019).

The PI's previous sensor-resistance research, on which this project builds, resulted in a booklet for identifying sensors and a toolkit for blocking or disrupting sensors, intended primarily for adult use (ispysensors.com). This proposal extends those outcomes by focusing specifically on the needs of children (aged 8-12 years), who face particular risks in terms of consent and privacy, as well as their exclusion from consultation on products and services designed for them on the basis of sensor data. The project aims to gain insights into the needs and attitudes of this group through primary research using cultural probes (Gaver et al., 1999) and facilitated disruptive design workshops with children. It also aims to draw together knowledge from existing research and relevant practice by convening a network of subject-matter experts to share requirements and assess potential solutions.

A core outcome of the project will be a new toolkit of open-source, low-cost, low-tech resources for identifying and subverting sensors. These will be designed primarily for and with children, but will also be intentionally accessible to device users of all ages. The toolkit will consist of downloadable resources and make-at-home mechanisms, hosted on the project website.

The second core aim, from a critical design perspective, is to raise awareness and encourage discussion of the privacy issues related to sensors among members of the HCI network and children's media industry partners.

Collaborative ResistancE to Web Surveillance (CREWS)

Steven Murdoch, University College London.

Much engagement between individuals now takes place through the world wide web, but here respect for the tenets of HDI is notably lacking. Websites apply tracking techniques to link a user's activity between websites, building up a profile of the individual that is more detailed than any one site would be able to create. Such linkage violates the principle of contextual integrity, by breaking boundaries between aspects of a person's life that they wish to keep separate.
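To make the linkage problem concrete, here is a minimal, self-contained Python sketch (not part of CREWS itself) of how a third-party tracker embedded on many sites can join visits from separate contexts into a single profile via a shared cookie identifier. All site names and identifiers are hypothetical.

```python
# Minimal simulation of cross-site tracking via a shared third-party cookie.
# A tracker embedded on many sites sees the same cookie ID on each visit,
# letting it join activity that no single site could link on its own.
from collections import defaultdict

# Each event the tracker observes: (cookie ID, first-party site, page)
events = [
    ("uid-7f3a", "news.example",   "/politics/article-123"),
    ("uid-7f3a", "health.example", "/conditions/anxiety"),
    ("uid-7f3a", "shop.example",   "/baby/prams"),
    ("uid-91c2", "news.example",   "/sport/football"),
]

profiles = defaultdict(list)
for cookie_id, site, page in events:
    profiles[cookie_id].append((site, page))

# One identifier now spans news, health and shopping contexts:
# precisely the boundary-breaking that contextual integrity forbids.
for cookie_id, visits in profiles.items():
    print(cookie_id, "->", visits)
```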

The GDPR has resulted in some data controls being made available, but so far these are inadequate. Legibility is purported to be provided through privacy policies, but these are incomprehensible. Agency is purported to be provided through opt-in/opt-out privacy notices, but these don't offer a genuine choice. Negotiability is purported to be provided through cookie dialogs, but the PI's research has shown that websites commonly block individuals from exercising their right to privacy. Even if cookies are blocked, websites can and do use IP-address tracking to perform linkage and geolocation.

Resistance to web tracking allows an individual to unilaterally assert their agency, rather than waiting for websites to offer controls for how their personal data is used. This resistance must cover both tracking through cookies and equivalent web features, and tracking based on IP addresses. The most popular technology for achieving resistance to web tracking is the Tor Browser, created by the project PI in 2009. Tor Browser employs advanced anti-tracking measures within the browser to prevent linkage through web features, and routes communication over the Tor network to prevent linkage through IP addresses.
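As a rough illustration of the IP-address half of this defence (a sketch, not project code): any application can route its traffic over Tor by talking to a local Tor daemon's SOCKS proxy, which listens on port 9050 by default. The example assumes a running Tor daemon and the requests library installed with SOCKS support (pip install requests[socks]).

```python
# Route an HTTP request over the Tor network via the local SOCKS proxy,
# so the destination server sees a Tor exit node's address, not the user's.
# Assumes a Tor daemon is listening on 127.0.0.1:9050 (the default).
import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: resolve DNS through Tor too
    "https": "socks5h://127.0.0.1:9050",
}

# check.torproject.org reports whether the request arrived through Tor
resp = requests.get("https://check.torproject.org/api/ip", proxies=proxies)
print(resp.json())  # e.g. {"IsTor": true, "IP": "<exit-node address>"}
```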

Tor Browser blocks the information implicitly disclosed by web tracking, granting users agency over whether or not to disclose information explicitly. Changing information disclosures from implicit to explicit enhances legibility around what information is disclosed, and agency over whether or not to disclose it.

In CREWS, we will explore how Tor Browser is used as a form of resistance in an area where the tenets of HDI are not available. Furthermore, we will explore how to apply the tenets of HDI to enhance the effectiveness of this resistance further. Specifically, we will evaluate approaches for encrypting web browsing data as it leaves the Tor network, to prevent it from being eavesdropped. Firstly, we will evaluate individual resistance, by informing users of the risks of unencrypted browsing (legibility) and giving them the agency to allow it or not. Secondly, we will evaluate collective resistance, by giving users the agency to share anonymised web browsing data to protect themselves and other users better. Finally, we will explore active resistance to augment passive resistance, to increase the level of privacy achieved by Tor Browser.
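One hypothetical shape the individual-resistance step could take (a sketch under our own assumptions, not the project's design): before fetching a plain-HTTP URL, try the HTTPS equivalent, and if none is available, state the risk (legibility) and let the user decide (agency).

```python
# Sketch of a legibility/agency check for unencrypted browsing:
# prefer HTTPS when available; otherwise warn the user and ask consent.
# Illustrative only; not how Tor Browser or CREWS implements this.
from typing import Optional
from urllib.parse import urlsplit, urlunsplit
import requests

def fetch_with_consent(url: str) -> Optional[requests.Response]:
    parts = urlsplit(url)
    if parts.scheme == "http":
        https_url = urlunsplit(("https",) + parts[1:])
        try:
            # If the site answers over HTTPS, silently upgrade
            requests.head(https_url, timeout=5)
            return requests.get(https_url, timeout=10)
        except requests.RequestException:
            pass
        # Legibility: state the risk. Agency: let the user choose.
        print(f"{url} is unencrypted and can be read in transit.")
        if input("Continue anyway? [y/N] ").strip().lower() != "y":
            return None
    return requests.get(url, timeout=10)

response = fetch_with_consent("http://example.com/")
print(response.status_code if response else "request declined")
```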

Resisting Surveillance in Connected Cars

Anna Rudnicka, University College London.

This project will examine and challenge the power dynamics between the users of connected cars and the companies collecting in-car data about them.

Car companies collect increasing volumes of data about users, without providing full transparency about what data is collected, how it is stored and why, or any willingness to involve users in decision-making. Enclosing their data subjects, connected cars become moving panopticons, transforming an environment naturally associated with privacy (Wells, 2017) into an always-on surveillance tool.

Most studies exploring privacy issues in connected cars provide technological solutions to prevent interference from third parties (e.g. Ogundoyin, 2020; Rasheed et al., 2018) but do not limit firms' ability to collect data. Where users' privacy preferences are considered, it is consistently in the context of helping car companies minimise barriers to adoption (e.g. Derikx et al., 2016; Hakimi et al., 2018). Strikingly, there is little to no discussion of empowering users to regulate the volume and type of in-car data that can be collected about them. The highly personal nature of the data potentially collected (e.g. frequently visited places, travel companions) makes it important that users have control over what data is actually collected.

We propose two studies involving users of connected cars. Study 1 sets out to examine the data sharing practices of this population and engage them in reflection on their role in negotiating their in-car privacy expectations. Study 2 sets out to provide an in-depth understanding of privacy-relevant behaviour in the car, as well as participants' strategies for, and barriers to, negotiating their privacy needs.

This research will provide a foundation for an investigation of connected-car privacy issues grounded in the Human-Data Interaction framework, and will lead to a grant application covering further work on the degree to which resistance is required for connected-car users, and on how legibility, agency and negotiability could be supported and improved upon. This future research package will include both experimental and observational work, and will place an emphasis on engagement with current and potential connected-car users, policymakers, and firms.

Privacy Awareness for Active Resistance (PAAR)

Andrea Cavallaro, Queen Mary University of London.

Platforms and algorithms mediate our social interactions, and have radically transformed how we share, access and assess information. Yet although these platforms have become necessary and critical to our day-to-day lives, we poorly understand the consequences of sharing personal information on the internet and the impact it has on our lives.

Images are arguably the richest sources of personal information we share online, and we share them mostly unwittingly. These images are routinely parsed by algorithms both to identify policy violations and to build profiles, which are in turn used for personalisation. Detailed digital profiles are the basis of targeted marketing and political campaigns, which have a substantial impact on our perceptions, our decisions and the actions we take.

The project will focus on helping people develop a privacy-informed use of digital images shared online, and understand what information about them can be extracted. We will build an interface for individuals to interact with image data, thereby raising awareness of the potential risks and consequences of sharing images. Awareness will be gained through practical and demonstrable simulations based on scenarios that, through interactive and experiential activities, will elicit the emergence of rituals associated with privacy. These scenarios will be generated in collaboration with a focus group around a series of predetermined day-to-day rituals (e.g. sharing a selfie or a picture of a meal) in which, to develop informed decisions, participants will interact with the information that algorithms can extract from images.
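A minimal sketch of the kind of extraction such an interface could surface, reading EXIF metadata (including embedded GPS coordinates) from a shared photo with the Pillow library; the filename is hypothetical, and this is one possible demonstration rather than the project's actual tooling.

```python
# Read EXIF metadata from a photo, including the GPS sub-record that can
# reveal exactly where it was taken. Assumes Pillow >= 8 is installed;
# "photo.jpg" is a hypothetical local file.
from PIL import Image, ExifTags

img = Image.open("photo.jpg")
exif = img.getexif()

# Print human-readable names for the top-level EXIF tags in the file
for tag_id, value in exif.items():
    print(ExifTags.TAGS.get(tag_id, tag_id), ":", value)

# GPS data lives in a nested IFD (tag 0x8825); if present, it pinpoints
# the capture location, often without the sharer realising it
for tag_id, value in exif.get_ifd(0x8825).items():
    print("GPS", ExifTags.GPSTAGS.get(tag_id, tag_id), ":", value)
```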

These interactions with data will help each participant form a personal definition of privacy, on the basis of which a series of behaviours or actions will be recommended so that people can experience the avoidance of undesired profiling. Specifically, we will develop narratives simulating day-to-day rituals (e.g. sharing pictures from home, while training in a gym, or spending time with friends), generated from a set of private concepts that may be extracted from the images the focus group shared. These narratives will be the means to turn privacy into a tangible experience.