Privacy is one of the key concerns of people using digital technology. Of course, this blog has been warning about threats in this area for years, but it’s common now to read about data protection issues in many mainstream, non-technical titles. That increased awareness is welcome, but it is often quite superficial, and limited to obvious areas where privacy may be at risk – things like leaks of private data, or government spying. However, it is important to be alert to more subtle forms of attack as well, particularly emerging ones.
The family home might seem immune to such problems, since it represents the most personal and protected sphere in our lives. But as Privacy News Online has reported previously, there are already early signs of how digital technology can be used there in ways that fail to respect privacy. For example, Google is exploring always-on domestic monitoring that would allow parents to spy continuously on their children, deploying AI techniques to assess whether they are misbehaving. The use of lightweight GPS ankle bracelets allows the location of children to be tracked at all times. More recently, a post looked at how the rise of the Internet of Things provides numerous opportunities for domestic abusers to spy on and harass their partners. Those are worrying indications that the home is no data protection haven, and more attention needs to be paid to the scope for privacy to be undermined there. That’s precisely what a new paper from Karen Levy and Bruce Schneier does:
This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships. Many common assumptions about privacy are upended in the context of these relationships, and many otherwise effective protective measures fail when applied to intimate threats.
As well as the kind of monitoring of family and partners described above, the paper lists dozens of ways – complete with references – in which people within intimate relationships might compromise the privacy of those close to them. Sometimes, that might be with the best intentions – for example, checking that children, or aged parents, are safe. Sometimes it might happen accidentally, when devices are shared among various members of a family, and information about one is unexpectedly revealed to another.
In an attempt to start to think about how these threats can be addressed, the researchers first identify common features. One is that, unlike traditional hackers, intimate attackers may have multiple motivations, including beneficent ones, often tied to a complex bundle of emotions. Rather than seeking economic gain, they are more likely to be driven by a desire to seek knowledge about, or control over, the other person. One important consequence of this is that traditional cost-benefit analysis of threats and resources is unlikely to be applicable here.
A key aspect of attacks within a family or intimate context is what the paper calls “co-presence” – the fact that attacker and victim not only know each other, but tend to spend time in the same physical space. Again, that’s unlike traditional attacks, where strangers try to break in from outside. This physical closeness means that it is much easier for an attacker to watch passwords as they are entered – over someone’s shoulder, for example – or read messages that display on the other person’s screen. One consequence of this co-presence is that the traditionally strong two-factor authentication may not work, since the attacker probably has access to other devices used for verification.
Intimate relationships often involve complex patterns of power and control, and those can make attacks much easier. For example, the person who pays for a family phone plan can probably access data about everyone involved. Perhaps most importantly, people who are in intimate relationships with their victims tend to know extremely personal information about the latter – because that is part of the definition of intimacy. It is precisely this kind of supposedly “secret” data that online services like banks use in order to authenticate users.
Carrying out this analysis is useful, because it provides the basis for a discussion of what engineers and designers can do to limit the risk and harm of these kinds of attacks. For example:
some intimate privacy threats occur by virtue of copresence between victim, attacker, and device. Designers should be attentive to what information is displayed visually on the user interface, recognizing that this can be a vector for a privacy breach. Such disclosures are likely to be inadvertent on the part of the user, and information may be actively or passively received by an intimate adversary. In either case, these common disclosures demonstrate how a device can inadvertently divulge information that its owner may prefer to keep private.
Other practical suggestions put forward by the researchers are to allow people to change their privacy and sharing preferences more easily in order to reflect the dynamics of their intimate relationships, rather than assume that such things never change. It’s also important for designers to recognise that households are not monolithic units: they are made up of individuals, each with their own privacy needs and rights. A device may be shared, but that does not mean that all the information on it should be. In particular, the purchaser of the product is not necessarily the only user, and should not automatically be given administrative power over others.
As well as alerting us to a crucial but largely overlooked area, this new paper’s practical design suggestions could help to make a broad range of digital products better – whether or not they involve sharing information with those who are part of our lives’ innermost circle.

Featured image by Karen Baijens.