Internet-connected doorbells with built-in cameras are becoming very popular. Amazon-owned Ring is the best-known product, and Google also has the Nest Hello. But the rise of doorbell video has privacy experts worried about the potential for misuse and abuse of these home surveillance devices. Users can shame people or unfairly label people or behavior as suspicious. And the companies that make this tech might have access to the video in ways customers don't understand. Molly Wood talks with Laura Norén, director of research at Obsidian Security. She says part of the problem is that owners of video doorbells are filming a lot more territory than the terms of service say they should. The following is an edited transcript of their conversation.
Laura Norén: The terms of service put a lot of the onus on the person who installs the camera, and I think a lot of the people who own these cameras are not aware that they're not allowed to point them at the public street or into their neighbor's yards. I happen to be on the Ring platform myself as a user, and I can see that people are violating that particular point of the terms of service agreement left and right. It leads to a lot of community policing that might not be fair or just.
Molly Wood: What access does Ring or Amazon have to the videos that are filmed on these cameras?
Norén: Ring is pretty clear in its terms and conditions that people are allowing Ring employees to access videos, not live streams, but cached videos. And that's in order to train that artificial intelligence to be better at recognizing neighbors, because they're trying to roll out a feature that uses facial recognition to match faces against people the owner considers safe. So if I have the Ring cameras, I can say, "All these are safe people. Here's pictures of my kids, my neighbors. If it's not one of these people, consider them unsafe." So that's a new technology. They need to be able to train their algorithms to recognize who's a person, what's a car, what's a cat. Some subset of the videos that are being uploaded just for typical usage are then being shared with their research team in Ukraine.
Wood: In terms of the overall legal framework for consent, whether it's the terms of service, the ability to opt out, the ability to not be filmed by my neighbor when I'm on the street, it feels like our legal framework is nowhere near able to deal with this sort of self-surveillance on a mass scale that we seem to be moving toward.
Norén: No, we really have a version 1.0 understanding of consent procedures. As it's framed now, you have one company taking a fairly substantial amount of profit from a system where most of the people ending up on the app, I'm imagining, are not the owners of the cameras. So the people benefiting from the service and the people profiting from the service aren't the ones who are sacrificing their privacy. There's a big disjuncture there. We don't have consent procedures that work at the group level.
Ring told us only a small number of employees can see videos and only with explicit consent.
And now for some related links:
- Last month, privacy experts at the ACLU sounded the alarm, no pun intended, about an Amazon patent application that would combine doorbell video with facial recognition technology and match people who come to your door with a database of suspicious people, and you could even upload photos of people you think are suspicious. That's kind of what Ring's Neighbors app already does, but not quite as automatically.
- CNET also published a story about Neighbors and how it's become a sort of alternative Nextdoor, where most of the posts are videos taken from doorbells and security cameras. Just this week the city of Houston said it will partner with Ring to use the Neighbors app for real-time safety alerts and to monitor for crimes. Not for nothing, Laura Norén pointed out that one thing all these posts and cameras do is make us think there's a lot more crime than there actually is. Crime rates in most of the United States are historically low and have been dropping for years. But then again, fear sells connected doorbells.
- In other bummer privacy news this week: Kate O'Neill wrote a piece in Wired about the meme going around Facebook right now where people post profile pictures of themselves 10 years ago and today. She speculates that, realistically, the meme could have been dreamed up, or at least exploited, as an easy way to collect paired, labeled photos to help train facial recognition algorithms. To be clear, we have no idea if that's true or not, but remember this same time last year when there was that fun Google Arts & Culture app that let you match your face to famous portraits in the art world? And then everyone was like, this is probably also a facial recognition training tool? Google said it wasn't using selfies for anything other than art matching. But it tells you that people are getting a little more suspicious about the steady march of technology. Which, on the whole, isn't the worst outcome.