Amazon-Powered AI Cameras Used to Detect Emotions of Unwitting UK Train Passengers


Network Rail did not answer WIRED's questions about the trials, including questions about the current status of AI usage, emotion detection, and privacy concerns.

“We take the security of the rail network extremely seriously and use a range of advanced technologies across our stations to protect passengers, our colleagues, and the railway infrastructure from crime and other threats,” a Network Rail spokesperson says. “When we deploy technology, we work with the police and security services to ensure that we’re taking proportionate action, and we always comply with the relevant legislation regarding the use of surveillance technologies.”

It is unclear how widely the emotion detection analysis was deployed, with the documents at times saying the use case should be “viewed with more caution” and reports from stations saying it is “impossible to validate accuracy.” However, Gregory Butler, the CEO of data analytics and computer vision company Purple Transform, which has been working with Network Rail on the trials, says the capability was discontinued during the tests and that no images were stored when it was active.

The Network Rail documents about the AI trials describe multiple use cases involving the potential for the cameras to send automated alerts to staff when they detect certain behavior. None of the systems use controversial face recognition technology, which aims to match people’s identities to those stored in databases.

“A primary benefit is the swifter detection of trespass incidents,” says Butler, who adds that his firm’s analytics system, SiYtE, is in use at 18 sites, including train stations and alongside tracks. In the past month, Butler says, the systems have detected five serious trespassing incidents at two sites, including a teenager collecting a ball from the tracks and a man “spending over five minutes picking up golf balls along a high-speed line.”

At Leeds train station, one of the busiest outside of London, there are 350 CCTV cameras connected to the SiYtE platform, Butler says. “The analytics are being used to measure people flow and identify issues such as platform crowding and, of course, trespass—where the technology can filter out track workers through their PPE uniform,” he says. “AI helps human operators, who cannot monitor all cameras continuously, to assess and address safety risks and issues promptly.”

The Network Rail documents claim that cameras used at one station, Reading, allowed police to speed up investigations into bike thefts by being able to pinpoint bikes in the footage. “It was established that, whilst analytics could not confidently detect a theft, but they could detect a person with a bike,” the files say. They also add that new air quality sensors used in the trials could save staff time spent conducting manual checks. One AI instance uses data from sensors to detect “sweating” floors, which have become slippery with condensation, and alerts staff when they need to be cleaned.

While the documents detail some elements of the trials, privacy experts say they are concerned about the overall lack of transparency and debate about the use of AI in public spaces. Hurfurt of Big Brother Watch says one document, designed to assess data protection issues with the systems, reveals a “dismissive attitude” toward people who may have privacy concerns. One question asks: “Are some people likely to object or find it intrusive?” A staff member writes: “Typically, no, but there is no accounting for some people.”

At the same time, similar AI surveillance systems that monitor crowds are increasingly being used around the world. During the Paris Olympic Games in France later this year, AI video surveillance will watch thousands of people and try to pick out crowd surges, use of weapons, and abandoned objects.

“Systems that do not identify people are better than those that do, but I do worry about a slippery slope,” says Carissa Véliz, an associate professor in philosophy at the Institute for Ethics in AI at the University of Oxford. Véliz points to similar AI trials on the London Underground that had initially blurred faces of people who might have been dodging fares, but then changed approach, unblurring photos and keeping images for longer than was initially planned.

“There is a very instinctive drive to expand surveillance,” Véliz says. “Human beings like seeing more, seeing further. But surveillance leads to control, and control to a loss of freedom that threatens liberal democracies.”
