French police test AI-powered security cameras ahead of Olympics – Yahoo! Voices


French police are testing controversial AI-powered cameras at two Depeche Mode concerts in Paris this week, ahead of their planned use for security at the Olympic Games.

In a first for France, Paris police have deployed six cameras equipped with AI technology in and around the Accor Arena to analyse crowd movements and identify abnormal or dangerous activity.

Concertgoers attending the Depeche Mode shows, on Sunday and Tuesday, are the unwitting subjects of the controversial security measure. The French parliament passed a law last May authorising the use of AI for the security of sporting and recreational events.

The chaos of the 2022 Champions League final at the Stade de France between Liverpool and Real Madrid – when fans were crushed in crowded bottlenecks and pepper-sprayed by riot police – was cited as one of the reasons for the law.

But the key purpose of the experiment is to prepare for the Paris Olympics in five months, which are expected to present police with a significant security challenge. Some 30,000 officers are expected to safeguard the opening ceremony alone.

AI will detect eight types of event

In the real-time experiment that began at last night's concert, the AI cameras are meant to alert surveillance operators to any suspicious or potentially dangerous activity.

The AI has been trained to detect eight types of event: traffic moving against the flow, the presence of people in prohibited zones, crowd surges, abandoned packages, the presence or use of weapons, overcrowding, a body on the ground, and fire.

Once an incident has been flagged by the AI, surveillance operators will then decide whether or not to alert authorities and request police action.

However, for the purposes of the test, ministers have promised that no arrests will be carried out based on images selected by the AI cameras.

To allay “Big Brother” concerns raised by citizen privacy groups, the law also prohibits facial recognition.

But that has not been enough to appease privacy group Quadrature du Net, which said the plan is a slippery slope towards "the legitimisation" of wider surveillance tools being used on the general public. The group also warned that increased reliance on AI technology will lead to more arbitrary arrests.

“Algorithmic video surveillance is inherently dangerous biometric technology,” the group said. “Accepting it opens the way to the worst surveillance tools.”


