A hot potato: In a rare (these days) story about artificial intelligence of the non-generative kind, France's National Assembly has approved the use of AI to assist in video surveillance of the 2024 Paris Olympics. The move comes despite opposition from rights groups who say its use is a potential violation of civil liberties while paving the way for the future use of invasive algorithm-driven video surveillance across Europe.
As per The Reg, the French government adopted Article 7 of the pending law for the 2024 Olympic and Paralympic Games, authorizing the use of automated analysis of surveillance video from fixed and drone cameras.
The system is said to detect specific suspicious events in public areas, such as abnormal behavior, pre-determined events, and crowd surges.
While the AI-powered surveillance plan could be challenged in the highest constitutional court, France appears on track to become the first country in the European Union to use such a system.
It appears that France ignored the warning of 38 civil society organizations that expressed their concerns over the technology in an open letter. They say the proposed surveillance measures violate international human rights law as they contravene the principles of necessity and proportionality, and pose unacceptable risks to fundamental rights, such as the right to privacy, the freedom of assembly and association, and the right to non-discrimination.
The letter warns that should the AI system be adopted, it will set a precedent of unjustified and disproportionate surveillance in publicly accessible spaces.
“If the purpose of algorithm-driven cameras is to detect specific suspicious events in public spaces, they will necessarily capture and analyze physiological features and behaviors of individuals present in these spaces, such as their body positions, gait, movements, gestures, or appearance,” the open letter reads. “Isolating individuals from the background, without which it would be impossible to achieve the aim of the system, will amount to ‘unique identification.'”
As is often the case with AI surveillance, there are also discrimination fears. "Using algorithmic systems to fight crime has resulted in over-policing, structural discrimination in the criminal justice system, and over-criminalization of racial, ethnic, and religious minorities," the groups add.
Mher Hakobyan, Amnesty International's advocacy advisor on AI regulation, said the decision puts France at risk of completely transforming into a dystopian surveillance state.
France's regulatory commission, the Commission Nationale de l'Informatique et des Libertés (CNIL), backed the bill on the condition that no biometric data is processed, but privacy advocates don't believe such a thing is possible.
Daniel Leufer, a policy advisor at digital rights group Access Now, said, "You can do two things: object detection or analysis of human behavior – the latter is the processing of biometric data."
Masthead: Henning Schlottmann