EU bans use of AI for mass surveillance and social credit scores
The European Union plans to ban the use of AI for a range of use cases, including mass surveillance and social credit scoring, according to a draft proposal from the European Commission first reported by Politico. An official announcement is expected next week.
The proposal recommends that the European Commission ban certain uses of AI outright and restrict others that fail to meet certain standards. The document specifically recommends banning the use of AI for mass surveillance and for building social credit scoring systems.
The draft proposal would also require special authorization to use “remote biometric identification systems” and explicit notification to individuals when they are interacting with an AI system, “unless it is obvious”. It further calls for oversight of “high-risk” AI systems: those that pose a direct threat to someone’s safety, such as self-driving cars, and those with a direct impact on livelihoods, such as AI used for hiring, assigning recidivism scores, or granting personal loans.
EU member states would be required to create assessment committees to test and validate high-risk AI systems. The draft proposal also calls for a “European artificial intelligence committee”, with representatives from all member states, to help the European Commission identify which systems should be classified as high risk.
Companies that do not comply can be fined up to 20 million euros or 4% of their turnover.
While the United States and China have focused their attention on developing powerful AI systems, neither has put in place a comprehensive regulatory framework to safeguard security and individual rights. The EU, by contrast, already has the GDPR (General Data Protection Regulation) in place to address such issues, and this draft proposal is in line with the EU’s “human-centered” approach to the development of AI.
However, the leaked draft drew criticism from politicians over the ambiguity of its language. Experts are asking for more clarity on what constitutes AI and on which use cases count as harmful or high risk.