EU passes draft of “AI Act,” poised to be the world’s strongest AI regulation
EU passes "AI Act" for strong AI regulation.
Source: Phoenix Technology Network
On Wednesday local time, the European Parliament passed a draft law known as the “AI Act,” an important step toward regulating AI. As policymakers around the world race to set up “guardrails” for this rapidly developing technology, the law may become a template for other countries.
This vote on the draft is just one step in the long process of the EU passing this law. The final version of the law is expected to be passed later this year.
The “AI Act” takes a “risk-based” approach to regulation, focusing on the applications that pose the greatest risk to humans. These include AI systems used to operate critical infrastructure such as water or energy, systems used in the legal system, and systems that determine access to public services and government benefits. Developers of such AI systems must conduct risk assessments before the technology enters everyday use, a process similar to drug approval.
According to the latest version of the “AI Act” passed on Wednesday, generative AI will face new transparency requirements, including publishing summaries of the copyrighted material used to train these systems, a proposal supported by the publishing industry but opposed by technology developers as impractical. Makers of generative AI systems must also take measures to prevent them from producing illegal content.
At the same time, the “AI Act” would sharply restrict the use of facial recognition software and require developers of AI systems such as the ChatGPT chatbot to disclose more about the data used to create their programs. Facial recognition is a major point of contention: the European Parliament voted to ban real-time facial recognition, but the question of whether legal exemptions should be allowed for national security and other law enforcement purposes remains unresolved.
According to the current draft, companies that do not comply with the “AI Act” will face fines of up to 6% of their global revenue.
On AI regulation, the EU has gone further than the US and other Western powers. The bloc has been debating the issue for over two years. After the release of ChatGPT last year, the matter became more urgent, heightening concerns about AI’s potential impact on employment and society.
However, tech leaders have also been trying to influence the AI debate in various countries. Sam Altman, CEO of OpenAI, the developer of ChatGPT, has met in recent months with at least 100 US lawmakers and other policymakers across South America, Europe, Africa, and Asia, including European Commission President Ursula von der Leyen. Altman has called for AI to be regulated, but has also said the company might struggle to comply with the EU’s draft rules and has threatened to withdraw from the region.
It remains unclear how much impact the regulations will have. The pace of AI development appears to have outstripped the speed at which European legislators can write laws: earlier versions of the AI Act, for example, paid little attention to generative AI systems like ChatGPT.