
EU passes world’s first AI Law. Here’s what you need to know



The European Parliament has just passed the world’s first comprehensive law on artificial intelligence (AI). The law is designed as a framework for containing the risks of AI. As the sector sees explosive growth and drives huge profits for the companies involved, a comprehensive law to safeguard the public has become necessary.

The AI Act works by classifying AI products according to the risk they pose and adjusting scrutiny accordingly. According to lawmakers, the law will make the technology more “human-centric”. It also places the EU at the forefront of efforts to address the dangers linked to AI.

Member of European Parliament (MEP) Dragos Tudorache said, “The AI Act is not the end of the journey, but the starting point for a new governance built around technology.”

The AI Act was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

What is the AI Act all about?

Originally conceived as consumer safety legislation, the law’s main aim is to regulate the AI industry according to its capacity to cause harm to society. Under the law, the EU can ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems and the untargeted scraping of facial images.


Emotion recognition in the workplace and schools, social scoring, predictive policing and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

What is included and excluded in the law

AI systems considered “high-risk”, such as those used in critical infrastructure, education, healthcare, law enforcement, border management or elections, will have to comply with strict requirements.

Low-risk services, such as spam filters, will face the lightest regulation – the EU Parliament expects most services will fall into this category.

There are provisions in the law to cover generative AI such as ChatGPT. Some of the provisions include:

  • Developers of general-purpose AI models, from European startups to OpenAI and Google, must provide a detailed summary of the text, pictures, video and other internet data used to train their systems, and must comply with EU copyright law.
  • AI-generated deepfake pictures, video or audio of existing people, places or events must be labelled as artificially manipulated.
  • Developers must report any serious incidents, such as malfunctions that cause someone’s death or serious harm to health or property.
  • Developers must have cybersecurity measures in place and disclose how much energy their models use.

The European Union says that violations of the law could draw fines of up to 35 million euros (about RM179.5 million) or 7% of a company’s global revenue. There are no specific fines for individuals, but the law is still being fine-tuned.

What’s next for the law?

The AI Act is expected to officially become law by May or June, after a few final formalities, including endorsement from EU member countries. Its provisions will then take effect in stages, with countries required to ban prohibited AI systems six months after the rules enter into force.


For enforcement, each EU country will set up its own AI watchdog, where citizens can file complaints about AI Act violations. Brussels will also establish an AI Office to enforce and supervise the law for general-purpose AI systems.

What is the rest of the world doing?

The EU is not the only one creating or drafting AI-related laws. US President Joe Biden signed an executive order on AI in October last year, and lawmakers in at least seven US states are drafting their own AI legislation.

Chinese President Xi Jinping, meanwhile, has proposed his Global AI Governance Initiative for the fair and safe use of AI, and authorities have issued “interim measures” for managing generative AI, which apply to text, pictures, audio, video and other content generated for people inside China.

Malaysia, meanwhile, is developing its own AI governance and ethics framework, which is expected to be ready this year. The country is seeing major investment in AI, thanks to the collaboration announced between YTL and NVIDIA earlier this year as well as interest from other companies such as Google. Science, Technology and Innovation Minister Chang Lih Kang said that AI regulation is the end goal, and that it starts with the establishment of AI governance and a code of ethics.
