Europe Sets Limits on Artificial Intelligence; India Can’t Afford to Wait

EU to Prevent Harmful Uses of AI and Protect People’s Rights and Safety

July 6, 2025


The European Union has decided to go ahead with its Artificial Intelligence Act on time, rejecting requests from some tech companies to delay it. This new law is meant to keep AI under control and protect people’s rights, safety and democracy. Countries in Asia, especially India, need to take this seriously. If they wait too long, AI systems might become common in daily life without any rules, and that can cause irreparable harm.

The EU has taken a simple and clear route. It ranks AI tools by how much harm they can cause. At the top are tools considered too risky to allow at all. These include systems that manipulate how people think through deception, or that exploit vulnerable groups such as children or elderly people.

For example, if an AI tool pushes a child to buy a product by reading their emotions, or if it makes a person feel afraid or pressured to do something, that is banned under this law. These uses can go deep into people’s minds and take away their ability to think clearly. That’s why the EU does not allow them.

Other banned tools include AI that classifies people based on religion, gender or political opinions. The law also bans what is called “social scoring,” which means giving people a score based on how they live, what they buy or who they talk to. This can lead to unfair treatment. For example, if someone is seen spending a lot of time in a poor neighbourhood, or follows certain political groups online, an AI might lower their score. That score could then be used by a bank to deny them a loan or by an employer to reject their job application.

Also banned are tools that guess someone’s emotions in school or office settings. These tools can be misused to judge or punish students or employees based on what the AI thinks they are feeling, even if the guess is wrong. In some cases, police can use facial recognition in public, but only when looking for missing persons or preventing serious crimes like murder or terrorism. Even then, police must get permission from a court or other authority and explain why the tool is needed.

The next group of tools is called high-risk AI, which includes systems used to make important decisions in areas like hiring, education, healthcare and policing – for example, an AI that filters job applications, predicts student performance, decides who qualifies for a loan or welfare scheme, or flags people as crime suspects. These are not banned but are closely watched. For example, if a company uses AI to screen job applications, it must make sure the system does not exclude candidates just because they belong to a certain caste or gender.

Developers who build these tools must follow several rules. They must explain how the system was built, what data was used and how accurate it is. They also need to make sure the tool is safe from being hacked or misused. Instructions must be clear so that users, like teachers or doctors, know how to use the tool correctly.

Another big part of the law deals with general-purpose AI. These are tools that are not made for one job but can be used in many ways, like the large language models behind chatbots or image tools. Even if these tools are open to the public, developers must still say what data was used to train them and respect copyright rules.

If a general-purpose AI tool becomes very powerful – based on the amount of computing used to train it – then it is seen as having “systemic risk.” These tools must go through even stricter checks, including tests to see how they might fail or be misused. Developers must watch for major problems and report them quickly.
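For readers who want a concrete sense of how a compute-based trigger like this works, below is a minimal sketch in Python. The 10^25 floating-point-operations figure is the threshold the Act uses to presume systemic risk in general-purpose models; everything else here, including the function name and the example models, is an illustrative assumption rather than part of the law or any real compliance system.

```python
# Illustrative sketch only: a compute-based check mirroring the Act's
# presumption of "systemic risk" for general-purpose AI models.
# The 1e25 FLOP threshold reflects the Act's published criterion;
# the model figures below are made up.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in floating-point operations


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True when cumulative training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


if __name__ == "__main__":
    # Hypothetical models with assumed training-compute figures.
    example_models = {
        "small-assistant-model": 3e23,
        "frontier-scale-model": 4e25,
    }
    for name, flops in example_models.items():
        if presumed_systemic_risk(flops):
            print(f"{name}: presumed systemic risk, stricter obligations apply")
        else:
            print(f"{name}: standard general-purpose obligations")
```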

For example, if a chatbot powered by such a model starts spreading wrong health advice, the maker must know about it, report it and fix the issue. They must also keep the system safe from being hacked. All this is meant to stop harm before it happens.

To make sure these rules are followed, the EU is setting up a new AI Office. This office will watch how developers work, check if rules are being broken, and act when problems are found. People who use AI tools made by others can also complain to this office if they think something is wrong.

The law sets out clear rules but is built to adapt as the technology changes. It keeps companies in check and gives people some power back in a world that is increasingly shaped by algorithms.

This is where Asia and India must take a closer look. AI tools are already being tested in Indian schools, police departments and welfare offices. But there are no public rules on what is okay and what is not. That makes it easy for mistakes and abuse to happen.

For example, in India’s welfare system, digital tools have already led to people being wrongly removed from benefit lists. If an AI tool is added to this system without clear rules, even more people could be pushed out unfairly – and they might never know why or how to appeal.

In India, where identity, religion and language often affect how people are treated, it is very risky to use AI without some basic protections in place.

Another problem is that many AI tools are built outside India. If India does not make its own rules, it may end up using systems made for other countries. These tools might not understand Indian names, cultures or social realities. That could lead to serious mistakes.

Take facial recognition, for example. If it is trained on faces from Europe or North America, it might fail to recognise Indian faces correctly. This can cause problems in airports, public events or even in police investigations. Regulation can prevent such blind spots.

There is also a risk of losing control. If foreign companies set the standards, Indian companies will have to follow them anyway to sell or use their tools abroad. It is better for India to build its own standards early, so its developers can compete fairly.

You have just read a News Briefing by Newsreel Asia, written to cut through the noise and present a single story for the day that matters to you. Certain briefings, based on media reports, seek to keep readers informed about events across India; others offer a perspective rooted in humanitarian concerns; and some provide our own exclusive reporting. We encourage you to read the News Briefing each day. Our objective is to help you become not just an informed citizen, but an engaged and responsible one.

Vishal Arora

Journalist – Publisher at Newsreel Asia

https://www.newsreel.asia