Concerns have been raised over how the European Union's latest bid to regulate artificial intelligence could have "global ramifications", potentially undermining the social safety nets that keep many of its citizens out of poverty.
The EU's AI Act contains loopholes that allow for discrimination and breaches of privacy, according to a report by Human Rights Watch. Credit: gopixa / Shutterstock
A new report from Human Rights Watch finds that algorithms are increasingly being woven into social safety nets, which could violate social and economic rights, alongside other concerns over inadequate protection against surveillance and discrimination.
One major way these technologies are integrated into the social safety net is to protect against identity theft and fraud, allowing the system to verify that an applicant is who they claim to be.
The report suggests some of the methods used inherently breach the duty of privacy and may be "unduly burdensome" on people who rely on benefits.
To work, AI collects large amounts of data, and it can offer a number of benefits: reducing human workloads, operating around the clock and handling repetitive tasks. It can also be used to discover new drugs, detect space junk, aid cancer treatment, and increase productivity on the factory floor.
However, concerns over AI and privacy are not new. The technology has also been used in more intrusive areas, such as counterterrorism - whose reach invariably raises ethical concerns - and in cybersecurity, an increasingly important and contested field.
In Ireland, for example, the Irish Council for Civil Liberties, a human rights organisation, has criticised the nation's welfare office for collating more data than is necessary for identity checks - collecting facial scans, for instance, when less intrusive methods of identification, such as passport checks, would suffice.
The report also draws on case studies from France, the Netherlands, Austria, Poland and the UK, identifying cases of excessive breaches of privacy, discrimination in some form, or barriers that make it difficult for those who require benefits to receive them.
In the case of the UK, the system may be so convoluted that access becomes difficult for those on benefits, who typically have fewer digital skills or lack the online presence required to navigate its Universal Credit system.
The EU's Artificial Intelligence Act (AIA) acknowledges the potential privacy concerns associated with AI algorithms, as well as issues of discrimination - such as how its safeguards neglect existing inequalities and biases in labour markets - but offers no solutions to these problems, although it could be inferred that these are issues the bloc hopes to address.
When it announced the new AI rules, the EU explicitly stated it would place a "blanket ban" on social scoring systems, like the one found in China that grades people on their "behaviours". It is therefore possible the bloc is actively seeking to tackle these concerns, but thus far no information has surfaced as to how it intends to do so.
Human Rights Watch claims the new regulations will "fail to end the surveillance and profiling of those in poverty", arguing the proposal does not do enough to protect them or provide them with the assistance they need to live and find work.
Automation is also being woven into these systems to assess people looking for work and their likelihood of finding employment. The report claims the parameters these systems use to determine a prospective jobseeker's chances - such as "health, education background and interpersonal skills" - are "highly subjective".
Moreover, several studies have found these systems often discriminate against women in particular. One study found that AI jobseeking systems discounted women's employability after they turned 30, likely due to assumed childcare commitments, without applying the same discount to men with similar caring responsibilities. Another study suggested some systems treated women as a homogeneous group, which often discounted their employability even in fields where women are preferred.
The same study also claims biases are introduced into the system through the use of "coarse variables" as predictors.
"For example, disadvantage and discrimination are not affecting all job seekers that are part of certain marginalised groups the same way", it added, also suggesting the algorithm is being used to enforce harsh austerity politics in Austria.
The AIA establishes rules for "high-risk" systems and prohibited practices, but leaves it entirely up to providers to decide whether their own measures fall within the parameters of the Act. Any systems that are "signed off" are then free to enter the EU's market.
"This embrace of self-regulation means that there will be little opportunity for civil society, the general public, and people directly affected by the automation of social security administration to participate in the design and implementation of these systems", Human Rights Watch said.
"The regulation also does not provide people who are denied benefits because of software errors a direct course of action against the provider, or any other means of redress. The government agencies responsible for regulatory compliance in their country could take corrective action against the software or halt its operation, but the regulation does not grant directly affected individuals the right to submit an appeal to these agencies."
While the NGO calls the regulation a "step in the right direction", it also claims that loopholes within the Act itself will prevent "any meaningful transparency".
The NGO also suggests banning any AI technologies that threaten rights in ways that cannot be effectively mitigated, alongside a system of checks and balances - updated continually as new data and threats arise - to adequately monitor new technologies that pose a risk to human rights.
"More broadly, the regulation should codify a strong presumption against the use of algorithms to delay or deny access to benefits. To prevent government agencies from outsourcing harmful scoring practices, the ban should apply to both public authorities and private actors", it said.