Investors Recommend Strengthening The EU AI Act
A group of 149 global investors representing over US$1.6T in assets issues a statement to EU bodies tasked with developing a regulatory framework for safe AI development and use.
NEW YORK, NY, THURSDAY, FEBRUARY 16TH, 2023 – Citing the need for ongoing human rights due diligence to regulate the development and use of artificial intelligence (AI) and mitigate potential risks, a group of 149 investors today announced they had sent a statement recommending a series of enhancements to the proposed EU Artificial Intelligence Act (AI Act).
The signatories are members of the Investor Alliance for Human Rights and its allies, and the statement was sent to the relevant officials at the European Parliament, the European Commission, and the Council of the European Union responsible for finalizing the proposed regulation.
The statement outlined several additional provisions the investors say should be included in the final regulation to ensure the rights of all people will be protected and civic freedoms and democratic processes will be respected.
While the field of AI has exploded in recent years, both in the number of products offered and in their sophistication, there is growing anxiety on the part of legislators, civil society organizations, and members of the investment community about the human rights and civil rights harms emerging from its misuse, including invasions of privacy and discrimination.
Investors’ intention in issuing the statement is to express their support for the regulation while also identifying several gaps and recommending areas where it could be strengthened.
“Investors want to make rights-respecting investment decisions in companies that design, provide, deploy, and/or use AI systems within their business operations,” said Louise Piffaut of Aviva Investors.
“Robust regulation like the AI Act, which is poised to incentivize and enable the responsible development and use of AI, will provide needed guardrails that will empower users and reassure all stakeholders that any potential risks associated with its use are being properly managed.”
Among the more egregious examples of AI misuse is “predictive policing”: AI systems used by law enforcement and criminal justice authorities to generate predictions, profiles, or risk assessments intended to forecast crimes.
These systems have been exposed as discriminatory, as has the use of biometric AI profiling in a migration context. Also flagged by the investors were the significant risks of AI’s use in a military or national defense context. Here, investors warned that:
Rules and safeguards in the AI Act are relevant to and should apply to AI systems that are to be deployed or used for military, defense, and/or national security purposes. Blanket exemptions from the AI Act for national security must be scrutinized to ensure that national security policy cannot override the rule of law and fundamental rights.
“We are concerned that the use of ‘security’ technology such as biometric recognition may infringe on the rights of people and groups to free speech and result in the silencing of dissenting and opposition voices,” said Lydia Kuykendal of Mercy Investment Services.
“Understanding which AI applications create the highest risk for potential human and civil rights violations and prohibiting them will be a critical piece in getting this regulation right.”
The investors clearly call out the need for a robust and ongoing human rights due diligence process, in alignment with the UN Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises, as an important and necessary first step and as a complement to the European Commission’s Corporate Sustainability Due Diligence Directive, ensuring policy coherence.
The investors recommend that human rights impact assessments be conducted at all stages of the product and service lifecycle, taking into account potential contexts for use or misuse and any resulting unintended harms.
“Investors must understand the risks and opportunities that the development and adoption of AI present to the public,” said Amy Orr of Boston Common Asset Management.
“Integrating human rights due diligence as part of the design, development, and deployment of AI systems into corporate environments supports an ecosystem that promotes global best practices and increases business and investor awareness of the implications of AI adoption.
“Advocating for legislative and regulatory measures to enable ethical AI, therefore, means advocating for appropriate transparency, prohibitions, safeguards, and accountability. All of the above are required for the just application of digital human rights.”
The investors also made recommendations regarding the enforcement of the regulation once implemented, suggesting the establishment of an advisory board to the European Artificial Intelligence Board, comprising relevant stakeholders and civil society organizations focused on AI and human rights.
“AI is proliferating and evolving so rapidly, and it’s critical we have the right safeguards and oversight mechanisms in place before the technology gets away from us,” said Anita Dorett, Director of the Investor Alliance for Human Rights, who organized the investor statement.
“We believe our recommendations to strengthen the AI Act will help ensure that companies entering this field are taking the necessary precautions to develop and design safe, trustworthy products and services so that civic freedoms and democratic processes are protected.”
About The Investor Alliance for Human Rights
The Investor Alliance for Human Rights is a collective action platform for responsible investment that is grounded in respect for people’s fundamental rights. The Investor Alliance’s more than 200 members include asset management firms, public pension funds, trade union funds, faith-based institutions, family funds, and endowments.