Needed: Ways for Citizens to Sound the Alarm About AI’s Societal Impacts

By Karine Gentelet

As part of my job, I give talks about how artificial intelligence affects human rights: to criminology experts, schoolteachers, retirees, union members, First Peoples, and more. Across these diverse groups, I hear common themes. One is that although AI programs could impact how people do their jobs and live their lives, their experience and expertise are left entirely out of the process before programs are deployed. Some worry, legitimately, about facing legal action if they protest.

Plans and policies to regulate AI systems in Europe, Canada, and the United States are not likely to improve the situation. Europe plans to assign regulatory requirements based on an application’s level of risk. For example, the high-risk category includes technology used in hiring decisions, police checks, banking, and education. Canadian legislation, still under review by the House of Commons, takes the same risk-based approach. The US president has outlined requirements for rigorous safety testing, with results reported to the government. The problem is that these plans focus on laying out guardrails for anticipated threats without establishing an early warning system for citizens’ actual experiences and concerns.

Regulatory schemes based on a rigid set of anticipated outcomes might be a good first step, but they are not enough. For one thing, some harms are only now emerging. And they could become most entrenched for marginalized, underserved groups because generative AI is trained on biased datasets that then generate new datasets, perpetuating a vicious cycle. A 2021 paper shows how prediction tools in education systems incorporate not just statistical biases (by gender, race, ethnicity, or language) but also understudied sociological ones, such as urbanity. For instance, rural learners in Brazil are likely to differ from their urban counterparts with regard to fluency in the official state language and their access to relevant educational materials, up-to-date facilities, and teaching staff. But because there aren’t enough data on specific groups’ learning and schooling issues, their needs would be aggregated into a larger dataset and made invisible. Given that lack of knowledge, it would be difficult even to predict any kind of bias.

What’s needed are mechanisms that support citizens’ direct engagement with AI deployments to document, from the ground up, potentially high-risk impacts on collective equity. Democratic formats to support citizens’ perspectives are already in place. In Canada, for example, the mandate of the solicitor general or the privacy commissioner could be strengthened to review AI deployments in the public sector (audits of datasets, mandatory impact assessments, and so on). These mechanisms would provide transparent and accountable standards to keep citizens adequately informed about AI deployments, help balance the civic power dynamic, and strengthen social justice.

Citizens’ direct engagement could also be supported through access to courts. Current AI regulatory schemes offer few, if any, direct legal recourses for ordinary people to challenge algorithmic harms. Access to courts, and implicitly to justice, could send a clear message about citizens’ power to corporations, governments, and, most importantly, to citizens themselves. In combination with other mechanisms to increase citizen oversight, legal suits would not only offer access to rightful reparations but also confer societal recognition of citizens’ rights.

Sometimes at my talks, people tell me they feel unqualified to ask questions about AI’s impacts, given their lack of expertise. What I tell them is that they don’t need to be a mechanic to know how bad it would be to be hit by a car. Harms from AI are bound to be more subtle, but the point stands. Citizens are the ones primarily affected, so they must have an active role in AI governance. Emerging regulatory systems should highlight the role of citizens as social actors who contribute, as they should, to the collective good.

Gentelet, Karine. “Needed: Ways for Citizens to Sound the Alarm About AI’s Societal Impacts.” Issues in Science and Technology: 83–84. https://doi.org/10.58875/PUPH1340
