Clear guarantees are needed around the technology planned for border control points

This column is an opinion of Petra Molnar and Jamie Liew. Molnar is a lawyer and associate director of the Refugee Law Lab at York University. Liew is an immigration lawyer and Associate Professor in the Faculty of Law at the University of Ottawa. For more information on the CBC Opinion section, please see the FAQ.
As the European Union presents its long-awaited proposal to regulate the use of artificial intelligence (AI), Canada is announcing a very different approach to potentially high-risk uses of AI-based technologies. As part of the recently released 2021 budget, the Canada Border Services Agency (CBSA) is receiving $656 million, which will be spent in part on technologies such as facial recognition systems at the border.

According to the 2021 budget, this significant influx of money will allow the CBSA “to use new technologies, such as facial recognition and fingerprint verification,” and “to develop strategies to ensure fair application across differences in sex, age, mobility and race, promoting the safety of all travellers.”

However, it’s hard to know if any safeguards exist when it comes to this kind of border tech experimentation, or what “fair application” means when we know that AI-based technologies are anything but neutral.

Facial recognition encompasses a class of automated technologies that verify or identify people and analyze behaviour based on the biometrics of their faces. As recent reports and studies show – along with submissions from the Canadian Bar Association to the federal government and a recent report to the United Nations General Assembly by the Special Rapporteur on racism and discrimination – intrusive technologies like facial recognition and other automated AI systems can exacerbate systemic racism and oppression, dehumanize people, and violate a range of protected domestic and international human rights.

These technologies can make racist and sexist inferences that have profound implications in immigration contexts, for example.

This was demonstrated by the European Union’s recently suspended iBorderCtrl pilot project. The AI-powered lie detector deployed at the border was widely criticized for discriminating against people of colour, women, children and people with disabilities, leading to a court challenge. A similar lie-detection avatar has been tested by the CBSA.

Meanwhile, in February, the Office of the Privacy Commissioner of Canada ruled that the mass surveillance carried out by the private company Clearview AI – which allowed law enforcement agencies and commercial organizations to match faces against the billions of images in its databases – was illegal.

The office said: “The Commissioners found that this creates a risk of significant harm to people, the vast majority of whom have never been and will never be involved in a crime… These potential harms include the risk of misidentification and exposure to possible data breaches.”

Why is this type of technology celebrated and deployed at Canadian borders?

Border spaces have become testing grounds for unregulated technologies, with little oversight of the potentially significant impacts on people’s rights and lives, write Petra Molnar and Jamie Liew. (Illustration photo / CBC)

Canada’s techno-solutionist approach contrasts sharply with the proposed EU regulation, which, while far from perfect, sets out various bans and parameters relating to high-risk uses of AI, including at the border and in immigration decision-making.

Meanwhile, the CBSA operates with little transparency, under the guise of national security and border control, and without meaningful oversight mechanisms. The establishment of a watchdog was included in the federal government’s 2021 mandate letter.

Technology is not neutral. It reinforces power dynamics and exacerbates the biases already inherent in the opaque, discretionary decision-making that plagues immigration and refugee claims and border spaces.

Recognizing these tremendous impacts in various other areas, such as law enforcement and surveillance, more and more people are calling for a ban on facial recognition and other automated technologies. Even Hollywood has recognized the importance of engaging in these conversations, with the recent documentary Coded Bias examining the ways in which algorithms perpetuate inequalities based on race, class and gender.

Given local and global calls for bans and regulations, why is more funding being given to high-risk technologies in Canada?

It is telling that Canada refuses to engage in meaningful conversations about the regulation and governance of technologies such as facial recognition at the border. The context of immigration and borders matters here. In an increasingly racist and anti-migrant world, border areas have become testing grounds for the development and deployment of unregulated technologies, with very little accountability or oversight over the potentially significant impacts on people’s rights and lives.

Canada is clearly rushing to the front lines of the global AI arms race without giving sufficient consideration to its nefarious manifestations.

The sharpest edges of this technology that we have seen in refugee camps and other border spaces may appear far removed from places like Canada’s international airports. Unfortunately, the ease with which technology moves from one context to another may one day mean facing immigration detention, or security inferences made against you, based on technology that is a manifestation of systemic racism – or nothing more than snake oil.
