It’s been difficult, if not downright frustrating, to wade through the confusion that the term artificial intelligence and its politics have introduced into global policy discourse.
The technology’s capabilities and related risks are often exaggerated, while those that benefit most from AI theatre—the companies that direct and fund AI’s development—are pushed to the background.
Around this circus, but never in the centre, are the people who have been harmed by bias in data and ill-governed software systems, or who provide the labour that fuels the mythological status of the tech.
AI conversations have the characteristics of a hype cycle, which is one reason why we should slow down how we approach the matter from a policy and regulatory perspective. Unfortunately, Canada’s Ministry of Innovation, Science, and Economic Development (ISED) is operating in urgency mode. ISED has a mandate to establish Canada as a world leader in AI, and, apparently, to accelerate AI’s use and uptake across all sectors of our society.
The confidence with which ISED is asserting societal consensus on AI’s uptake is troubling. Very few of us have had a chance to think about whether, and how, we want AI installed in our society and culture, our relationships, our workplaces, and our democracy.
Despite the absence of any informed public demand for it, ISED has created a draft bill called the Artificial Intelligence and Data Act (AIDA), which, as part of Bill C-27, is making its way to the Standing Committee on Industry and Technology (INDU) in a few months, on the heels of a successful second reading in the House of Commons.
AIDA is an AI law for the private sector. Canada has an existing policy directive for the use of AI in the public sector, called the Directive on Automated Decision-Making, but this is notably a policy rather than law.
ISED says that AIDA was created to set rules for the design, development, and use of AI. AIDA also includes proposals to prohibit certain conduct in relation to AI’s use. As per ISED: “The framework proposed in the AIDA is the first step towards a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses.”
In ISED’s argument for the need for this bill, it points at problems with AI, such as discrimination in hiring software, bias in facial-recognition technology, and deepfake media. Interestingly, in the same document, it heralds benefits of AI that include health care screenings, information accessibility, “smart” products and services, and translation tools.
This framing provides a perfect example of the erroneous thinking behind AIDA—that there is good or bad AI. This isn’t how technology works, and this over-simplification leaves many consequences of technology adoption out of the frame of public and legislative consideration.
Probably the most telling of ISED’s arguments for AIDA relates to industrial development. ISED claims to need to rush this bill so that the Canadian AI industry can stop worrying about future rules about AI that might “unfairly stigmatize their field of work.”
As the owner and drafter, ISED’s approach with AIDA is flawed to the degree that the members of the committee would be wise to stop the bill. ISED itself would likely be relieved at this point. It has no doubt become increasingly clear to ISED, and everyone else watching, that the ministry is operating way outside of its lane and capacity, given the breadth of this file.
Stopping AIDA at the INDU committee will make time and space for us to return to the matter of AI regulation with a more thoughtfully informed, and expansive, approach.
How to think about AI before jumping to legislate
Much of what “AI” does now is pattern matching. It uses past data to predict or create future outcomes or products. The technology is, broadly, mundane. Its impacts on our social relations, however, are not.
AI can replicate, embed, and accelerate problematic power dynamics that run through society. It can take the troubling history of capitalist patterning and crank up the speed. And to say it clearly: AI is absolutely not intelligent. If someone tries to confuse what AI does for how people are, it would be best to move along.
Pouring AI into the existing global capitalist system serves one primary purpose: to make it all move faster. Automation has a close relationship with efficiency, and efficiency has become such a cult that it has bled out into our culture.
Canadian policy discourse in the early 2000s, courtesy of Janice Gross Stein and others, explored how efficiency as a cult moves us away from the fundamentals of good policy conversations. Efficiency as a primary goal elides nuanced conversations about the diversified policies required to grapple with race and class, with reconciliation, with disability and quality of life, with health care and quality of care, with climate justice, and so much more.
The efficiency-as-a-cult problem in Canadian policy discourse remains prevalent. With AIDA it will only get worse—“beneficial AI” is often just a fancy way of talking about automation and efficiency.
Rushing a law to speed up capital flows
AI laws, both in Canada and around the world, are emerging primarily because both industry and governments need this technology to be validated, legitimized, and instantiated in our global economic system: across all industries, in the public sector, and beyond.
The geopolitics of emerging AI laws differ in flavour, informed by each jurisdiction’s domestic positions and industrial dreams.
The U.S. is gesturing at ways to both protect its tech giants and continue to blur the lines between the military and the private firms that own and control a significant amount of global AI infrastructure.
The EU is creating endlessly complex legal and policy categories and subcategories to divide and subdivide the technical harms of automation. It waves its arms around at the need to protect “its values” while the subtext declares its desire to have homegrown versions of the U.S. companies. China is regulating AI in line with its industrial and social desires. And so on.
From an economic development perspective, what ISED is doing is to be expected. AIDA is about harmonizing digital assets (AI) to support global trade. There is no mystery here.
Canada already did this in 2000 with its private sector privacy act, PIPEDA. Through PIPEDA, Ottawa decided that to support economic expansion, data had to move around the world as easily as possible, human rights implications be damned. The current digital economy, over 20 years later, is barreling forward at the speed of AI for the same reason.
This is pattern repetition. These laws are not about harm reduction. Each country will be happy to ban or bar edge scenarios of use so long as the majority of use goes untouched and unquestioned.
Starting the conversation again, properly
The very notion of the need to regulate AI is debatable. Regulating AI in the manner that ISED is proposing is jumping the gun in two distinct ways. Firstly, unlike the rest of C-27, AIDA was never subject to broad public consultation. By skipping this step, it is, by definition, starting with a weak draft.
The broader policy community, and the general public, would bring much more expansive thinking to the process than the narrow set of tech experts and industry players that have been participating so far. When an issue is as contentious as technology, and the approaches to take are as uncertain as they are with AI, it is bizarre to watch the normally conservative Canadian establishment take such quick and unformed steps.
Secondly, ISED is asserting the certain future-state cross-sectoral ubiquity of AI in order to drum up legitimacy for the industry. Its position is that because AI exists it needs to be regulated. It would be more accurate to say that we’re at an important point in time to talk about AI. This would include a fulsome look at how pre-existing laws and policies might be brought into regulatory conversation prior to establishing an entirely new set.
It’s easy to write laws. It’s much more difficult, and expensive, to create functional access to justice regimes to make sure they are upheld.
Even if AIDA were to be heavily edited and corrected, it would not be able to escape its founding intent: the broader goal of normalizing AI’s use across all sectors of society. Canadians have not agreed to this. This is a matter of consent.
Technology feels inevitable because of the type of law that AIDA represents. It’s counter-intuitive in some ways, but when we focus on the harms and risks of AI for regulatory purposes, we rarely stop to understand the consequences of the much larger expansion of AI use that then gets defined as normal. What ISED deems “good” or “fine” or “permissible” for industrial expansion via AIDA may be anything but for workers, students, residents, refugees. For everyone else.
This is not to say there aren’t real and urgent harms to attend to regarding the current use of AI. From bias and racism in a range of AI uses to dehumanizing labour conditions, from art theft to an inability to know if you’re speaking to a machine or a person when accessing a service. But the reality is that most of these issues could be addressed with pre-existing laws if there really was interest to address them.
Beyond these ever-expanding current harms, we also need to think about what it means to live together with an ever-increasing amount of automation and predictive technology woven through our relationships. What it means, socially, to continue to put so much stock in the quantified and efficient life. But because AIDA is being created as part of ISED’s remit, it’s difficult to invoke cultural arguments. ISED isn’t mandated to deal with culture and society.
ISED is right to look at the parts of this conversation that relate to industry. But those parts of the conversation constitute a minority of the broader set of implications of the use of AI. Beyond this structural problem with ISED as convener, the process that ISED followed to rationalize this bill needs intense scrutiny.
What triggered a bill with no public consultation to be written? Why? Who wrote it? What are we doing about the fact that most of the core private infrastructure that AI is reliant upon isn’t Canadian?
We should be talking about what public administrative ethics require of this topic. And we should be talking about what general adequacy in law drafting looks like. If what is being done with AIDA is permissible to our elected officials, we have bigger tech and democracy problems than we might understand. To deal with the social impacts of AI, we have to construct an entirely different conversation than one that has a primary goal of expanding the Canadian AI industry.
This conversation would have to start upstream of ISED’s erroneous assumptions about the ideas of good AI vs. bad AI. AI is a general-purpose technology impacting many sectors and consumer contexts, human rights, culture, and beyond. We need to do things differently with public power—laws—to reshape how tech impacts society.
INDU committee members can practise ethics by revisiting ISED’s work thus far and acknowledging that this is not a topic ISED should be in charge of framing. I certainly wouldn’t want to be holding the bag for this wide a set of societal consequences.
By exerting a bit of responsibility, and a bit of humility, the committee can propose we consider a fresh start on how to engage on this topic responsibly. The AI industry will be fine. If we don’t slow down on mindlessly repeating the patterns of capitalism, including our failure to govern it appropriately, our society will not be.