An Open Letter to Evan Solomon, Minister of AI

By Teresa Heffernan

Originally published in the Halifax Examiner, May 16, 2025

Dear Minister Solomon,

Congratulations on your new post as Minister of Artificial Intelligence and Digital Innovation. 

I hope you will use this position to ensure copyright laws are enforced, to stop the theft and sale of data, to protect Canadians from resource-intensive and invasive surveillance technology, and to ensure we have good search engines. 

Above all, I hope you will fight for a democratic and sustainable future, now under threat from the authoritarian technocrats who own the AI infrastructure.

GenAI technology is being aggressively marketed by companies like Google and OpenAI as saving time and money, but studies indicate that companies that have adopted it are finding little benefit. 

The most glaring example of where cold reality meets the hype is DOGE and Elon Musk’s promise to save the US government $2 trillion. Instead, this reckless automation project has increased budget costs, brutally treated federal workers, threatened the security of Americans, and destroyed important government infrastructure that will take years to rebuild.

While tech companies are sinking billions into AI, there is little evidence of its effectiveness, indicating that this bubble may well burst. The technology itself, which relies on the unprecedented theft of copyrighted work and on the exploitation of a hidden global labour force that cleans, collects, curates, and edits the information that feeds these corporate machines, is at its core unethical. 

Moreover, in the midst of a climate crisis and despite early climate pledges, the AI industry is driving up energy demands: data centres are sustaining the fossil fuel industry, erasing the hard-fought gains of renewable energy sources; spewing pollution; and sucking up water from places that are already facing severe shortages.

The catchall term AI needs to be carefully interrogated. The AI industry promotes its corporate machines as being human-like and capable of understanding, reasoning, intuiting, learning, collaborating, and creativity — a dangerous myth. It is important to differentiate between the different models of what is being called AI — pattern recognition, computer vision, image processing, optimization, and large language models (chatbots) — all of which have their limits.

Rather than getting caught up in the AI-industry-generated hype about inevitability, we need to evaluate whether a particular technology is suitable for a well-defined task and to assess the reliability of the training data.

Take the example of health care: during the pandemic, billions were spent on AI-based tools for Covid diagnosis and triaging — none proved useful and some were harmful. Billions in the US have also been spent on health information technology and the digitization of health records, but these systems have been plagued by data breaches, insecure electronic record systems, medical errors, and physician burnout. They have hindered rather than helped productivity and have not curbed costs as promised.

The problems stem from 1) the assumption that more information results in better care; 2) the lack of understanding of IT/AI systems by healthcare professionals, who are at the mercy of sales pitches from companies that overpromise and have little respect for complexity; and 3) the imposition of automation in areas where human-centered care is required.

Most concerning, however, is a recent systematic study of computer vision tools and patents, which found that, rather than aiding cancer research, this technology was instead powering a surveillance pipeline.

All these problems are repeating themselves in the current AI hype cycle of chatbots, which are confidently wrong on average 60% of the time. There is no fix for these problems, given that chatbots generate responses at random from large databases according to a probability distribution and have no “intelligence.”

Given this error rate, it is understandable that tech executives, even as they push the use of chatbots in hospitals, have admitted they wouldn’t want their family members to be treated by them. This continues a long pattern: for decades, tech executives have aggressively marketed screens and phones to schools and youth even as they have strictly controlled their own children’s access to these devices.

The recent push for TechEd in schools and universities further threatens the very core of education, while deskilling children and young people, who are losing faith in their own abilities to read, write, and think critically.

Even as it advertises its “free” education tools as offering “better learning and brighter futures,” Google is being sued for embedding a concealed tracking device in cloud-based apps, used by nearly 70% of K-12 schools in the US, to spy on children and harvest their data.

Moreover, these same corporations and tech executives, who sucked up all the ad revenue from media organizations, undermining a pillar of democracy, are now polluting digital platforms with chatbots and cheerfully spreading disinformation. 

Deprioritizing or banning legitimate news while claiming their platforms should not be regulated, they are promoting the false claims of right-wing groups and manipulating bots to fixate on conspiracy theories. Encouraging technology dependence and building a global panopticon fits with the goals and authoritarian dreams of billionaire technocrats who are abandoning democracy while funding their dystopic visions with tax dollars.

Given your degree in English Literature and your career as a journalist, I am sure you are aware of the AI hype cycles that have long been fuelled by extravagant claims that substitute fiction for science.

Indeed, the field was launched by Alan Turing, who was a brilliant code breaker, but also a literal reader of fiction. He based his theory of evolving and immortal “child machines” on Samuel Butler’s 1872 novel, Erewhon, a satire that applies Darwin’s theory of evolution to machines, and he took what Butler called a “specious analogy” to heart. 

In what he called the “imitation game,” Turing dismissed thinking and prioritized imitation, which is fine within the parameters of a game. When unleashed on society, however, this industry model has led directly to the lucrative and destructive business of deepfakes and disinformation that now saturate the online world.

There are countless examples of this same problematic invocation of fiction in the AI industry. Elon Musk and Jeff Bezos regularly credit Star Trek for their future-oriented space colonization projects, even though Gene Roddenberry, the creator of Star Trek, was influenced by Jonathan Swift’s 1726 Gulliver’s Travels. Roddenberry was not envisioning some literal space future any more than Swift believed there were nations of six-inch people or talking horses or flying islands waiting to be discovered. Whereas Bezos and Musk are invested in inequity, patriarchy, and white supremacy, Roddenberry used allegory to envision a more equitable and just world.

We cannot afford to focus narrowly on products that promise various efficiencies while ignoring the unsustainable model on which the AI industry is built. I therefore implore you to pay attention to the independent academic studies and investigative journalism that are cutting through this current well-funded AI hype cycle.
