If our destiny was left to politicians, it could easily be death by sheer stupidity – or by humankind’s old nemesis, the incurable affliction of hubris. But is a machine really the lesser of the two evils?
“We should ponder the possibility that computers not just could behave more ethically but should behave more ethically than humans. If we can hold them to higher standards, then aren’t we morally obliged to do so?”
– Toby Walsh, Machines Behaving Badly: The Morality of AI, 2022
“Always and everywhere, human beings have felt the radical inadequacy of their personal existence, the misery of being their insulated selves and not something else, something wider, something in Wordsworthian phrase ‘far more deeply interfused’.”
– Aldous Huxley, The Devils of Loudun, 1952
Human behaviour is relatively predictable but far from reliably rational. Irrational behaviour, driven by emotion, is a consequence of circumstance and disposition. In contrast, rational behaviour conforms to a group’s social norms – how one ought to think and act. However, this can only be consistently true if the actor and the observer share the same disposition and worldview at the time, possess the same knowledge, and are unaware of one another’s presence – which is unlikely.
Because people behave differently when they know they are being observed, the notion that individuals will unfailingly respond to the objective reasons they possess does not hold true. Strange as it might seem, then, much of our behaviour, both rational and irrational, remains predictable. It would be impossible for humans to interact if it were not.
Considering this, how will the logic of AI interact with the illogicality of human behaviour? For instance, will AI be capable of gossip, the principal mechanism in group social bonding and socialisation?
Moreover, will AI experience emotion, a significant motivator and influencer of reasoning and decision-making? Will AI experience stress and anxiety, sadness and happiness? Bear in mind, too, that it is life’s trials and tribulations that build character and foster good judgement and decision-making.
Replacing people with algorithms is a precarious enterprise. It is, in effect, an open invitation to game the system. More importantly, will AI systems be transparent enough for us to detect an algorithm biased toward unethical, discriminatory, or even dangerous behaviour? And will a human who is not an AI expert be able to understand such violations and respond accordingly?
In April 2023, Max Tegmark, a physicist and leading AI researcher at the Massachusetts Institute of Technology, said that “rapidly building AI systems that aren’t fully safe isn’t an arms race; it’s a suicide race”.
Other prominent AI researchers, like the “godfather of AI” Geoffrey Hinton and renowned Canadian computer scientist Yoshua Bengio, express similar concerns.
In an open letter to all AI labs in March 2023, a cohort of the world’s foremost AI researchers, including those above, invoked the Asilomar AI Principles (2017):
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The letter continues:
“We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
No less than a pivotal pause is needed for every human being to ask themselves: have we really, really considered where all this is headed? Are we all caught in a tech-induced slipstream of subconscious acceptance that “any technology is good technology”? AI is not just another technology; it is a seminal shift from a human-driven society to a posthuman, machine-driven one.
Indeed, what is a human’s value in a world where no one pays for labour anymore? Or, in a world that runs on consumerism, what is the value of a human who is no longer a consumer?
The March 2023 open letter was followed by a joint statement published by the Center for AI Safety (CAIS) in late May 2023 and endorsed by 350 AI experts, senior executives of AI companies, and public figures. It reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Among the signatories to the statement are cognitive psychologist and computer scientist Geoffrey Hinton, OpenAI chief executive Sam Altman, Google DeepMind co-founder and chief AGI scientist Shane Legg, Anthropic CEO Dario Amodei, Bill Gates of Gates Ventures, Schumann Distinguished Scholar at Middlebury College Bill McKibben, and philosophy professors David Chalmers and Daniel Dennett.

The repeated warnings by senior executives and AI experts could not be more explicit. The signatories’ primary concern is the lack of a robust regulatory framework, which they argue should be instituted by governments in all applicable jurisdictions as soon as possible. Without one, they fear there is no assurance that, as AI surpasses human intelligence – which they believe it inevitably will – its vision of the world will not conflict with the world that humans envision.
Consider, for instance, the possibility of an AI-created world incompatible with human life. One comfort is OpenAI’s credo. It does not follow the profit maxim “move fast and break things”; instead, it is to move slowly and break nothing, especially humanity. The lure of big profits is a formidable siren nonetheless, as Sam Altman’s brief ouster and subsequent reinstatement demonstrated.
In 2021, the European Commission, responsible for drafting proposals for new European legislation, published a series of documents to provide the first-ever legal framework for AI technology’s development, use, and marketing. In October 2023, US President Joe Biden put a similar long-promised regulatory process in train by signing an executive order (EO) to ensure the “safe, secure, and trustworthy development and use of artificial intelligence”.
Biden’s rather broad EO requires that “developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government”, prompting widespread consternation among developers, who fear it might slow the technology’s development.
In early November 2023, at the AI Safety Summit held at Bletchley Park, England, AI researchers, leaders of major AI companies, government delegates, and civil society groups followed suit by signing up to the Bletchley Declaration, which acknowledges the significant risk associated with the misuse of AI and the “serious, even catastrophic, harm, whether deliberate or unintentional, it could wreak”.
The more comprehensive EC legislation aims to “promote the development of human-centric, sustainable, secure, inclusive and trustworthy artificial intelligence” and to “seize the opportunities and benefits” that AI promises. A primary focus is the identification and monitoring of “high-risk” AI systems: those that represent “a high risk to the health and safety or fundamental rights of natural persons”.
Prohibited AI practices, under Article 5 of the Artificial Intelligence Act, include (paraphrased):
(a) the deployment of subliminal techniques beyond a person’s consciousness designed to materially distort a person’s behaviour that causes or is likely to cause that person or another person physical or psychological harm
(b) the exploitation of the vulnerabilities of a specific group of persons due to their age, physical or mental disability
(c) the evaluation or classification of the trustworthiness of persons by public authorities based on their social behaviour or known or predicted personal characteristics
(d) the use of “real-time” remote biometric identification systems in publicly accessible spaces by law enforcement, unless necessary for the prevention of a specific crime and/or threat, and subject to the seriousness, probability, and scale of the harm that would occur in the absence of the AI system
Arguably, AI has already breached all of these prohibitions. Moreover, the underlying premise is that AI must not identify as a natural person nor violate the fundamental human rights and/or privacy laws outlined in the Act.
The former, of course, contradicts what AI aims to achieve: in effect, a significantly more intelligent human being than the one that currently exists. And what constitutes “more intelligent” is itself a matter of variable, often contradictory perspectives and goals.
Another awkward aspect of human nature is that we tend to anthropomorphise things, projecting our values, aspirations, and emotions onto them as if they were sentient: our cars, our pets, and now our robots. What is sentient and what is not has historically been a conundrum for humans, as the current AI debate accentuates.
Anecdotes regularly circulate that chatbots exhibit signs of sentience, which humans inherently desire to believe. One is reminded of the prolific British author Arnold Bennett, who writes in his book The Human Machine (1908): “There are men who are capable of loving a machine more deeply than they can love a woman. They are among the happiest men on earth.” Car lovers, for example.
Establishing trust in an AI system is a vague and mercurial proposition because trust exists only in users’ minds. Notably, the AI Act adopts a risk-based approach, and the regulation repeatedly emphasises instituting protocols to protect “natural persons” from the “risks” associated with AI systems.
Trust is usually earned; hence, we must assume that AI systems will be programmed to garner a user’s trust, which is somewhat disturbing given our current experience with deepfakes and an upcoming US presidential election awash in disinformation.
The 2022 publication International Perspectives on Artificial Intelligence – the collective AI outlook of an assembly of practitioners and users, edited by Mark Munoz and Alka Maurya – makes the same point:
“As AI continues to develop, governments and practitioners must ensure that AI-enabled systems can work effectively with people and hold ideals that remain consistent with human values and aspirations; but this will not be easy. Increased attention has been drawn to these challenges and many believe that AI will create a better and wiser path forward for humankind.
“However, realistically one of the greatest challenges will be the risks involved for all citizens as AI continues to evolve and increase its impact on the workforce and society.”
Again, a contradiction exists: AI systems must work well with “people”, and their “values and aspirations” should reflect those of a natural person. However, the AI system should not also identify as a natural person or pretend to be one. All the while, the disclaimer “this will not be easy” serves as a necessary caveat and rather diplomatic recognition of the gravity of the risks.
Ironically, perhaps the archetypal act of embracing risk, in the sense of Ulrich Beck’s influential World at Risk (2009) – humankind has built its world around risk – was placing the future of humankind in the hands of humans who would indemnify themselves by irrevocably passing it on to machines.
More overtly, if our destiny was left to politicians, it could just as easily be death by sheer stupidity – or by humankind’s old nemesis, the incurable affliction of hubris. But is a machine really the lesser of the two evils?
