I recently undertook an experiment: to build a habit-tracker smartphone app from scratch, without writing a single line of code. I used an AI programme called “Cursor” that responds to natural language prompts, much like ChatGPT, but has access to the entire codebase as the app is built and modifies it automatically in response to prompts. I managed to build a functional app, and learnt a few things about AI along the way.
In particular, I began to realise while working with my AI assistant that this tool was very much a double-edged sword. On the one hand, it enabled me to do something that would have been impossible just a few years ago: for a person with almost no coding experience to build a custom-designed app from scratch. On the other hand, I very much got the sense that my assistant had a “mind of its own” and would frequently disrupt my creative process with its own “initiatives” and “benign” interventions.
It felt like working with a very smart human programmer who wants so badly to impress and to please, but is a bit keyboard-happy and does not understand the consequences of his own actions. During a programming session, my AI assistant (Cursor) would “sneak” past me code changes that I had never requested: some were more or less inconsequential (a square tickbox changed to a circle, for example), others positively destructive (entire files blown out of the water).
It felt very much like my AI assistant needed to be kept on a tight leash, much like a small child that has the technical skills of an adult but does not see the bigger picture and does not know how to apply its knowledge responsibly. My own vision for the project and its implementation was being repeatedly undermined and second-guessed by a machine that had more power than it could handle. When confronted, my AI assistant would apologise and amend its errors. But later, it would return to its old habits.
There are a few lessons that my intense engagement with AI taught me, or at least reinforced for me: first, that AI assistants, whether programming assistants, life coaches, conversation buddies, or counsellors, can frame our options in a particular way and bias us towards a particular choice, simply by recommending it or explaining to us what it takes to be its advantages. The whole deliberative and creative process can be circumvented if you hand significant choices over to a machine.
For example, if a machine is helping you decide where to go to college, what job offer to accept, how to write a paper, or which country to live in, might there not come a point in your life when you succumb to laziness, decide to “go with the flow,” and let your life be run by an algorithm? Surely it could be very tempting to be relieved of the burden of making tough choices and engaging in the creative process, if a machine can save you the trouble?
If that day comes - and it seems to be just around the corner - then our humanity will be hollowed out insofar as we lose the habit of engaging in serious deliberation and building something beautiful with our own creative energy.
Our AI assistants may act as virtual “life coaches,” with one important difference when compared with real-life coaches: they will be life coaches with tremendous executive powers. They will be able to write perfectly worded emails for us, do our manual labour for us, run our errands, do our shopping, and perform many of our work duties. What will be left for us to do for ourselves? Will we renounce our own agency in return for the comfort of sitting back and allowing a machine to manage our life?
The other day, I requested a refund for an AI programme and got back a friendly email from a gentleman named “Sam.” I was about to reply thanking him for the refund when I noticed he was actually an AI agent. Literally, a nobody. A machine without a ghost, so to speak.
The more discretion and responsibility we give to AI tools, the more we are renouncing the possibility of living in a world in which flesh-and-blood human beings make everyday decisions in society, whether in the professional, social, or artistic domains. This has far-reaching implications for how we live as human beings.
To begin with, if a machine is exercising power over me or making a decision that affects my well-being, I cannot appeal to its humanity, and if an injustice arises, anything I do say will be addressed to a soulless, heartless machine. This will be justified, undoubtedly, by the “efficiency gains” of relegating organisational decisions to AI agents, but efficiency at what cost?
It may indeed be cheaper and more efficient to run an organisation with algorithmic decision-makers, but it radically dehumanises the social fabric, introducing a form of virtual “authority” in which the human author is hidden somewhere deep behind the curtains, refusing to show himself and refusing to answer to society for the consequences of his decision to let AI call the shots.
Think of an automated bureaucratic process in which appeals are managed algorithmically, and imagine that being applied in a much more systematic way to the governance of large organisations. Your boss might be an AI agent, who can tell you how to do your job, reprimand you, reward you, or fire you.
In a world in which AI decision-making steadily displaces human decision-making, those who create and employ AI tools will be ethically responsible for the consequences of their choices, yet may not have to answer directly to anyone for those choices.
No doubt ethical “protocols” and criteria will be programmed into AI agents, but zeros and ones will never actually have a conscience or a heart. Power will be exercised by AI agents over human beings, and appeals against such power might even be heard by other AI agents. That would be a particularly cruel and heartless form of oppression.
Sometimes we imagine AI conquering humanity by taking over the world in apocalyptic style. It is a lot more likely that AI programmes will be employed by an elite class of programmers, CEOs and politicians to “streamline” organisational processes in ways that replace human power-holders with impersonal algorithms, dissolving true ethical accountability and relentlessly dehumanising the social fabric.
In short, machines may not take over completely, but they may become so dominant in the life of society that interpersonal relations are increasingly mediated by machines, while the people behind the machines, instead of answering to society for their actions, remain like the Wizard of Oz, carefully concealed behind a curtain of code. That would not be a nice world to live in.
It is not too late to sign up for The Thinking Society course. All six sessions have been recorded for participants to view at their leisure and divided up into digestible sections.
You can sign up for the full course for 99 GBP or else you can purchase stand-alone access to module 5, “Rethinking Society,” which I delivered on Thursday, June 19th (25 GBP). Both options are available here.
You can also find me on Twitter/X, YouTube, Rumble, and Telegram.
Find my latest book, The Polycentric Republic, which lays out a decentralised vision of politics, here.
My academic profile and publications are listed at davidthunder.com.