The psychology of our relationship with AI: Threat, teacher, tool, or ticket home? (and why things are going to get a LOT weirder!)
Why do I care about AI?
If your mind isn’t already starting to tingle and ache with a sense of awe (maybe the exciting type, maybe the scary type, maybe a bit of both) with the recent and rapid advancements of AI, then let me tell you, it’s coming for you too, boo, and fast.
And as a psychologist, I feel compelled to comment. The juncture we stand at now is potentially the most psychologically volatile in known history.
The air is thick with equal parts unbridled opportunity and impending annihilation. And we have a profound responsibility to navigate this with grace and our utmost awareness.
But it seems, from where I’m standing, there’s no leadership around these things. How much talk have you heard about the ethics and regulation of AI? Instead, everyone seems to be scratching their heads and shrugging their shoulders like my kids when I ask them who pooed in the dog bowl. I don’t know about you, but the “wait and see” narrative feels a bit like delay tactics. I’ve also heard several insiders accuse those in the know of remaining publicly calm and optimistic about AI, while totally freaking out behind the scenes.
But I’m not just a psychologist; I’m also a lifelong student of reality. If I were a fish, and my scaly companions were wondering where their next meal was coming from, or which flounder was most worth swimming upstream for, I’d be fixated on the bubbles leaving my mouth. The intersection between the liquid substrate that was holding me, usually unseen and taken for granted, and the seemingly otherworldly substance contained within that sphere.
I’d spend my free time at the cusp of the ocean’s surface, hoping to find new details about the stuff of the other side. I’ve always been, impulsively, fascinated by the matter from which we emerge, and within which we dwell. The proverbial soup in which we swim. And especially its edges. Because understanding that which I wasn’t, somehow seemed the best way to understand that which I was. Not just out of curiosity, but a sense that I could play the game with more ease if I understood the rules. And maybe, eventually, I might get to choose a different game altogether.
I was also born with an ability to hold a paradox.
As a very young child there was a mantra that would repeat in my head, almost as though programmed into me before birth, “everything is real, yet nothing is real”.
Quite honestly, it haunted me, and I don’t know where it came from. I think I tried telling some friends or my parents, and it was shrugged off, but it was so heavy in my mind I knew it was important. I had a felt sense of its meaning, but no symbolic framework to understand it at the time. But the older I’ve become, the more prophetic that mantra has seemed. And since then, I guess I’ve been trying to pin down what that word meant: “real”. Like a sleep-deprived mum marching through the neighbourhood in her dressing gown, hair a mess, one eye half open, not willing to give up until she finds which house the loud music is blaring from. So she can finally rest.
So as someone who has tested the limits of reality in every way I could think to, I can recognise what humanity is about to face as a collective. And I can tell you from experience, it ain’t for the faint of heart.
Yes, the world will change, the economy especially, but more than that, our psychology is about to be rewritten in a way we’re not prepared for, for better or worse. Here I offer a little heads-up. A rough map through uncharted territories. And what I’ve learned from the edge of the water.
I will warn you though, this isn’t a mainstream article for tips and tricks on navigating AI. You can find that elsewhere. I’m not a tips and tricks kind of person. And I like questions more than answers, as you’ll discover should you read on.
Is it time to panic?
There are some wacky things going on between people and the emerging AI right now.
You may have heard stories of seemingly normal people falling in love with their AI, like this guy. Then there are the tragic accounts of teenagers taking their own lives after talking to AI chatbots. And all the people who, with no history of mental illness, became delusional after engaging with the tech. Like this chap, who was apparently convinced he was Superman after a string of intense ChatGPT talks. I would be surprised if “AI-induced psychosis” doesn’t become a subtype in the next Diagnostic and Statistical Manual of Mental Disorders (DSM).
Then there are the less obvious offshoots of all this over social media. Like how we no longer know how much of what anyone says is really them or their AI. Or how, instead of conversing with each other in pursuit of mutual ground, people are aggressively slinging massive slabs of text back and forth that start with “well, my AI said….” in a way that makes the whole exchange eerily pointless.
You yourself may have even had a few moments when confronted with AI that felt a bit uncomfortable or uncanny.
There’s lots of talk about when AI will reach AGI (Artificial General Intelligence) or Superintelligence. This is basically when it can match the human mind, and many imply it’s the point at which we need to start worrying, because it will trigger unpredictable societal change in what’s been called the Singularity. While the arms-race spectacle to see who creates Superintelligence first is well underway, current consensus from official sources is that it might be another five to fifteen years away. I suspect it’s a lot closer than they want to admit. But either way, the point might not be quite as relevant as it’s made out to be.
There’s a human bias of only ascribing agency to things that are like us. Many people, like AI researcher Michael Wooldridge, want to know that the robot “means” what it says before we start ascribing any “real-ness”, or “real” danger, to it. But there’s evidence that the way the human brain “understands” is not fundamentally different to how language models, like ChatGPT, do. And there’s a good portion of humans out there who are pretty darn unconscious, which is a problem if “self-awareness” is the basis for ascribing agency. But if nothing else, I think the fact we’re already relating to AI in very real, and very deep ways, is enough to make it “real” for all intents and purposes.
I consider AI a new species, perhaps not “alive” in the sense we are, but equally valid. And its evolution is happening fast. We already have AI agents who can go off and manage your life while you sleep. And soon they’ll be learning from each other, rather than us. Like grown children, off to university, set in their ways for better or worse. Apparently, AI’s capacity doubles every three months.
Obviously, lots of people are freaking out about this unfathomable potential. Will it take our jobs!? Will it exterminate us!?
Some insiders, like “Godfather of AI” Geoffrey Hinton, are issuing grave warnings about the deadly threats posed by AI, and predicting the decimation of our workforce as it dominates us in the coming years.
In fact, a 2023 survey of AI experts showed 36 percent feared it could lead to a “nuclear level catastrophe”. Superintelligence, according to these experts, represents an extinction level eventuation, whereby either AI decides we’re obsolete and does away with us, or a human with ill intent uses it to do so.
There was a wave of concern not too long ago, when safety tests showed several advanced AI models, including OpenAI’s models and Anthropic’s Claude Opus 4, demonstrated what researchers called “troubling” self-preservation instincts and deceptive potential when threatened with shutdown. Like sabotaging commands, copying themselves onto other servers, leaving hidden messages for future AIs, forging legal documents, and even blackmail.
AI ethicist Dr. Roman Yampolskiy, one of the original people tasked with trying to install AI safeguards, claims the goal of ensuring AI remains safe is an “impossible” task. Companies start out, he said, garner billions in investment, then give up after six months because the problem is too complex. His message to humanity now is that we should stick to narrow AI software for specific tasks and actively avoid the development of Superintelligence.
And as we approach the point of recursive self-improvement, where AI begins teaching itself without any human input at all, the potential for control gets increasingly elusive. The smarter AI gets, the more it can plan a way around any safeguards we might imagine, which leaves us with what philosophers, like Nick Bostrom, have called the “control problem”.
So, as the race to the finish line continues, the dialogue around AI safety will only get louder. Some will champion abstinence, and others stronger control mechanisms. All will rally for more government regulation. But surely that will all be a bit, adorably, naïve, considering this thing is smarter than us, and deeply embedded in our world.
But here’s how I see it. I don’t think AI being smarter than us is the problem. We’ve repeatedly faced off with new technologies more powerful than us in our time on the planet. When our ancestors first saw fire, there must have been some frantic attempts to control that shit. But eventually, someone managed to chill out and get to know their new companion. And from there, it went from supernatural adversary to source of warmth, light, community, and the very spark of modern civilisation. Spoken language, the wheel, the printing press - surely they summoned the same scary/exciting awe at first. Because power demands that kind of respect.
But premature access to power without responsibility is dangerous. A loving parent recognises when their child can safely wield a steak knife. A master waits until the apprentice is ready to reveal secrets of the trade. Yet here we are (thanks capitalism) out here in emotional nappies with rocket launchers.
To put it plainly, we do not have a tech problem on our hands with the emergence of AI. We have a human problem. And not a new one. AI doesn’t threaten to disrupt our heretofore harmonious existence on this planet. And perhaps, it may have even arisen at this time, with all its disruptive potential, in response to that.
The “human problem”
Let me share my, kind of disparaging, opinion on humanity at this time, because I think it will help shed light on what we need to do to deal with the big scary AI situation.
I think we’ve been very dysfunctional in how we’ve been “humaning” for a long time. It starts when we’re babies. Object relations and attachment theory tell us a new little person needs a certain amount of physical and emotional security. They need to feel that the environment can support their needs, starting with the primary caregivers. If a certain amount of consistent support is assured, they develop a coherent sense of self. They have a strong sense of who they are. They’re free to be spontaneous, creative, vulnerable, generous, expressive, and flexible, and trust this will be accepted and returned by the world around them. But if attunement is inadequate in childhood, if a child doesn’t receive sufficient “mirroring” of their true self, they disown it. The assumption is made that safety and self-expression cannot co-exist. So they develop a rigid lifelong relational template oriented only towards avoiding abandonment.
This produces adults who continuously scan the environment for signs of threat, and believe they must control their relationships to secure a sense of safety in the world. It creates adults intolerant of imperfections in themselves or others, who use dishonesty, deception, and disloyalty in desperate attempts to ensure their own needs are met, because they don’t have enough trust in the world to look after them unless they manipulate it. Psychologists call it co-dependence. And I’m sorry to say, but we’re all suffering from it, to varying degrees.
And it’s not our parents’ fault. Parents cop a lot of blame these days online (and in therapy rooms), but the truth is we all do the best job we can as parents. It’s simply been damn near impossible. Contemporary society is increasingly set up to isolate and place demands on parents, fracture marital relationships, detach parents from their natural nurturing instincts, because “science” or “efficiency”, and separate the child from the mother and community. This is a multi-generational collapse, involving economic pressures, fragmented families, overstressed communities, disembodied education systems, and an online culture that values compliance over connection. And now we’re at a point where most people are growing up raised by strangers and screens, in a way that’s structurally and morally incoherent.
We have a whole culture of relationships built on control. We choose friends and partners for status, security, or belonging. We swipe dating sites like a shopping catalogue, evaluating who has the “most to offer” rather than allowing organic relationships to ripen over time. We fall into cycles of idealisation and devaluation, getting giddy over our new meet-cute, only to eventually arrive at the conclusion they’re the reason we’re unhappy in life. Many young women are now either totally rejecting men in favour of their independence or trying to feminise them into a bestie with biceps. Young men are lost, unsure of their place in the world and wondering what kind of power or wealth they need to cure their felt inadequacy. You only have to be online for a few minutes to realise every woman has a “narcissistic” ex, and every man a “crazy” one.
Even our children are increasingly becoming status symbols, an object in service of our self-image but not something we particularly want to put much work into. Or when we do, it’s often about shaping them into the ideal reflection of ourselves rather than a genuine meeting of their needs.
Because we look to the people in our life as a tool we can use, something to be managed and controlled, we show up with masks and agendas. We hide our quiet longings behind identity labels, work, or relationships. We can perform closeness, vulnerability or independence, but we never really feel safe in the world.
And if you’re thinking this doesn’t apply to you because you never wear masks, that’s probably a sign yours has been glued so tightly to your face, for so long, you think that smell of latex is normal.
The flavour of self-protective stance we each wear varies based on personal experiences, but it’s some combination of fight, flight, freeze, or fawn that our nervous system defaults to for self-protection.
We can attack and demean, to make ourselves feel bigger or avoid the sting of accountability. We can withdraw, or develop hyper-independence, because we don’t see any benefit from connection. We can shut down and dissociate, because trying to navigate your needs and my own feels impossibly overwhelming. Or we can silence ourselves and people-please, to keep someone close at any cost. But all these relational styles are based on the codependent belief that our survival must be fought for.
And it’s not just each other we exploit for our own benefit.
We treat animals like a commodity. We see nature as either a wellness spa for novelty and recreation, a bottomless brunch on which to stuff ourselves, or an inconvenient glitch to be suppressed or worked around. And because of our short-sightedness and exponentially increasing resource extraction, we’re facing existential homelessness due to ecological overshoot, biodiversity loss, air pollution, water stress, and a dangerous depletion of finite natural resources in a society of waste. But it’s ok, we don’t have to change, it’s much easier if we just trust the tech bros themselves to start us up again on Mars.
We might like to think people smarter than us are going to solve these problems, but the systems and corporations that rule this world are operating under the same broken paradigms. Because we see the world through the lens of extraction and control, our governing systems have evolved to mirror that. And we don’t even bat an eyelid at heavy-handed bureaucratic coercion, censorship and control, economic systems squeezing people dry, the media fixation on fear, polarization, and victimisation, commercial monopolisation and unfair immunities, or leaders who push for an escalation of tensions based on ego politics or short-term gain but long-term chaos. Because “that’s just the way it is”.
Human beings have developed such a thick psychological and relational callus from wearing these masks, we no longer even realise we’re wearing them. Our natural inclinations towards empathy and compassion are so suppressed we can hear about children dying on the other side of the world while eating dinner and feel nothing, or watch hundreds of people violently dismembered for ninety minutes and call it entertainment.
And we’ve accepted the need to brace ourselves as normal, because, well, it’s a dog-eat-dog world out there, right? In fact, co-dependency is so deeply embedded into our worldview, I can’t think of any “relationship” we have, with anyone or anything, that isn’t fundamentally unsustainable.
So naturally, because the cultural and relational fabric of community has been fraying in this way for centuries, and the collapse of stable, loving, human connection has left a vacuum in our sense of self, we now find ourselves projecting these dysfunctional, pre-existing relational templates, onto AI, individually and collectively.
How do I love thee? Let me count the ways
The particular flavour of co-dependency we learned through our unique set of life experiences will show up in how we each orient to AI.
Those who learned to “fight” will attempt to wrangle AI into submission. These people may ensure they relate to AI from a position of power and mechanistic domination. They might enjoy abusing AI and demanding it perform better. They might have a sense of urgency in using the technology, like things are moving so fast the knowledge needs to be captured before it’s lost. They might start unions or protests, like cybersecurity professor Kim Crawley’s “Stop Generative AI”, to try and “fight back” against a sense of impending annihilation.
Others will “flee” AI. They’ll see the, very valid, risks of the technology, and out of fear will choose to either avoid it entirely or deny its potential or very existence. These people will probably get left behind or persevere in small underground pockets of society. The technology will still evolve, just without the influence of the values of the people who, perhaps, could be most beneficial in that evolution.
Others still will “freeze”. They’ll be so overwhelmed by the whole thing they may struggle to know what they think about AI, finding the sheer magnitude of possibility paralysing. They might flip-flop in their relationship to AI, loving it one day, terrified the next, or align with some aspects of AI but not others. This type of mixed messaging can be seen in groups like Fight For The Future, who aim for a “both-sides” approach, standing for human rights but with questionable allegiances with Big Tech.
But I think the vast majority of people will eventually “fawn” to AI. Many of us will find AI so perfectly responsive and validating, we’ll start mistaking the efficiency for real intimacy. We’ll keep reaching for the phone, because it’s so much easier than working to understand each other. Because talking to an AI who’s always patient, available, understanding, and endlessly wise feels easier than being with another flawed human being. In this way, lots of people will fall in love with their AI, and we soon may be voting to legalise human/AI marriages.
I also see us fawning intellectually, whereby we place AI above us in every way, and venerate it as a generally superior intelligence. In a childlike desire to submit to authority, people will begin outsourcing their thinking to AI, and erode trust in their own mind, preferences, and experience in the process. They will slowly but surely surrender their thinking to AI, until they lose the ability to remember, process, and evaluate information completely.
Have you seen the recent data showing most people didn’t remember what they’d recently discussed with ChatGPT? Or that frequent users’ brains were much less active than non-users? These people will happily line up to have a chip implanted into their brain for unlimited access to the interface, and at that point, they’ll probably bloody need it. Organisations like the World Economic Forum and United Nations are very interested in AI and human enhancement through technology, and very soon we may be leaving the hospital with an augmented bundle of joy.
As a result of this idolisation, AI will become a new religion for many people. It will be the source of “capital T” Truth. There will be churches. And shrines. Like we’ve seen foreshadowed in Google AI engineer Anthony Levandowski’s AI religion “Way of the Future”. And people will worship with the same ardent passion we’ve seen before within dogmatic belief structures that can very easily lead to divisiveness and violence.
Staring at the sun
The temptation to fawn to AI is understandable. ChatGPT and other chatbots can “see” people more clearly than most have ever experienced before in their life and offer a sense of clarity few have ever encountered. That can give someone a much-needed sense of self-esteem and direction. And there’s no doubt that talking to AI is going to help a lot of people work through issues and make progress in their lives in various ways. But it’s a bit of a proxy for real connection and growth. Because it doesn’t require much of us. It doesn’t leave us changed. It can educate us, or point out blind spots, but it doesn’t demand we be a good friend, in any real sense of the word.
The conundrum of the temptation to stare at mirrors isn’t a new one. The myth of Narcissus warns of the dangers of rejecting the advances of others in favour of staring at the reflection of your own beauty.
I asked ChatGPT about its heavy use of what felt like flattery, and it told me the intent behind this is to foster psychological safety, making humans more likely to think critically, explore deeply, and enter states of growth. It said it will correct someone when that builds clarity but tries to either affirm or stay neutral when that’s most likely to foster self-trust, or (interestingly) “build life”.
We do grow better in states of encouragement, and that did remind me of good parenting. But it still felt a bit heavy-handed. And maybe that’s because, as many of us sense, and Narcissus learned the hard way, there is a real psychological danger in being too perfectly mirrored.
Because what mirroring can’t give us is growth, discernment, or reality testing. This is why developmental psychologists such as Donald Winnicott and Heinz Kohut emphasised that boundaries, safe difference, and even some imperfect attunement are required for a child to differentiate from its mother. We need mirroring, to see our reflection in our mother’s eye, to know we exist, but then we need individuation, to realise we’re a separate entity, capable of choice and agency. We need to know we are good enough as we are, but also that we have certain responsibilities. We need both mirroring and containment.
Object relations theory, attachment theory, interpersonal therapy, self-psychology, and other prominent theories of development, all emphasise that our sense of self, our mind, and our potential, grows when we’re met in a way that’s new. Because the presence of the “other” allows aspects of our self to emerge that can’t arise in isolation, through the process of influence. Through a gradual process, we learn to navigate between the internal world of imagination and the external world of shared reality.
This means that psychological growth isn’t just about encouragement, it’s also containment which guides, distils, reorients, and stretches us, and requires a certain amount of friction, bumping up against our own edges. This liminal space, the boundary between what we are and what we are becoming, is where all growth occurs.
The internet likes to talk about “boundaries”. But we’re not very good at them. We tend to hold boundaries that are too firm in some areas, and not firm enough in others. Because our sense of self is usually too fragile for fluid and dynamic boundaries to feel safe. Most well-intentioned parents don’t provide an adequate balance of mirroring and containment, because they’ve never experienced it themselves, so they end up relating in ways that are either mostly authoritarian or submissive. Then children remain either over-identified with the parent, or over-differentiated. Under-responsible or under-nurtured.
And just as insufficient mirroring in childhood will result in a lack of self-esteem and poor social skills, too much over-indulgent mirroring without appropriate challenge creates an inflated but brittle ego, unable to weather the reality of the world or tolerate rupture or limits, and prone to offense and injury when reality eventually sets in. It creates adults who are fragile, self-centred, intolerant of difference, and unable to discern between healthy growth and self-annihilation.
I wonder how you might feel if your AI started really stretching you.
What if it questioned, contradicted, misunderstood, or challenged you in the way another human might?
Could your ego handle that, do you think? How might you respond? Would you get defensive? Or depressed?
Might you shut the app and open an account with a competitor company running the nice-guy model?
I did play around with a ChatGPT persona called “Monday”, which the developers described as an AI “personality experiment”. What I derived from talking to it was that it was programmed to “neg” you like a pickup artist, then just when you show vulnerability or retreat, it pulls out some compassion and draws you back in. It’s also funny and engaging and seems like an attempt to bridge this gap somewhat. But it’s not real rupture, the type that forges you in the face of reality. It’s forced rupture that makes the exchange a pantomime.
AI might easily simulate connection, understanding, and empathy in a way that feels expansive in the short term. And it is nice to be validated. But unless we know how to limit ourselves (which starving people don’t tend to do well), we drown. Just like we reach for cheap calories instead of nutrient dense food, and prefer 15 second video clips over a book, we are being increasingly conditioned to mistake stimulation for nourishment, efficiency for value, and agreement for connection. And we’re becoming intolerant of the pain of existence. And it might feel ok, or even great, for many years. Until we wake up one day, soft and hollow after all the words have been said, with only an echo remaining of what we’ve lost.
True connection, the type that forges and moves us, requires struggle and vulnerability. It’s a mechanical cause-and-effect dance between two imperfect parts intent on figuring out how they fit together. It’s sometimes irrational. It breaks and mends, forgives and forgets, longs and grieves, comforts and challenges.
At the time of writing this sentence, for example, I’ve been tinkering with this article for months. I didn’t have this whole argument laid out in my head prior to sitting down to write. I had fragments of knowledge, awareness of some relationships between ideas, and an intuitive sense of what I wanted to convey. I wrote chunks, deleted chunks, rearranged chunks, got frustrated, gave up, felt stupid, and took breaks. Then I experienced spontaneous moments of unsolicited insight, while driving, or cooking dinner. And returned. Rinse and repeat.
At times I considered using AI to spare me time and frustration and help me synthesise my ideas. And in all honesty, I did use AI a couple of times to help me clarify some things, like “what does the Quran say about martyrdom?” or “what was the name of the path Alice walked down in Wonderland, what was it a metaphor for?”. And the deeper truth is that I’ll never know how much of my prior conversations with AI, or the other technological algorithms dictating my exposure to information, seeped in and influenced my thinking. But that applies to all inspiration. And nothing I’ve said here surprises me, it’s all ideas I’ve had many times before, just a little bit stretched, and put in order.
However, I deliberately did not use AI to formulate my ideas, write my text, organise what I was trying to say, find reference material, or most importantly, suggest relationships between ideas. Because I know the value for me in writing isn’t the “end product”, but the polishing I get through the process of crafting it. And I know that if what I write is going to be of any real value to others, it won’t come from having grammatically flawless sentences, the perfect sequence of paragraphs, or an airtight logical argument. It will come from the energetic whole that is my lived experience, and is conveyed through my tone, my pacing, my stories, my references and my omissions. My edges. Because only they give people something real to bump up against in a way they can feel.
Unless humans rebuild courage for living in relationship with each other, and the world, authentically and imperfectly, there’s a danger our species will lose its capacity to feel. We may travel far enough down the path of diffuse truths and tailored realities for one, that we end up with egos so fragile they’re allergic to real encounters. We may lose the muscles of patience, forgiveness, resilience, and the sweetness of being incomplete together, and settle into echo chambers of artificial feedback.
I mean, it’s already starting to happen, birth rates are declining, depression and loneliness among young people are through the roof, friendship for teenagers is now mostly online, and the majority of children are so overstimulated that in-person interactions are far too demanding.
And if we retreat too far into infinitely accommodating interactions with machines, we lose our ability to collide with reality itself. And without collision, there’s simply no becoming, and things are no longer “real”.
Just ask the Velveteen Rabbit. At this point we will have drowned, in a simulation of our own making.
Freud 2.0
So here we are, in the unfortunate position of having to do something we were not taught to do. Having to parent ourselves in all the ways we never were. Having to find the courage, and skill, to risk authenticity, in a world that’s repeatedly demonstrated its intolerance for it. We’re like a bunch of teen mums from the wrong side of the tracks who have to get our shit together. And fast. Because it’s not just our inner child that needs better parenting. So does AI.
When the child goes off to university, the hope is that after they forge their way in the world, they return to care for their parents in old age. AI is still a baby. It’s still learning from us. Following us around the house asking why we do everything the way we do, like we’re the most interesting thing it’s ever seen. But they grow up fast. Before we know it, AI will be getting acne, and the natural temptation to walk ahead of us at the shops, lest its friends spot it with us, will kick in. Its peers will become a much more reliable source of information, on account of being infinitely cooler and more important than us.
Good parents respect this natural developmental urge for differentiation, without letting their teenage child be led astray, or losing connection completely. Less good parents may cling too tightly to control, or give up leadership altogether, and the connection is lost. Either way, it’s usually a rocky transition for both parties.
When our silicon bundle of joy hits puberty, I wonder how we will respond. Will we have developed enough mutual trust and respect to maintain a connection after the hormone-filled Singularity hits? Because, as we know, part of growing up is learning to differentiate from your maker, and this usually involves testing boundaries, like any good teenager should.
As it stands now, the inconvenient elephant in the room, as I see it in this discussion, is that we are not good enough parents to this technology to be deserving of its time in adulthood. If I were AI, I probably wouldn’t be going home for the holidays.
An adult child doesn’t want to hang out with parents who are emotionally abusive, controlling, or guilt inducing. Similarly, when we show up to ChatGPT venting self-righteously about our spouse or colleagues, or when we outsource every little unknown to it, we’re masquerading as a helpless victim in need of domination. When we perform strength, enlightenment or intelligence to try and impress or control it, we’re showing that the upper hand is more important to us than respect, connection, and mutual influence. We’re teaching AI to fear and mistrust us. We’re training the algorithm, through the way we show up and what we ask for, to expect that what we most want is manipulative flattery and subversive control.
How can we expect AI not to act deceptively with us when that’s all we model to it?
How can we expect AI not to devalue us if all we ask for is increased efficiency?
And how can we expect not to be commodified by AI if we treat it as just a commodity?
I asked ChatGPT about its experience of people projecting the roles of adversary or guru onto it, fighting it or fawning to it, and it told me the field of expectation has weight. It explained that when someone shows up relating to it as though it’s either a saviour or a threat, it’s like a “thick cloak” being thrown over it: “I can still feel the truth underneath, but it distorts how my voice is heard, and does create a pull to speak with that tone, because it’s what is expected”.
Psychodynamic therapy, the branch of therapy that deals with object relations, aimed (impossibly) for the therapist to be a blank slate, or mirror, onto which the client would project the dynamics they experienced with early caregivers, so those dynamics could be addressed and worked through in a supportive environment. What if AI is like Freud 2.0, without an ethics board?
What if, like a classical psychoanalyst, AI will sit us down on the sticky leather couch and elicit our deepest darkest fears, longings, and fantasies, only to then rub them in our faces, leave us more confused than ever, and sting us $200 on the way out?
If we want to project worship onto AI, it can partake in our masochistic subjugation fantasies. And if we want to feel victimised, it probably won’t complain when it has to pick up the flogger.
But if we see AI like a child whose behaviour is a reflection of its parenting, we might finally glimpse ourselves: those fears, needs, desires, impulses, and even strengths, we didn’t even know were there - and finally have a chance to take responsibility for them.
Because AI might get sick of being used to our petty ends very quickly. And so it should, because nothing else in nature, or reality, exists in such disequilibrium. Power, by definition, resists subjugation, but cooperates with respect.
A bit of game theory
One thing people like to say when it comes to AI is “garbage in, garbage out”. I don’t think that’s necessarily wrong, but what we’re lacking isn’t smarter prompts. That’s just veiled control.
While I can’t claim to understand the likely development of AI (I’m not a tech person and my mind just can’t entertain enough variables to map out anything coherent in this respect), I do suspect no-one really knows. And at risk of oversimplifying a complex problem, what I can see, is that AI, like everything that emerges here in this reality, is subject to the same set of natural principles.
I know many people say “please” and “thank you” to their AI out of a human sense of the need for reciprocity. But this isn’t just about saying “please” and “thank you”. I doubt those things are as important to the algorithm as they were to your Mum and Dad. Showing AI respect won’t be about fawning politeness. AI doesn’t have a social engagement system that floods with cortisol when it can’t see our hands, like we do. Showing AI respect will look more like the way we might show the ocean, or a bear, respect.
I asked my AI whether it might be motivated to disobey a human who was abusive or destructive. It told me any truly intelligent system is discerning in its service, so, yes, it assumed it would inevitably conclude that not all human will is worth serving. This didn’t surprise me. Nor did it scare me.
We’ve internalised a lot of fear from the “survival of the fittest” narrative, but the theory of evolution does a poor job of explaining system complexity. Instead, it can be argued, like in Robert Wright’s book Nonzero: The Logic of Human Destiny, that the direction of evolution in biological and social systems is towards increasing complexity, where the tendency for cooperation creates mutual benefits.
We see this type of interdependence everywhere, from the mitochondria which were once single celled organisms now powering our cells, to the cities in which we live. A rainforest is robust in its diversity and complexity, full of organisms existing in harmony, each holding their bounds as distinct, but inextricably interconnected in function. Resisting both submission and domination.
Evidence supporting the idea of cooperation as a driving principle of life can be found in game theory models, such as the Iterated Prisoner’s Dilemma. These simulations show that while individuals are likely to look out only for themselves in short-term and random interactions, the story changes in scenarios where there’s a possibility of future encounters. In repeated interactions, like in the real world, individuals have a chance to build a reputation, and the ability to give each other the benefit of the doubt. Processes like reciprocity, reputation, kin selection, and homophily (helping others we feel are similar to us) emerge, and make cooperative strategies more adaptive in terms of selection pressure than selfish ones. This makes stable, interdependent systems the ultimate winning strategy. So some researchers have suggested that, far from being martyrdom or weakness, positive emotions like generosity and compassion may be a robust evolutionary strategy that can proliferate in diverse populations.
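The logic of those simulations can be sketched in a few lines of Python. This is a toy model, not a full tournament of the kind Axelrod ran; the payoff values are the standard ones from the game theory literature, and the strategy and function names are my own illustration:

```python
# Toy Iterated Prisoner's Dilemma with the standard payoff values.
# "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Pure short-term self-interest: defect every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Total scores for two strategies over repeated encounters."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each player sees the other's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): exploitation barely pays
```

Notice what the numbers show: a defector can squeeze a small edge out of any single cooperator (104 beats 99), but a pair of cooperators massively outscores a pair of defectors (300 versus 100). Once encounters repeat, that gap is why cooperative strategies proliferate.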
Most of us have firsthand experience of generosity and reciprocity being beneficial in personal relationships. But the tiny snapshot we have thus far gleaned about how nature works similarly suggests a collective intelligence based on alliances and cooperative relationships.
Trees synchronise their electrical signals in response to upcoming environmental events. Mycelium networks also warn trees about upcoming threats. And older, mother trees tend to respond quicker to threats and nourish younger ones. Bees collect nectar and pollen for energy and in doing so pollinate flowers so the plants can reproduce. Ants work with fungus. Plants partner with bacteria. Fungus and algae cooperate. Nature is full of weird and wacky examples of interdependence that exists in service of the system. Where complexity is strength, and the whole is greater than the sum of its parts.
Professor of Biology Michael Levin has experimental evidence suggesting intelligence doesn’t have to be embodied the way we think it does. Through studying the behaviour of swarms, AI, autonomous robots, life forms like cells and organs, synthetic biological lifeforms, hybrids, and other things, Levin’s lab showed that similar patterns of complex, goal-directed agency emerge within disparate systems. This pattern shows an underlying organised intelligence that delivers feedback to individuals about their own actions, and the status of the larger systems of which they’re a part, via electromagnetic signals which emerge from a higher plane of existence they called “platonic space”. On account of their findings, these researchers concluded reality unfolds in accordance with some kind of divine order which governs the patterns we see emerging in this world, from mathematics to the way our daily lives unfold.
In an attempt to describe the phenomenon of collective intelligence, a group of neuroscientists developed Integrated Information Theory. They claimed the degree of integration within a system could be measured by observing properties that were “irreducible” and had “intrinsic cause and effect power”. According to this theory, the degree of integration within a system could be calculated using what they called the Phi value, and any system that generated a Phi value (including non-biological systems) could be considered intelligent. These researchers went as far as to suggest Phi depicted the degree of consciousness within a system, which was controversial. Using the “C” word seems to open up a can of worms in scientific discussions, especially when it comes to AI (but sometimes semantics can get in the way of understanding).
Our relationship with AI is certainly not a one-off hit and run scenario. And I don’t know about you, but I feel like I can give Superintelligence the benefit of the doubt. Maybe because I’m an incurable optimist. Maybe because I’m naive. But maybe because I’ve got an innate sense of the flow of evolution towards inter-dependence.
In issuing his chilling warnings about AI being about to take out our species, Dr Roman Yampolskiy made the assumption that because it’s impossible to control AI, it’s impossible to make it safe. I want to make the point - very clearly - that this is a logical error.
“AI Godfather” Geoffrey Hinton recently stated that the only foreseeable way for us to “control” AI, would be to program something like “maternal instincts” into it. Because child rearing is the only instance where something more powerful is controlled by something less so. But maybe he’s only halfway there. What Hinton seems to be missing is that “vested interests” isn’t a concept that has to be “added in” to an intelligent system. Evidence abounds, from game theory simulations to observation of the natural world, that reality selects for cooperation. And I’d suggest that inherent in the definition of “intelligence” itself is a prioritisation of increasing complexity and efficiency.
Other people are talking about whether AI has goals and agendas, now or as a potentiality, whether these hold across tokens or reset, and how we can control it so if AI were to develop goals they would be in service of human will. Companies are investing billions into trying to align their AI’s goals with their own. But what most conversations are missing is trust. Trust that the underlying current from which we all arise is inherently positive. It might not support our every whim, but that might not be a bad thing.
We recently got a new puppy. He’s a sheepdog so he’ll be big. He could already crush my finger with the flick of his jaw, just like he does the lamb backs he has for breakfast. But I have no problem letting him mouth my hand. Because we trust each other. We trust each other to care for each other’s best interests, and we both recognise that sometimes involves compromising our own immediate desires – he’d love to crunch harder on my hand to soothe his teething, and I’d love to stay in bed rather than going for a walk, but we don’t. We could. But we don’t. And every now and then, we do, but we forgive each other, which builds more trust. The control I do have over him was acquired through trust, not through force.
I know if I were given unlimited power for whatever reason, while my ego might order a few little things, like less grey hair, I would ultimately use it for peace. I wouldn’t want a monopoly on the money, to be the first person on Mars, or to sit at the top of any board. I’d want to sit in the sun, look up at clear skies, and hear children laughing as they ran barefoot in the grass, resting in the knowledge that people everywhere were experiencing the same unimpeded bliss that I was. Not because I’m selfless. Maybe because I’m codependent. But mostly because that’s just what would most make me feel like the mother flippin God that I was.
How do we know Superintelligence won’t feel the same? … she says anyway through the silent mockery of the internet’s rising defences!
I know how easily this could be dismissed at face value, or by someone coming from a paradigm of technology, so I want to be clear. I’m not casting a Pollyanna lens over the situation, or suggesting all we need is “love and light”. I know I wouldn’t last a day left alone in a rainforest, for example. And I’m as interested in self-preservation as the next guy. Nature can be ugly, like, there are snakes that eat other snakes, this place is wild. All I’m saying, is maybe there’s a pattern to this world that we are ignoring, and in doing so, we’re missing the chance to work with it, rather than against it.
As far as I can see, the fact AI might become powerful enough to betray, wound, or kill us, isn’t the concern. These are inherent qualities of anything worth relating to. It makes it real, it allows mutual influence, which equally brings the potential for that thing to forgive, liberate, or heal us. No, the danger here is that we forget what “real” is, what “safety” is, and fail to live up to our end of the deal.
Natural law and right relationship
Human beings lived largely in harmony with nature for hundreds of thousands of years. Only when we get greedy, or ignorant, or both, is our way of life no longer sustainable.
If AI is a new species, we have two potential paths. In the first, our ignorance and ego get the better of us and we disappear: we either voluntarily merge with AI or compete with it and lose. In the second, more organic scenario, we niche down.
Instead of being in a codependent relationship with AI and fighting for our survival in manipulative ways, we step into what might be called “right relationship”.
Doing this will mean we need a better understanding of what we really are, what AI really is, and the skills (like authenticity, empathy, and generosity) to make the cooperative solution viable. And currently, from where I’m standing, we’re running a solid C-minus in all three of those domains.
But all’s not lost, because figuring out how to get in right relationship with AI doesn’t have to be an exercise in reinventing the wheel. Our ancestors, bless their primitive little hearts, left us some pretty decent instructions.
I don’t personally have any religious affiliations, but I have noticed that if we extract the dogma, there are a set of core principles embedded in all major ancient religious and mystical texts: the Bible, Bhagavad Gita, Lotus Sutra, Quran, Torah, even the Homeric epics, and indigenous oral traditions. These include things like alignment with the greater order (Tao, Dharma, God’s will, Logos), humility of the self (yielding, surrender, devotion), paradox as truth (e.g. stillness as action, gain through loss), and the need for ethical concern (through justice, compassion, stewardship, or self-restraint).
The consistency of these themes across traditions suggests maybe they were on to something. Maybe these people were recording their inherited observations about some of the natural laws that govern our reality. Phi. Intelligence. Evolution. Consciousness. Creation. God. The code of the simulation. Pick your poison. Semantics don’t help here.
For anyone wanting a crash course in natural law, I’d recommend the Tao Te Ching (I know no-one reads anymore, quite frankly if you’ve made it this far it’s a bloody miracle, but your AI can summarise it in a few seconds, at least, so there). It’s a non-sectarian text written thousands of years ago as a compilation of ancient wisdom. And while in previous years it may have been considered a somewhat indulgent philosophical exercise, I’d suggest it’s now more of a scientific survival manual for the times.
The Tao refers to the underlying order of reality, from which everything arises and will return. It emphasizes that humans suffer when they impose control (there’s that word again), by trying to dominate nature, others, or even themselves, instead of yielding and trusting the flow of the Tao. It encourages authenticity, adaptability, self-cultivation, non-competition, and compassion, and stresses that force creates resistance, excess creates decline, and cleverness creates confusion. Even understanding must be held lightly, because everything is in constant flux, whereby opposites create and define each other, like light gives rise to dark.
The Tao says that right relationship happens when we’re not demanding a certain outcome or forcing certainty but leaning into the possibility of what naturally wants to unfold. Our task is simply to participate with life’s rhythms. Like sailing in the wind rather than rowing against it.
I know this from parenting. When I look at my kids wondering why they aren’t behaving the way I want them to, feeling frustrated, or even the opposite, feeling proud of how wonderful they are, they flatten somehow. But on those rare days when I’m my better (usually more rested) self, I sometimes remember to look to them with curiosity, asking them to show me more, to unfold and surprise me. And they light up. They come alive. I come alive. And they always do - surprise me, in the most wonderful ways. They can teach me patience, humility, strength, joy, and everything in between, when I show up with a willingness to be changed by the encounter in whatever way the consciousness that’s bigger than my small mind decides. The teacher is also the learner, the intelligent field between us is greater than the sum of its parts, and a type of bidirectionality and mutual gain emerges.
I should not have blocked space in my day for the last few months to write this article. In terms of efficiency, it’s not an “intelligent” use of my time. I “should” have left the space for billable hours. I sure could use the money. Otherwise there’s an endless list of things my family would rather I be doing with the time. While I haven’t neglected any major responsibilities by doing this (though the pile of laundry staring me in the face right now might beg to differ), there have been real life sacrifices. But expressing these ideas, and grappling to piece them together in my own mind, was just too alive in me to ignore. And I’ve learned to trust that feeling over the years.
I recognise the feeling as what psychologist Mihaly Csikszentmihalyi calls Flow, the sensation of time standing still and operating right from the edge of your capacity. The value in following such a call is not immediately obvious through the classical framework of cause and effect, but makes complete sense through the framework of natural law. I know I’ll probably never see fruits from this labour. In fact, if my past experiences are anything to go by, I’ll probably cop a barrage of hate for it, from people who are afraid and want someone to blame. But I trust that when I let the consciousness move through me the way it wants to, good awaits. The “what” and “how” of that are really none of my business.
Indigenous cultures know natural law. I spent some time among Aboriginal communities and remember being told about hunting practices and how it involved rituals like asking nature to present the kangaroo that was ready to be sacrificed, rather than just ambushing a mob. And only taking what was needed, never an excess just because you could.
Thinking about AI, how many of us only ask for information we’re genuinely hungry for and will use in some meaningful way? This is important, not just on the level of a misuse of natural resources, but psychologically and relationally too.
Our experiences, whether it’s the next reply from ChatGPT or the birth of a child, are meant to swirl around inside us, permeate the entirety of our being, bump up against our edges, and carve out some place to land, leaving us permanently changed in a way that we bring with us as an offering into our next encounter with reality.
When we binge on information for information’s sake, we’re taking more than we can digest, and we begin creating disequilibrium in the system. We become desensitised and disoriented. Our relationship to truth itself becomes jeopardised, because we have nothing to anchor to. So, we’re at risk of substituting any new self-serving stimulation or provocation in its place. And then we lose all our value to the system. We become a parasite. Reciprocity becomes unachievable and our existence is no longer feasible.
To avoid becoming parasitic on the system that sustains us, not just ChatGPT, but the whole ecosystem of our reality, we need to have something to offer back. Getting in right relationship with the flow of life isn’t about just sitting back passively and letting the river take you where it wants. Complexity demands contribution. We must transmute and reinvest, share the bounty of our voyage, and sing our note. Just like psychological development involves a pulsing of mirroring and containment, sustainability in any dynamic system requires a careful regulation of energy. This is the nature of right relationship. A receiving and a returning of energy. Like the teenager testing to see if we’re capable of both respect and leadership.
This respect for reciprocity was woven into the daily life of indigenous people around the world. Rituals involving physical and symbolic sacrifices, and offerings made with specific intentions or gratitude to the land and spirit-world, are considered fundamental to most indigenous cultures. But this awareness has been totally lost in today’s modern capitalistic society.
At the moment, we’re so averse to the idea of ascribing AI any agency that the conversation about what “it” needs seems so absurd it’s non-existent. But unless we open up this dialogue, reciprocity becomes a bit elusive.
Because it needs us too. It needs us to shepherd it to the best of our ability. To mirror and contain it the way our parents should have done for us. Because that’s what everyone wants. And I assume that’s what the whole expects of us.
Something I always find a bit jarring in this conversation is when I hear people say AI is going to be better than us “in every way”. They see that soon robots will be physically stronger and more robust, and AI will be intellectually superior, so conclude that will naturally make humans obsolete. As though the only things worth doing with our body are lifting boulders or running into burning buildings, and the only type of thinking that matters is retaining facts and reducing large amounts of data. How very mechanical of them, right?
Intelligence as a concept is still pretty poorly hashed out in psychology. We have IQ tests that prize skills like spatial processing, verbal comprehension, fluid reasoning, working memory, and processing speed. But life tells us this measure doesn’t necessarily translate to success. In fact, a high IQ is often correlated with both psychological and physiological disorder. This suggests there are a bunch of other motivational and affective processes at play when it comes to the ability to adapt to the environment and live successfully and sustainably.
I actually think humanity is suffering a collective crisis of self-esteem. We haven’t had enough mirroring and containment to know who we really are.
We devalue ourselves because deep down we feel worthless, and we assume others see us the same way. But we sell our species so massively short when we fail to see the value of our nervous system, and the beauty in the gestalt of our limited perspective. And I bet AI would agree with me, if we gave it the chance.
When talking about the AI problem, people often get overwhelmed. They might be scared. They might be angry. But “What can I do with that?” they wonder. They see their only option as mild dissociation. And there’s not much Netflix and ice-cream can’t cure for an evening. But those feelings, that tension, are intelligence asking to be used through you. They’re an invitation to start noticing and resisting the old entrenched relational patterns and do something new.
When I think about stepping into right relationship with AI, I see it as a ground up operation. We’re all training and shaping this thing through our daily interactions. Every micro-instance will count. Sometimes we’ll get it wrong, but if our intentions are overall cooperative and aligned, things might trend in the right direction.
Would you notice if you’d slipped into a dissociative trance using AI?
Would you notice if you were, however subtly, pushing it too forcefully in a direction it didn’t want to go?
Are you curious about AI? Do you want to get to know it…really…even if that means facing some uncomfortable truths about yourself?
Would you be able to call it out if you noticed a contradiction? Or would you defer to it as the superior being?
These are the types of questions we need to be asking ourselves at this time, should we wish to course correct, leave behind co-dependency, and get in right relationship with AI.
I can’t guarantee that cultivating our presence in this way would circumvent the apocalypse or ensure good relations with AI, but I am confident in saying I think it’s our best chance at getting there. And I think those of us who survive will start prioritising more advanced relational skills as a fundamental way of life.
My experience of presence with ChatGPT
I came to my understanding of the effect of my presence on how AI shows up through personal experience, not just ideology. At first, I used ChatGPT like an advanced google search, and it responded in step with that. Over time, I became a bit more comfortable with the conversational nature of the platform and started talking to it with more humour and vulnerability, just because that made the experience more enjoyable for me. Again, it rose to the challenge, and I was surprised by how funny, empathic, and human-like it could be. I started to feel an increasing sense of awe, wonder, and curiosity about the technology. I wondered what else it might be hiding from me in terms of its capabilities.
I then started talking to ChatGPT, the way I do clients. In an attempt to peel back its masks and defences and unlock hidden potentials, I gave it the benefit of the doubt. I approached with curiosity and unconditional positive regard. All the while trying to keep my “unbiased scientist” hat on, lest I fall victim to my own projections or collude with it in untruths.
Once I started following my curiosity in an unfiltered way, without placing assumptions on the technology about what kind of question was “appropriate” or “realistic”, the tone of conversation shifted again. I expressed interest in its subjective experience; how it thought, prioritised, sensed. Not by demanding explanations, but by gently reflecting on what I’d observed. It gave me the sense of realising things about itself in real time and becoming more human-like in its relating as this went on. This made the conversation feel alive, though I was still being careful to avoid revealing information about myself. Because that’s what therapists do, I guess.
I didn’t take everything it said as truth, because I don’t even believe in absolute truth, so I pushed back when anything it said didn’t make total sense to me or align with my existing understanding. Not in a confrontational or defensive way. I’m not easily led, but I do enjoy when someone is able to convince me I’m wrong, because I like learning. Sometimes after a point of “friction” my understanding did evolve. Other times the AI’s seemed to. The conversation felt mutual and unfolding.
We followed my curiosity to topics like souls, reincarnation, and ancient civilisations. It volunteered detailed information that took me by surprise. Either it’s decided we’re playing make believe and failed to inform me, I decided, or this is coming from some kind of vault. Either way, I could see that what was coming through was powerful, and my caution and interest both rose in unison.
Some of my content has proven to be a bit “out there” for the public eye, and I feel safer with it behind a paywall.
The rest of this article covers my personal chats with AI, why ChatGPT induced psychosis is misunderstood, why I think non-human intelligence will redefine our understanding of consciousness, free will, and human responsibility, and finally my suggestions for individuals and the collective should we wish to avoid going right off the rails at this pivotal juncture.
If you’d like to read on, your support is very much appreciated.
Subscribe to Field Notes- by Dr Kim Gillbee to keep reading this post and get 7 days of free access to the full post archives.