Why are we freaking out about AI? And how do we stop?

I’m not sure it’s worth writing about the use of AI, because if you’re anything like me, your eyes glaze over at the sheer mention of ChatGPT. It’s absolutely everywhere – AI itself, and the conversation around AI. It’s scary, it’s sudden, it’s confusing and, after a few years of ‘aren’t you scared it’ll take your job?’, it’s awfully boring. But not thinking about it and not writing about it means that the capitalistic content-overload-to-desensitisation pipeline wins, so I’m about to jot down a very scattered and imperfect discussion, straight off the dome.

If I’ve lost you to blurry disinterest already, I understand. But hopefully, just for the next five minutes, we can set aside our amusement at the idea of a robot apocalypse and pay attention to what AI means for us here and now.

It’s an obvious case of moral panic

In my first ever uni lecture, almost 6 years ago, we learned about the phenomenon of a ‘moral panic’ – the unrelenting uproar and push-back we humans have to every major technological and social advancement. We’ve been whinging and whining since the printing press. Those who’ve seen Footloose (1984) will recognise the idea of a moral panic; just swap rock and roll for AI and you get the gist.

My childhood was speckled with the warning that video games might make us all serial killers. I hear that New Year’s 1999 was quite a nervous party. Around 2018, public health officials issued warnings because some gnarly teenagers thought Tide Pods looked tasty and filmed themselves eating them for YouTube. In recent years, we’ve all heard whisperings of litter boxes in primary schools and the “damage” drag story time could cause (get real). Basically, something new pops up, people get their knickers in a twist and start shouting about it, and in many cases, a new policy is introduced that not only crushes the tiny “problem”, but validates the out-of-proportion concerns, and stops newer things from popping up for a while afterward. Obviously, generative AI falls into this category. Except AI isn’t a new thing at all.

The idea of artificial intelligence has been around since the 1950s. In the decades since, it has been used to translate ancient languages, fly planes, diagnose and treat disease, unlock phones with facial recognition, generate predictive text, and verse you at chess on the white box computer your parents had in the back room in 2007. Forms of AI have largely handled pattern recognition, data analysis and cybersecurity since before I learned to tie my shoes. Unless you’re a tech junkie, you’ve had no reason to know that Anthropic’s LLM, Claude, is named after Claude Shannon, who built an AI system in 1950 – a robot mouse that could find its way out of a labyrinth by remembering the paths. Of course, with the emergence of large language models, the everyday person has become aware of AI technology, simply because it’s become accessible and showcased in the pop-culture zeitgeist. But it’s been a part of our lives for a long time – providing convenience so seamless that we never thought to ask how it all worked, let alone freak out about it.*

Anyway, for the sake of this post, I’ll clarify that when I’m talking about AI, I’m talking mostly about Large Language Models – like ChatGPT, Perplexity, NotebookLM and Gemini (those are the ones I’ve personally used, but if you’re a fan of Claude or something, I’m talking about your particular pet too).

*[The robot uprising and surrounding moral questions are thoroughly documented in the sci-fi genre – and perhaps I’ve missed out on some background knowledge by writing it off as boy stuff. Until 2023, the phrase ‘Artificial Intelligence’ was merely the title of one of my all-time favourite Spielberg films, and not much more. If you haven’t seen that movie, I beg you to. It’s a 2001 sci-fi retelling of Pinocchio, and it’s one of two robot movies that has made me cry (the other is Blue Sky’s ‘Robots’, which had me and my teary, 7-year-old face removed from class, because “it’s not fair that some robots can afford upgrades and others can’t”.) The Pinocchio thing elicited tears for completely different reasons. Watch it; it’s awesome.]

The Problem With a Frictionless Life

The big selling point of AI is that it makes things easy, but the biggest strength of anything is its biggest weakness as well. The higher you fly, the further you fall, as they say. 

AI definitely makes things easier now – it boasts an increase in productivity, and I can’t deny the rat race we’re all running, so I understand the temptation. I don’t think it’s wrong to want to keep up, but we don’t win the rat race by running the fastest. Sometimes, we have to step off the hamster wheel (rat wheel? I don’t know, this is a shaky metaphor, but you get it.) I think we could take the temptation as an opportunity to reassess our lifestyle choices. You have to consider whether you want to increase your productivity, or if the marketing has influenced you. Just because you can do more doesn’t necessarily mean that you should. How much would it improve your life? Do you NEED to be more productive? Maybe not! And definitely not at the expense of your own critical thinking skills.

Pros and Cons of Using AI

Haters (your grandfather) will say that us kids have it easy. And while young people today can get almost any information, product or service at the click of a 2D button, they’re also subject to serious barriers to success and connection at every turn, and ironically, the convenience of everything doesn’t make things any easier.

I’ve got a lot of sympathy for teenagers

The next generation will likely enter a housing market in their 40s, where they’ll find houses cost 14 times the average annual income. Unfortunately, they’ll be left to handle the climate crisis*, housing crisis, cost-of-living crisis and mental health crisis. Factoring in screens and AI, they’ll be facing this myriad of complicated world issues with underdeveloped critical thinking skills, and a pervasive idea that there’s nothing they can do to change things. After all, why would you think to ask questions, learn through mistakes and build the confidence that comes with facing uncertainty when you could get ‘the right answer’ instantly, for free, with little to no stress?

I never really understood why every subject in school and uni had to start with the history of the topic. I can’t tell you how many times I’ve rolled my eyes at the mention of the invention of the printing press. But I understand now how important it is to know that an orange grows on a tree before you pluck it from the supermarket (or have it delivered to your door) and juice it dry. As Scarlett Johansson in Her (2013) famously warned us, AI and sex should never cross over. But the ‘if you can’t talk about sex, you shouldn’t be having it’ rule applies perfectly to large language models too.

All in all, I think a social media ban for teenagers has to include age verification for use of AI, and I’m glad to hear that policy is in the works for that. I hope that the kids have to sit through a history lesson or two before they ask ChatGPT to do their homework. 

*[I haven’t touched on the impact of AI on the environment – but there’s plenty of discourse online about that. Listen to this podcast by one of my favourite independent media outlets – The Daily Aus!]

Young adults and Millennials – this one’s on you

I think the AI conversation is a little different among older Gen Zs and Millennials* – I really think that we’re in the best position to handle AI responsibly – we’re more likely to spot AI-generated content and scams than older generations**, for example. But according to an annual AI report by Deloitte, 77% of us are worried about its potential impact on our career prospects, and 70% of us actively use LLMs anyway. We understand that functional knowledge of AI could provide work for us now, but negatively affect us later. Unfortunately, our focus on work-life balance also reduces our drive for leadership positions. Only 6% of Gen Zs express interest in leadership positions, and economically, that makes sense to me, but socially, I think we could benefit from some leadership skills. For an age group with both high rates of productivity and poor mental health, I trust our work-life concerns are well-founded, but we need to acknowledge our wisdom and use it to help others navigate it all.

We’re in a sweet spot where we grew up in a time before the flat-screen computer monitor, but we understand how to put a link on an Instagram story. We were rewarded with the eventual ease of Netflix after cleaning scratched DVDs in our childhood. We have seen an iPhone without a front camera, we’ve used a home phone (maybe even a corded one), and illegally downloaded MP3 files to our iPod shuffles. Nostalgia aside, we know how things work in a way that our grandparents and younger siblings might not. For the most part, I think we can spot generative AI, be critical of it, and use it to our advantage. That said, it’s getting harder to spot AI content by the second. I guess time will tell. If you’re in your mid-twenties and reading this – I encourage you to have a conversation about AI with people older and younger than you.

* [Sorry to Gen X-ers, you’re the coolest, but your relationship to technology really depends on your personal experiences – could go either way.]

** [While my personal experience is with younger generations, the risks for older adults are also significant. For a deeper look, I recommend this article from The Guardian: www.theguardian.com/australia-news/2025/jul/28/the-heartbreak-of-watching-a-parent-fall-for-dad-this-is-a-scam-have-you-given-her-money]

Finding Optimism in Inevitable Change

It’s natural to be fearful of AI. Things are changing – they always are, and that can be scary. But at its worst, change is only inevitable, and at its best, it’s an opportunity to learn. The good thing is that the inevitability of change makes it often predictable and usually circular. We already know what AI will do to the creative industry*, because we’ve seen it a million times before.

Think about trends in fashion and music, for example – right now, we’re seeing 90s pop and low-rise jeans cycling back around, in their own, slightly different, 2025 way. Or consider physical media like vinyl records and such. At one time, they were the only option. Then CDs replaced them, followed by MP3s, and now people stream music. But record collecting is more popular than ever. There’s money and charm in it because it’s no longer a necessity, but something that is loved for its authenticity.

A few months ago, I was talking with an older journalist, asking for her perspective on the field, when we got onto the topic of AI and the role of young journalists. She remembered the ‘old days’ of journalism, when teams hired based on specialisation: being great at one particular thing meant you were a good pick for the team. Then multimedia emerged, papers digitised, and they hired young people for their versatility. She explained that now, the value of specialisation seems to be making a return. I think AI will contribute to the re-valuing of authentic human creation and perspective.

This idea of something becoming more valuable precisely because it’s no longer the default is a pretty good argument against AI “stealing our jobs.” Just as photography didn’t kill painting, AI won’t kill creativity; it will change what we value about it. In today’s hyper-productive, profit-over-people capitalistic automation era, I am very welcoming of any appreciation of small, slow, honest human creation, mistakes and all. If we think of AI as the most polished, efficient and soulless development of capitalism, we can start to look at authentic human creative experimentation as a more meaningful contribution to society.

* [In this section, I am talking about AI in the context of creative industries – because that is mostly where my thoughts on it congregate. Also, because creativity, I believe, is the thing that makes us different from robots.]

It’s On Us Now

As it is today, there are pros and cons to AI technology – it can help, and it can harm, and we’re all talking about it, all the time. But, at least in my experience, the conversation often devolves into “what if” scenarios without a nuanced discussion of how to use AI effectively. I think we need to see both sides of an argument to minimise harm and push for positive change, in any case, but especially with something of this magnitude.

So, here’s some practical advice for using AI responsibly:

Easy is not always better. Easy is what we do when we feel like everything is hard (it is), and ultimately, the easy thing now makes things harder down the line. Don’t use AI for everything; use your own brain as much as you can. There’s a difference between using it as a tool and using it to do your work for you (but is there? I guess time will tell with that as well.)
Know the Thing First. You need to know the thing before you know how to use AI to help with the thing. Don’t replace that knowledge and skill. Think about AI, talk about AI, historically and currently, socially and personally. Don’t use AI if you can’t make a pros and cons list of using it.
Audit Your Use. Track how often you’re using an LLM – what are you doing with it? And can you do that yourself? – and if you can, why aren’t you? Don’t beat yourself up, but work on understanding your relationship to AI use.
Stay engaged with both sides of the argument. Arguing about the ethics of AI is only important if you can see both sides and put them into practice. Use AI if you like, but do it right, and stay connected and engaged in any way you can. Talk to your local health providers, government representatives, tech moguls, educators and community leaders about how they each benefit from AI technologies, and advocate for smart regulation, taking those benefits into account.