
Will Artificial Intelligence Change Journalism Forever? We Better Hope Not.

By Eleanor Harvey

A few weeks ago, I received a pretty enticing job offer. It came, as all the best things do, in the form of an unsolicited message from a random person on social media, in this case LinkedIn. This Mr A. told me he was looking for an ‘excellent freelance writer’ and that my profile ‘stood out’. I mean, obviously. The green flags kept on coming, from ‘no interview required!’ to a salary of ‘$15 USD per hour’. I was already totting up how I’d spend this generous amount, perhaps on moving to a country where dollars are legal tender, before I’d registered what the job actually was: to ‘join my team at Outlier, where we train AI systems—no AI knowledge required!’

Somewhat unexpectedly, Outlier is a real company, though admittedly one which has attracted several Reddit pages branding it ‘a scam’. But perhaps I shouldn’t have been surprised. There’s a growing job market in which trained journalists work alongside artificial intelligence. For example, Newsquest, one of the largest local news publishers in the UK, now employs seven AI-assisted journalists: the human reporter finds the information, then feeds it into an AI system which writes the story.

However, the wider media industry is still far from agreed that this sci-fi-esque human/robot collaboration is a good idea. Whilst Newsquest’s CEO was busy explaining why it is a great time-saver, The Telegraph was essentially banning its employees from using AI to write stories, except with express permission from its top team and lawyers. They are, however, allowed to use it for certain smaller tasks, such as generating inspiration for articles.

Yet one publication recently decided to go further than any other, and test whether AI could do away with humans altogether. In September, the London Standard ran an AI-themed special issue featuring a review of the National Gallery’s new Vincent van Gogh exhibition, ‘Poets and Lovers’. So far, so normal. Except it was ‘written’ by an AI recreation of notoriously brutal art critic Brian Sewell, who died in 2015. Robot Sewell pulled no punches, dismissing the exhibition as ‘vapid, arrant nonsense’.

Putting aside the questionable ethics of using technology to recreate a dead person who couldn’t possibly have consented to the process (although Sewell’s estate did give permission), the jury is out on how successful this ‘one-off experiment’ was. The Standard is currently suffering severe financial losses, which have forced it to switch from a daily to a weekly format, so it is easy to dismiss the review as a slightly desperate publicity stunt. Elsewhere, the Guardian’s Jonathan Jones claimed it ‘prove[d] art critics cannot be easily replaced’, and that the ‘nonsensical’ and ‘tediously cliched’ review ‘lack[ed] Sewell’s authentic voice’.

So, is the review an accurate recreation of Sewell’s critical voice? It’s difficult to come to a definite answer. But is it a terrifying read anyway? Absolutely. If it didn’t advertise the fact in the middle of the page, I wouldn’t have a clue it wasn’t written by a human. I wouldn’t say it’s a beautiful piece of writing, but it does make sense. Then again, perhaps it’s just me who was taken in by it, being unfamiliar with what terms such as ‘arrant’ mean.

What’s the lesson here? Maybe it’s that people like me are stupid, and robots are now cleverer than us. Maybe it’s that, in future, no one will have to work, because there’s no job that AI won’t be able to do. Maybe it’s something much deeper about humanity itself. Or maybe, despite everything this ‘experiment’ would seem to suggest, it shows the exact opposite. Because Robot Sewell is wrong.

You could argue that a review can never be wrong. But if one can be, this one was. The ‘Poets and Lovers’ exhibition is, by all other (human) accounts, very good. The Times described it as ‘a once-in-a-century show’ and ‘beautifully put together’. The Telegraph said it was ‘breathtaking’. Even the London Standard’s human art critic gave it five stars and reckoned it was ‘wonderful’. All of this is a far cry from fake Sewell’s dismissal of it as ‘insipid’. Put simply, the AI got it wrong.

It is possible to defend the review here. I don’t know much about the workings of AI, but I am pretty confident you have to tell it what to write. I’m sure, therefore, that the very human journalists who came up with this concept told it to produce something scathing. If you’re trying to recreate the voice of a famously mean critic, your readers will be disappointed by an article saying, ‘Yeah, it was alright actually.’ But that doesn’t mean the review’s tonal inaccuracy doesn’t matter. Instead, it just exposes AI’s one incurable failing: it’s not human, and no matter how sophisticated it becomes, it never will be.

Why does this somewhat redundant-sounding statement matter? Because AI cannot have a human being’s emotional response to a piece of art, or to anything else. It cannot feel the sadness that hangs over every exhibition about Van Gogh: that of a man with so much talent and so much tragedy in his life. It can’t understand what a good afternoon out is for a human being. It doesn’t have a brain, or a mind, or feelings. It just gets told what to do by human beings who do. All of which leads me to ask: if you still need people to tell the AI what to do, why not cut out the middleman and get them to write it themselves?

I’m well aware this might make me sound like a Luddite, but I’m not. I don’t go around screaming at people for using ChatGPT, and I don’t doubt this technology can be used for amazing things elsewhere. But it does matter to me. When I decided I wanted to be a journalist, I certainly wasn’t imagining what Mr A. from LinkedIn was offering: helping to develop technology which might one day destroy the need for me to ever write again. I wasn’t even picturing what Newsquest is at this moment paying people to do: feeding information into a computer so that it can write for me. Instead, I imagined a profession centred around people: talking to them, discovering what they had to say, then writing about it so that other people could know too. Of course, technology always affects jobs: that’s why I’m writing this on a laptop, not a typewriter. But do we really want our newspapers and magazines created by technology which has no concept of what we feel, want, or care about?

I’ll end by asking you a question: how would you feel if you knew that this article wasn’t written by one of your fellow students, but by AI? If someone gave it specific enough instructions, it’s not impossible it could produce something like this. You might notice the difference, or you might not. Assuming the latter, would it affect how you saw this article? I think so. We surely read pieces like this to find out what another human thinks about the human world we share. If I were just a robot programmed to imitate what a person might think, that central emotional point would automatically be lost. And if you take this human heart out of journalism, then what, really, is left?

Image: Glenn Carstens-Peters on Unsplash