Will We Lose Creative Spark through AI?

A recent Tech Trends report questioned how online data can be protected from scrapers and AI training. Can data shared publicly be protected, or is it fair game given that it is already exposed on third-party platforms? This question is increasingly prevalent in the news, with numerous court cases already arising. Using and sharing other outlets' news and reporting has made headlines internationally, with some of the biggest corporations and government bodies involved.

Australia was the first to act, introducing the ‘News Media Bargaining Code’, which required platforms like Meta and Google to pay media outlets for hosting links to their articles. Meta’s response was to withhold news from its platform, a tactic it repeated when faced with the same challenge in Canada, despite that dispute coinciding with the peak of the wildfires. Both cases were eventually resolved with the platforms agreeing deals directly with the media outlets, but the episodes raise a number of questions about the future of the public’s right to information, the dissemination of news, ownership and copyright.

With journalism already at a critical point, and online reporting overtaking print due to the rapid pace of today’s news cycles, what value does this place on the news and the sources generating it? Will this new funding create a new form of journalism and help to support a struggling industry?

The world is changing: take AI’s need to digest information and learn by ‘scraping’ data from online sources. Should the originators of this content also be compensated? What means could be used to enforce this, and how can AI distinguish between real and fake news online? Take the recent halting of AI scraping from Twitter/X after the model picked up some of the toxicity shared on the platform, causing its generated output to become argumentative and offensive.

It is happening across the board: the recent Netflix battle and the actors’ and writers’ strike, with the platform seeking one-off fees for performances, stripping the original creators of the financial rights to re-runs. If something original has been created, whether news or film, surely the creators should be recognised and rewarded, benefiting from its re-use wherever that takes place, from streaming to ingestion into an AI platform.

This brings me to the launch of ChatGPT, and the Pandora’s box of fallout we have seen since its arrival: the use of people’s work to create short text, full documents, and more. Surely there is a clear copyright infringement? How can anyone know whether the AI has plagiarised from the vast amount of text online unless it is constantly checked? Should there be a requirement to show a percentage likelihood that the output closely resembles something that already exists, or to cite the references used to create it?

But is legislation the solution to all of the above? It didn’t fully work in either Australia or Canada, though it did result in compensation for the originators of the material being shared, which has to be seen as a positive outcome. Fundamentally, though, legislation only binds those who choose to be governed by it, and there will always be those with no intention of doing so. Bad actors, state-sponsored or otherwise, will always find ways to use technology in ways it wasn’t intended.

However, what happens to creativity moving forwards? Are we at the point of losing creativity if AI can simply churn out something without human input? Haven’t we always reused, repurposed and repackaged by standing on the shoulders of the giants that came before us? The initial fallout from automation hit not creative roles but manufacturing, where machines and robotics were able to take over many repetitive tasks more efficiently and effectively. They also work 24/7, unlike humans, and, as was to be expected, corporations put them to full use in the chase for cost reduction and higher productivity. But it was unskilled, lower-paid labour being impacted then. Today, the rise of multiple AI platforms is starting to directly impact the skilled and creative jobs market – mid-tier and above, the traditional ‘white-collar worker’.

The first industrial revolution is often cited as proof that humanity can overcome great change, in the shift from farming to industrialisation. What is generally not mentioned is that the industrial revolution took nearly 100 years for people to adjust to, whereas AI will bring about change in the next 5-10 years. In a piece written for the Financial Post, David Rosenberg and Julia Wendling state: “Goldman Sachs has estimated one-quarter of all tasks will be exposed to AI takeover, putting 300 million full-time positions globally at risk of automation. For the U.S. specifically, the prediction is that 63 per cent of the corporate sector will see almost half its workload affected by AI”. The typical response is that the new age will produce new jobs to fill the losses. But that only happens IF the timeframe is manageable. Many more people will be out of work before new roles come along to backfill the lost ones, if they do at all, as we learn to do more with fewer people yet again. Sam Altman, the “was, then wasn’t, then was again” CEO of OpenAI, has mentioned his concern about this very point in numerous interviews.

When AI was first developed, all the possible impacts were unknown, but even in such a short space of time we are now beginning to see some ‘known’ impacts emerge – relating not only to what is to come, but to what has already happened. When using ChatGPT we don’t think of the human cost of training a Large Language Model (LLM). It has been widely reported that Kenyan employees of Sama, the San Francisco-based data labelling company that OpenAI (the creators of ChatGPT) partnered with to train the platform, are filing lawsuits over the exploitative conditions they faced reviewing the content ChatGPT now relies on. They have documented the impact on their mental health of what they were exposed to while creating the guardrails for the system before its general release. These lawsuits reveal the psychological trauma caused by constantly filtering inappropriate content, with little to no concern for the moderators’ welfare, and where a promised $12 per hour turned out to be only $2.

All this begs the question: is AI a necessary evil? I have to be honest and say that I can see both sides. The reality is that the genie is out of the bottle and no country wants to fall behind while others embrace the new technology, but we have to be mindful that there is a scary side to all of this as well.

We have to acknowledge that we are in a world where birth rates are falling in most countries except India, and even China has reversed its one-child policy, now actively incentivising families to have more children. Add to this that people are living longer, and we naturally face a future with a much larger older population demographic and a much smaller one following behind. There is therefore a belief that we will need AI to fill the gaps created by this shift in population composition, and that there will be an economic need to sustain and expand productivity when there are no longer enough people to occupy roles. An infill by AI to generate wealth would enable us to continue functioning as a society, regardless of the impact it has in other areas.

Will it create new roles? That remains to be seen, but the generation of children starting pre-school now is likely to join a completely different-looking workforce than those finishing school and higher education today. Many careers will no longer exist, university courses will have to change significantly, and the future is one we don’t yet know. All we do know is that change is happening at an ever-increasing rate. We just need to keep monitoring it, weighing up the benefits without ignoring the downsides.

Richard Hilton

Senior Presale Solutions Consultant - I am a motivated professional with over 37 years of experience in the IT and telecommunications arena, with extensive managerial experience of IT departments and consultant engineering teams, and a foundation of professional experience across many market verticals in the public and private sectors.
