The economics of transformative AI


2022 saw dramatic advances in AI, particularly in large language models, with progress arriving much faster than expected. For the past couple of months, OpenAI’s chatbot ChatGPT has captured the world’s imagination, demonstrating its impressive writing skills in countless articles and even outperforming Wharton MBA students on exams. ChatGPT gained more than 100 million users in its first two months of existence, marking the fastest digital product launch in history, and it is now estimated to produce a volume of text every 14 days equivalent to all the printed works of humanity. Its potential is such, and the stakes so high, that Google has decided to launch its own version, Bard, built on its existing large language model LaMDA, which can engage in free-flowing conversations with humans.

Against the backdrop of these developments at the beginning of 2023, I would like to share some facts, thoughts and opinions on the implications of further developments in the field of AI and on what shape the technology might take.

1. Cognitive automation: This is set to be a major trend in 2023 and beyond. As the capabilities of large language models continue to expand, cognitive workers (such as myself) are increasingly at risk of seeing their work automated. This means that economists must abandon the notion that automation only affects routine jobs and that human creativity is somehow, miraculously, immune to it. Economic models must be adapted to keep up with this new reality.
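
One way to make the required adaptation concrete is a sketch in the spirit of standard task-based models of automation (my own illustrative framing, not a framework taken from this post): output aggregates a continuum of tasks, and the automation frontier, long assumed to stop short of non-routine cognitive work, now shifts into that territory.

```latex
% Illustrative task-based production function (a sketch, not the author's
% specific model). Tasks i in [0,1] are aggregated with elasticity sigma;
% tasks below the automation frontier I are performed by capital/AI,
% tasks above it by human labor.
\[
  Y = \left( \int_0^1 y(i)^{\frac{\sigma-1}{\sigma}} \, di \right)^{\frac{\sigma}{\sigma-1}},
  \qquad
  y(i) =
  \begin{cases}
    A_K \, k(i) & \text{if } i \le I \quad \text{(automated tasks)} \\
    A_L \, \ell(i) & \text{if } i > I \quad \text{(human tasks)}
  \end{cases}
\]
% The point above amounts to saying that I, once thought to be capped well
% below 1 by non-routine and creative tasks, keeps rising as language
% models take over cognitive work.
```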

2. Language models, from assistants to tutors: In coming years, cognitive workers will increasingly incorporate large language models into their daily workflows. In the short term, this will make them more productive, since they can outsource countless small tasks to their new digital assistants. Humans will have to focus on their comparative advantage: whereas content creation is increasingly commoditized by large language models, discriminating which content is useful is still something humans are better at. Over time, these digital assistants will increasingly turn into digital tutors, teaching their users new concepts and expanding their horizons. I have recently completed a guide on how cognitive workers can take advantage of large language models. Although I focus on applications to my own field, economics, the lessons apply broadly to all cognitive workers.
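
As a concrete illustration of what outsourcing a small task might look like in practice, here is a minimal Python sketch using the OpenAI client library as it existed in early 2023; the model name, prompt, and parameters are my own illustrative choices rather than anything prescribed in this post.

```python
# Minimal sketch: delegate a small drafting task to a language model and
# leave the judgment call (keep, edit, or discard) to the human.
# Uses the pre-1.0 OpenAI Python client (early 2023); the model and
# parameters are illustrative assumptions.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

notes = (
    "Key points: frontier training compute doubles roughly every six months; "
    "training costs have reached eight-digit dollar amounts; "
    "macro-level effects are not yet visible."
)

response = openai.Completion.create(
    model="text-davinci-003",   # illustrative model choice
    prompt=f"Turn these notes into one short, clear paragraph:\n{notes}",
    max_tokens=200,
    temperature=0.3,
)

draft = response.choices[0].text.strip()
print(draft)  # the human's comparative advantage: deciding whether the draft is useful
```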

3. Exponential growth in compute: Progress in AI continues relentlessly, propelled by a combination of advances in hardware and software as well as ever-growing training budgets. This has resulted in doubling times of about six months for the compute used to train cutting-edge models, much faster than Moore’s Law, a regularity that has held for almost a decade now. The dollar cost of training runs for cutting-edge models is also growing exponentially and is currently in the realm of eight-digit amounts. Since expenditure on compute is growing much faster than the overall economy, an ever-growing share of our economy’s resources is being devoted to compute: the beginning of an AI take-off!
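
To see how stark the difference is, here is a small back-of-the-envelope calculation (my own illustration, using only the doubling times cited above) comparing a six-month doubling time with Moore’s Law over a decade:

```python
# Compare cumulative growth under a six-month doubling time (the trend for
# cutting-edge training compute cited above) with Moore's Law (roughly a
# two-year doubling). Purely illustrative arithmetic.

def growth_factor(years: float, doubling_time_years: float) -> float:
    """Total multiplicative growth over `years` at a fixed doubling time."""
    return 2 ** (years / doubling_time_years)

decade = 10
print(f"Training compute (6-month doubling): ~{growth_factor(decade, 0.5):,.0f}x")  # ~1,048,576x
print(f"Moore's Law (2-year doubling):       ~{growth_factor(decade, 2.0):,.0f}x")  # ~32x
```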

4. Economic growth: Compared to the macroeconomy, investment in AI is still relatively small, and it will take several more rounds of doubling for the macroeconomic effects to be felt. The economy is like a large vessel that takes a long time to turn, so I don’t anticipate that the recent advances will be reflected in investment, productivity and growth numbers at the macro level in 2023. 
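
A rough way to see why several more doublings are needed, with starting and target shares that are purely assumed for illustration rather than figures from this post:

```python
# If spending on cutting-edge AI compute were, say, 0.01% of GDP today and
# doubled every six months, how long until it reached 1% of GDP, a level
# large enough to register in macro statistics? Both shares are assumptions
# chosen only to illustrate the logic.

import math

initial_share = 0.0001      # assumed starting share of GDP (0.01%)
target_share = 0.01         # assumed macro-relevant share of GDP (1%)
doubling_time_years = 0.5   # six-month doubling, as in point 3

doublings = math.log2(target_share / initial_share)
years = doublings * doubling_time_years

print(f"Doublings needed: {doublings:.1f}")   # ~6.6
print(f"Years at this pace: {years:.1f}")     # ~3.3
```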

5. Growing public attention: OpenAI’s ChatGPT, released in November, gave the public a firsthand experience of the power of advanced AI and significantly increased public awareness of the abilities of large language models. (If you haven’t already, try it out to get a glimpse of what the personal assistants of the future will look like.) Technically, the system is just one customization of one large language model (GPT-3.5) among several others that have demonstrated a growing level of general intelligence. But it represents yet another small step toward artificial general intelligence (AGI), i.e., toward AI systems that can perform all cognitive tasks that humans can perform. The next generation of language models is rumored to be released soon and will display even greater general intelligence.

6. Expanding Overton window and AGI governance: As conversations about AGI become more commonplace, the Overton window — the range of ideas and policy options that people view as reasonable to discuss — is rapidly expanding. This will bring AGI governance to the forefront of the public discourse. Important questions include both how AGI will interact with existing governance structures and how AGI itself should be governed. Economists have much to contribute to these topics, providing ample opportunities for cutting-edge research papers and dissertations. Our Oxford Handbook of AI Governance, set to be published in early 2023, will make an influential contribution — and many of the chapters are already available online. 

7. Preparing for the non-existent future of work: In the medium term, our society will have to adjust to a world in which human labor is largely redundant. And this may happen sooner than many expect, perhaps even within the current decade. Cognitive automation is making policies such as a universal basic income more urgent and more appealing. To better understand how to prepare for this non-existent future of work, I recently published a report on the topic. What is more, if cognitive work becomes redundant, we must also re-evaluate the purpose of education at a fundamental level. 

8. Life after cognitive automation: Finally, although it’s crunch time for research on how to govern AGI, I am also taking time to reflect on life after cognitive automation has made me redundant as an economist and to prepare myself mentally so that I’m ready when the time comes.
