The National Bureau of Economic Research has published a new paper from MIT’s superstar economist Daron Acemoglu, which attempts to pooh-pooh AI dreams like a productivity renaissance, supercharged growth and reduced inequality.

At this point it almost feels like heresy to say that AI won’t revolutionise everything. A year ago Goldman Sachs economists estimated that AI would increase annual global GDP by 7 per cent over 10 years — or almost $7tn in dollar terms.

Since then Goldman’s forecast has come to look almost sober, with even the IMF predicting that AI “has the potential to reshape the global economy”. FTAV’s personal favourite is ARK’s forecast that AI will help global GDP growth accelerate to 7 per cent a year. 🕺

Professor Acemoglu — a probable future Nobel Memorial laureate — is taking the other side. Alphaville’s emphasis below:

I estimate that [total factor productivity] effects from AI advances within the next 10 years will be modest — an upper bound that does not take into account the distinction between hard and easy tasks would be about a 0.66% increase in total within 10 years, or about a 0.064% increase in annual TFP growth. When the presence of hard tasks among those that will be exposed to AI is recognized, this upper bound drops to about 0.53%. GDP effects will be somewhat larger than this because automation and task complementarities will also lead to greater investment. But my calculations suggest that the GDP boost within the next 10 years should also be modest, in the range of 0.93%–1.16% over 10 years in total, provided that the investment increase resulting from AI is modest, and in the range of 1.4%–1.56% in total, if there is a large investment boom.
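For readers who want to sanity-check the compounding, here is a quick sketch of the arithmetic that turns a cumulative 10-year gain into an implied annual growth rate. (The function name is ours, not the paper’s; small differences from Acemoglu’s quoted annual figure come down to rounding in the underlying estimates.)

```python
def annualise(total_gain: float, years: int) -> float:
    """Implied constant annual growth rate for a given cumulative gain."""
    return (1 + total_gain) ** (1 / years) - 1

# Acemoglu's upper-bound TFP effect: 0.66% in total over 10 years
print(f"{annualise(0.0066, 10):.4%}")  # → 0.0658%

# His modest-investment GDP range: 0.93%-1.16% in total over 10 years
for gain in (0.0093, 0.0116):
    print(f"{annualise(gain, 10):.4%}")
```

Either way you slice it: per-year effects well under a tenth of a percentage point.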

As Acemoglu says, that’s “modest but still far from trivial”. But he also notes that we need to take into account that some of the most common AI use cases are actively bad: deepfakes, for instance.

Fighting those may boost growth in the same way that rebuilding a hurricane-ravaged town boosts growth, but it still detracts from overall welfare. Alphaville’s emphasis below.

. . . When we incorporate the possibility that new tasks generated by AI may be manipulative, the impact on welfare can be even smaller. Based on numbers from Bursztyn et al. (2023), which pertain to the negative effects of AI powered social media, I provide an illustrative calculation for social media, digital ads and IT defense-attack spending. These could add to GDP by as much as 2%, but if we apply the numbers from Bursztyn et al. (2023), their impact on welfare may be −0.72%. This discussion suggests that it is important to consider the potential negative implications of AI-generated new tasks and products on welfare.

Acemoglu is also sceptical that AI will significantly worsen or improve inequality. On the whole, his work suggests that “low-education women may experience small wage declines, overall between-group inequality may increase slightly, and the gap between capital and labour income is likely to widen further”.

The scepticism is interesting, as Acemoglu is one-third of an influential trio of MIT economists spearheading the university’s ponderously named Shaping The Future Of Work initiative.

The professor does stress that the potential of generative AI is great, but only if it is used principally to give people better, more reliable information, rather than to produce hallucination-prone chatbots and mechanically reconstituted images.

My assessment is that there are indeed much bigger gains to be had from generative AI, which is a promising technology, but these gains will remain elusive unless there is a fundamental reorientation of the industry, including perhaps a major change in the architecture of the most common generative AI models, such as the LLMs, in order to focus on reliable information that can increase the marginal productivity of different kinds of workers, rather than prioritizing the development of general human-like conversational tools. The general purpose nature of the current approach to generative AI could be ill-suited for providing such reliable information.

To put it simply, it remains an open question whether we need foundation models (or the current kind of LLMs) that can engage in human-like conversations and write Shakespearean sonnets if what we want is reliable information useful for educators, healthcare professionals, electricians, plumbers and other craft workers.

Further reading:
The manicure economy (FTAV)
Year-ahead investment outlook note or ChatGPT? Take the quiz (FTAV)
Generative AI will be great for generative AI consultants (FTAV)

Copyright The Financial Times Limited 2024. All rights reserved.