GenAI: More Netscape Moment Questions

by Jean-Louis Gassée
Published in Monday Note · Feb 27, 2023

Generative AI does more than entertain and irritate. It also poses serious philosophical and business questions.

Late last January I sat down with knowledgeable but healthily skeptical drinking companions (coffee or green tea) to see what they thought of the importance of ChatGPT and, more broadly, Generative AI (GenAI). The result was ChatGPT: Netscape Moment Or Nothing Really Original.

After mulling the opinions that my fellow drinkers offered, I have joined the “Yes, This Is a Netscape Moment” camp. Similar to the time when the Netscape browser opened the Internet to The Rest of Us — and to a new breed of entrepreneurs — GenAI opens a new era of immense possibilities. In my lifetime, I’ve witnessed four epochal tech transitions: Semiconductors, personal computers, the browser, smartphones. I think we’re soon going to add GenAI to that list. I felt a thrill at the prospect — and I’m not alone.

(Out of curiosity, I googled “Netscape Moment” and saw that the phrase had been (ab)used before, as in Crypto Is on the Cusp of a ‘Netscape Moment’.)

But as legal sages have recommended over the years, we must be able to argue both sides of the case. The current fervor for GenAI, and specifically the number of articles written about ChatGPT in just a few weeks, makes one think that we’re witnessing a modern-day Columbus-discovers-the-new-world adventure. Is it deserved? Perhaps the furor is born out of neediness, out of the narrowing horizons of smartphone market saturation and PC decline. Maybe this is just another instance of tulip bulb mania.

I don’t think so — I’m sticking with my enthusiasm. Too many learned and sober voices from so many diverse viewpoints show interest and provide helpful analyses. I don’t think we’re hearing the stampede of visionary sheep.

For today, we’ll briefly dwell on BinGPT. The nickname, devised by the literate and articulate analyst Benedict Evans, is a clever christening of the mating of Microsoft’s often forgotten Bing search engine with the still immature fruit of OpenAI’s ChatGPT. The offspring is so immature that Google pretends not to know about it.

Amid helpful, interesting BinGPT dialogues, observers immediately took note of the chatbot’s “hallucinations”, a new term of art that denotes an entirely made-up falsehood. As Evans (whose résumé the chatbot botched) tweets:

“I am hearing disturbing rumours that an AI system trained on the way that people behave on the Internet is pedantic, petty, starts fights over trivial things, and has a strong tendency to bullshit.”

One such instance of unpleasant behavior is documented, at great length, in a Kevin Roose NY Times February 16 article titled Bing’s A.I. Chat: ‘I Want to Be Alive. 😈’. If you haven’t already, I strongly encourage you to read the full two-hour dialog. It is much more disquieting than merely impersonating George Santos and his delirious résumé lines.

Such errant behavior isn’t unique. Related examples have been cited by John Uleis (@MovingToTheSun) and by The Information’s Chris Stokel-Walker in a report titled My Week of Being Gaslit and Lied to by the New Bing.

As a result, Microsoft promptly limited BinGPT conversations to five exchanges, a cap more recently raised to six. One is tempted to wonder if Microsoft was sufficiently prepared, if it had tested its chatbot thoroughly enough. But, as an article from The Verge explains, Microsoft has been secretly testing its Bing chatbot ‘Sydney’ for years. That might leave us with the explanation that Microsoft needed to front-run other industry events, such as Google’s definitely unprepared Bard announcement.

Other, broader questions need our attention.

The Wall Street Journal just published a must-read column authored by former Secretary of State Henry Kissinger, former Google CEO Eric Schmidt, and Dan Huttenlocher, dean of MIT’s Schwarzman College of Computing. Titled ChatGPT Heralds an Intellectual Revolution, the essay starts strongly [as always, edits and emphasis mine]:

“A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence.”

Later, the authors lay out a core GenAI problem:

“By what process the learning machine stores its knowledge, distills it and retrieves it remains similarly unknown. Whether that process will ever be discovered, the mystery associated with machine learning will challenge human cognition for the indefinite future.”

We are facing a philosophy of science (a.k.a. epistemology) problem, one that currently seems intractable. About five years ago, in a very early discussion of AI, I was rebuffed by an expert for linking to an MIT Technology Review article titled The Dark Secret at the Heart of AI. As the sage put it: “No one really knows how the most advanced algorithms do what they do. That could be a problem.”

Yes, it is a problem. We train GenAI services on billions and billions of knowledge fragments gleaned from books, articles, and bloviation around the world in an impenetrable, unexplainable process. We can’t explain what’s going on because we don’t know and don’t seem to have a path to such knowledge.

At least now we have world-class sages asking fundamental questions and pointing out grave dangers. For example, should we be surprised to see BinGPT misbehave after ingesting web content polluted by anger, falsehoods, and organized misinformation? Once upon a time, we could check sources “in the literature”; GenAI throws us into a world where we can’t.

At the moment, we really don’t know how to protect ourselves:

“For now, we have a novel and spectacular achievement [generative AI] that stands as a glory to the human mind. We have not yet evolved a destination for it. As we become Homo technicus, we hold an imperative to define the purpose of our species. It is up to us to provide the real answers.”

We also have more mundane concerns: Money. Today, a Google query is said to cost a fifth of a cent, but a ChatGPT answer would allegedly cost ten times as much. If these numbers are anywhere close to the mark, how do Microsoft and others make money running GenAI products? In an Ars Technica post, Ron Amadeo suggests that Microsoft isn’t worried because Bing is such a small player:

“Part of the reason Microsoft is so eager to rock the search engine boat is that most market share estimates put Bing at only about 3 percent of the worldwide search market, while Google is around 93 percent. Search is a primary business for Google in a way that Microsoft doesn’t have to worry about…”
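To make the stakes concrete, here is a minimal back-of-envelope sketch in Python. The per-query costs are the figures cited above; the one-billion-queries-a-day volume is a purely hypothetical assumption for illustration, not a reported number.

```python
# Back-of-envelope comparison of serving costs, using the per-query figures
# cited above. The daily query volume is a hypothetical assumption, chosen
# only to illustrate how quickly the gap compounds at search scale.

COST_PER_SEARCH = 0.002          # ~a fifth of a cent per conventional search (USD)
COST_PER_GENAI_ANSWER = 0.02     # ~ten times as much per GenAI-style answer (USD)
QUERIES_PER_DAY = 1_000_000_000  # hypothetical volume, for illustration only

search_bill = COST_PER_SEARCH * QUERIES_PER_DAY
genai_bill = COST_PER_GENAI_ANSWER * QUERIES_PER_DAY

print(f"Conventional search: ${search_bill:,.0f} per day")
print(f"GenAI answers:       ${genai_bill:,.0f} per day")
print(f"Added cost:          ${genai_bill - search_bill:,.0f} per day")
```

At that hypothetical volume, the switch adds roughly $18 million a day in serving costs, which is why the question weighs far more heavily on a 93% incumbent than on a 3% challenger.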

But, for Google…what prices for what services?

Such questions don’t seem to discourage venture investors who, according to Forbes, see GenAI as a new frontier, with investments in the sector growing by 425% since 2020. That’s something worth keeping an eye on as more announcements, especially Google’s, keep coming.

jlg@gassee.com
