43
$\begingroup$

Lisp is often claimed to be one of the "[original] favored programming language[s] for artificial intelligence (AI) research" (source, additional reference, cross-site related question that focuses more on the AI part).

What language design features made Lisp apt for the task of AI research?

$\endgroup$
3
  • 3
    $\begingroup$ I'm a stranger to this stack, but I did play with the source code to ELIZA in Lisp in the 1980s just for fun. I hope someone here more knowledgeable than I am provides an answer that highlights Lisp's strengths: unlimited recursion without creating a stack overflow, and its lack of support for looping, at least in the version I was exposed to, AlphaLisp for the Alpha Micro platform on an S-100 WD16 CPU. $\endgroup$
    – MTA
    Commented Jul 19, 2023 at 13:32
  • $\begingroup$ @MTA WD16 on S100? Wow, those were rare! In fact I don't think I ever actually saw one. $\endgroup$ Commented Jul 20, 2023 at 1:07
  • $\begingroup$ @MarkRansom Yes indeed! Eight users on RS-232 terminals bank swapping 32KB per user at a time over a 32KB O/S. Six 14" hard drive platters of 15 MB each with one removable. The entire O/S and all software fit on one platter. Company owner picked a sexy-looking terminal but there was no device driver for it so I learned assembler and wrote one. Interesting time. More: en.wikipedia.org/wiki/WD16 $\endgroup$
    – MTA
    Commented Jul 20, 2023 at 1:52

6 Answers

64
$\begingroup$

By modern standards: absolutely nothing. The field that today we call “artificial intelligence” has almost no connection to the artificial intelligence research of the 1960s. It involves different ideas with a completely different heritage. In order to understand what Lisp could possibly have to do with AI, you must transport yourself back in time to an earlier era with an idealistic, somewhat more naïve vision of what computing would achieve.

First-wave AI research was a catastrophic failure

In the late 1950s, computers were new and exciting, and people were only just beginning to explore what they could do. Naturally, some researchers decided to investigate artificial intelligence, an idea that was certainly present in the collective sci-fi subconscious at the time. However, being research, nobody was really sure what exactly real AI systems might look like.

From this formative time emerged two broad schools of thought: classical AI and connectionist AI. If any part of modern AI can be said to descend from the 60s’ work, it unambiguously descends from the connectionist school, which was based around the now-familiar idea that complexity could organically emerge from small, neuron-like units of computation, then known as perceptrons. Unfortunately for the connectionists, computers of the time were simply nowhere near powerful enough to explore this idea in practice, so it wouldn’t make the leap from theory to practice until the early 1980s.

In the meantime, the classicists were taking a completely different, more immediately practicable approach. Rather than start from tiny units of computation, this branch of AI research concerned itself with the design of systems that could perform much higher-level symbolic manipulation based on the emerging theory of formal languages. It’s easy to see how the ability to mechanically understand language—of almost any kind—could have been seen as a vehicle towards some form of intelligence at the time. Natural language was something machines had not yet been able to puzzle out, but even human languages have a great deal of syntactic structure. Therefore, it seemed like it ought to be possible to write programs that can recognize and produce it, and perhaps this would be enough to achieve intelligence.

This project initially produced some promising results. Some of its most famous artifacts are ELIZA and SHRDLU, as well as the concept of so-called “expert systems” (the latter of which retain some niche relevance today). All of these were designed as symbol-manipulation systems, and the language formation rules were explicitly written by humans. These researchers therefore needed programming languages in which to express large quantities of symbol and syntax tree manipulation rules, and some of these rules utilized concepts like unification or backtracking search. Necessity is the mother of invention, and these necessities begat Lisp and Prolog, which each baked many of the researchers’ essential tasks into their core design.

Winter comes for the classicists

As history shows, the classical approach did not end up yielding systems that today we would ever consider “artificial intelligence”. When projects that imminently promised machine translation systems failed to produce working results, wide-eyed hopes began to sour, and their failure is often considered the beginning of the end for the classicists. Where there had previously been nearly boundless hype (and therefore nearly boundless funding), the young field of artificial intelligence research soon found that funding was drying up at an alarming rate.

The failure was so severe and so complete that the very concept of AI research became something of a dirty word for almost a decade afterwards. This rather grim era is now known as the first AI winter (and there would be a second winter two decades later, though it is not relevant to our story). As funding evaporated, so too did the research projects that depended on it, and researchers found themselves in the uncomfortable position of needing to find some way to pick up the pieces of their careers and carry on.

The classicists rebrand to save their careers

Sadly, it turned out that the things early AI researchers were building were not very useful for constructing artificial intelligence. However, faced with a dead end, these researchers were naturally compelled to step back and take a broader perspective on the thing they’d spent the past decade studying: mechanical recognition and translation of formal languages. Forced to switch careers or face certain irrelevance, many classical AI researchers realized that perhaps they didn’t need to abandon their research entirely if they could simply find some new way to market it.

Enter: the newborn formal theory of programming languages.

These researchers diligently filed all the “artificial intelligence” labels off of their projects and got back to work studying just what it was that they’d all spent so many long hours hacking on. As it turns out, in their pursuit of artificial intelligence, they’d invented an awful lot of incredibly useful programming language features and concepts, from the if-then-else conditional expression to first-class functions to inductive data structures to garbage collectors. Today, these ideas may seem obvious, but in the era of FORTRAN, it took a radically different perspective of what computing could be to be able to produce them.

So although the modern field of artificial intelligence belongs to an altogether different family tree from the one that begat Lisp, the classical school of thought did not die out. In fact, its direct descendant remains alive and well to this day; that field just happens to be called programming languages.

$\endgroup$
12
  • 28
    $\begingroup$ I think viewing the first-wave as a "catastrophic failure" rather than useful and necessary exploration is a sad view of science and technology. Science and technology need negative results as well as positive. $\endgroup$ Commented Jul 19, 2023 at 12:20
  • 11
    $\begingroup$ Yes, you could tone down the headlines a bit. As you write yourself, the classicists' research actually yielded a lot of highly useful stuff, just not AI. (Stuff that IMHO is much better than AI...) $\endgroup$ Commented Jul 19, 2023 at 12:23
  • 12
    $\begingroup$ Also I am not sure you've answered the question, as you have not mentioned any of LISP's significant features. For example, meta-programming and the ability to create DSLs were very useful features in the early days of AI. $\endgroup$ Commented Jul 19, 2023 at 12:56
  • 8
    $\begingroup$ I would not say the first wave was a "catastrophic failure", as it gave us CAS (such as MACSYMA). $\endgroup$
    – SK-logic
    Commented Jul 19, 2023 at 15:38
  • 9
    $\begingroup$ Folks: I understand that you do not like my headline writing. But I intend to keep it. The project really was a catastrophic failure in the sense the answer describes, in that it catastrophically failed to deliver on its research goals. But as the answer also describes, this does not mean it was not useful. Many failures are very useful. Let’s try not to argue about this further in the comments, please. $\endgroup$
    – Alexis King
    Commented Jul 19, 2023 at 15:40
23
$\begingroup$

While I broadly agree with the answer by @Alexis, I disagree with a couple of points. It also does not attempt to relate features of LISP to their uses in early AI experiments.

Despite the modern hype centring on connectionism, and large language models* in particular, modern AI is still a broad church.

For instance, the Wikipedia page mentions:

  • Reasoning, problem-solving
  • Knowledge representation
  • Learning
  • Natural language processing
  • Search
  • Logic

You can get similar lists from textbooks on AI.

Connectionist models now dominate NLP and learning, but they were a long way off when LISP was dominant in AI.

So what features of LISP made it good for these kinds of problems?

Homoiconicity

Program/data equivalence is a useful feature for meta-programming. A solution to a problem can be expressed as a LISP program, possibly as a DSL.

A great example of this is Koza's genetic programming, where the genomes are based directly on Lisp S-expressions.
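
To make this concrete, here is a minimal sketch of code-as-data in the spirit of Koza's work (the genome and helper names are invented for illustration, not taken from his system): the candidate program is an ordinary Lisp list that can be inspected, mutated, and finally evaluated.

    ;; The genome is an ordinary Lisp list, so it can be manipulated as data.
    (defparameter *genome* '(+ (* x x) (* 2 x) 1))

    ;; Evaluate the genome for a given X by substituting and calling EVAL.
    (defun run-genome (genome x)
      (eval (subst x 'x genome)))

    ;; A toy "mutation": plain list surgery on the program itself.
    (defun mutate (genome)
      (subst '* '+ genome))

    ;; (run-genome *genome* 3)          => 16
    ;; (run-genome (mutate *genome*) 3) => 54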

Macros

LISP has a powerful macro system, making it easier to decompose problems and also to express them as DSLs.
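
As a hedged sketch of the kind of DSL this enables (DEFRULE and the *rules* table here are invented for illustration, not part of any standard library): each rule reads like a declaration but expands into an ordinary closure.

    (defvar *rules* '())

    ;; DEFRULE hides the plumbing: each rule compiles to an ordinary closure.
    (defmacro defrule (name (&rest vars) condition conclusion)
      `(push (list ',name (lambda ,vars (when ,condition ,conclusion)))
             *rules*))

    (defrule mortal (x)
      (member x '(socrates plato))
      (list x 'is 'mortal))

    ;; (funcall (second (first *rules*)) 'socrates) => (SOCRATES IS MORTAL)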

Abstraction

LISP is (by some measures) a high-level language. You do not have to worry so much about machine-level issues; for example, garbage collection reduced the need to think about memory management.

Rebranding

The other answer is right that lots of early experiments in AI have been rebranded as just software engineering, and some of the things LISP was good at have moved into language design.

Parsing algorithms aren't really considered AI anymore. Sorting isn't either. Some kinds of search are and others are not.

The AM program used heuristically guided best-first search.

But lots of these algorithms were pioneered in the days of LISP.

Actually an answer to your linked question expresses this very well.

*I feel the term Large Language Model is almost pejorative, as these models include a lot of knowledge and so go beyond a 'mere' language model, though they are incomplete in many subtle ways.

$\endgroup$
14
$\begingroup$

(To the other fine answers I would add:)

LISP was very plastic. Everything you coded and used looked exactly like the base language. New data structure definitions and builders, new control structures, everything. Many AI projects back in the day looked like what we would call today "embedded DSLs" - totally seamless integration of whatever they were doing right into the base language, producing, in effect, an entirely new language ideally suited to your research goals. Things like PLANNER and MDL and OPS5 are still known to us today as milestones in AI research of the time but there were many many more.

Control structures especially were much experimented with. Coroutines, backtracking, nondeterministic choice, continuations, multi value return, exceptional return, different argument passing semantics, and more, were all experimented with as first-class baked-into-the-language facilities.

Contrast that with how you would write your new control structure in your current language. Does it look exactly the way your for-loops, your switch statements, and so on look? Do you get to add your own keywords and your own funky syntax (funky: for (int i = 0; i < 10; ++i) ..., in LISP: loop and series packages), your own constraints (concept Swappable = requires(T&& t, U&& u), in LISP: see G.L.Steele Jr's doctoral thesis), your own fully integrated class system (Flavors)?
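
For a concrete, if minimal, sketch of that plasticity (assuming Common Lisp; WHILE here is our own invention, not a built-in): once defined, the new construct is used exactly like a native one.

    ;; Define a new control structure; the macro expands into standard DO.
    (defmacro while (test &body body)
      `(do () ((not ,test)) ,@body))

    ;; Used exactly as if it had shipped with the language:
    (let ((i 0))
      (while (< i 10)
        (print i)
        (incf i)))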

And your current language is (nearly certainly) a modern language, that's had the benefit of the last 60+ years of language development - much of it based on LISP experiences, as mentioned in other answers.

Now imagine experimenting with your new control structure in FORTRAN, BASIC, or ALGOL.

Easy first-class DSLs.

Plasticity: That's another thing LISP brought to the AI movement.

$\endgroup$
13
$\begingroup$

In many ways, LISP was the first true “high-level” language, pioneering features that we often take for granted today but that weren't common back in 1958, when LISP was competing against the likes of FORTRAN and COBOL. In particular:

  • metaprogramming (thanks to LISP's "homoiconicity" allowing treating code as data)
  • dynamic typing
  • automatic garbage collection
  • user-defined functions (contrasted with, say, BASIC's GOSUB)
  • recursion (essential for search and pattern-matching algorithms)

So, if you were an academic type who wanted to spend your time studying algorithms (for pretty much anything other than simple calculations) instead of fiddling around with low-level details like memory management or manual call stacks to simulate recursion, LISP was the language to use. Even if LISP isn't ideal (by modern standards) for AI research, back then it was pretty much the only choice.
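
For a flavour of what that looked like in practice, here is a minimal sketch (not a historical program) of the recursive, symbolic style that was natural in LISP but painful in the FORTRAN of the era: differentiating a simple arithmetic expression.

    ;; Differentiate an expression built from numbers, a variable, + and *.
    (defun deriv (expr var)
      (cond ((numberp expr) 0)
            ((eq expr var) 1)
            ((eq (first expr) '+)
             (list '+ (deriv (second expr) var) (deriv (third expr) var)))
            ((eq (first expr) '*)
             (list '+
                   (list '* (second expr) (deriv (third expr) var))
                   (list '* (deriv (second expr) var) (third expr))))))

    ;; (deriv '(+ (* x x) x) 'x) => (+ (+ (* X 1) (* 1 X)) 1)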

On the social level, it helped that John McCarthy, who coined the very phrase “artificial intelligence”, was involved in LISP's development. And then once AI systems were implemented in LISP, it created a self-reinforcing tradition.

$\endgroup$
2
  • 3
    $\begingroup$ Right. Though (in line with Alexis King's answer) it should be remarked that "modern AI" not only doesn't need Lisp to get these features, it doesn't need the features at all! It's quite possible to implement a convolutional neural network with backprop gradients in FORTRAN II (in fact it's more suitable for this than Lisp). The reason this wasn't useful back then is not the language but the hardware and data (un-)available. And the use of Python now (which happens to have all of these features) may have its reasons, but not that it's inherently particularly suited for neural networks. $\endgroup$ Commented Jul 20, 2023 at 18:01
  • 10
    $\begingroup$ I will continue to fight a losing battle against the tide by reminding people that modern AI is not just neural networks 😠. Modern AI still needs programmers not just data scientists! $\endgroup$ Commented Jul 20, 2023 at 19:33
4
$\begingroup$

To answer the original question, I feel the need to ask a number of related questions and then attempt to give some answers to those. I will then get to the essence of the OP's question, which is very interesting, at the end.

Question: Let's start with: What is AI research?

AI research might well have started with Alan Turing, even before the first computer had ever been built. Turing asked questions of what tasks a machine could be made to do that were considered to be the province of human thought. Early AI research included aspects of search, for example solving complex puzzles, and solving mathematical problems in an automated manner. (Later playing checkers and chess would be examples from this line of thinking.) Turing, unbeknownst to most of us, was working on complex search algorithms during World War II before the first computer had ever been built, in order to improve decoding of the German "Enigma" encryption. Other aspects included parsing and of course processing of human written language (in computer coded form).[1][Other ref needed?]

Turing speculated not only on algorithmic means to solve problems that humans could solve, but also speculated on connections of nodes similar to what we call "neural networks" today.

We note that depending on who one asks, the meaning of "AI Research" has evolved considerably over time. I will speak about some of the problems with the original methods in a bit, because the assumptions that were somewhat incorrect in older research were important.

Question: What does early AI research have to do with modern computing?

LISP is apparently the second-oldest computer language with any continuing usage today (FORTRAN being the oldest). It was developed to aid in what was considered AI research at the time.[2]

"Lisp became the 'pathfinder' for many ideas which found application in the modern programming languages: tree-like structures, dynamic typing, higher-order functions and many others."[2]

The importance of this will become apparent below; I note that the Python language[3]:

"...uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. .....

"Its design offers some support for functional programming in the Lisp tradition. It has filter, map and reduce functions; list comprehensions, dictionaries, sets, and generator expressions."

One of the most important early developments in LISP was interactive debugging: when a failure occurred, you could interactively investigate the state of the computation. This was possible primarily because LISP was largely an interpreted language, and you could invoke LISP functions to examine the data right at the point where execution stopped. I also note that LISP was an example of an interpreted language that later became compiled, and it can combine aspects of both, which is important for later.

Many modern elements of computer languages were developed in the AI programming communities. As the nature of AI computing changed and features became available in other languages, the usefulness of LISP itself as compared to other languages decreased.

Question: What were early assumptions that later AI research improved on?

One of the most important assumptions was that "crisp logic" based search could solve some of the most important problems. I will note that in the early days there were essentially two entirely separate camps of research in AI: symbolic/programming approaches, and "neural network" approaches. These two communities pretty much did not talk to each other. [Ref needed]

The main issue is that completely exacting, crisp logic and search is so "brittle" that finding answers in large bodies of stored information would most often fail: very small differences could not be resolved because they did not exactly match the same form or representation. More "fuzzy" representations and matching processes work better. Later research, in both "AI" and "Machine Learning", often combines or utilizes concepts that hark back to the "neural network" approach, which allows for matching that is less exact to the pattern of interest and, furthermore, can form its own matching links based on data fed to the system rather than being preprogrammed or based on human-coded data structures. In essence, the "neural network" aspects and the functional/procedural aspects of AI research are no longer separate communities. [Ref needed]

Question: What computer language is the most commonly cited example of AI today, ChatGPT, written in?

One of the major languages ChatGPT is written in is "Python" (and several derivatives and supporting or related packages attached to the language).[4]

I bring this up to show that the history of LISP feeds into the modern tools needed for today's AI implementations. The LISP language itself no longer has the relevance it once had in modern approaches, because the features that were most helpful are now implemented in other languages and tools, and LISP has a readability problem with all the nested parentheses. Even features of early tools like the EMACS editor, which helped format the language to be readable, have been implemented in modern development environments for a plethora of languages.

Now to the OP's question: What language design features made Lisp apt for the task of AI research?

First, it was easy to work with parts of a program because of the interpreted and immediate, online nature of the language. (By "online" I mean that even when run on a mainframe of the day, the user would have a time-sharing "terminal" or other means to work almost as though one had a personal computer, as we do today.) Other useful programming aspects of LISP were already explored in the questions above.

The language naturally parsed words into lists ("LISt Processing"). Interpreting a list of words connected immediately to the dictionary of those words, so the property lists could be searched and linked directly to the stored data associated with each word. Recursion was easily supported, which matched well with search operations that involved recursive pattern matching of one sort or another (noting these were almost always "procedural" sequences, even if implemented in "functional" form). Since the data structures could be stored in the same form that the language naturally used for code, the data/algorithm combination was easily supported.
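
A minimal sketch of that property-list style, assuming Common Lisp (the particular symbols and properties are invented examples):

    ;; Knowledge is attached directly to the symbols that name it.
    (setf (get 'block1 'color) 'red
          (get 'block1 'on)    'block2
          (get 'block2 'on)    'table)

    ;; Recursive search follows the same list structure as the data:
    ;; walk the ON links down to whatever ultimately supports X.
    (defun support-chain (x)
      (let ((below (get x 'on)))
        (if below
            (cons below (support-chain below))
            '())))

    ;; (get 'block1 'color)    => RED
    ;; (support-chain 'block1) => (BLOCK2 TABLE)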

Other aspects will be found in the rest of this discussion of the other questions I ask and partly answer.

Please note my references were the first hits on each search; I'm sure someone can do more extensive research on references.

[1] https://news.harvard.edu/gazette/story/2012/09/alan-turing-at-100/

[2] https://typeable.io/blog/2021-10-04-lisp-usage.html

[3] https://en.wikipedia.org/wiki/Python_(programming_language)

[4] https://botpress.com/blog/list-of-languages-supported-by-chatgpt

$\endgroup$
-2
$\begingroup$

A self-modifying program seemed like a good model for the brain.[1][2] No other language could do that.
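
As a hedged illustration of what "self-modifying" means here (a sketch, not drawn from the references): the program's behaviour is held as ordinary list data, which the running program can rewrite and re-evaluate.

    ;; Behaviour stored as data: a lambda expression held in a variable.
    (defvar *behaviour* '(lambda (x) (* x 2)))

    (defun respond (x)
      (funcall (eval *behaviour*) x))

    ;; The program rewrites its own behaviour with ordinary list surgery.
    (setf *behaviour* (subst '+ '* *behaviour*))
    ;; (respond 3) now returns 5 instead of 6.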


[1] https://ieeexplore.ieee.org/document/8756913

[2] https://irp.cdn-website.com/1924a8c5/files/uploaded/the_elements_of_artificial_intelligence_using_common_lisp.pdf

$\endgroup$
4
  • 4
    $\begingroup$ As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center. $\endgroup$ Commented Jul 22, 2023 at 7:47
  • $\begingroup$ "No other language could do that" Citation Needed. Many languages are capable of self-modification, even outside the functional category. $\endgroup$
    – mousetail
    Commented Jul 22, 2023 at 8:00
  • 1
    $\begingroup$ @mousetail We are talking about a time when Cobol and Fortran were the alternatives. $\endgroup$ Commented Jul 22, 2023 at 11:19
  • $\begingroup$ @Community Which part of my answer is unclear, robot? $\endgroup$ Commented Jul 22, 2023 at 11:20
