10

I have been reading a bit about AI research lately. One major criticism of the current AI boom is that many high-profile papers or projects, including Google's famous AlphaGo, have not yet found any real applications. Most of them are therefore basically "gimmicks" that are impressive in the lab but fail miserably in the real world. What does philosophy of science say about this sort of research?

Disclaimer: my personal opinion about the application prospects of AI and deep learning is not relevant here. I am just curious about philosophers' views on the value of curiosity-driven research.

10
  • 6
    What does common sense tell you?
    – user14511
    Commented May 23, 2020 at 12:34
  • 18
    As much as instant gratification is the religion of our times, even food in a refrigerator does not have an application until it is cooked. And it took over 2000 years to convert steam engines from toys into the engines of the Industrial Revolution. But it was worth the wait, wasn't it? AlphaGo hasn't been around ten years yet. Patience is a virtue, and virtue is its own reward, even aside from deferred benefits.
    – Conifold
    Commented May 23, 2020 at 13:04
  • For a take on how to focus scientific research, related to your question but not a direct response to it, see my Deductive Theory/Inductive Method, recently posted to Academia.edu. CMS
    – user37981
    Commented May 23, 2020 at 13:57
  • 3
    With your argument, you could "prove" that all the research which led to GPS, lasers, nylon, photovoltaic panels or nuclear power plants was nothing but useless gimmicks. Commented May 23, 2020 at 22:43
  • Why climb Everest? Commented May 24, 2020 at 3:22

8 Answers

15

I accept the framing of your question, on basically all levels, since this is a topic of great interest to me.

You will find that the history of physics contains some very nice examples of completely "abstract" mathematical inventions (i.e., having no apparent relation to reality) that were later found to be exactly what was needed to furnish a rigorous framework for a new branch of physics. Something that looks "useless" today may become vitally important fifty or one hundred years from now.

Three examples that come first to mind are 1) Riemann's invention of non-Euclidean geometry, which was later found to be exactly the formalism needed by Einstein to describe curved spacetime in general relativity, 2) Abel's invention of one branch of group theory, which was later discovered to provide a consistent framework for organizing families of subatomic particles, and 3) Noether's discovery of the deep connection between the mathematical concept of symmetry and conservation laws in physics, which provided a tool of great usefulness in the search for the fundamental laws of nature.
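
To make the third example a little more concrete, here is a minimal sketch (my illustration, not part of the original answer) of the textbook classical-mechanics form of Noether's result:

```latex
% Noether's theorem, minimal classical-mechanics form (textbook statement).
% If the Lagrangian L(q, \dot q) is left unchanged by a continuous
% transformation of the coordinates, a conserved quantity follows.
\[
  \delta q = \epsilon\, K(q), \qquad \delta L = 0
  \;\Longrightarrow\;
  Q = \frac{\partial L}{\partial \dot q}\, K(q)
  \quad\text{with}\quad
  \frac{dQ}{dt} = 0 .
\]
% Spatial-translation symmetry gives conservation of momentum and
% rotational symmetry gives angular momentum; invariance under time
% translation similarly yields conservation of energy.
```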

8
  • Some examples could be illustrative.
    – user14511
    Commented May 23, 2020 at 18:13
  • will edit to include two. -NN Commented May 23, 2020 at 19:55
  • @user3451767, third example added. more may be on the way, as they occur to me. Commented May 24, 2020 at 1:57
  • You might add complex analysis and its profound relevance in theoretical physics Commented May 24, 2020 at 9:18
  • I seem to remember prime numbers - originally just an oddity - and their use in cryptography?
    – Tim
    Commented May 24, 2020 at 17:43
7

I object to the framing of your question, on basically every level.

Among the "many high profile papers or projects" obviously comparable to AlphaGo (or, more importantly, AlphaZero) is Watson, the first computer system to defeat the human champions at Jeopardy. From the Wikipedia page on Watson:

"Current and future applications:

  • Healthcare

  • IBM Watson Group

  • Chatterbot

  • Building codes

  • Teaching assistant

  • Weather forecasting

  • Fashion

  • Tax preparation

  • Advertising"

Each with a whole section. Natural language processing has been behind the wave of voice-input devices that dominate tech company agendas.

Look at DeepMind, the company that developed AlphaGo. They list their algorithms (using the same weighted Monte Carlo tree search & convolutional neural nets as AlphaGo) as improving Google's data centre cooling and recommendation engines, and as providing the adaptive battery use & brightness that have been part of the Android operating system since 2018. There are massive applications to healthcare, for things like evaluating scans. And for understanding protein folding, a giant new field in its own right that could open up world-changing catalytic enzymes, for things like turning crop waste into fuel.

This is all long after the giant leaps they made with 2D image processing, and leaps are now being made on 3D and higher-dimensional data using exactly these convolutional neural networks. I am really excited by applications of AI to understanding the physics of higher dimensional spaces.

You didn't ask for a survey of the field, so I won't go on. No offence, but you haven't done the most basic research, like reading about DeepMind, and it sounds like you haven't even heard of Watson.

Artificial general intelligence is likely to dominate the future of life on Earth, as the key technology of at least the next century. Either as independent beings or as hybrids with humans using interfaces like Neuralink, a new kind of being will supplant (unmodified) humans - in philosophy this is discussed as transhumanism. And the supremacy issue has been called 'the gorilla problem' by Stuart Russell & others, by analogy to how gorillas have lost self-determination/autonomy to humans.

On 'blue sky' research, I'd say quantum computing is at the level of usefulness AI was at maybe 40 years ago. It's notable that no one is asking what the point of researching it is - yet actual applications are substantially less clear than they were for AI 40 years ago. There are only a couple of well-known quantum algorithms, and only one that's practical. The current application is pretty much just cryptography, and the only clear, unique future application I have heard of is evolutionary algorithms. But the point is, it will be an accelerating technology: wherever it gets applied it will have compound impacts. There is a known human cognitive inability to grasp the effect of things like compound interest, and exponential and logarithmic change. But in the long run, really nothing else matters as much.

So we already have huge applications, this quickly. Certain technologies we know will define the future, because they have cumulative, compounding effects. And the last point I'd make is from game theory. In WW2 & the Cold War, the Nazis & Soviets respectively had the best tanks, planes, and conventional military advantages at the start. But looking at how radar, decryption, ICBMs and space-based warfare developed, and the costs associated with them, it now looks inevitable that they would lose. And what decided it was the mix of relatively meritocratic universities (the Nazis lost all their Jewish scientists, including many who joined the Manhattan Project; the USSR lost many brilliant minds to purges & politics) and the finances of the bigger trade network. In hot or cold wars, or just geopolitics, we already know AI will be the major tool of propaganda, as modern Russia is proving, and it will control increasingly autonomous drones, as the USA is proving. Having the infrastructure and the minds for AI, even if not currently using them, could well be the deciding factor in winning, or stopping, the wars of the future. So it will decide whose politics, whose worldview, whose philosophy shapes our future.

6
  • 1
    Sorry if you feel offended by how I framed the question. It reflects criticism I have read, not my personal opinion. Commented May 23, 2020 at 20:12
  • 5
    As for the particular example Watson that you give, there is plenty of skepticism about its usefulness out there hn.algolia.com/?q=watson Commented May 23, 2020 at 21:25
    Speech recognition is stupid. What could we possibly use that for? The only reason it was a 'gimmick' for 30 years is because no one could make it work, so no one bothered to think of a way to sell it to you, until there was a way to sell it to you. And I suppose winning Jeopardy and beating man in chess is debatably useful... but I wouldn't turn my back on it.
    – Mazura
    Commented May 24, 2020 at 6:56
  • 1
    Some people would pay quite a lot of money for good propaganda bots. Commented May 26, 2020 at 19:51
  • @Mazura: You sound like someone observing an infant and saying 'what will they ever amount to, they can barely understand words, and they would still be stupid if they could' 😂
    – CriglCragl
    Commented May 27, 2020 at 16:09
6

First of all, using the example of AlphaGo for this question is interesting, as it is research done and funded by a for-profit corporation. Arguably, then, Google's executives do see some value in that research for the future of their company. As many others have answered, you can see it as an investment: there may not be an immediate return on it, but if it can be used for a crucial application, the technological gain would be huge. Together with the knowledge gathered and technologies developed in the process, and the associated marketing value, that is likely to make the investment worth it.

However, you did not ask for the answer a CEO would give to their investors, but for the one a philosopher of science would give. The two views are of course not in contradiction, and the potential for technological advances (be they a direct consequence or a side effect) clearly gives value to the research from both points of view.

However, from the perspective of the philosopher of science, there is a bit more. One can indeed argue that knowledge has an intrinsic value [1]. If we accept that, the question to answer in order to determine whether this research has value becomes: do we learn something from it?

The answer to that question is unambiguously yes.

First, we learn something about the algorithms. The very fact you mention that they "fail miserably in the real world" is knowledge we only gained through testing. Determining what algorithm and architecture works on what kind of problem, and why, is currently at the core of AI science. As far as I know, a robust and widely applicable theoretical framework is still missing, so empirical experiments on problems of increasing difficulty are currently the main course of research.

Secondly, we also learn things about human intelligence in the process. Beating leading human world champions at chess (Deep Blue), Go (AlphaGo), Jeopardy (Watson), StarCraft II (AlphaStar) or Dota 2 (OpenAI Five) is interesting because humans tend to play these games rather instinctively. So looking at what kinds of advances were needed to achieve these victories, besides the increase in computing power that only partially enabled them, provides valuable hints about how our own intelligence may be structured.


[1] Discussing whether this statement is true and why it is rarely used to convince people to fund science is beyond the scope of this answer.

3
  • "all tasks thought to be impossible for computers" Is that really true? Chess & Go are games with complete solutions if you have enough computing power, so in a sense solving them was inevitable. And speech input has been widely expected, it just turned out harder than first thought, like optical object recognition. Any reference for how widespread alleged scepticism was?
    – CriglCragl
    Commented May 27, 2020 at 20:43
  • 1
    @CriglCragl Very good point. After some research, it looks like computer scientists were always very optimistic about AI abilities. I think I was misled because I vaguely remember some philosophers arguing about the difference between human and artificial intelligence, and about what should not be possible for the latter, but I cannot recall who they were. So I edited my answer, reformulated the last paragraph and removed this claim.
    – Kolaru
    Commented Jun 3, 2020 at 10:05
  • 1
    I think you will find a division continues, with philosophers arguing artificial general intelligence will be very difficult, e.g. The Hard Problem Of Consciousness, while computer scientists are invariably very optimistic, e.g. The Human Brain Project. This split in temperament frequently obscures meaningful assessments of progress.
    – CriglCragl
    Commented Jun 3, 2020 at 15:20
5

Sometimes research does not have a direct practical objective. Sometimes research is done simply because people find it interesting; it captivates their imagination and gives them a sense of purpose. I would include space exploration in the latter category. Some could ask: why would you invest billions of dollars in space telescopes when you could spend them battling climate change? I do not think that the answer involves what we call rationality, but rather a more subtle property of human nature. A very simple answer might be: just because we can! From this perspective, science research is not very different from art.

AlphaGo was an experiment to test the limits of current computer technology and push our knowledge about AI a little further. It also inspired the new wave of researchers and gave them a new point of reference. The debate should not be centered around what AlphaGo is, but around what it could become.

1
  • my answer is not linked directly with the field of philosophy of science, but maybe it can add to the debate
    – s.dragos
    Commented May 23, 2020 at 20:23
3

There are two aspects to such things.

The first is that you are discounting the value of pure knowledge. Science (and still more mathematics) studies things that are not of practical value merely to find out the truth of them.

The second is that Edison's opinion of all the filaments that did not work before he invented the incandescent light bulb was that he had not failed with them -- he was trying to find out whether they would work and had successfully determined that they would not.

1
  • 1
    @ablmf May we switch from science to technology, as the next step for anything but pure research for its own sake? Look back at the invention of the briefcase-sized mobile phone for very rich VIPs, and then at the text medium for comms engineers to report to HQ. Compare them to this world where many countries have more mobiles than people, most used by teenagers asking each other what they're up to. 40 years ago, I tried to study a mix of law and computing. Possibly Europe's top law school saw no need. Now look at the furore over copyrights and taxes, to name but two. Commented May 24, 2020 at 22:46
3

Eventually, a use is found! Take number theory, for example: it was not directly applied to any field for most of recorded history, although there were indirect applications. It was not until the 1970s that cryptography began to make extensive use of it, most prominently in the most popular public-key cryptographic algorithm, RSA.
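
As an illustrative sketch (not part of the original answer): a toy RSA key generation and round trip in Python, using the deliberately tiny textbook primes 61 and 53 to show that the whole scheme rests on primes and modular arithmetic. Real keys use primes hundreds of digits long, and `pow(e, -1, phi)` requires Python 3.8+.

```python
# Toy RSA with textbook-sized primes -- illustrative only, utterly insecure.
from math import gcd

p, q = 61, 53             # two small primes (real RSA uses primes with hundreds of digits)
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # Euler's totient of n: 3120

e = 17                    # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)       # private exponent: modular inverse of e mod phi (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)     # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)   # decrypt: c^d mod n
assert recovered == message
```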

2

I am pretty sure they said that about computers and electricity (in the 1800s).

See how few practical applications those, basically parlor tricks, have these days.

5
  • Computers never were seen as impractical: Charles Babbage got a great deal of funding from the British government to build his Analytical Engine because they thought it would be useful. Unfortunately, he never completed a working model because machining wasn't good enough in those days.
    – Peter Shor
    Commented May 25, 2020 at 17:12
  • @PeterShor I think some at the time thought so: "If you put wrong figures into the machine, will the right answers come out?" His response: "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." I wasn't there, but I think the asker was trying to hint that the machine is of no use without lots of effort (programming), so what use is the machine? In hindsight, totally worth it. But I think some at the time didn't think so. Commented May 25, 2020 at 20:14
  • @DarcyThomas: being charitable to the questioner, they were pointing out some of the limits of the machine as compared with a human calculator. The machine doesn't have any opinion whether what you're asking it to do is what you should be asking it to do. Being uncharitable, they flat didn't understand (because they had no frame of reference to understand) that the machine does not "tell you the answer to your problem", it literally just does what you tell it to. So, they actually couldn't tell what happens if you ask the wrong thing. Which is kind of the same problem AI seeks to address... Commented May 26, 2020 at 3:14
  • @DarcyThomas I suspect that person was just asking whether the machine was fake. If you put the wrong numbers in, and you get the same right answers out, then obviously it's just built to always output those answers. Or if you put the wrong numbers in, and no answers come out. Commented May 26, 2020 at 19:56
  • "In the early 1940s, IBM's president, Thomas J Watson, reputedly said: "I think there is a world market for about five computers." " theguardian.com/technology/2008/feb/21/computing.supercomputers
    – CriglCragl
    Commented May 27, 2020 at 20:54
1

For humanity as a whole:

Obviously, if there are no immediate applications, there is still the possibility of future applications. It might take a long time, and for most research we may never find an application, but you don't know for which research (field) this is the case. If you did, it would have immediate (or at least near-future) applications. It takes a whole lot of people playing with fire for humanity to learn to cook.

For individuals:

The thrill of discovering something new. The knowledge that you have furthered humanity's knowledge.

Either of them should be sufficient to motivate research which is looking for more than the next big app.
