170
$\begingroup$

I am trying to think something through, but I don't know whether my basic premise is plausible. Here we go.

Sometimes when I'm talking with people about pure mathematics, they dismiss it because it has no practical utility, but I'd guess that, going by the history of mathematics, the math that is useful today was once pure mathematics (I'm not so sure, but I'd guess that when calculus was invented, it had no practical application).

Also, I guess that the development of pure mathematics is important because it allows us to think about non-intuitive objects before we encounter phenomena that resemble them. With this in mind, can you provide historical examples of pure mathematics becoming "useful"?

$\endgroup$
23
  • 35
    $\begingroup$ Newton invented his fluxions (i.e. his calculus) in order to compute the orbits of celestial objects that move according to his law of gravitation. The foundations of calculus as pure mathematics were not established until the 18th century. $\endgroup$
    – Ron Gordon
    Commented Jan 17, 2013 at 5:42
  • 27
    $\begingroup$ @JavaMan: I think there might be some debate as to whether string theory is useful ... $\endgroup$
    – Henry B.
    Commented Jan 17, 2013 at 5:49
  • 3
    $\begingroup$ @HenryB or whether an application of pure mathematics to pure mathematics is what the OP had in mind. $\endgroup$ Commented Jan 17, 2013 at 8:48
  • 10
    $\begingroup$ Two similar posts under the heading "Useless math that became useful" here and on MO. $\endgroup$
    – Martin
    Commented Jan 17, 2013 at 8:52
  • 4
    $\begingroup$ @Brad I think rglordonma was answering the OP's point about calculus being invented without a practical application, which is false as it was invented exactly for a practical application. $\endgroup$
    – user50229
    Commented Jan 17, 2013 at 20:02

34 Answers 34

113
$\begingroup$

Here are a few such examples.

$\endgroup$
8
  • 14
    $\begingroup$ Public-private key cryptography, the currently typical crypto for SSL connections, was also created by mathematicians: en.wikipedia.org/wiki/Public-key_cryptography#History. You could say "every time you shop online, your security is protected by math." $\endgroup$ Commented Jan 17, 2013 at 14:18
  • $\begingroup$ @NathanLong I would only say "may be protected" , since it wasn't proven that factorisation is NP... Maybe someone actually knows a better algorithm... $\endgroup$
    – N. S.
    Commented Jan 17, 2013 at 16:17
  • $\begingroup$ @N.S.: we know integer factorization is in $\mathcal{NP}$ (it's trivial to show), although we're not sure where exactly it fits in $\mathcal{NP}$. The real issue is pinning down exactly which complexity class factorization fits in, as well as proving $\mathcal{P}\ne \mathcal{NP}$. $\endgroup$
    – Reid
    Commented Jan 17, 2013 at 16:36
  • 2
    $\begingroup$ To add to the first bullet, many types of cryptography are based on pure number theory that was developed long before the cryptography. I also think vectors started out on the pure side before physics started using them, but I don't have a reference off hand. $\endgroup$ Commented Jan 17, 2013 at 17:14
  • 2
    $\begingroup$ @N.S. Of course it's NP - I think you mean "We don't know it's NP-complete." In fact, it almost certainly isn't, since we (rather quickly) devised a way to quickly factor using quantum computers, but there is no known algorithm for quickly solving any NP-complete problems with quantum computers. In any case, I don't see how that's at all relevant - the fact is, RSA is the most widely used public-key crypto algorithm in use today, is still considered secure, and relies on the difficulty in factoring, so it fits the question. $\endgroup$ Commented Jan 18, 2013 at 5:57
44
$\begingroup$

Negative numbers and complex numbers were regarded as absurd and useless by many mathematicians prior to the $15^{th}$ century. For instance, Chuquet referred to negative numbers as "absurd numbers," and Michael Stifel's book "Arithmetica integra" has a chapter on negative numbers titled "numeri absurdi." So too with complex/imaginary numbers: Gerolamo Cardano, in his book "Ars Magna," calls the square root of a negative number a completely useless object.

I guess the same attitude towards quaternions and octonions would have been prevalent when they were initially discovered.

This is from my answer to a similar question here.

Below are some uses of negative and complex numbers.

$\endgroup$
15
  • 5
$\begingroup$ By the time of quaternions things had actually changed; they had been sought after for a long time, in the hope that they would be as good for modeling 3D motions as complex numbers are for 2D. Unfortunately they came a bit too late, and linear algebra had already eaten most of the cake $\endgroup$ Commented Jan 17, 2013 at 11:52
  • 2
$\begingroup$ I hope people who have difficulty accepting complex numbers know why they accept real numbers; those are much harder to describe. $\endgroup$ Commented Jan 18, 2013 at 11:28
  • 3
    $\begingroup$ This answer doesn't really say what is "useful" about complex numbers. Or negative numbers, for that matter... $\endgroup$
    – AShelly
    Commented Jan 18, 2013 at 13:54
  • 2
$\begingroup$ @HerngYi The point is that (at least once you have the reals) the complex numbers are just a 2-dimensional vector space over the reals (a complex number $a+bi$ is easily described as the pair of reals $(a,b)$ with a strange-but-simple multiplication rule). By contrast, the reals have to be constructed in a fundamentally infinitary manner, and almost all reals have no finite description. $\endgroup$ Commented Jan 19, 2013 at 16:52
  • 2
    $\begingroup$ @N.S. You got a point there, perhaps we should vote for a change there? How about better numbers? :) $\endgroup$ Commented Jan 27, 2013 at 6:37
40
$\begingroup$

Here are some examples of pure mathematics that have been shown to have real applications - however, I am not sure of their origins.

$\endgroup$
21
  • 7
    $\begingroup$ Stochastic analysis came from finance. Louis Bachelier was the first one to treat Brownian motion mathematically in his thesis on speculation. I would also be curious where optimal control is supposed to have originated outside applied math. $\endgroup$ Commented Jan 17, 2013 at 8:55
  • 13
    $\begingroup$ I am doubtful that the subjects of PDE or (discrete) Fourier transforms could be considered pure math, historically. $\endgroup$
    – KCd
    Commented Jan 17, 2013 at 10:50
  • 3
$\begingroup$ In the case of the heat equation, I thought Fourier presented a method without a solid foundation and his paper was rejected. Riemann then provided a basis, including refining the definition of integrable (the birth of the Riemann integral now standard in introductory calculus). This seems more like the reverse direction, in which the application led to developments in pure mathematics. $\endgroup$
    – Michael E2
    Commented Jan 17, 2013 at 15:05
  • 2
$\begingroup$ @MichaelGreinecker, you're splitting hairs - it doesn't really matter if it's BM versus geometric BM versus long-tail Lévy flights, etc. These are all models; they're not the real world. - A recent graphic in The Economist showed that every year starting in 2005, on every continent, as many hedge funds fold as are created. So are they hedging or speculating? $\endgroup$ Commented Jan 18, 2013 at 16:31
  • 2
    $\begingroup$ @MichaelGreinecker, the models are applied to decision-making in the real world. I can give you several examples where, after a crash, people realize, gee we were confusing the models for the real world, but not before the crash. $\endgroup$ Commented Jan 19, 2013 at 20:11
33
$\begingroup$

The discussion of conic sections by the ancient Greeks (see the wikipedia article) gave the basic definitions required by Kepler to formulate his law of planetary orbits. Of course, the Greeks did not have the term "pure mathematics".

An example from the pure mathematics of the 20th century is the application of category theory to computer science.

People also forget that the notion of the graph of a function was invented by Descartes and of course is now ubiquitous in our daily papers, to show clearly how bad things are getting! For more information on the invention of Cartesian coordinates, see the wikipedia entry on Descartes.

$\endgroup$
3
  • 8
    $\begingroup$ This might be the example that had the most direct impact on the largest number of "normal" people. Who would have ever thought that Microsoft would add Monad Comprehensions to BASIC? $\endgroup$ Commented Jan 17, 2013 at 13:44
  • 3
    $\begingroup$ Just a comment on the term "abstract nonsense": abstraction is about analogies, and thus about saving work, doing several things at the same time. The term "abstract nonsense" comes from those who think maths ought to be hard, about solving hard problems, whereas others think one job of maths is to make difficult things easy, by developing the "right" language. It was said by Bott that Grothendieck was prepared to work very hard to make things tautological! $\endgroup$ Commented Jul 26, 2013 at 8:47
  • $\begingroup$ @RonnieBrown: I think your interpretation of the term "abstract nonsense" is overly negative. According to en.wikipedia.org/wiki/Abstract_nonsense the term was coined by Steenrod and promulgated by Eilenberg-MacLane, all of them using it in a self-deprecating but affectionate way. $\endgroup$
    – Nefertiti
    Commented Feb 27, 2017 at 13:53
23
$\begingroup$

Euler's theorem from pure number theory is at the heart of the RSA public-key encryption system.
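To make the mechanics concrete, here is a minimal round-trip sketch with textbook-sized primes (a toy illustration only; real deployments use primes hundreds of digits long plus padding schemes):

```python
# Toy RSA round trip -- illustration only, not secure. Requires Python 3.8+.
p, q = 61, 53                # two primes (real RSA uses far larger ones)
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, chosen coprime to phi
d = pow(e, -1, phi)          # private exponent: e*d ≡ 1 (mod phi)

m = 42                       # a message encoded as a number below n
c = pow(m, e, n)             # encrypt: c = m^e mod n
assert pow(c, d, n) == m     # decrypt: c^d mod n recovers m
```

The security rests on the difficulty of recovering $p$ and $q$ (and hence $d$) from $n$ alone.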

$\endgroup$
1
  • 21
    $\begingroup$ Actually, it's Fermat's little theorem that is the basis of RSA. There is a widespread misconception that it is based on Euler's theorem (even though the original paper used Fermat's little theorem). The problem with relying on Euler's theorem is that it suggests the encoding and decoding procedures may not be inverses on messages that are not relatively prime to the modulus. But in fact there is no problem, by the proof using Fermat's little theorem. See the last section of the Wikipedia page on RSA at en.wikipedia.org/wiki/RSA_%28algorithm%29 $\endgroup$
    – KCd
    Commented Jan 17, 2013 at 10:46
19
$\begingroup$

Complex numbers are very useful in electrical engineering. An imaginary number is a hare-brained idea if you think about it: the square of this thing is $-1$?!

And yet, it's very valuable when calculating alternating currents.
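A quick illustration of why: represent the sinusoid $V_0\cos(\omega t+\varphi)$ by the single complex number (phasor) $V_0 e^{i\varphi}$. Then, for a series RLC circuit, Ohm's law survives intact with a complex impedance:

$$Z = R + i\omega L + \frac{1}{i\omega C}, \qquad V = ZI.$$

Differential equations in time become ordinary algebra in $\mathbb{C}$, which is exactly what makes AC circuit analysis tractable. (Electrical engineers write $j$ instead of $i$, since $i$ is reserved for current.)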

The "trouble" with pure mathematics or ideas is that empirical world is open world (not closed like in mathematics), and as we build newer and newer practical things on top of it, you never know what's useful.

Take lambda calculus and functional programming. If you had asked a software engineer 30 years ago what functional programming was good for, you'd most often have gotten the answer "feh! silly academic toy! useless!"

Fast forward 20 years to MapReduce as applied by Google, and it turns out that yes, it's actually quite practical.
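For a feel of the idea, here is a minimal word count written in the MapReduce style (a toy sketch; the real systems add distribution, shuffling, and fault tolerance):

```python
from collections import defaultdict

def map_phase(doc):
    # Emit a (key, value) pair for every word -- a pure function.
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # Fold the values for each key -- also a pure function of its input.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

docs = ["pure math becomes useful", "pure math is useful"]
pairs = [p for doc in docs for p in map_phase(doc)]
print(reduce_phase(pairs))   # {'pure': 2, 'math': 2, 'becomes': 1, ...}
```

Because both phases are side-effect-free, the work can be split across thousands of machines - the functional-programming insight doing practical duty.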

Wernher von Braun: "Research is what I'm doing when I don't know what I'm doing." Combine that with Einstein's "there's nothing as practical as a good theory." The result of this combination: since we do not know which theories are good, we have to test them; but how do you test something that has not even been formulated as a pure theory first?

"Bottom up" is such an approach, but not everything can be worked out this way.

Although I feel you focus on the wrong problem: the applicability of a pure theory is the easy part - just check whether it works in practice. Try to apply Aristotle's theory of gravity to shooting cannonballs and see that it doesn't work (according to it, a stone goes up along a curve and at the highest point of its trajectory falls vertically down to the ground - did Aristotle never throw stones or something?).

A harder problem is when a pure theory deceives us into a wrong representation of the real world. For example, classical logic has done huge conceptual damage to knowledge representation in AI and to the way we think about the problems (all those silly logical rules that don't work, akin to the "witch" skit from Monty Python's Holy Grail).

P.S. A certain paper on the fast resolution of big Horn clauses is the theory behind the pattern matching used for programming in Prolog and Erlang (maybe there are more applications I don't know of), although I can't remember the name of the paper.

$\endgroup$
8
  • $\begingroup$ Could you provide more information/links re: the ways in which "classical logic has done huge conceptual damage to knowledge representation in AI". I'm genuinely interested. $\endgroup$ Commented Jan 18, 2013 at 0:15
  • $\begingroup$ Hmm there are not many papers on this I'm afraid, my claim is mostly conclusions I drew after writing my master's thesis on knowledge representation and engineering. I wrote a summary of this to organize my thoughts on subject, but did not publish it anywhere - not that I'm alone on this, see this article by Shirky: shirky.com/writings/semantic_syllogism.html (he has made some errors in this article and I know he writes partially for publicity but many of his arguments hold). $\endgroup$ Commented Jan 18, 2013 at 10:44
  • $\begingroup$ Most of the useful contributions to AI have never had anything to do with classical logic: fuzzy sets, semantic graphs, CYC's "sea of assertions", Roger Schank's CD formulas, story understanding, etc. I don't think this is an accident, useful extensions to logic like constraint-based programming (see ECLIPSE extension to Prolog) mask a deeper problem: the hard AI part is conceptual work, that is, working out sound premises, not inferencing afterwards once we pretend the premises are true. My bet is that the road to hard AI leads through fancier clustering or smth similar, not through logic. $\endgroup$ Commented Jan 18, 2013 at 10:51
$\begingroup$ Although not direct critiques of logic, the following works are useful for understanding the context: "Knowledge Level" by Allen Newell, "Structure of Intelligence" by Ben Goertzel, and, partially, with a good deal of somewhat weird philosophy, "Why Heideggerian AI Failed and how Fixing it would Require making it more Heideggerian" by H. Dreyfus. Those works illuminate the context that, from my viewpoint, demonstrates that apart from very narrow niches classical logic is mostly inadequate as a basis for modelling human knowledge. $\endgroup$ Commented Jan 18, 2013 at 11:00
  • 1
$\begingroup$ @Dan-GeorgeFilimon: here it is: dropbox.com/s/o6mb944e5phsdh6/… , although only a fraction of it concerns the issues I have described, since the thesis subject is mostly "what has worked in AI re knowledge representation and processing" and so it is not a direct critique of logic in the context of KM as such. $\endgroup$ Commented Jan 20, 2013 at 18:11
18
$\begingroup$

Most of our current mathematical knowledge was developed to explain something already observed empirically. Going way back, many early civilizations had no concept of "zero" as being a numerical quantity; however, the concept of "nothing" or "none" existed, and eventually the Babylonians, around 2000 BC, began using symbols for "none" or "zero" alongside numerals, equating the concepts. Newton laid the foundations of what we know today as calculus (also developed independently by Leibniz) in order to mathematically explain and calculate the motion of celestial bodies (and also of projectiles here on earth). Einstein developed tensor calculus in order to establish the mathematical backing for general relativity.

It can also, however, happen in reverse. Usually, this is when "pure" math exhibits some "oddity", such as a divergence or discontinuity of an "ideal" formula that otherwise models real-world behavior very closely, or something originally thought of as a practical impossibility. Then, we find that in fact the real-world behavior actually follows the math even in these "edge cases", and it was our understanding of the way things worked that was wrong. Here's one from physics which touches on some of the most basic grade-school math and yet challenges those very foundations of thought: negative absolute temperature.

Temperature, classically, is a measure of the thermal energy in a system. By that definition, you can never have less than no energy in the system; hence the concept of "absolute zero." Most "normal" people hold to this concept and think of zero kelvin as a true absolute; you can't go lower than that.

However, the more rigorous theoretical definition of temperature has as its defining characteristic the ratio between the change in energy and the change in entropy. As you add total energy to a system, some remains "useful" as energy, while some is lost to entropy (natural disorder). It's still there (First Law of Thermodynamics), but it cannot do work (Second Law of Thermodynamics).
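In symbols, this is the standard statistical-mechanical definition:

$$\frac{1}{T} = \frac{\partial S}{\partial E},$$

so $T$ is negative precisely in the regime where adding energy decreases entropy, i.e. where $\partial S/\partial E < 0$.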

The graph of temperature using this definition has computable negative values; if entropy and energy are ever inversely related (entropy reduces as energy increases, or vice-versa), then this fraction, and thus the temperature, is negative. Even more interesting is that the graph of temperature as a function of energy over entropy diverges at absolute zero; the delta of entropy approaches zero for deltas of energy around absolute zero, producing infinitely positive or negative values with an undefined division by zero at the origin. That graph, therefore, predicts that absolute zero is actually a state not of zero energy, but of zero change in entropy, regardless of the amount of energy in the system. Absolute zero, therefore, could in fact be observed in systems with extreme (even infinite) amounts of energy, as long as no additional energy added was ever lost to entropy.

This used to be discounted out-of-hand; until recently, every thermal system known to man always exhibited a direct relationship between energy and entropy. You could keep adding all the energy you wanted, to infinity, and entropy would continue to increase as well. You could keep cooling a system all you wanted, until you took out all you could possibly remove, and entropy would decrease as well. Again, this is borne out by our everyday observations of the world; solid, crystalline ice, when heated, becomes more chaotic but generally predictable water, which when further heated becomes less predictable gas, and eventually decomposes into its even less predictable component atoms, which would further decompose into plasma.

However, work with lasers, and the theoretical behavior of same, gave us a thermal system that has an "upper bound" to the amount of possible energy we could add that remains contained within the system, and moreover, that limit was pretty easy to reach. This allows us to observe a system that actually becomes less chaotic as more energy is added to it, because the more energy that is in the system, the closer it gets to its upper limit of total energy state, and thus the fewer the number of particles in the system that are at a state less than the highest state (and thus the ability to accurately predict the energy state of any arbitrary particle is increased).

On the other side of the spectrum, recent news has reported that scientists have produced the opposite; they can get entropy to increase by removing energy from the system. Work with superfluids at extremely cold temperatures has demonstrated that at a critical point of energy removal from the system, particles within it no longer have sufficient energy to sustain the electromagnetic force that attracts them to and repels them from each other in their lowest energy state (which is also their most ordered state). They lose the ordered structure that defines conventional matter, and begin to "flow" around each other without resistance (zero viscosity). At that critical point, you have increased entropy as the result of removing energy; the particles become less predictable as to position and direction of motion when they're cooled, instead of our classical idea that things which are cooled become more orderly. At this point, we have reached "negative absolute temperature".

Thus, temperature seems to exhibit a "wraparound"; as energy increases to infinity, eventually the amount of it that can be in entropy will decrease, seemingly breaking the First Law of Thermodynamics and allowing us to get more energy back from the system than the incremental amount we added (but not more than the total amount of energy ever introduced to the system, so the First Law still holds). Because that threshold is attained (in an unbound system) at infinite energy states, we'll never get there with most of our everyday thermal systems, but we can see it in a bound system, and we can "wrap around" from the low end by removing energy to reach a negative absolute temperature. This is backed up by observance of the reciprocal of temperature, which is the thermodynamic beta or "perk". This fraction, by placing the zero entropy delta in the numerator, is perfectly continuous for all real values of the domain, including zero.

$\endgroup$
1
  • 1
    $\begingroup$ That is pretty damn cool. I was trying to wrap my head around this news of negative temperature for two weeks now. It seems that zero Kelvin is like a discontinuity of $-\frac1x$ at $x=0$. Approach from the positive you get colder and colder, but then... bam! You're back from the top! :-) $\endgroup$
    – Asaf Karagila
    Commented Jan 31, 2013 at 20:38
11
$\begingroup$

Group theory is commonplace in quantum mechanics to represent families of operators that possess particular symmetries. You can find some info here.

$\endgroup$
9
$\begingroup$

Algebraic topology has found applications in data mining (and thus, I believe, in cancer research), in the field of topological data analysis. See http://www.guardian.co.uk/news/datablog/2013/jan/16/big-data-firm-topological-data-analysis

$\endgroup$
8
$\begingroup$

Turing's development of computability theory, which became the theoretical basis of computing.

As a personal note, I take pride in dealing with models of ZF without the axiom of choice and all sorts of strange consistency results. The only way amorphous sets and D-finite combinatorics could be put to "practical use" is if we prove that the universe is actually a good model for an infinite D-finite set, in which case we can apply all sorts of crazy non-AC theorems to argue about properties of the universe.

The only reason this would turn out to be really awesome is that it may invalidate parts of quantum mechanics (see The Axiom of Choice in Quantum Theory).

$\endgroup$
1
  • $\begingroup$ Why not also remove the principle of Excluded Middle? - to be truly useful you should probably also not assume Axiom of Infinity, Hausdorff separation and move from equivalence classes --> tolerance relation. $\endgroup$ Commented Jan 18, 2013 at 3:56
7
$\begingroup$

Too many to count; much of the "pure mathematics" of the past has become "applied mathematics" now.

The problem with pure mathematics is that it has advanced so far that science and engineering have yet to catch up.

By the way, doing a PhD in any serious science or engineering discipline (even some social science subjects) amounts to doing some mathematics in the end, and of course much of the mathematics used there was regarded as "pure mathematics" 100-200 years ago.

$\endgroup$
1
  • $\begingroup$ If you go back a couple of centuries, they were just called scientists or "natural philosophers" - eg Newton. Universities up to the late 19th century didn't even have departments as such. $\endgroup$ Commented Jan 18, 2013 at 4:02
6
$\begingroup$

I had a teacher who once told me that Riemann's new idea of measurement (that the way we measure has to change depending on the manifold) opened the door to the theory of relativity.

Also, one of the pioneers on the road to calculus was François Viète, whose work allowed Leibniz and Newton to develop the machinery of classical mechanics.

$\endgroup$
4
  • $\begingroup$ I don't know anything about Francois Viete, but I do know that A. Einstein originally developed both special and general relativity without the idea of a manifold with a metric - those were later used to refine the exposition. $\endgroup$
    – wolfen
    Commented Jan 17, 2013 at 20:38
  • $\begingroup$ @wolfen, Minkowski developed a lot of the geometry that Einstein applied to special relativity. $\endgroup$ Commented Jan 18, 2013 at 3:58
  • $\begingroup$ @alancalvitti - I attended a class by O'Neil (RIP: palisadespost.com/obits/content.php?id=6764) at UCLA in which he said that Einstein felt self-conscious about his "observer A, observer B" approach, but went back to it again in his research after trying unsuccessfully to use more modern mathematical tools (metrics in particular). Nowadays, of course, relativity theory is used as a motivating example for studying differential geometry and Riemann manifolds - but it seems that those weren't the tools that Einstein actually used when doing his research. $\endgroup$
    – wolfen
    Commented Jan 19, 2013 at 17:29
$\begingroup$ Yeah, I'm not saying that he used those tools, or that Riemann inspired him, just that the change of perspective was in the air, not just in Einstein's head... $\endgroup$
    – Miguel
    Commented Jan 20, 2013 at 22:13
6
$\begingroup$

The implementation of the Fast Fourier Transform by Cooley and Tukey, and maybe Shor's quantum algorithm for factoring numbers in polynomial time using the Quantum Fourier Transform... at least it might become useful someday.
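For the curious, the Cooley-Tukey idea fits in a few lines; here is a minimal radix-2 sketch (input length must be a power of two; production FFTs are iterative and heavily optimized):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT of a list of complex numbers."""
    n = len(x)
    if n == 1:
        return x
    even = fft(x[0::2])          # transform of even-indexed samples
    odd = fft(x[1::2])           # transform of odd-indexed samples
    t = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + t[k] for k in range(n // 2)] +
            [even[k] - t[k] for k in range(n // 2)])

print(fft([1, 2, 3, 4]))         # ≈ [10, -2+2j, -2, -2-2j]
```

The divide-and-conquer step is what drops the cost from $O(n^2)$ to $O(n\log n)$.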

$\endgroup$
1
  • 1
    $\begingroup$ FFTs were always useful for other things though $\endgroup$
    – jk.
    Commented Jan 17, 2013 at 11:05
6
$\begingroup$

Just look at the fields of quantitative finance and financial mathematics (Brownian motion, Fourier transforms, etc.).

$\endgroup$
1
  • $\begingroup$ Useful or severely discredited by now? $\endgroup$
    – Drux
    Commented Jan 18, 2013 at 13:05
6
$\begingroup$

Just to add another example:

Boolean algebra was developed in 1854: it's abstract and maybe boring, but it laid the basis for the development of digital circuits.

So all the digital devices that you use right now are heavily based on abstract mathematics from 1854.
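As a tiny illustration of the link, here is a half-adder - the basic building block of binary addition in hardware - expressed directly in Boole's algebra:

```python
def half_adder(a, b):
    # Sum bit is XOR, carry bit is AND: pure Boolean algebra.
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', half_adder(a, b))
# 1 + 1 yields (0, 1): sum bit 0, carry bit 1 -- binary 10
```

Chain a few of these together and you have the adder circuits inside every CPU.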

$\endgroup$
0
4
$\begingroup$

I'd say that basically all technological achievements are founded on pure mathematics. The relationship is often long and distant, but without pure mathematics they wouldn't be possible. In fact, I think it would be rather hard to find a technological achievement that isn't based on results of pure mathematics.

To give a few examples:

  • Computer science. Computers are based on Turing's and Church's research into which mathematical functions are computable in some sense. At the time, this was pure mathematics, yet now it's the basis of what we use every day. CS uses many concepts from pure mathematics, from binary numbers to number theory and beyond.
  • Physics. Physics evolved hand in hand with mathematics. Things that used to be purely mathematical were subsequently used in physics. Without this pure math, we wouldn't have many achievements in physics, simply because physicists wouldn't have the required theoretical tools to work with. And that means we wouldn't have the engineering achievements that use them. To give some examples:
    • Without calculus and infinitesimals, we wouldn't have statics, which is indispensable for most of today's complex architecture.
    • Lie groups, a purely theoretical idea, became very useful in particle physics, which is the basis of many of today's technological advancements.
  • Probability and statistics are used everywhere. All empirical research is (or should be) validated using statistical methods.
  • And something less serious - without topology, we wouldn't have so many ways of putting on a necktie.
$\endgroup$
4
$\begingroup$

IMO any pure mathematics generated by a human brain (and there probably exist, and most certainly will exist, other kinds in the near future) is at least motivated by something which actually exists in the world of human experience.

But once the work actually gets underway on a new idea in some area it takes on a life of its own and will, when polished & refined, look very different from how it did at the outset. Calculus is a great example of a very refined area of mathematics - you can see this in the notation, which has been polished smooth by generations of heavy usage and is very powerful & expressive (and typically takes students a long time to learn well).

And the magic is that every time a human brain learns a new piece of pure mathematics, it monitors its own (human) experience for any relevance/connections and the chances increase for the discovery of a new application.

So I'm not sure it has ever happened that a piece of pure mathematics was invented for no reason and was absolutely useless until an application was discovered later. And conversely, I'd be willing to bet that almost every aspect of applied mathematics has been the inspiration for pure theoretical work of some sort (whether it led to any significant advances or not).

I guess what I'm trying to say is that in mathematics (as in all of science) the dialogue between theory and practice goes in both directions and never stops.

$\endgroup$
4
$\begingroup$

Topology helps in understanding molecular structures. See the book When Topology Meets Chemistry: A Topological Look at Molecular Chirality, written by Erica Flapan. I skimmed a few chapters of the book and it was very interesting.


Related: Real life applications of Topology

$\endgroup$
3
$\begingroup$

Coding theory is mainly based on algebra; see for example the Goppa codes, which use tools from algebraic geometry.

$\endgroup$
3
$\begingroup$

Fractals were invented specifically to explore areas of geometry that were thought to exist only in the imagination of pure mathematicians. That ambition failed miserably, because it turned out that the world is chock full of fractals. Nowadays, fractals are used heavily in computer graphics and in describing the patterns of nautilus shells, pine cones, coastlines, and lightning, among many other natural phenomena.

According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity (although he made the mistake of thinking that only the straight line was self-similar in this sense). In his writings, Leibniz used the term "fractional exponents", but lamented that "Geometry" did not yet know of them. Indeed, according to various historical accounts, after that point few mathematicians tackled the issues and the work of those who did remained obscured largely because of resistance to such unfamiliar emerging concepts, which were sometimes referred to as mathematical "monsters". (Wikipedia)

$\endgroup$
3
$\begingroup$

Error correction codes!

When CDs were first being discussed, engineers from Philips were in Japan discussing standards with Sony, and the Sony engineers said they were not happy with the error-correction standards set by Philips. So the Philips engineers went back to Eindhoven and called people together to ask who was the best expert in Europe on this new science of error correction. They were told it was a professor of number theory, J. van Lint, in Eindhoven! I did check this story with him.

I have been told that the high quality of the pictures from the Voyager space probes would not have been possible without error correction, because of the weak signals and the noise of space.

Error correction is quite widespread, from hard disks to simple codes such as the ISBN check digit; the advanced codes (see for example the wikipedia article) use sophisticated pure mathematics.

The first such code, the Hamming code, was invented by a researcher at Bell Labs who ran programs over the weekend and came back to find "your program has an error." He swore to himself and thought: "If it can find an error, why can't it correct it?"
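A minimal version of his code, Hamming(7,4), is small enough to sketch in full: 4 data bits become a 7-bit codeword that can locate and fix any single flipped bit (parity bits sit at positions 1, 2 and 4):

```python
def encode(d):
    # d = [d1, d2, d3, d4]; each parity bit is the XOR of the data bits it covers.
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    # Recompute each parity check; the failures spell out the error position.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error = s1 + 2 * s2 + 4 * s3      # 0 means "no error"
    if error:
        c[error - 1] ^= 1             # flip the offending bit
    return [c[2], c[4], c[5], c[6]]

word = encode([1, 0, 1, 1])
word[5] ^= 1                          # corrupt one bit in transit
assert decode(word) == [1, 0, 1, 1]   # the receiver fixes it anyway
```

The codes on CDs and in deep-space links (Reed-Solomon and friends) are far more powerful, but the principle - redundancy arranged so that errors betray their own location - is the same.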

$\endgroup$
3
$\begingroup$

Counting processes and martingales are objects I view as purely mathematical/probabilistic. Nevertheless, they are fundamental objects in the theory of survival analysis - a branch used in many registry-based studies in, e.g., epidemiology.

A simple model (a model without censoring) of survival analysis is the following: Let $X_1,\ldots,X_r$ be iid random variables with values in $(0,\infty)$, where $X_i$ is the lifetime of the $i$th individual. Let $X_i$ have density $f$ and distribution function $F$ with $F(t)<1$ for all $t\in (0,\infty)$. Put $$ N_t^i=1_{\{X_i\leq t\}},\quad i=1,\ldots,r, $$ and $$ N_t=\sum_{i=1}^r N_t^i, $$ i.e. $N_t$ is the number of individuals dead before $t$. Then $(N_t^1,\ldots,N_t^r)_{t\geq 0}$ is an $r$-dimensional counting process and $(N_t)_{t\geq 0}$ is a counting process. Now, theory of local martingales and predictable covariation can be used to derive estimators such as the Nelson-Aalen estimator of the cumulative hazard rate, i.e. the function $$ \Lambda(t)=-\log S(t), $$ where $S(t)=1-F(t)$ is the survival function.
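For concreteness, in this uncensored model (assuming no ties among the $X_i$) the Nelson-Aalen estimator takes the explicit form

$$\hat{\Lambda}(t) = \int_0^t \frac{dN_s}{Y_s} = \sum_{i:\,X_{(i)}\leq t} \frac{1}{r-i+1},$$

where $Y_s=\sum_{i=1}^r 1_{\{X_i\geq s\}}$ is the number of individuals still at risk just before time $s$ and $X_{(1)}\leq\cdots\leq X_{(r)}$ are the ordered lifetimes; the martingale machinery is what supplies its asymptotic properties.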

$\endgroup$
2
$\begingroup$

Group theory in the Standard Model of particle physics.

$\endgroup$
2
$\begingroup$

Wavelet and Fourier transforms are used in a very long list of medical equipment (MRA, blood pressure monitors, diabetes monitors, just to mention a few), in audio-video compression (MP3, JPEG, JPEG 2000, H.264, et al.), and in audio-video effects (audio equalization, image enhancement, etc.). Linear algebra is the basis of Google's PageRank algorithm and of some face-recognition algorithms. This is by no means an exhaustive list of applications, just a few that I remember.
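To illustrate the PageRank point, here is the algorithm on a hypothetical three-page web (a toy sketch of the power iteration; the real computation runs over billions of pages):

```python
# links[i] lists the pages that page i links to (a made-up 3-page web).
links = {0: [1, 2], 1: [2], 2: [0]}
n, damping = 3, 0.85
rank = [1 / n] * n                   # start from the uniform distribution

for _ in range(50):                  # power iteration on the Google matrix
    new = [(1 - damping) / n] * n
    for page, targets in links.items():
        for t in targets:
            new[t] += damping * rank[page] / len(targets)
    rank = new

print(rank)                          # page 2 ends up with the highest rank
```

Underneath, this is nothing but repeatedly multiplying a vector by a stochastic matrix until it converges to the dominant eigenvector - linear algebra straight out of the textbooks.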

$\endgroup$
2
$\begingroup$

Coming from a software development background, I can say that functional programming languages were influenced to some degree by lambda calculus, a formal system introduced by the mathematician Alonzo Church in the 1930s.
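The influence is easy to demonstrate, since lambda calculus embeds directly in any language with first-class functions. A classic sketch - Church numerals, where the number $n$ is "apply $f$ $n$ times":

```python
# Church numerals: numbers built from nothing but functions.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Peek at a Church numeral by counting applications of +1.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5
```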

$\endgroup$
2
$\begingroup$

How about calculating orbital patterns (i.e., before the first satellite was ever launched)? Without pure mathematics laying the groundwork for astrophysics, Apollo 13 would have been lost.

$\endgroup$
2
$\begingroup$

Matrix

Not the movie - the "array of numbers"... It was invented before computers, and it's now used for all the heavy 3D work (real-time or not) and more.

It started as pure theory, and it's now used in most of your favourite movies and in all 3D video games...
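For instance, every vertex a game or film renderer draws passes through matrix-vector products like this one (a minimal sketch of a rotation about the z-axis):

```python
import math

def rotate_z(point, angle):
    # Multiply the point by the standard rotation matrix about the z-axis.
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = point
    return (c * x - s * y, s * x + c * y, z)

print(rotate_z((1.0, 0.0, 0.0), math.pi / 2))   # ≈ (0, 1, 0)
```

GPUs do exactly this, in hardware, millions of times per frame.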

Pure research is important ;)

$\endgroup$
1
  • 1
    $\begingroup$ I'm fairly certain matrices didn't at all start as "pure theory". They were developed to study systems of linear equations, which surely show up by the thousand in physics and applications. $\endgroup$
    – Jack M
    Commented Mar 4, 2014 at 21:15
2
$\begingroup$

The classical example of this, for me, is just binary numbers and their properties (Boolean algebra).

$\endgroup$
2
  • 1
    $\begingroup$ It would be nice to explicitly say when this became "useful". $\endgroup$
    – robjohn
    Commented Jan 18, 2013 at 10:37
  • $\begingroup$ welcome to SE. Please note that your answer is quite vague. To improve your answers you might want to roam around the site some more and appreciate the style of answers. $\endgroup$ Commented Jan 18, 2013 at 10:49
2
$\begingroup$

Matrix operations were used by Pauli to model electron spin.

Pauli spin matrix
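For reference, the three Pauli matrices are

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},\qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$

pure matrix algebra (they generate a representation of $\mathfrak{su}(2)$) that turned out to describe a measurable physical quantity.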

$\endgroup$
1
  • $\begingroup$ There is a classic toast allegedly used by Pure Mathematicians: "To Pure Mathematics - may it never be any use for anything!" The boundary between Pure and Applied will of necessity be a moveable thing. $\endgroup$ Commented Jan 30, 2013 at 19:13
2
$\begingroup$

Here are examples of applied mathematics:

[image showing examples of applied mathematics]

$\endgroup$
