65

While enjoying the responses to "Why did CPU designers in the 70s prioritize reducing pin count?", I recalled that in 1979 the IEEE was hard at work on a standard for handling floating point numbers and the math functions that deal with them. I remember several early uses of external math chips alongside CPUs such as the 6502, Z-80, and 8088.

Examples are AMD AM9511, AM9512, Motorola MC68881/2, Intel 8231, 8232, 8087, National 32081, TI TMS1018, Weitek WTL1032/1033, Micromega uM-FPU, etc.

Some are being re-purposed, while others are still in production for use with current microcontrollers like the Arduino and Z-80, e.g. the Micromega uM-FPU.

What was the rationale? Heat, pin count, size, tooling, marketing & cost, die failures, something else?

18
  • 3
    I've never heard of a maths coprocessor for the 6502 or Z80.
    – JeremyP
    Commented Apr 4, 2018 at 8:40
  • 34
    Why did some early CPU's use external graphic processing units? Oh wait, some still do.
    – pipe
    Commented Apr 4, 2018 at 13:53
  • 20
    @pipe The history of how and why graphics went from on to off chip, while math went from off chip, to integrated co-processor, to fully on board (with a few limited exceptions) would be interesting (maybe not on-topic for RC.se though...)
    – mbrig
    Commented Apr 4, 2018 at 14:56
  • 15
    It is not just math chips; there are other types of external chips used, such as an MMU (Memory Management Unit) which would easily manage things like swap/memory paging, graphics, sound, storage, IO, etc. Integrating more components into the CPU makes it cheaper and more expensive at the same time. Cheaper if those parts are needed, more expensive due to design, testing, and manufacturing. There are also MCMs (multi-chip modules) where there may be two chips on a single module in a single socket. In a PC, there used to be a north bridge, south bridge, and a lot more...
    – MikeP
    Commented Apr 4, 2018 at 20:43
  • 4
    Never liked the names. I would have called it the 8.087E+3 or 6.8881E+4 for instance. Commented Apr 9, 2018 at 1:53

14 Answers

53

Another point not addressed in the existing answers relates to the latency associated with accessing an external coprocessor.

The first math coprocessors, while much faster than doing the same work on a CPU, still took many clock cycles to complete each operation. The overhead (bus cycles) associated with transferring data between the two chips was "lost in the noise". In other words, there was no penalty for putting those functions in a separate chip.

However, as coprocessors got faster, this overhead became a significant bottleneck on throughput, and this is what drove integrating them directly into the CPU chip.
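To put rough numbers on that (illustrative, assumed figures, not datasheet values): if a late-1970s coprocessor spent on the order of 80-100 cycles computing a floating-point add and perhaps 10-20 more shuttling operands and results over the bus, the transfer overhead is maybe 15% of the total. A modern on-die FPU finishes an add in a handful of cycles, so an off-chip round trip costing even a few dozen cycles would be 80-90% of the total time - the computation itself would be lost in the noise instead.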

4
  • 7
    This somehow answers why today FPUs tend to be on the same chip as the CPU, but not quite why they were external back in the days.
    – tofro
    Commented Apr 6, 2018 at 8:04
  • 12
    It is still wrong. It explains why there was no penalty, but not why they were separate. Raffzahn's answer is correct - economy. Also, in the beginning not every computer got one - which means they were not NEEDED to be integrated. Apps back then used a lot less math ;)
    – TomTom
    Commented Apr 6, 2018 at 9:28
  • 5
    Modern multi-core CPUs have an FPU built-in to each core. (Minor variation: AMD Bulldozer-family uses clusters of two weak integer cores each sharing a FPU / SIMD unit, as kind of an alternative to Intel's Hyperthreading (SMT) where a strong integer core can act as two weaker cores.) This is why Bulldozer has 10 cycle latency for movd xmm0, eax while modern Intel has only 1 cycle latency for moving data between GP-integer and SIMD xmm registers (agner.org/optimize). Of course x87 has to store/reload (~5 cycle latency) for eax->st(0) Commented Apr 7, 2018 at 0:51
  • 2
    The first mainstream program to really put focus on the FPU was Quake, which ran much better on Pentiums than on 486'es. Commented Jan 24, 2019 at 17:03
151

Simple: Complexity.

An 8088 had about 29,000 transistor functions, while an 8087 with 45,000 is almost double that. Integrating the FPU within the CPU would have made it three times as big, putting production at a >5 times higher failure rate, resulting in a price tag way higher than 3 times the CPU alone. More like 5-8 times.

When closing in on what is possible in production, it's more efficient to make two smaller chips than one big one. This greatly reduces the cost for a complete setup. Having two parts further allows you to skip the additional cost for users not needing the FPU - a majority at that time.
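To make the yield argument concrete, here is a minimal sketch using the classic Poisson yield model, yield = exp(-D*A). The defect density and die areas below are assumed, purely illustrative values, not actual 1979 process data:

```c
#include <math.h>
#include <stdio.h>

/* Classic Poisson yield model: yield = exp(-D * A).
   D and the die areas are assumed, illustrative values only. */
int main(void) {
    double D     = 4.0;   /* defects per cm^2 (assumed)   */
    double a_cpu = 0.3;   /* CPU die area, cm^2 (assumed) */
    double a_fpu = 0.5;   /* FPU die area, cm^2 (assumed) */

    double y_cpu = exp(-D * a_cpu);
    double y_fpu = exp(-D * a_fpu);
    double y_big = exp(-D * (a_cpu + a_fpu));   /* one combined die */

    /* Silicon cost per *good* die is roughly proportional to area/yield. */
    double cost_pair = a_cpu / y_cpu + a_fpu / y_fpu;
    double cost_big  = (a_cpu + a_fpu) / y_big;

    printf("yields: CPU %.0f%%, FPU %.0f%%, combined %.0f%%\n",
           100 * y_cpu, 100 * y_fpu, 100 * y_big);
    printf("combined die needs %.1fx the good silicon of the pair\n",
           cost_big / cost_pair);
    return 0;
}
```

With numbers in that ballpark the single big die burns roughly four times as much good silicon as the two separate chips, the same order of magnitude as the 5-8x price penalty claimed above.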

5
  • 41
    "skip the additional cost for users not needing the FPU" That's the important part. You move the less used parts to a separate chip.
    – RonJohn
    Commented Apr 4, 2018 at 13:54
  • 3
    That should be the answer.
    – TomTom
    Commented Apr 6, 2018 at 9:28
  • 6
    @TomTom Darn right. And that's my answer: I put cost right up front. I was there coding those platforms and selling those platforms. I know what people bought and why they bought it. I also coded to those platforms for major retail products and was in the product-marketing meetings where we determined how to engineer the product for presence/absence of math co. based on who had them. Precious few, being the answer. Commented Apr 10, 2018 at 0:34
  • 1
    Also at that time not very many had the need for math done in hardware. It was not until Quake came out and required Pentium math speed (486'es simply weren't fast enough) that this really got important for the typical user. Commented Mar 10, 2020 at 16:35
  • 1
    A modern example of this production efficiency technique would be the chiplet-based architecture used in modern AMD CPUs.
    – ssokolow
    Commented Jul 26, 2022 at 10:35
66

So these math chips (I assume you're talking about floating point units, such as the 8087 and other coprocessors) were usually not included in the CPU because they were not required by most users. When you don't need floating point maths, you also don't need the FPU, and that was the commonest case. So to make the CPU cheaper, they left it out.

Then as an upgrade/optional extra, you might add an FPU to do functions that would be a bit slow in software.
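As a sketch of how software typically coped with the FPU being optional: pick a hardware or software implementation once at startup and dispatch through a function pointer. The detection routine below is a hypothetical placeholder (real x86 code of the era typically used a short FNINIT/FNSTSW assembly probe or relied on the runtime's emulator), and the software fallback is deliberately simplified:

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical placeholder: a real 8087 check used a short assembly
   probe (FNINIT / FNSTSW) or trusted a BIOS equipment flag.         */
static int fpu_present(void) { return 0; }

/* Software fallback: a few Newton-Raphson iterations for sqrt, the kind
   of routine a software FP library would provide (much simplified; the
   crude starting guess is only good for modest inputs).                */
static double soft_sqrt(double x) {
    double g = (x > 1.0) ? x / 2.0 : 1.0;
    for (int i = 0; i < 25; i++)
        g = 0.5 * (g + x / g);
    return g;
}

/* Hardware path: compiles down to the FPU's own square-root support. */
static double hard_sqrt(double x) { return sqrt(x); }

int main(void) {
    /* Choose the implementation once, call through a pointer everywhere. */
    double (*do_sqrt)(double) = fpu_present() ? hard_sqrt : soft_sqrt;
    printf("sqrt(2) = %.6f\n", do_sqrt(2.0));
    return 0;
}
```

The point is that the same program could run everywhere and simply go faster on machines whose owners had paid for the coprocessor.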

5
  • 7
    The FMA units in Haswell and Skylake(desktop) are probably the largest single chunk of logic (not cache) on each core, and the FP dividers are probably considerable too. Those chips have a throughput of 2 per clock for SIMD vector FMA, on 256-bit ymm registers. So that's 2x 8x 32-bit single-precision FMA, and 2x 4x 64-bit double-precision FMA. (Probably sharing some of the multiplier transistors between single / double precision mantissa multipliers). AVX512 makes Skylake-X even heavier, doubling the width of the FMA units. Skylake did drop the separate adder unit, running it on the FMAs. Commented Apr 7, 2018 at 0:58
  • 1
    See agner.org/optimize for instruction throughput/latency / execution port numbers. The extra 512-bit-wide FMA hardware is so big that some SKL-X models only come with one working 512-bit FMA unit, but still always two per clock throughput for 256-bit FMAs. And some funky combining of the FMA units on port 0/1 and powering up the extra one on port 5 when 512-bit instructions are in flight. anandtech.com/show/11550/… Commented Apr 7, 2018 at 1:01
  • 5
    Separate doesn't make sense today because the performance wouldn't be acceptable; lots of code these days depends on having fast FPUs. The FPUs need access to the L1d cache, which is built-in to each core. Also, modern CPUs have more transistors than they can power at any one time without melting (en.wikipedia.org/wiki/Dark_silicon), so having lots of dedicated hardware for stuff you might need (to run it fast when you do need it) but don't use all the time is less of a burden than in the past. Commented Apr 7, 2018 at 1:05
  • 1
    Adding the math coprocessor was not cheap back in the day. Having the option to not get one was nice for anyone buying on a budget who wasn't going to do a lot of calculations. On the other hand, sometimes my father would run a spreadsheet calculation that would take more than 24 hours without a math coprocessor. Commented Apr 7, 2018 at 13:30
  • @ToddWilcox I wonder how much time it would run with a coprocessor.
    – Ruslan
    Commented Jul 10, 2019 at 12:08
39

It seems to me there are a number of factors involved, some of which have been addressed in other answers:

  • design complexity
  • cost (which largely results from the complexity)
  • feature necessity
  • time-to-market

Regarding complexity, as Raffzahn explains, early FPUs were much more complex than the CPUs they complemented. This meant that they needed more transistors, and had lower yields, which both contributed to higher costs, both for the manufacturer and at retail. Ken Shirriff has been documenting some of the complexity involved recently — the 8087 was pushing the envelope in a number of ways.

In the late seventies and early eighties, micro-processors didn’t have floating-point support, so software was developed without needing hardware floating-point. Even once FPUs became available, the cost meant that few users bought them, which meant that (most) software had to continue working without them, which allowed most users to do without them, and so the cycle continued. Some markets used FPUs extensively, e.g. for CAD; but even after Intel added them as standard to 486s, there continued to be a huge market for FPU-less CPUs (which Intel capitalised on with the 486SX). The killer app for FPUs in the x86 world ended up being Quake...

The time-to-market aspect is significant IMO. The 8086 was released in 1978, but the 8087 only became available in 1980; it was supported thanks to its designer’s foresight — even though the FPU was nowhere near designed, let alone ready, Bill Pohlman (who was in charge of the 8086’s development) added features to the 8086 so that it would be able to support a co-processor (not just FPUs — see Were there 8086 coprocessors other than the 8087?). Likewise, the 386 was released in 1985, but the 387 only became available in 1987 — and at this point, users did know the value of an FPU (which wasn’t the case when the 8086 was released). If Intel had had to wait for their FPUs to be ready, they would have missed a lot of market opportunities (and we wouldn’t be running x86 PCs).

3
  • youtube.com/watch?v=9ymBGXdl4mA Yay for fpu's. Commented Apr 5, 2018 at 8:20
  • 1
    Time-to-market was important because the 8086 was Intel's catch-up processor after they determined that the iAPX 432 wasn't going to be ready in time to address the competition (68000, Z8000 and 32016). N.B. that the competition didn't have built-in FPUs either.
    – TMN
    Commented Apr 6, 2018 at 14:30
  • @TMN good points. The 8087 really pushed the envelope in a number of ways, the whole FPU concept was rather new in the micro-processor world... Commented Apr 6, 2018 at 14:37
13

The simple answer is that there was not enough room on a chip for the total transistor count, given the limitations of the process technology of the day.

As per Wikipedia on the Intel 8087:

The 8087 was an advanced IC for its time, pushing the limits of period manufacturing technology. Initial yields were extremely low.

This was for the coprocessor all by itself.

At 45,000 transistors, attempting to integrate this on the same chip with the main CPU (29,000 counting all ROM and PLA sites) would have been beyond the maximum practical transistor count for the process technology of the day.

1
  • 1
    While I don't doubt initial yields of the 8087 were low, there were already larger and more complex ICs on the market - notably the 68000, named for its approximate transistor count, which appeared in 1979. I read somewhere that the 8087 was even smaller than most competing IEEE-compatible FPUs - though AMD's 9512 was still smaller.
    – Chromatix
    Commented Apr 7, 2018 at 5:36
11

Like anything else, in the end, it was "ease of use" for some value of "ease" and "use".

The primary motivator was performance, the specialized processors are just an extension of the maturity of micro electronics and CPU design.

Recall that the original computers were built from discrete components. Then, as ICs evolved (gates out of transistors, flip-flops out of gates), the discrete components fell naturally into functional integrated circuits. The "first" math co-processor is arguably the 4-bit adder. Many old computer designs strung those off-the-shelf adders together as part of their ALU (Arithmetic Logic Unit).

Obviously, the first CPU ICs combined much of the decode logic, counters, and the ALU onto a single chip. It's not that computers were new, but these ICs combined them all on a single die. Thus the "micro" processor. Saving, well, everything: space, cost, heat, complexity, and gaining performance.

These math co-processors were just a natural specialization. The CPUs didn't have the real estate to combine all of those features on a single die. Plus, the market wasn't mature enough or willing to spend the money for extravagances like FP math or multi-precision multiplication.

History is full of compromises and workarounds to avoid "doing math" in CPUs. Fixed-point math is a fine substitute for floating point in many situations. Lookup tables work if you can afford the memory. Shifting and adding stands in for a general-purpose multiplier. Also, most CPUs were, and still are, used for control purposes. Don't need a lot of trigonometry to run a sprinkler timer.
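For instance, on a CPU with no multiply instruction (a 6502 or Z80, say), multiplication was synthesized from shifts and adds. A sketch of the textbook routine in C:

```c
#include <stdint.h>
#include <stdio.h>

/* Shift-and-add multiplication, the classic workaround on CPUs with no
   hardware multiplier: walk the multiplier one bit at a time and add
   the (shifted) multiplicand whenever that bit is set.                */
static uint32_t mul_shift_add(uint16_t a, uint16_t b) {
    uint32_t acc = 0;
    uint32_t addend = a;
    while (b) {
        if (b & 1)
            acc += addend;
        addend <<= 1;   /* next bit position is worth twice as much */
        b >>= 1;
    }
    return acc;
}

int main(void) {
    printf("%u\n", mul_shift_add(1234, 567));  /* prints 699678 */
    return 0;
}
```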

Today, of course, most of the major CPUs bundle FP hardware on the die, but we still have things like GPUs now, off chip, doing their own thing. But there are some CPUs that integrate them today. Why not, it's "Drag and drop" in Verilog -- tada...CPU with integrated low end GPU.

10

We're so used to hugely complex and dense chips these days it's easy to forget that there was a time when the 8086, with "only" 29,000 transistors (no on-chip cache of any kind), was at the edge of what could be done.

To put it simply:

  • It wasn't feasible because the chip would be too big to produce cost-effectively (die size, yield)
  • Everything a math chip can do could be done in software, just slower
  • Most applications weren't that math-heavy; having a dedicated math chip to speed it up wasn't important enough to most users to justify the extra cost of even adding a co-processor, let alone the probable cost of a CPU with math integrated into it.

It was only as Moore's law allowed for much larger transistor counts that engineers put them to use, integrating the math co-processor and adding the instruction prefetching/pipelining/branch prediction logic, caches, and multiple cores that we see in today's chips.

9

Many small CPUs available and used today for embedded designs do not have an onboard floating point unit - most of the AVR and PIC series, MCS51, some ARM ...

8 bit single-chip microprocessors were meant at least as much, if not more, for the same market that microcontrollers and embedded CPUs target today. In that market, cost and power efficiency are paramount, and a lot of such applications are about controlling equipment and not doing heavy computing.

The same might even apply to early 16/32 bit CPUs - the users that could afford the early examples would not often have been building general-purpose desktop computers with them, but had aircraft, industrial equipment, and laboratory equipment to outfit.

The first CPUs that seem to be tailored ALMOST ONLY to desktop/server/workstation computing are probably those of the 486/early Pentium era: Complex electrical and cooling requirements, multimedia optimizations, consumer-focused marketing...

Mind that floating point can ALWAYS be done in software on an integer only CPU, but with significantly lower performance.

Also, in an embedded application you would usually have all your data inputs (e.g. from ADCs driven by sensors, or video data from a digitizer) represented as integers, and try to scale all your calculations so that floating point math is not NEEDED. Once you can break the problem down to fixed-point math, you can simply scale your values by powers of 10 or powers of 2 (the latter being very efficient, but needing some extra work to make the end results presentable to a decimal-system user) and work with integers.
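A minimal sketch of that scaling idea in C (the ADC width and reference voltage are assumed, purely for illustration): convert a raw 10-bit ADC reading to millivolts using only integer arithmetic, with the power-of-two divide done as a shift.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed example setup: 10-bit ADC (0..1023), 5000 mV full scale.
   millivolts = raw * 5000 / 1024, done entirely in integers;
   the divide by 1024 is just a right shift by 10.               */
static uint32_t adc_to_millivolts(uint16_t raw) {
    return ((uint32_t)raw * 5000u) >> 10;
}

int main(void) {
    printf("raw 512  -> %u mV\n", (unsigned)adc_to_millivolts(512));   /* ~2500 */
    printf("raw 1023 -> %u mV\n", (unsigned)adc_to_millivolts(1023));  /* ~4995 */
    return 0;
}
```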

Using floating point numbers for controlling equipment is sometimes a bad idea anyway, since there are more complex, bug-prone rules about which exact values a floating point data type can actually represent - which can give you interesting surprises when trying to make decisions by comparing numbers: Two seemingly different inputs (or a too-large number vs the same number given a too-small increment) might suddenly be represented identically, and always compare as identical, which is just perfect if nonequality is your loop exit condition. Also, some floating point formats know one or more "undefined" values, which also might be treated in ways that break expectations like "(a<b && a>b) will always be false" . Add some of these features being subtly implementation dependent, and the fact that you are using someone else's implementation when you use hardware floating point, and try to make code reliably portable... and there is your perfect storm.
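Two of these surprises are easy to demonstrate in C (assuming IEEE-754 single precision; note, as a commenter points out below, that NaN makes ordered comparisons come out false rather than true):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Absorption: the increment is below single precision's resolution
       at this magnitude, so the "bigger" value compares exactly equal. */
    float big = 1.0e8f;
    float bumped = big + 1.0f;
    printf("big == bumped: %d\n", big == bumped);        /* prints 1 */

    /* NaN is unordered: every <, >, == against it is false, and even
       x == x fails -- awkward if equality is your loop-exit condition. */
    float x = NAN;
    printf("x < 1: %d, x > 1: %d, x == x: %d\n",
           x < 1.0f, x > 1.0f, x == x);                  /* 0 0 0 */
    return 0;
}
```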

The same is true of early PC applications - games could work with fixed point math since they did not really have to work with arbitrary number inputs or high precision, database handling software did not need floating point for anything, spreadsheets were usually not used to handle large AMOUNTS of floating point numbers - and currency can be well expressed in fixed point formats (->integer calculations behind the scenes). Same applied to early multimedia formats. Users that DID need the capability - eg for precision CAD, scientific software, large spreadsheets - bought math coprocessors.

Also, there was likely a chicken-and-egg effect for a while: Since coprocessors weren't that widespread, software authors (of software that did not NEED the performance) often did not bother including support for them even if there was some performance advantage - so no one bought a coprocessor since most of their software made no use of it.

Also, increasing the die size of an IC can cause you big problems when your processes are not yet really high yield. Say you put 10 combined chips on one wafer. You put 10 CPUs and 10 coprocessors each on another. Now you discharge a shotgun at both wafers, each with a load that will blast 10 random holes through each wafer (and silicon defects seem to work just like that). You will not be selling many combined chips.

5
  • IEEE754 FP does make (a<b && a>b) always false; NaN makes comparisons false, not true, so a better example would be that a < b || a== b || a > b can be false for FP but not for integer. If either of a or b are NaN, then they're unordered with respect to each other, rather than one of the usual three of greater, equal or less. So if (a == a) tests for non-NaN. Commented Apr 7, 2018 at 1:19
  • Were all the popular coprocessors compliant with IEEE754? And you kind of prove the point that FP is messy enough to make rockets crash... Commented Apr 7, 2018 at 14:57
  • To your point about modern CPUs: the first set of Android smartphones, shipped in 2008, used ARMv6 without an FPU. Some ARMv6 SoCs had VFP support, but Android didn't try to use VFP unless you built for ARMv7TE. (This is one of those occasions where having your app compiled just-in-time on the target system provides a substantial benefit.)
    – fadden
    Commented Apr 7, 2018 at 18:02
  • This ARMv.... nomenclature, how confusing: I was in disbelief, thinking "it used an ARM6? Now that is ancient" :) Commented Apr 7, 2018 at 21:40
  • @rackandboneman: I don't know anything about early FPUs that weren't IEEE754. I doubt that any would make (a<b && a>b) true, though. ARM NEON isn't IEEE754; but only because it doesn't support denormals (it flushes them to zero instead of supporting gradual underflow). I don't know what kinds of non-IEEE flavours of FP exist, but probably most of them are broadly similar and have extra / unexpected ways for compares to be false (NaN), not for them to be true. Commented Apr 8, 2018 at 3:08
6

One simply did not need a math co-processor. I could do double-precision floating point operations in Basic on a computer with only an 8-bit Z80 CPU (at 1.66 MHz). Yes, it was slow, but we compensated for that by writing efficient algorithms!

Later I got a 16-bit 8088 (at 8 MHz). It was only when I started using AutoCad that I felt the need for an 8087 co-processor, so I paid for one.

With the 486DX came the built-in co-processor, though you could still opt out with the 486SX and then back in with the 487. All your programs would still run without it. But by then software had become dependent on more speed, so you had better opt in. Starting with the Pentium, math was always included.

You could argue the same for the graphics co-processor. We didn't have any but still we could play games, in 128 x 48 x 1 bit resolution...

I am still surprised that my quad-core Xeon plus GTX feels no faster than my Z80 used to feel, despite running 4 x 2000 times faster. Apparently Windows has eaten up all progress. ;-)

1
  • 1
    I think you have forgotten how slow your Z80 was.
    – boatcoder
    Commented Apr 7, 2018 at 11:51
6

Professional production coder / former IBM dealer here.

High Cost is the answer

And the fact that fairly few users needed them. That's it.

Now, the rationale behind the high cost is that math coprocessors were more complex and harder to make, even than the main CPU, as discussed in other answers. To the consumer it boiled down to a math coprocessor or a printer. Not a hard decision.

PCs were a game-changer for small business, but in organizing and filing (not in hard scientific computation). Killer apps were simple databases, word processing, early accounting, and spreadsheets. Spreadsheets sound like math but they're mostly organizing.

Remember, a lot of spreadsheets deal in money, and floating-point for money isn't necessarily a good idea especially in 32-bit with only 7.225 significant digits. Correct money arithmetic is what keeps COBOL relevant, after all.


FP was perfectly possible without a math coprocessor. It was just done in software, slower.

Keep in mind most programmers were very accustomed to not having hardware FP. We coded a lot closer to the bare metal back then, and one simply did not carelessly ask for a bunch of floating point calcs the way you do today -- instead you figured a way to do it in integer math (or to be more precise, fixed point, no disappearing pennies).
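A tiny illustration of the "no disappearing pennies" point (the prices are made up): accumulating a repeated 10-cent charge in binary floating point versus in integer cents.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Add 10 cents, one hundred times: should be exactly $10.00. */
    double  as_float = 0.0;
    int64_t as_cents = 0;
    for (int i = 0; i < 100; i++) {
        as_float += 0.10;   /* 0.10 has no exact binary representation */
        as_cents += 10;     /* integer cents: exact                    */
    }
    printf("float total: %.17f\n", as_float);   /* not exactly 10 */
    printf("cents total: %lld.%02lld\n",
           (long long)(as_cents / 100), (long long)(as_cents % 100));
    return 0;
}
```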

5

IIRC, improved yield was another factor.

The number of properly functioning chips from a given wafer was confidential information, so this is anecdotal - but yields were not that great.

A 486 could be made as a 486DX with a CPU and FPU, 486SX (CPU only) or 487 (FPU). 486DX chips that only passed 1 of the 2 tests could be sold as 486SX or 487.


Re-use/leap-frogging

Another factor was that the new CPU and FPU halves need not become available at the same time.

Case: With a new 386 (CPU), lacking the FPU of the later 387, some motherboards allowed re-using a 287. Although not as fast as the later 386/387, a 386/287 was certainly faster with FP than a 386 alone emulating FP math or a 286/287.

2
  • 1
    There was no physical 487. 487s were just 486DXs with a different pinout (only a few pins were redefined). When inserted, the existing 486SX was disabled and the '487' took over. So if at all, the yield issue was only about producing a DX or an SX.
    – Raffzahn
    Commented Apr 6, 2018 at 21:25
  • @Raffzahn Right you are! A 486DX with a faulty FPU was still a good 486SX, and a later 487 was just a full 486DX that disabled the 486SX. It was still a good deal, as the cost of a 486SX plus the later "487" upgrade was cheaper than the original 486. Commented Apr 6, 2018 at 21:35
4

My very first job was an in-line weighing system using an 8085. At 6.14MHz it just did not have the speed to perform the necessary floating point calculations needed for doing the stats. I cannot remember the chip used but it was a co-processor.

3
  • 2
    I wasn't involved in your task, so I can't say. But doing the impossible on 8-bit platforms was my job for about 7 years. From that background I am very reluctant to put "floating point" and "necessary" in the same sentence. There's always a way if you're backed into a corner designwise. Commented Apr 10, 2018 at 0:19
  • 1
    Agreed, fixed point could have been used but this involved standard deviations and variance calcs over large populations so not your run of the mill stuff really. Just a shot in the dark, you aren't Mr K. Harper are you? Commented Apr 13, 2018 at 17:05
  • Most likely AMD's 9511/12, also second sourced by Intel as 8231/32
    – Raffzahn
    Commented Jul 26, 2022 at 16:54
2

There are already several excellent answers. I'd add that floating-point operations can be implemented in software. For example, the BASIC ROM on my old Commodore 64 -- which was actually written by Microsoft -- included subroutines to perform floating-point arithmetic. The VIC-20 had a nearly identical BASIC, so I'm assuming it had similar support.
... and the interfaces were well documented. So the FP support was available to assembly language and (in theory) to any compiler.

IIRC the Atari 400/800 models also had a Microsoft BASIC, so I'm guessing it would also have had software-based floating-point support; per the comments, though, that Microsoft BASIC was disk (not ROM) based. I can't speak to that generation of Apples, but per Wikipedia, "AppleSoft BASIC" was also authored by Microsoft and had FP support, so I'd guess the Atari version did as well.

Last but not least, software floating-point support via an external library was also a common option on the PC (x86) compilers of the day.

Performing floating-point operations in software was obviously much slower than a numeric co-processor, but this was "good enough" for most users, especially compared with the additional cost of a dedicated "co-pro" -- if that was even an option with your hardware (I'm pretty sure it was not on most "home" computers).
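To give a feel for what "floating point in software" means, here is a heavily simplified sketch of an IEEE-754 single-precision multiply done with integer operations only. It handles only normal, finite values and does no rounding; a real software FP routine, whatever its internal format, also has to deal with rounding, overflow, special values and so on, which is a large part of why it was slow.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Heavily simplified soft-float multiply for normal, finite values only:
   no rounding, NaN, infinity, overflow or denormal handling.            */
static float soft_mul(float a, float b) {
    uint32_t ia, ib;
    memcpy(&ia, &a, 4);
    memcpy(&ib, &b, 4);

    uint32_t sign = (ia ^ ib) & 0x80000000u;
    int32_t  ea   = (int32_t)((ia >> 23) & 0xFF) - 127;   /* unbias exponents */
    int32_t  eb   = (int32_t)((ib >> 23) & 0xFF) - 127;
    uint64_t ma   = (ia & 0x7FFFFFu) | 0x800000u;          /* add implicit 1  */
    uint64_t mb   = (ib & 0x7FFFFFu) | 0x800000u;

    uint64_t m = ma * mb;           /* 48-bit product of 24-bit mantissas */
    int32_t  e = ea + eb;
    if (m & (1ull << 47)) {         /* product in [2,4): renormalise */
        m >>= 1;
        e += 1;
    }
    uint32_t mant = (uint32_t)(m >> 23) & 0x7FFFFFu;  /* drop implicit 1, truncate */
    uint32_t out  = sign | ((uint32_t)(e + 127) << 23) | mant;

    float r;
    memcpy(&r, &out, 4);
    return r;
}

int main(void) {
    printf("%f (hardware: %f)\n", soft_mul(3.5f, -2.25f), 3.5f * -2.25f);
    return 0;
}
```

Even this stripped-down version compiles to a good number of integer instructions on a small CPU, which is why hardware FP was such a dramatic speed-up for the users who actually needed it.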

... and if there wasn't a compelling case (for many/most users) for an external numeric (co)processor, there was even less incentive for CPU manufacturers to integrate those functions onto the main/primary CPU. As other answers have pointed out, there were engineering difficulties involved in doing so ... but my point is, there was very little incentive to try.

5
  • This really answers the question "why did some early computers not supplement the CPUs with external math chips?" which is the opposite of the question. So technically it probably shouldn't have been posted here, but it does provide useful and relevant information.
    – wizzwizz4
    Commented Apr 5, 2018 at 18:46
  • @wizzwizz4 My point was, if most users could get by without an (external) FP chip, that's even less reason to try to integrate it onto the primary CPU.
    – David
    Commented Apr 5, 2018 at 19:28
  • 3
    Atari's BASIC was not derived from any Microsoft product, and indeed does many things quite differently. Microsoft BASIC uses binary floating point, while Atari BASIC uses packed decimal.
    – supercat
    Commented Apr 5, 2018 at 22:25
  • 1
    Atari did have a Microsoft Basic, but it was available on disk, not ROM. They originally bought a license from MS but couldn’t fit it into 8k so they had a third party write a new Basic from scratch. Then they later released the “real” MS Basic port on disk.
    – mannaggia
    Commented Apr 6, 2018 at 2:27
  • Why the minus 1? Having a separate coprocessor on the motherboard specifically allowed the option of leaving the coprocessor socket empty.
    – riderBill
    Commented Apr 9, 2018 at 18:37
1

Since my reputation score at this point isn't high enough to reply directly yet, I wanted to mention 2 things...

The "Math Box" in relation to Atari [Inc.] was found in their vector-graphics based arcade games and it used AMD chips in said box. Such as with the game "Red Baron". I'm pretty sure the much more popular "Battlezone" also used it.

As for Atari BASIC - for the Atari 8-bit computers, not Atari Corp's later ST computers with their much-loathed disc-based BASIC from Metacomco - it was originally designed by Shepardson Microsystems and later revised by OSS. Atari wanted to use Microsoft BASIC and have it contained in an 8K ROM cartridge for the 400/800 computers; that's what the contract with Microsoft stipulated. Microsoft was unable to deliver a version that would run within 8K. Shepardson was hired to get it to work and, after much labor, recommended scrapping it and going with a custom-written version of BASIC instead. As others have mentioned, Microsoft BASIC for the Atari 8-bit was later released on floppy disk but never replaced the ROM-based Atari BASIC in the Atari 8-bit line.

1
  • 1
    And now you have enough reputation to comment, well done (and welcome!). The next step would be to turn this into comments, or even suggested edits on the relevant answers... Commented Jan 28, 2019 at 19:05
