49
\$\begingroup\$

As we know, the colour of a particular beam of light depends on its frequency (or wavelength). Also, isn't that the information which is first captured by digital cameras? Then, why do we use formats like RGB (or CMYK, HSV etc.) to represent colours digitally?

\$\endgroup\$
15
  • 25
    \$\begingroup\$ Have you ever compared the price of a spectrometer that can measure every wavelength of visible light independently to the price of a colorimeter that measures total light filtered by three different colors? \$\endgroup\$
    – Michael C
    Commented Jun 4, 2017 at 8:03
  • 7
    \$\begingroup\$ Mentioning it because it hasn't been mentioned in other answers: we don't just use RGB to represent colour in computer systems. It's the most conventional one since it matches the "native" behaviour of most capture and imaging systems, but there are two other representations that are commonly used: HSV and YUV. It's also worth looking at the details of CIE: human-perceived colour and spectral colour are not the same! \$\endgroup\$
    – pjc50
    Commented Jun 4, 2017 at 19:57
  • 5
    \$\begingroup\$ @pjc50 That's good information that should be in answer. Sounds like you have an answer just begging to be created. Care to create it? \$\endgroup\$
    – scottbb
    Commented Jun 4, 2017 at 22:22
  • 20
    \$\begingroup\$ Your question seems to imply that any colour can be described by a single frequency/wavelength. However, this is not the case: all greys (including white), and many colours such as pink or brown, cannot be described by a single frequency; they are necessarily a combination of several. \$\endgroup\$
    – jcaron
    Commented Jun 5, 2017 at 7:34
  • 15
    \$\begingroup\$ So it would be a set of (wavelength, intensity) tuples. Given that us poor humans only "see" three of those wavelengths (crude approximation), we can then filter out that set to only matching wavelengths. Oh, darn, we end up with three tuples (red, intensity), (green, intensity), (blue, intensity). Commonly known as RGB :-) \$\endgroup\$
    – jcaron
    Commented Jun 5, 2017 at 9:52

8 Answers

12
\$\begingroup\$

I think there are some misconceptions in prior answers, so here's what I think is true. Reference: Noboru Ohta and Alan R. Robertson, Colorimetry: Fundamentals and Applications (2005).

A light source need not have a single frequency. Reflected light, which is most of what we see in the world, need not have a single frequency. Instead it has an energy spectrum, i.e., its energy content as a function of frequency. The spectrum can be measured by instruments called spectrophotometers.

As was discovered in the nineteenth century, humans see many different spectra as having the same color. Experiments are done in which light of two different spectra is generated by means of lamps and filters, and people are asked whether the two appear to be the same color. Such experiments verify that people don't see the spectrum itself, but only its integrals with certain weighting functions.

Digital cameras capture the responses to light of sets of photodiodes covered with different filters, and not the fuller spectrum that you'd see with a spectrophotometer. Three or four different types of filters are used. The result is stored in a raw file output by the camera, although many people suspect that raw files are "cooked" to a greater or lesser extent by camera manufacturers (camera sensors are, of course, highly proprietary). The physiological responses can be approximated by applying a matrix transformation to the raw data.
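
To make that last sentence concrete, here is a minimal sketch of such a matrix transform; the 3x3 matrix is invented for illustration (real matrices are derived per camera model by profiling it against known color targets).

```python
# Sketch only: mapping demosaiced camera values to an estimate of the
# physiological (CIE XYZ tristimulus) response with a 3x3 matrix.
# The matrix entries are made up for illustration, not from a real profile.
import numpy as np

camera_to_xyz = np.array([
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])

raw_rgb = np.array([0.30, 0.55, 0.20])   # demosaiced sensor values, one pixel
xyz = camera_to_xyz @ raw_rgb            # estimated tristimulus response
print(xyz)
```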

For convenience, rather than using approximations to physiological responses, other triples of numbers are employed to name colors, e.g., Lab, described at https://en.wikipedia.org/wiki/Lab_color_space (but note the warning on that page). One must distinguish triples which can express the full range of estimated physiological responses from others, like RGB, which can't. The latter are used because they express the colors which computer screens can actually display. They are the result of conversions from triples like Lab, or from raw data. CMYK is the analogous system for printers.

\$\endgroup\$
6
  • \$\begingroup\$ Correct and succinct answer! A light source need not have a single frequency. \$\endgroup\$
    – user63664
    Commented Jun 7, 2017 at 16:27
  • 1
    \$\begingroup\$ Also, not every shade of color could be reproduced with a single wavelength light source! Send your apprentices to an electronics store to get a brown LED at the next opportunity :) And a cheap tunable light source to reproduce your wavelength-encoded image, too :) \$\endgroup\$ Commented Jun 8, 2017 at 1:31
    \$\begingroup\$ RGB is not a singular term that could or could not describe the full range of colors. sRGB is the de facto standard and cannot describe all human-perceptible tristimulus values - colors - but scRGB is a trivial extension to sRGB that covers the full set by allowing negative values for the three primary colors. #ffff00 is not a pure color, but you can get one by subtracting blue from it. \$\endgroup\$ Commented Jun 8, 2017 at 12:56
  • \$\begingroup\$ @rack if we drop the "cheap" requirement, an electrically controlled thin film might be able to pull off the trick. I don't think the technology exists yet, but I'd love to see it done. \$\endgroup\$ Commented Jun 8, 2017 at 12:59
  • 1
    \$\begingroup\$ CMYK is a pigment optimization of CMY. K (black) is a cheaper pigment than the equivalent mix of CMY, and black text and line art dominate in the print community. So using "4 color" instead of three saves money and makes a less reflective black. \$\endgroup\$
    – mongo
    Commented Feb 12, 2020 at 9:38
47
\$\begingroup\$

The goal of the imaging engineer has always been to capture with the camera a faithful image of the outside world and to present that image in such a way that the observer sees a true-to-life picture. This goal has never been achieved; in fact, even the best images made today fall short. If this goal were to be achieved, you would need sunglasses to comfortably view an image of a sunlit vista.

You are asking why cameras don't capture the entire span of radiant energy that creates the human visual response. Why does the modern camera capture only three narrow segments that we call the primary light colors: red, green, and blue?

The answer falls in the category of how we see, namely the human visual response. Over the years many theories have been proposed regarding how humans see color. So far all have failed to give a satisfactory explanation of every aspect of how we see colors. The span of wavelengths that our eyes are sensitive to covers the range of 400 to 700 millimicrons (nanometers). It is no accident that earth's atmosphere is transparent to this range.

When we stare at a light source, we cannot distinguish any one particular wavelength unless it is presented alone. When we look at a white light source, we are unable to isolate and identify any specific color. Our eye/brain combination interprets the color of the light without analyzing what makes up the mix of frequencies. Capitalizing on this, scientists have proven by experimentation that by mixing only three colors in varying proportions, almost all colors can be produced. In other words, by presenting to the human eye a mix of red, green, and blue in varying intensities, most spectral colors can be reproduced, not exactly but as a close approximation. This was the work of Thomas Young (British, 1773–1829), known as the Young theory of color vision.

Building on Young's theory, James Clerk Maxwell (British, 1831–1879) showed the world the first color photograph ever produced. In 1861 he used three projectors and superimposed the three projected images on a single screen. Each projector was fitted with a colored filter. The three images each represented one of the three light primary colors: red, green, and blue. The projected images were made by taking three separate pictures on three pieces of black-and-white film, each exposed through a filter of one of the three light primaries.

Since that day in 1861, innumerable methods to make and display color pictures have been explored. Early color motion pictures projected feeble color images using just two colors. Edwin Land (American, 1909–1991), founder of Polaroid Corp., experimented with making color pictures using only two primary colors. This has remained a laboratory curiosity. So far, the most faithful color images are made using the three color primaries. However, one man, Gabriel Lippmann (French, 1845–1921), made beautiful color images that captured the entire visible light spectrum. He devised a method that employed black-and-white film with a mirror backing. The exposing light penetrated the film, hit the mirror, and was reflected back into the film. Thus the exposure was made via two transits of the exposing light. The image was composed of silver arranged with a spacing set by the wavelength of the exposing light. When viewed, the film returned only light that matched the wavelengths of the exposing light. One could behold a full color picture that contained no dye or pigment. Unique and beautiful, the Lippmann process remains impractical. Our film and digital cameras fall back on the method used by Maxwell. Perhaps, if you study human vision and color theory, you will be the one who advances our science and obtains the first truly faithful image.

\$\endgroup\$
11
  • 6
    \$\begingroup\$ R,G,B systems are not three narrow or specific colors; each is a relatively broad spectral range, and their relative proportions allow additive color mixing. \$\endgroup\$ Commented Jun 4, 2017 at 16:18
  • 7
    \$\begingroup\$ @ BlueRaja - Danny Pflughoeft - Medical science has just identified small groups of humans with four types of cone cells. Color images can be visualized on black & white TV by specialized rapid flashing of the image. Color blind individuals can regain color vision using special colored glasses. Science progresses day by day. \$\endgroup\$ Commented Jun 4, 2017 at 20:01
  • 3
    \$\begingroup\$ @AlanMarcus even the green filter has a bandwidth of 125 nm; when we define visible as 400-700 nm, including ONE THIRD of the spectrum for your "narrow, specific color" is not correct. One third of the visible range is not a narrowly defined, specific color. \$\endgroup\$ Commented Jun 4, 2017 at 23:19
  • 6
    \$\begingroup\$ @BrandonDube: It's different depending on whether you are capturing or displaying an image. When you're capturing an image, each R, G, B component must have a broad range to mirror human perception. When displaying an image, it's better to have each component be a narrow range in order to achieve a wider gamut. \$\endgroup\$ Commented Jun 5, 2017 at 3:31
  • 2
    \$\begingroup\$ "Unique and beautiful, the Lippmann process remains impractical." - Explain why. Or is it just because silver is expensive? \$\endgroup\$
    – aroth
    Commented Jun 5, 2017 at 7:32
36
\$\begingroup\$

You said,

this is the information that is captured at first by digital cameras.

That is not correct. By themselves, the sensors in most digital cameras respond to a broad band of light frequencies, extending beyond what humans can see into the infrared and ultraviolet. Because they capture such a broad spectrum of light, they are terrible discriminators of wavelength. That is, roughly speaking, digital sensors see in black and white.

For most camera sensors¹, in order to capture colors, colored filters are placed in front of the sensor, called a color filter array (CFA). The CFA turns each sensor pixel (sometimes called a sensel) into a primarily red, green, or blue light sensor. If you were to view the raw sensor data as a black and white image, it would appear dithered, somewhat like a half-toned black-and-white newsprint image. Zooming in at high magnification, the individual pixels of the image would have a checkerboard-like appearance.

Interpreting the individual squares of the raw image data as red, green, or blue as appropriate, you will see a color dithered version of the image, similar to a color half-toned newsprint article.

Bayer color filter array, by user Cburnett, Wikimedia Commons. CC BY-SA 3.0

Through a process called demosaicing either when saving the image data in camera, or in post-processing on a computer, the array of color data is computationally combined to create a full-resolution RGB color image. In the demosaicing process, the RGB value of each pixel is computed by an algorithm that considers not only the pixel's value, but the data in nearby pixels surrounding it as well.
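
To make the demosaicing step concrete, here is a minimal bilinear sketch (assuming an RGGB Bayer layout and NumPy/SciPy); real raw converters use far more elaborate, edge-aware algorithms than this.

```python
# Minimal, illustrative bilinear demosaic for an RGGB Bayer mosaic.
# This is only a sketch of the idea, not the algorithm any particular
# camera or raw converter actually uses.
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """raw: 2-D array of sensor values laid out as an RGGB Bayer mosaic."""
    h, w = raw.shape
    rows, cols = np.mgrid[0:h, 0:w]

    # Which color filter sits over each photosite (RGGB layout assumed).
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    g_mask = (rows % 2) != (cols % 2)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)

    kernel = np.ones((3, 3))               # average over the 3x3 neighborhood
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, raw, 0.0)
        total = convolve2d(known, kernel, mode="same", boundary="symm")
        count = convolve2d(mask.astype(float), kernel, mode="same", boundary="symm")
        rgb[..., ch] = total / count       # mean of the nearby known samples
    return rgb
```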

Then, why do we use the RGB format to represent colours digitally?

We use a trichromatic color model because that's how humans perceive colors. From Wikipedia's Trichromacy article,

The trichromatic color theory began in the 18th century, when Thomas Young proposed that color vision was a result of three different photoreceptor cells. Hermann von Helmholtz later expanded on Young's ideas using color-matching experiments which showed that people with normal vision needed three wavelengths to create the normal range of colors.

Thus, we build cameras that capture what we can see, in a somewhat similar fashion to how we see. For instance, for typical photography that aims to capture and reproduce what we see, it makes little sense to also capture infrared and ultraviolet wavelengths.


  1. Not all sensors use a CFA. The Foveon X3 sensor, used by Sigma DSLRs and mirrorless cameras, relies on the fact that different wavelengths of light penetrate silicon to different depths. Each pixel on the X3 sensor is a stack of red-, green-, and blue-detecting photodiodes. Because each pixel is truly an RGB sensor, no demosaicing is required for Foveon sensors.

    The Leica M Monochrom is an expensive black-and-white only camera that does not have a CFA on the sensor. Because there is no filtering of incoming light, the camera is more sensitive to light (according to Leica, 100%, or 1 stop, more sensitive).

\$\endgroup\$
1
  • 1
    \$\begingroup\$ Except that Bayer filters don't actually use 'Red', 'Green', and 'Blue' filters, all of the diagrams like the one above that are floating around the internet notwithstanding. Our cones aren't actually most sensitive to the same wavelengths we use for RGB color reproductions systems, either. Calling our cones R, G, and B is left over from a time before we were able to measure to what wavelengths each type of cone is most sensitive. The colors of Bayer masks more closely align with the peak sensitivity of our cones than with the wavelengths we use in RGB color reproduction systems. \$\endgroup\$
    – Michael C
    Commented Sep 14, 2020 at 1:48
13
\$\begingroup\$

The reason cameras and displays work in RGB is that our retinas work that way.

Since our eyes encode colors with those components (RGB), it is a very convenient system (although certainly not the only one) for encoding not only pure wavelengths (each of which produces a more or less deterministic combination of retinal responses), but also mixed colors.

The rationale would be: "if any color combination can only be delivered to the brain as a combination of three components, I can cheat the visual system by presenting only a given combination of those isolated, pure components (via an RGB display) and let the visual system decode them as if they were the real thing."

It is interesting to note that, since we are trichromats, most color systems are three-dimensional in nature (Lab, HSV, YCbCr, YUV, etc.), not because of intrinsic physical properties of color, but instead because of the very way our visual system works.
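
As a small illustration, the same color can be re-expressed in several of these three-number systems using nothing beyond Python's standard library (colorsys does not cover Lab or YCbCr, but the point is the same: three coordinates suffice):

```python
# One color, several three-number descriptions (Python standard library only).
import colorsys

r, g, b = 0.8, 0.4, 0.1                      # an arbitrary orange, on a 0..1 scale
print("HSV:", colorsys.rgb_to_hsv(r, g, b))
print("HLS:", colorsys.rgb_to_hls(r, g, b))
print("YIQ:", colorsys.rgb_to_yiq(r, g, b))
```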

\$\endgroup\$
0
12
\$\begingroup\$

An attempt to answer simply:

  • We cannot practically capture enough information to store a complete breakdown, frequency by frequency, of all the different wavelengths of light present, even just within the visible spectrum. With RGB we can describe the colour of a pixel using just three numbers. If we were to capture the entire frequency spectrum of light, every single pixel would require not 3 numbers, but a graph of data. The data transmission and storage would be immense.

  • It's not necessary for our eyes. Our eyes don't just see three single wavelengths; instead, each of our "red", "green" and "blue" receptors captures a partially-overlapping range of wavelengths.

The overlap allows our brain to interpret the relative strengths of the signals as varying colours in between the primaries, so our vision system is already pretty good at approximating an actual wavelength given only the relative signal strengths of the three primaries. An RGB colour model adequately reproduces this same level of information.
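
As a rough sketch of that reduction, the toy code below collapses a sampled spectrum into just three numbers by weighting it with three made-up, partially-overlapping response curves (they are stand-ins, not real cone or filter data):

```python
# Toy illustration: a sampled spectrum reduced to three numbers by three
# overlapping response curves. The curves are invented, not real data.
import numpy as np

wavelengths = np.arange(400, 701, 5)              # nm, 61 samples

def bump(centre, width):
    return np.exp(-((wavelengths - centre) / width) ** 2)

responses = {"R": bump(600, 50), "G": bump(550, 45), "B": bump(450, 35)}

spectrum = bump(580, 30)                          # some arbitrary incoming light
rgb = {name: float((curve * spectrum).sum()) for name, curve in responses.items()}
print(rgb)                                        # 61 samples reduced to 3 numbers
```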

\$\endgroup\$
11
  • 3
    \$\begingroup\$ +1 But you could stress adequately a bit. I mean, you get a lot of the colors with a trichromatic system, but by no means all possible colors. It is also worth noting that cameras with more wavelength bands do exist and the image files they produce are huge. In fact, we are lucky that the trichromatic stimulus works; if it did not, we would have problems with media storage \$\endgroup\$
    – joojaa
    Commented Jun 5, 2017 at 21:08
  • \$\begingroup\$ Indeed, though if the response of the 3 sensor primaries matched the response chart of the color receptors in our eyes, then it would in theory still achieve accuracy in terms of reproducing everything we can see. \$\endgroup\$ Commented Jun 5, 2017 at 23:32
    \$\begingroup\$ No, the curves overlap in a way that makes certain combinations of wavelength distributions send a unique signal. That can not be reproduced with anything other than that exact combination. So unfortunately a tristimulus input will never get you the entire human visual range. \$\endgroup\$
    – joojaa
    Commented Jun 6, 2017 at 4:58
  • \$\begingroup\$ "That can not be reproduced with anything other than that exact combination." - that's kind of what I meant, in theory if your sensor primaries were sensitive with exactly the same curves then it would be 1:1. Say if you got a human retina and put it in a camera and captured the signals coming out of the retina. \$\endgroup\$ Commented Jun 6, 2017 at 5:33
  • 2
    \$\begingroup\$ @ChrisBecke found an explanation here: "The erythropsin in the red-sensitive cones is sensitive to two ranges of wavelengths. The major range is between 500 nm and 760 nm, peaking at 600 nm. This includes green, yellow, orange, and red light. The minor range is between 380 nm and 450 nm, peaking at 420 nm. This includes violet and some blue. The minor range is what makes the hues appear to form a circle instead of a straight line." Source: midimagic.sgc-hosting.com/huvision.htm \$\endgroup\$ Commented Jun 8, 2017 at 1:33
8
\$\begingroup\$

There are two interacting reasons.

Reason (1) is that the eye (usually) receives multiple wavelengths of light from any given point [so to speak]. White light, for instance, is actually [as a rule] a mixture of many diverse wavelengths; there is no "white" wavelength. Similarly, magenta (often called "pink" nowadays, via "hot pink") is a mixture of red and blue, but without green (which would make it appear white). Similarly again, something that appears green might have some lime and some cyan components.

Reason (2), then, is that RGB is how the human eye works — it has red, green and blue sensors.

Thus, combining (1) and (2): to get the human brain to interpret the light signals the same way as it would interpret the original signals, they have to be encoded in its terms.

For instance, if (conversely) the original were (what a person would perceive as) white light, but it were encoded using, say, violet and red sensors — just the two — the reproduction would appear to the human eye as magenta. Similarly, but more subtly or finely… white light that was a mixture of a full range of colours… if this were encoded using, say, violet, yellow and red sensors… this reproduction would appear to the human eye as not a pure white — as (offhand) a yellow-ish off-white. Conversely, it would appear as a pure white to an imaginary alien [and indeed possibly to some real animal] with the same sensors (viz. violet, yellow and red) in its eye.

By the same token… if the original were white — that is, a mixture of a full range of colours — then a human eye perceiving this would encode this in terms of only red, green and blue… and a reproduction using only red, green and blue (in the same proportions) would appear to human perception as a pure white — the point being that information is lost in both cases, but the end result appears perfect, because the losses correspond. Unfortunately, they will correspond exactly only if the sensors [RGB] in the camera have sensitivity curves exactly the same as the sensors [RGB] in the human eye [noting that each sensor is activated by a range of colours] — if, for instance, a lime colour activated each of the red, green and blue sensors by exactly the same amount, across the two cases.
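
Here is a toy numerical sketch of that "the losses correspond" argument (metamerism): the sensor curves and display primaries below are invented for illustration, but they show how a reproduction only has to match the three weighted sums, not the original spectrum.

```python
# Toy numerical sketch of metamerism. The sensor curves and display
# primaries are invented; they are not real cone responses or real
# display spectra.
import numpy as np

wavelengths = np.arange(400, 701, 10)             # nm

def bump(centre, width):
    return np.exp(-((wavelengths - centre) / width) ** 2)

# Three broad, overlapping "sensor" responses (one per row of S).
S = np.vstack([bump(600, 60), bump(550, 55), bump(450, 45)])

original = bump(580, 25)          # some broad original light
target = S @ original             # the three numbers the sensors report

# Three narrow "display primaries" (columns of P) and the drive levels that
# give exactly the same three sensor responses as the original light.
P = np.vstack([bump(610, 8), bump(545, 8), bump(465, 8)]).T
drive = np.linalg.solve(S @ P, target)

print(target)                     # response to the original spectrum
print(S @ (P @ drive))            # same response, from a very different spectrum
```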

\$\endgroup\$
7
  • \$\begingroup\$ I believe that a mixture of light representing every wavelength — let's say in nanometer increments — within the range of most human sensitivity would have a stronger response between the red and green than between the blue and green due to the larger integral summation under the curves near the yellow wavelengths than near the cyan ones: it would appear yellowish. \$\endgroup\$ Commented Jun 5, 2017 at 11:25
  • \$\begingroup\$ @can-ned_food You're forgetting that our brains interpret those signals from the cones in our retinas based on what it expects to see. That is how we can tell a white object is white under both full spectrum sunlight centered on around 5500K and under fairly full spectrum (but not as full spectrum as sunlight) light centered on 2700K such as the light from a tungsten bulb. Only when a significant portion of the spectrum is missing do we have trouble telling a light blue shirt from a white shirt ( in such a case because there is no red or green light present). \$\endgroup\$
    – Michael C
    Commented Aug 30, 2017 at 7:10
  • \$\begingroup\$ @MichaelClark Hmm. Well, even if our vision recognizes the profile of black-body reflection off a perfectly white object (and not merely apparently white for a given incident spectrum), and thus always perceives that object as white, then such a hypothetical ‘egalitarian’ spectra would differ from the expected black-body profile, would it not? \$\endgroup\$ Commented Aug 30, 2017 at 13:48
  • \$\begingroup\$ @can-ned_food Under very limited spectrum light the response from the cones in our retinas can be identical for two different objects with different 'colors' when viewed under fuller spectrum lighting.That's the issue with limited spectrum lighting. In order to perceive 'white', which is not a 'color' but rather a combination of all colors, there must be broad enough spectrum light to create a response in all three sizes of the cones in our retinas. Only if that is the case can our brains, and not eyes, interpret the object as 'white'. \$\endgroup\$
    – Michael C
    Commented Aug 30, 2017 at 22:43
  • \$\begingroup\$ @MichaelClark Yes — or, almost the same, as one surface could be perceived as darker than the other. Anyways, I'm yet not certain that I understood your first comment; I'll need to research that. \$\endgroup\$ Commented Aug 30, 2017 at 23:41
3
\$\begingroup\$

The short answer: Because wavelength is a single value, and the entire range of colors we can perceive is not representable by a single value, any more than the dimensions of a rectangular solid can be represented by a single measurement.

To continue the analogy - you could quote the solid's volume, but there are many different solids with the same volume.

RGB, CMY, HLS, etc., all use three "dimensions" because that's how many you need to adequately describe colors as seen by humans.

Wavelength equates to Hue in the HLS system, but it can't tell you lightness or saturation.
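
A quick illustration with Python's standard colorsys module: keep the hue fixed and vary lightness and saturation, and you get clearly different colors, so a single hue (or wavelength) value cannot name a color on its own.

```python
# Same hue, different lightness/saturation -> different RGB colors.
import colorsys

hue = 30 / 360                                        # an orange-ish hue
for lightness, saturation in [(0.2, 1.0), (0.5, 1.0), (0.5, 0.3), (0.9, 1.0)]:
    r, g, b = colorsys.hls_to_rgb(hue, lightness, saturation)
    print(f"L={lightness:.1f} S={saturation:.1f} ->",
          tuple(round(v, 2) for v in (r, g, b)))
```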

Re "Also, isn't that ([wavelength]) the information which is first captured by digital cameras?", no, it isn't.

As others have noted digicams capture relative intensities of red, green, and blue. (And some have used at least one additional color to give better discrimination in the critical red-to-green region.) Directly measuring the frequency of incoming light would be far more difficult. We just don't have cheap sensors that can do that, certainly not ones that we can make in a grid of several million of them. And we'd still need a way for the camera to measure lightness and saturation.

\$\endgroup\$
0
3
\$\begingroup\$

tl;dr: It is much easier to detect light in three broad parts of the spectrum than to analyse the frequencies accurately. Also, the simpler detector can be smaller. And the third reason: the RGB colourspace mimics the principles of operation of the human eye.


As Max Planck proved, every hot body emits radiation at various frequencies. He suggested and proved that the energy is radiated in bursts, called photons, not continuously as had been supposed before. And from that day, physics was never the same. The only exceptions are the ideal laser/maser, which emits radiation of only one frequency, and gas discharges (neon tubes, ...), which emit radiation at several isolated frequencies.

The distribution of intensity over frequency is called a spectrum. Similarly, detectors also have their own spectra; in that case it is the distribution of the detector's response to radiation of normalised intensity.

As was already noted, white light is white because our eyes are evolution-calibrated to see sunlight, ranging from far infrared to ultraviolet, as white. Leaves, for example, are green because they absorb all the frequencies except the part that we see as green.

Of course, there are detectors that can gather the full spectrum and extract the information from it. They are used in optical emission spectroscopy and in X-ray diffraction and fluorescence techniques, where the chemical composition or microstructure is evaluated from the spectra. For photography this is overkill, except in astrophotography, where we want to evaluate the "chemical" composition and the images are "translated" into false colours. These detectors are either accurate and huge, or small but inaccurate, and you need much more computing power to analyse their output.

The human eye, or any other eye, is not like that. We do not see the chemical composition, or bonding states, of the object. In the eye there are four different types of "detectors":

  • colourless: These are the most sensitive and they work for all visible frequencies. Without them you would not see anything at night.
  • reds: These are most sensitive in the low frequency region. That's why hot things glow red first.
  • greens: These are most sensitive in the middle-to-higher frequency region. That's why hot things turn from red to yellow when heated further.
  • blues: These are most sensitive in the high frequency region. That's why heated things glow white when heated much more. If you could heat them more and more, they would start glowing light blue.

If we look at a rainbow, or at a CD or DVD, we will see colours turning from red to violet. The light beams for a given part of the rainbow are mostly of one particular frequency. Infrared beams are invisible to our eyes and do not excite any cell in the retina. As the frequency increases, the beams start to excite the red "cells" only, and the colour is seen as red. Increasing the frequency further, the beams excite the "reds" mostly and the "greens" a little bit, and the colour is seen as orange. Yellow beams excite the "greens" a bit more...

The sensors in cameras, CCD or CMOS, are excited by light beams of any frequency. To take a picture that our eyes will see in colour, we just mimic the human eye: we use, for example, a Bayer filter. It consists of three colour filters with transmission spectra intentionally similar to those of the cell types in our retina.

The light reflected from a yellow paper illuminated by the Sun excites the "reds" fully (100%), the "greens" fully (100%) as well, and the "blues" only slightly (5%), so you see it as yellow. If you take a picture of it, a similar, say the same, excitation is gathered by the camera. When you look at the image on the screen, the screen sends 100 red photons, 100 green photons and 5 blue photons towards you over a really short period of time. The excitation levels of your retina will be similar to those caused by direct observation, and you will see a photograph of yellow paper.

There is another problem to be solved if we want to reproduce the colours. Using the RGB colourspace we need only three types of light sources per pixel. We can have three colour filters (LCDs work like this), three types of LEDs (LED and OLED panels use that), or three types of luminophores (CRTs used this). If you wanted to reproduce the colour fully, you would need an infinite number of filters/sources per pixel. Trying to simplify the information into a single colour-to-frequency mapping would not help either.

You could also try to reproduce the colour by its temperature. I suppose you would be able to reproduce only red-orange-yellow-white colours, and you would have to heat each pixel to temperatures around 3000 K.

And in all those theoretical cases your eyes would still translate the actual, true colour into its RGB signals and pass them to your brain.

Another problem to solve is how to store the data. A conventional 18 MPx RGB image consists of three matrices of 5184x3456 cells, each value 8 bits in size. That means about 51 MiB of uncompressed data per image. If we wanted to store a full spectrum for every pixel, say at 8-bit resolution, it would be a 5184x3456x256 übermatrix, resulting in roughly a 4 GiB uncompressed file. That means storing the intensities of 256 different frequencies in the range of 430–770 THz, i.e. a resolution of about 1.3 THz per bin.
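
A quick back-of-the-envelope check of those numbers (assuming 8 bits per stored value and no compression):

```python
# Storage needed for one 18 MPx image, uncompressed, 8 bits per value.
width, height = 5184, 3456

rgb_bytes = width * height * 3            # three 8-bit channels per pixel
spectral_bytes = width * height * 256     # 256 8-bit spectral bins per pixel

print(round(rgb_bytes / 2**20, 1), "MiB")        # ~51.3 MiB
print(round(spectral_bytes / 2**30, 2), "GiB")   # ~4.27 GiB
print(round((770 - 430) / 256, 2), "THz per bin")
```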

Totally not worth the effort if I may say...

\$\endgroup\$
2
  • 2
    \$\begingroup\$ Also you can not produce all colors with temperature, as a good portion of human visible space does not exist in the rainbow ;) \$\endgroup\$
    – joojaa
    Commented Jun 5, 2017 at 21:10
    \$\begingroup\$ @scottbb Thank you for the correction, yes I mistook bits for bytes and forgot to divide by 8. \$\endgroup\$
    – Crowley
    Commented Jun 8, 2017 at 13:49