How scientists colorize photos of space


This is all the light in the universe that we can see. It’s just a fraction of what’s out there. Most frequencies of light are actually invisible to us. The light we can see appears red at its lowest frequencies and violet at its highest. This is called the “visible spectrum,” and we see it because cells in our eyes called “cones” interpret light reflecting off of objects. We have three different types of cones that are sensitive to long, medium, and short wavelengths of light, which roughly correspond to red, green, and blue on the visible spectrum. These are the primary colors of light. Every other color is some combination of these three. And that combination is the guiding principle in colorizing black-and-white images.

This portrait was taken in 1911. I know. You came here for space photos. We’re getting there, I promise. It’s one of the first examples of color photography, and it’s actually three black-and-white photos composited together. Russian chemist Sergei Prokudin-Gorskii took three identical shots of this man, Alim Khan, using filters for specific colors of light. One allowed red light to pass through, one allowed green, and one allowed blue. You can really see how effective this filter system is when you compare the red and blue exposures. Look how bright Khan’s blue robe is in the photo on the right, meaning more of that color light passed through the filter. Dyeing and combining the three negatives gives you this.
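
Digitally, that compositing step is straightforward. Here’s a rough sketch in Python with the Pillow library, assuming the three filtered negatives have been scanned to grayscale files; the filenames are hypothetical stand-ins.

```python
from PIL import Image

# Hypothetical scans of the three filtered black-and-white negatives
# (assumes the scans are aligned and the same size)
red_exposure   = Image.open("khan_red_filter.png").convert("L")
green_exposure = Image.open("khan_green_filter.png").convert("L")
blue_exposure  = Image.open("khan_blue_filter.png").convert("L")

# Each grayscale exposure becomes one channel of an RGB image, the
# digital equivalent of dyeing and stacking the negatives
color = Image.merge("RGB", (red_exposure, green_exposure, blue_exposure))
color.save("khan_color.png")
```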

Alright, you get the idea. So let’s take it into space.

The Hubble Space Telescope has been orbiting Earth since 1990, expanding human vision into deep space and giving us images like this one. The thing is, every Hubble image you see started out black-and-white. That’s because Hubble’s main function is to measure the brightness of light coming from objects in space, which is clearest in black-and-white. The color is added later, just like the portrait of Alim Khan. Except today, scientists use computer programs like Photoshop.

Let’s use this photo of Saturn as an example. Filters separate light into long, medium, and short wavelengths. This is called “broadband filtering,” since it targets general ranges of light. Each of the three black-and-white images is then assigned a color based on its position on the visible spectrum. The combined result is a “true color” image, or what the object would look like if your eyes were as powerful as a telescope like Hubble.
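
As a minimal sketch of that channel assignment, assume the three broadband exposures are already loaded as grayscale NumPy arrays; the data here is placeholder and the wavelength ranges are illustrative, not Hubble’s actual filter set.

```python
import numpy as np

# Placeholder data standing in for three broadband exposures
height, width = 512, 512
long_wl   = np.random.rand(height, width)   # long wavelengths   (~600-700 nm)
medium_wl = np.random.rand(height, width)   # medium wavelengths (~500-600 nm)
short_wl  = np.random.rand(height, width)   # short wavelengths  (~400-500 nm)

# Assign each exposure to the channel matching its position on the
# visible spectrum: long -> red, medium -> green, short -> blue
true_color = np.dstack([long_wl, medium_wl, short_wl])  # (H, W, 3), values in [0, 1]
```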

Okay, now one with Jupiter. See how combining the red and green brings in yellow? And then adding blue brings cyan and magenta to fully represent the visible spectrum. Watch this animation two more times and I think you’ll see it.
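
You can check that additive arithmetic directly. A quick sketch with assumed 8-bit channel values:

```python
import numpy as np

# Pure primaries as 8-bit RGB triples
red   = np.array([255, 0, 0])
green = np.array([0, 255, 0])
blue  = np.array([0, 0, 255])

# Light mixes additively: channel values sum, clipped to the 8-bit range
print(np.clip(red + green, 0, 255))         # [255 255   0] -> yellow
print(np.clip(green + blue, 0, 255))        # [  0 255 255] -> cyan
print(np.clip(red + blue, 0, 255))          # [255   0 255] -> magenta
print(np.clip(red + green + blue, 0, 255))  # [255 255 255] -> white
```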

Great, now let’s add another level of complexity. Seeing an object as it would appear to our eyes isn’t the only way to use color. Scientists also use it to map out how different gases interact in the universe to form galaxies and nebulae. Hubble can record very narrow bands of light coming from individual elements, like oxygen and carbon, and use color to track their presence in an image. This is called “narrowband filtering.” The most common application of narrowband filtering isolates light from hydrogen, sulfur, and oxygen, three key building blocks of stars.

Hubble’s most famous example of this is called the Pillars of Creation, which captured huge towers of gas and dust forming new star systems. But this isn’t a “true color” image, like the one of Saturn from before. It’s more of a colorized map. Hydrogen and sulfur are both seen naturally in red light, and oxygen is more blue. Coloring these gases as we’d actually see them would produce red, red, and cyan, and the Pillars of Creation would look more like this. Not as useful for visual analysis.

In order to get a full-color image and visually separate the sulfur from the hydrogen, scientists assign the elements to red, green, and blue according to their place in the “chromatic order.” Basically, that means that since oxygen has the highest frequency of the three, it’s assigned blue. And since hydrogen is red but a higher frequency than sulfur, it gets green. The result is a full-color image mapping out the process by which our own solar system might have formed.
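
Here’s a sketch of that chromatic-order mapping, using approximate emission-line wavelengths (S II near 673 nm, H-alpha near 656 nm, O III near 501 nm) and placeholder data in place of real exposures.

```python
import numpy as np

# Placeholder narrowband exposures, each paired with the approximate
# wavelength of its emission line in nanometers
height, width = 512, 512
exposures = [
    (673, np.random.rand(height, width)),  # sulfur (S II)
    (656, np.random.rand(height, width)),  # hydrogen (H-alpha)
    (501, np.random.rand(height, width)),  # oxygen (O III)
]

# Chromatic order: longest wavelength (lowest frequency) first, so
# sulfur -> red, hydrogen -> green, oxygen -> blue
exposures.sort(key=lambda pair: pair[0], reverse=True)
color_map = np.dstack([image for _, image in exposures])
```

Astrophotographers often call this sulfur-red, hydrogen-green, oxygen-blue assignment the “Hubble palette.”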

The Hubble Space Telescope can record light outside of the visible spectrum too – in the ultraviolet and near-infrared bands. An infrared image of the Pillars of Creation, for example, looks very different. The longer wavelengths penetrate the clouds of dust and gas that block out visible light frequencies, revealing clusters of stars within it and beyond. These images showing invisible light are colored the same way: multiple filtered exposures are assigned a color based on their place in chromatic order. Lowest frequencies get red, middle get green, highest get blue.
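
That recipe generalizes to any three bands, visible or not. A sketch, assuming each exposure comes tagged with its wavelength; the infrared bands below are hypothetical.

```python
import numpy as np

def chromatic_order_rgb(exposures):
    """Stack three (wavelength_nm, image) pairs into an RGB array:
    longest wavelength (lowest frequency) -> red, middle -> green,
    shortest wavelength (highest frequency) -> blue."""
    ordered = sorted(exposures, key=lambda pair: pair[0], reverse=True)
    return np.dstack([image for _, image in ordered])

# Works the same for hypothetical near-infrared bands as for visible ones
bands = [
    (850, np.random.rand(64, 64)),
    (1600, np.random.rand(64, 64)),
    (1100, np.random.rand(64, 64)),
]
infrared_rgb = chromatic_order_rgb(bands)
```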

That raises the question: are the colors real? Yes and no. The color represents real data. And it’s used to visualize the chemical makeup of an object or an area in space, helping scientists see how gases interact thousands of light-years away, giving us critical information about how stars and galaxies form over time. So even if it isn’t technically how our eyes would perceive these objects, it’s not made up, either. The color creates beautiful images, but more importantly, it shows us the invisible parts of our universe.