Thursday, December 19, 2013

Image Processing in Astronomy

"Having finished chopping up his roots, Harry bent low over his book again. It was really very  irritating, having to try and decipher the directions under all the stupid scribbles of the previous owner, who for some reason had taken issue with the order to cut up the Sopophorous Bean and had written in the alternative instruction:

Crush with flat side of silver dagger, releases juice better than cutting.
......(several sentences later).......
Harry crushed his bean with the flat side of the dagger. To his astonishment, it immediately exuded so much juice he was amazed the shriveled bean could have held it all. "
-J.K. Rowling in Harry Potter and the Half-Blood Prince


Basic sciences like physics have come so far that huge, monstrous, expensive beasts like the Large Hadron Collider are required to push the boundaries of scientific knowledge. The days when the lone astronomer spent long, peaceful hours staring through the eyepiece in the remarkable solitude of the mountains are long gone. Today, very little astronomy is done by people physically looking through a telescope. Instead, we have gigantic tubes ("conduits to the cosmos", as Neil deGrasse Tyson puts it), and state-of-the-art, cryogenically cooled sensors take the place of the human eye. These sensors are hundreds of times more sensitive, and far less prone to error and fatigue, than the eye.

In astronomy, the raw data we get from the telescope is Harry's Sopophorous Bean, and the flat side of the silver dagger represents the various image processing techniques and algorithms astronomers have in their arsenal. These techniques can often make the data exude such a surprising amount of information that you'd be amazed the weird-looking stream of numbers could have held it all. Image processing is used in most parts of astronomy today: from basic observations of pulsars to extremely complicated things like radio interferometry, it is a ubiquitous tool. It is what turns the grainy, ugly raw data we get from telescopes into the enchantingly beautiful images of the cosmos that people use as desktop wallpapers.

In this post I'm going to talk a bit about an image processing algorithm known as DRIZZLE. The DRIZZLE algorithm is special because it can be used quite easily by amateur astronomers to get good looking photographs of celestial objects.

The algorithm was developed by Andrew Fruchter and Richard Hook and is used when the low resolution of the CCD sensor results in the image being undersampled. It takes several images of the same portion of the sky, each with a slight shift (a "dither") applied to the telescope, aligns the stars to account for the shifts, and adds all the images together on a finer output grid. This combined image contains more information than any of the individual images, and the final result can appear to have a higher resolution than the sensor itself. It's a bit like human vision: what your left eye sees is almost exactly what your right eye sees, except for a slight shift. Your brain combines these two shifted images to extract information about depth that cannot be obtained from either image on its own.
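To make the shift-and-add idea concrete, here is a heavily simplified sketch in Python with NumPy. The function name, the nearest-cell "drop", and the equal-weight averaging are my own simplifications for illustration; the real DRIZZLE algorithm of Fruchter and Hook shrinks each input pixel's footprint into a "drop" and spreads its flux over the output cells it overlaps, with proper weight maps, which is omitted here.

```python
import numpy as np

def simple_drizzle(images, shifts, scale=2):
    """Combine dithered low-resolution images onto a finer grid.

    A toy sketch of the shift-and-add idea behind DRIZZLE: each input
    pixel is dropped onto the nearest cell of an output grid that is
    `scale` times finer, and overlapping contributions are averaged
    using a weight map. (The real algorithm distributes each pixel's
    flux over a shrunken footprint instead of a single cell.)

    images : list of 2-D arrays, all the same shape
    shifts : list of (dy, dx) offsets of each exposure, in input pixels
    """
    h, w = images[0].shape
    out = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(out)
    ys, xs = np.mgrid[0:h, 0:w]          # input pixel coordinates
    for img, (dy, dx) in zip(images, shifts):
        # Map each input pixel centre onto the fine output grid,
        # accounting for this exposure's dither offset.
        oy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        ox = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(out, (oy, ox), img)     # accumulate flux
        np.add.at(weight, (oy, ox), 1.0)  # count contributions
    # Average where any exposure contributed; leave empty cells at zero.
    return np.divide(out, weight, out=np.zeros_like(out), where=weight > 0)
```

With four exposures dithered by half-pixel steps, e.g. shifts of (0, 0), (0, 0.5), (0.5, 0) and (0.5, 0.5), every cell of a 2x finer output grid receives data, which is exactly why dithering is essential to the technique.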

Here is an image from Wikipedia that shows the difference between a raw image and an image produced by using the DRIZZLE algorithm.
[Image caption: On the left, a single 2400 s F814W WF2 image taken from the HST archive. On the right, the drizzled combination of twelve such images, each taken at a different dither position.]
If you're interested in learning more about Drizzle, you can go to this website. It has a more detailed but still accessible explanation of DRIZZLE, with more examples to help you visualize how the algorithm does what it does.

This is just the tip of the iceberg when it comes to image processing. I'll be blogging about more awesome image processing stuff over the next couple of years.

Until next time! :)
