A few months back I snapped a monochrome image of the edge-on spiral galaxy NGC 4656. ("Snapped," in this case, meaning taking 120 one-minute exposures, aligning and stacking them, then spending a couple of hours in Paintshop trying to bring out the faint spiral arms while removing the not-so-faint light dome from the City of Brighton. If any of you happen to live in Brighton, please consider using more efficient outdoor lighting fixtures.)
At any rate, last Friday night I "colorized" the image by taking about fifteen one-minute exposures each through red, green, and blue filters, again aligning and stacking the data with K3CCD, and finally using Paintshop to merge my new (low-resolution) color image with the original higher-resolution black-and-white image. The resulting color could best be described as pure guesswork. Initially all of the fainter details of the image came out red, reflecting the relatively higher sensitivity of my CCD camera to red light. I then adjusted the color so that the stars were basically white, then adjusted it some more when that resulted in a blue galaxy (which seems pretty unlikely).
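For the programmers in the club: once the frames are registered, the "stacking" part of align-and-stack boils down to averaging. Here's a minimal sketch of the idea in Python with NumPy, using simulated frames as stand-ins (K3CCD does the real alignment; the frame count and noise levels here are just illustrative assumptions):

```python
# Sketch of why stacking works: averaging N aligned frames cuts the
# random noise by roughly sqrt(N), while the signal stays put.
# The "galaxy" here is a fake flat signal plus simulated camera noise.
import numpy as np

rng = np.random.default_rng(0)
signal = np.full((64, 64), 10.0)  # pretend this is the galaxy
frames = [signal + rng.normal(0.0, 5.0, signal.shape) for _ in range(15)]

stacked = np.mean(frames, axis=0)  # the stacked "exposure"

# Compare the noise in one frame vs. the stack of 15:
single_noise = np.std(frames[0] - signal)
stack_noise = np.std(stacked - signal)
print(single_noise / stack_noise)  # should be near sqrt(15), about 3.9
```

That square-root payoff is why 120 one-minute exposures beat a handful of them, even from under a city light dome.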
I only got color data for the area right around the galaxy (the color frames did not overlap very well with the B&W frames, but fortunately the galaxy itself was in all of them). I took the original B&W image at the full resolution of the camera (750 by 580). I took the color frames at half this resolution using a process called 2x2 binning, in which each 2-by-2 group of adjacent pixels is treated as a single pixel. This reduces the effective resolution to about 375 by 290, but each of the resulting "virtual" pixels is four times larger than a standard pixel, and hence catches four times as much light (just like telescope aperture, right?). This process of building a separate high-resolution monochrome image and a lower-resolution color image and then merging them is called "LRGB" imaging, and is standard practice in photographing deep-space objects, because images taken through color filters will generally be even fainter and fuzzier than monochrome images of these objects. So the idea is to get the color from the RGB image, but the detail from the monochrome.
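If you like seeing the arithmetic, 2x2 binning can be sketched in a few lines of NumPy. This is just an illustration of the principle (my camera actually bins on-chip before readout, which is what makes the sensitivity gain real rather than cosmetic): each 2-by-2 block of pixels is summed into one virtual pixel.

```python
# Minimal sketch of 2x2 binning: sum each 2x2 block of a mono frame
# into a single "virtual" pixel. Resolution drops by half in each
# direction; light collected per pixel goes up four-fold.
import numpy as np

def bin_2x2(frame):
    """Sum each 2x2 block of a 2-D frame into one pixel."""
    h, w = frame.shape
    h -= h % 2  # drop a trailing odd row/column if present
    w -= w % 2
    f = frame[:h, :w]
    return f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]

# A full-resolution 750x580 frame bins down to 375x290, as described.
full = np.ones((580, 750))  # rows x columns = height x width
binned = bin_2x2(full)
print(binned.shape)  # (290, 375)
print(binned[0, 0])  # 4.0 -- four times the signal per virtual pixel
```

The LRGB merge then supplies the hue from this coarse-but-bright color data and the fine detail from the full-resolution luminance frame.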
I regret missing Brian Ottum’s talk on “Lazy Astrophotography” last month (family commitments). If there’s an easier way to do this, I’d sure like to know what it is—in the meantime, better get the coffee started, looks like it’s gonna be a clear night....
PS: Does this mean I now have your permission, if not cash, to buy a 31mm Nagler?
(Yes David, go and buy that Nagler, you have my permission! Editor)