

Optics and imaging


In the 1970s and '80s, photography was hard for amateur astronomers. With the limited amount of time, effort and equipment I was prepared to invest, the results were usually poorer than what the human eye could see. Around the turn of the century this turned around completely with the arrival of digital consumer cameras. Astrophotography at the level I am prepared to go to has become simple enough that the results usually show more than the eye can see.

Why is it that astrophotography is more difficult than regular photography – portraits, landscapes, architecture etc.?

Challenges

The objects are faint

There is not a lot of light in the universe; nights outside the cities are quite dark. To record faint objects we use a combination of methods:

  1. Sensitive detector. This is one of the major strengths of digital cameras. The CCD and CMOS detectors get much more signal out of a small number of photons than photographic film did.
  2. High ISO setting. This is actually not always a good idea; you will have to experiment with your camera. In digital cameras, the ISO setting is basically an amplifier gain setting. A very high gain setting can mean that most of what comes out of the amplifier is noise.
  3. Long exposure. This has always been the main method to collect enough light for a good image. The main drawback is that the Earth rotates and the objects in the sky move across the field of view. Long exposures require a camera mount that can track the stars and compensate for the Earth's rotation.
  4. Large aperture. A bigger lens collects more light, making faint objects more accessible. However, large precisely shaped pieces of glass cost a lot of money. Still, this is one reason why the astrophotographer puts the camera lens aside and uses a telescope instead.
  5. Fast f ratio. For a given aperture, a shorter focal length gives a brighter image (the sketch after this list puts numbers on this). Fast f ratios require thicker and more complicated lenses, and we need more money to buy them.
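
To illustrate points 4 and 5: for extended objects, the brightness of the image on the detector scales with exposure time and inversely with the square of the f ratio. A minimal Python sketch; the lens and exposure values are illustrative assumptions, not recommendations:

    # For extended objects, image brightness scales with exposure time
    # and inversely with the square of the f ratio: brightness ~ t / N**2.
    def relative_brightness(exposure_s, f_ratio):
        """Relative brightness of an extended object on the detector."""
        return exposure_s / f_ratio ** 2

    # An f/2.8 lens versus an f/5.6 lens, both exposing for 30 s:
    ratio = relative_brightness(30, 2.8) / relative_brightness(30, 5.6)
    print(f"f/2.8 collects {ratio:.0f}x the light of f/5.6")  # prints 4x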

Not all objects are faint, though. Sometimes the problem is also one of dynamic range. In a star cluster, the brightest stars may be overexposed before the faint ones are detected. Or the sky background – due to twilight or city lights – may be so bright as to overexpose the whole image before the faint object of interest makes an impact on the detector.

A dSLR will normally give you 8-bit JPG images, but if you ask it for raw images, you may get 14-bit numbers out of it. A CCD camera will usually give you 16-bit numbers. More bits mean higher dynamic range. The smallest brightness step recorded is always one. If you have 8-bit data, saturation occurs at 255; with 14-bit data you can count 64 times further, to 16383, before saturation.
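
The arithmetic behind these numbers is simple enough to check in a few lines of Python:

    # Saturation level and dynamic range grow with bit depth: the smallest
    # step is always 1, saturation occurs at 2**bits - 1.
    for bits in (8, 14, 16):
        full_scale = 2 ** bits - 1
        print(f"{bits:2d}-bit: saturates at {full_scale:5d}, "
              f"{full_scale / 255:.0f}x the 8-bit range")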

The objects are small

[Image: a small lunar crescent rising next to a nocturnal hill.]
This image illustrates the issues of faint and small objects. The 10-second exposure shows no stars, and the rising crescent Moon would not impress us without the dark hill in front of the twilight sky beside it.

The objects are not really small, but they are far away, making them appear small. Either way, if we want to see a certain amount of detail on the Moon or a planet we have to magnify it enough so that it covers a significant number of pixels in our image.

  1. Long focal length. The focal length determines how many millimetres on the detector correspond to a degree on the sky (see the pixel-scale sketch after this list). A longer lens makes the object bigger. The downside is that a longer lens costs more and tends to have a slower f ratio, thus making matters worse for faint objects.
  2. Smaller pixels. More pixels per millimetre spread the object over more pixels, which can give better resolution of the object. It may not be obvious, but the downside for faint objects still exists: a given lens, from a given object, collects a given number of photons. Spread the image of the object over more pixels and the number of photons per pixel goes down.
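
The standard pixel-scale formula ties these two items together. A minimal sketch; the pixel size and focal length are chosen purely as example values:

    # Pixel scale: arc seconds of sky covered by one pixel.
    # scale ["/px] = 206.265 * pixel_size [micrometre] / focal_length [mm]
    def pixel_scale(pixel_um, focal_mm):
        return 206.265 * pixel_um / focal_mm

    # Assumed example: 4 micrometre pixels behind a 500 mm lens.
    print(f"{pixel_scale(4.0, 500):.2f} arcsec per pixel")  # ~1.65"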

In addition, smaller pixels may not even deliver more resolution. The resolution is limited by two fundamental factors:

  1. Due to Heisenberg's uncertainty principle, high resolving power requires a large aperture. Before quantum theory, this phenomenon in optics was called diffraction. Call it what you will: if the aperture itself cannot deliver a well-resolved image, putting smaller pixels into the image plane will not help. (The sketch after this list puts numbers on the diffraction limit.)
  2. Due to turbulence in the Earth's atmosphere, only very short exposures can have resolution better than a few arc seconds. Any reasonably long exposure will be blurred, and more pixels or larger aperture will not help.
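
For the first limit, the Rayleigh criterion gives the usual estimate of the diffraction limit of a circular aperture. A sketch in Python; the apertures and the 550 nm wavelength are example assumptions:

    import math

    # Rayleigh criterion: theta = 1.22 * lambda / D (in radians).
    def diffraction_limit_arcsec(aperture_mm, wavelength_nm=550):
        theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
        return math.degrees(theta_rad) * 3600

    for d_mm in (50, 100, 200):
        print(f"{d_mm} mm aperture: {diffraction_limit_arcsec(d_mm):.2f} arcsec")

Comparing these numbers with the few arc seconds of atmospheric seeing shows why, for long exposures, a larger aperture stops paying off in resolution somewhere around 100 to 200 mm.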

High resolution, like long exposure, causes a problem with the movement of the sky due to the Earth's rotation.

The objects move

Some objects themselves move, like satellites or meteors. In addition, the whole sky – stars, planets, the Moon, etc. – appears to move because the Earth rotates. This is not apparent to the naked eye, and short wide-angle exposures will also not show this. However, we often need high resolution (long focal length) to record small objects, and we often need long exposures to record faint objects. Objects that move will be smeared into trails.

[Image: circumpolar star trails above a tree.]
Circumpolar stars making trails.

This can be an aesthetic bonus, such as in images of circumpolar star trails. Whether desired or not, the movement does exacerbate the problem of too little light coming from faint objects. Say, you take a long wide-angle exposure to record a constellation with a satellite and a meteor. The stars might take several seconds to move from one pixel to the next. The satellite will move faster and spend only a fraction of a second on each pixel. The meteor will pass over hundreds of pixels in less than a second. While the stars are recorded well, satellites and meteors may not deposit enough photons per pixel to become visible at all.
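
A back-of-the-envelope calculation shows the scale of the problem. The pixel scale and the satellite and meteor speeds below are assumed, round numbers:

    # Dwell time per pixel = pixel scale / angular speed.
    pixel_scale_arcsec = 50.0  # assumed wide-angle setup, ~50"/px

    # Approximate angular speeds in arc seconds per second of time:
    speeds = {
        "star (sidereal rate)": 15.0,
        "low satellite (~0.5 deg/s)": 1800.0,
        "meteor (~10 deg/s)": 36000.0,
    }
    for name, speed in speeds.items():
        print(f"{name}: {pixel_scale_arcsec / speed:.4f} s per pixel")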

There is nothing we can do to track meteors, because they are unpredictable. Even tracking a satellite will be a challenge. But for stars, planets, Sun and Moon we can use a motorised equatorial mount to compensate for the Earth's rotation and the predictable slow movement of solar system objects against the star background. These mounts are sophisticated mechanical devices, which makes them heavy, cumbersome, and expensive. Even so, at high resolution, they will not be accurate enough, and a guiding feedback mechanism will be needed to put a given star into the same image pixel in spite of drive irregularities. Such feedback could be achieved by putting a human eye behind a second parallel set of optics, or it could be clever software having a peek at the image as it is being exposed. Either way, there would have to be a way to vary the speed of the tracking motor to compensate for the errors in the drive gears and thus make pin-prick stellar images.
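
In software, such a feedback loop reduces to something like the following schematic sketch. Here measure_star_offset and adjust_drive_rate are hypothetical placeholders for the camera readout and the mount control, and the gain and cycle time are assumptions:

    import time

    GAIN = 0.5        # assumed: fraction of the measured error corrected per cycle
    INTERVAL_S = 2.0  # assumed time between guide exposures

    def guide_loop(measure_star_offset, adjust_drive_rate):
        """Repeatedly nudge the drive rate against the measured star drift."""
        while True:
            error_arcsec = measure_star_offset()      # drift since last cycle
            adjust_drive_rate(-GAIN * error_arcsec)   # correct part of it
            time.sleep(INTERVAL_S)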

Consumer equipment

Products for the mass market are cheap. However, they are not designed for our purposes, and it is good fortune if they can do the job. Typically, easy-to-use equipment is less useful for extraordinary tasks. Auto-focus and automatic exposures are designed for run-of-the-mill tasks where the objects are large and bright. They fail at night, when most of the image appears black at short exposure.

It is vital that we can manually set things like ISO, aperture, exposure time, white balance and focus. "Bulb exposure" should be possible, and ideally the camera itself should be able to time exposures of 30 s or more.

In most cases, we need control of focal length as well. The camera lens may have to be removed and replaced by a different "lens", such as a telescope. Although a lot can be done without removing the camera lens, including imaging through a telescope, being able to remove the camera lens gives a lot more flexibility.

This makes a dSLR more useful than a compact digital camera. dSLRs have another advantage. Their detectors are larger, as are their lenses. The larger lenses collect more light and give better diffraction-limited resolution. The larger detectors have larger pixels and each pixel collects more photons, making the images less noisy.

Webcams are used to image planets, as they can quickly take many frames (later to be stacked into a single image), and because their low weight makes them easy to attach to a telescope. With video recording now possible in compact and dSLR cameras, these may be an alternative to the webcam.

Image processing

Our ideal image is one with only signal, i.e. the light from the object of interest. We sometimes find unwanted contributions like noise or a light-polluted night sky in our images. Stacking has become a common weapon to combat noise. The idea is that the noise is a random pattern that changes from one image to the next. Add up several or many images and the noise in them will partially cancel itself out, giving the signal the upper hand. However, you should not rush into image stacking without good cause. A far better way to reduce noise is to take a longer exposure. Only if that is not possible – say, if it would lead to overexposure, star trailing, image blurring – should stacking be used. If you have the choice, using raw images is much better for stacking. The conversion to gamma-corrected, compressed, 8-bit images is best done after stacking.
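
As a minimal numpy sketch of why stacking works, consider synthetic frames with constant signal and random noise; the mean of N frames has roughly 1/sqrt(N) of the single-frame noise. The frame count and noise level are arbitrary assumptions:

    import numpy as np

    # Mean-stacking: random noise partially cancels, the signal stays put.
    rng = np.random.default_rng(0)
    signal = np.full((100, 100), 50.0)  # synthetic constant scene
    frames = [signal + rng.normal(0, 10, signal.shape) for _ in range(16)]

    print(f"noise in one frame: {frames[0].std():.2f}")                     # ~10
    print(f"noise in 16-frame stack: {np.mean(frames, axis=0).std():.2f}")  # ~2.5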

With the noise reduced, image defects from the camera will become apparent, namely bias and dark current. These can be subtracted, provided they are recorded in separate images. Those are called dark frames, because they are taken with no light reaching the detector. Along with dark subtraction, we should talk about division by a flat field. A flat field is also a separate image, taken of an object of uniform brightness. The flat field image will show vignetting from the lens and possibly sensitivity variations between individual image pixels.
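
In numpy form, the calibration boils down to a couple of array operations. A minimal sketch; real pipelines also dark-subtract the flat itself and average several darks and flats:

    import numpy as np

    def calibrate(light, dark, flat):
        """Dark-subtract, then divide by the flat normalised to its mean."""
        flat_norm = flat / flat.mean()   # unity on average
        return (light - dark) / flat_norm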

Much astrophotography is undertaken in order to take an image, to enjoy looking at it, and to show it to others. We want the image to look its best, and so will often apply cosmetic processing: optimise the brightness and contrast (linear and non-linear stretch), perhaps emphasise small detail over large-scale features (unsharp mask), crop away boring outer regions of a large image, re-scale the pixel size to match how the image is used.
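
Two of these cosmetic steps are easy to sketch on an image scaled to floats between 0 and 1; the gamma, radius and amount values are illustrative assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def stretch(img, gamma=0.5):
        """Non-linear stretch: gamma < 1 brightens the faint parts."""
        return np.clip(img, 0.0, 1.0) ** gamma

    def unsharp_mask(img, radius=5.0, amount=1.0):
        """Emphasise small detail by adding back the difference to a blur."""
        blurred = gaussian_filter(img, sigma=radius)
        return img + amount * (img - blurred)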

Many images also have some scientific value. I take images of the Sun to count its spots on a daily basis. I also take images of the northern summer night sky to log the presence and extent of noctilucent clouds. This is visual analysis and is compatible with optimising the visual appearance of the images first.

Quantitative analysis of images – photometry and astrometry – should, however, be done on relatively raw images. By all means, stack to reduce the noise and subtract the dark frame. Perhaps also subtract a sky background. But the more cosmetic processing steps will in many cases have a detrimental effect on the numerical analysis. You can still, of course, do the quantitative analysis first and afterwards make the image look nice as well.
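
As an example of such quantitative analysis, simple aperture photometry needs nothing more than a pixel sum and a sky estimate, which is why it should run on data that has not been stretched or sharpened. A minimal sketch with assumed coordinates and sky level:

    import numpy as np

    def aperture_photometry(img, x0, y0, radius, sky_per_pixel):
        """Sum the pixels in a circular aperture, minus the sky contribution."""
        y, x = np.indices(img.shape)
        in_aperture = (x - x0) ** 2 + (y - y0) ** 2 <= radius ** 2
        return img[in_aperture].sum() - sky_per_pixel * in_aperture.sum()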