Astrophotography captures faint light from distant objects against a mostly black sky, which creates challenges for digital sensors. A single exposure rarely gathers enough light from faint regions before bright stars saturate. To overcome this, we take many short exposures and “stack” them. Stacking improves the signal-to-noise ratio, revealing faint detail in dim areas without overexposing stars.
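To see why stacking works, note that averaging N frames reduces random noise by roughly √N while the signal stays put. A minimal Python sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 10.0        # true brightness of a faint region (arbitrary units)
noise_sigma = 50.0   # per-frame noise, much larger than the signal
n_frames = 100

# Simulate 100 noisy exposures of 1000 pixels sharing the same true value.
frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, 1000))

single_snr = signal / frames[0].std()             # one exposure
stacked_snr = signal / frames.mean(axis=0).std()  # average of all exposures

print(f"single frame SNR: {single_snr:.2f}")   # ~0.2
print(f"stacked SNR:      {stacked_snr:.2f}")  # ~2.0, i.e. sqrt(100) = 10x better
```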
Specialized software (PixInsight, DeepSkyStacker, etc.) aligns your images so they perfectly overlay each other, then combines them into a composite/stacked image. These programs can also use calibration frames — additional images taken to measure and remove artifacts like sensor noise, vignetting, and dust spots. Some calibration frames depend on outdoor conditions and should be captured during your imaging session, while others can be done indoors. See below for more details on how to shoot these.
The brightness of your night sky dramatically affects what your camera can capture. The Bortle scale rates sky darkness from 1 (pristine, remote skies) to 9 (bright city center). In bright skies, faint details in broadband targets like galaxies and star clusters are drowned out by skyglow. Light pollution filters can help somewhat for broadband imaging, but narrowband filters are far more effective in cutting through light pollution and even moonlight.
To find a low-Bortle area near you, check out this light pollution map.
A mount is as important as — or more important than — the optics. For deep-sky imaging, an equatorial mount is essential because it tracks the sky’s rotation without introducing field rotation, and all three of our mounts are of this type.
Calibration frames remove unwanted artifacts from your images:
- Dark frames: shot with the telescope capped, matching your light frames' exposure length and sensor temperature, to measure thermal sensor noise.
- Flat frames: shot through the full optical train against an evenly illuminated source, to map vignetting and dust spots.
- Bias frames: the shortest exposures your camera allows, with the sensor covered, to measure the electronic read-out signal.
Stacking these with your light frames produces a cleaner final image.
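As a hedged sketch of the standard calibration arithmetic (assuming the master frames are per-pixel medians of their stacks, and that the darks were shot at the same exposure length and temperature as the lights):

```python
import numpy as np

def calibrate(light, master_dark, master_flat, master_bias):
    """Apply standard frame calibration to one light frame.

    All inputs are 2-D arrays of the same shape; the masters are
    assumed to be per-pixel medians of their respective stacks.
    """
    # Subtract thermal signal (a matched dark already includes the bias).
    corrected = light - master_dark
    # Normalize the flat so dividing preserves overall brightness,
    # then divide out vignetting and dust shadows.
    flat = (master_flat - master_bias).astype(float)
    flat /= flat.mean()
    return corrected / flat
```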
Once you have the basics down and want to move on to a more advanced setup, these are the topics you will run into and need to learn about.
Focusing the stars on your camera's sensor is arguably one of the most important things to get right. The critical focus zone — the range where stars are sharp — is often just a fraction of a millimeter wide.
Touching the camera to adjust focus can shake the image, and you can’t see focus changes instantly; you have to wait for the mount to settle and the next image to appear.
Motorized focusers solve this by adjusting focus without disturbing the scope. Imaging software like N.I.N.A. can run autofocus routines that move the focuser through a range of positions, measure star sizes at each step, and quickly find the optimal focus.
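Conceptually, an autofocus routine samples star size at several focuser positions and fits a curve to locate the minimum. A simplified sketch of that idea, with hypothetical numbers; the star-size metric (e.g., half-flux radius) is assumed to come from your imaging software:

```python
import numpy as np

# Hypothetical autofocus samples: focuser step -> measured star size (HFR, px).
positions = np.array([4800.0, 4900.0, 5000.0, 5100.0, 5200.0])
hfr       = np.array([ 3.9,    2.8,    2.2,    2.7,    4.0])

# Near focus the V-curve is well approximated by a parabola,
# so fit one and solve for its vertex (the minimum).
a, b, c = np.polyfit(positions, hfr, deg=2)
best_position = -b / (2.0 * a)

print(f"move focuser to step {best_position:.0f}")
```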
Even with good polar alignment, tracking will drift over time due to mechanical flex, small alignment errors, and mount imperfections. Guiding corrects this. A separate app (PHD2) keeps one star at exactly the same pixel location by taking short exposures (~1 second) and sending tiny adjustments to the mount — speeding up or slowing down the RA axis, and nudging the DEC axis as needed.
Seeing — the blurring and twinkling of stars caused by atmospheric turbulence — also makes stars appear to shift. Guiding software must distinguish between real tracking errors and these rapid, random movements to avoid over-correcting.
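The core of the correction logic can be sketched as a proportional controller with a deadband, so small seeing-induced jitters are ignored rather than chased. This is a simplification; PHD2's real algorithms are considerably more sophisticated:

```python
def guide_correction(drift_px, aggressiveness=0.7, min_move_px=0.15):
    """One-axis guiding correction, in pixels (hypothetical parameters).

    drift_px: the guide star's displacement since the last exposure.
    Displacements inside the deadband are treated as seeing and ignored;
    larger ones are only partially corrected to avoid over-shooting.
    """
    if abs(drift_px) < min_move_px:
        return 0.0                        # likely seeing: do nothing
    return -aggressiveness * drift_px     # nudge the star back toward target
```

Lowering the aggressiveness makes corrections gentler, which helps on nights of poor seeing.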
These are the components needed for guiding:
- A guide camera: a small, sensitive camera dedicated to watching a guide star.
- A guide scope or an off-axis guider (OAG): either a small secondary telescope that feeds the guide camera, or a prism that picks a sliver of light out of the main optical path.
An Off-axis Guider should be positioned behind the autofocuser and matched in backfocus so that when the main camera is in focus, the guide camera is also in focus.
Dedicated astrophotography cameras connect directly to a computer (they have no manual controls) and are designed for deep-sky imaging. Many have cooling systems to drop the sensor temperature well below freezing (e.g., -10 °C), greatly reducing thermal noise.
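As a rule of thumb, dark current roughly doubles for every ~6 °C of sensor warming (the exact doubling temperature varies by sensor), so cooling pays off quickly:

```python
def dark_current_ratio(delta_t_c, doubling_temp_c=6.0):
    """Approximate change in dark current for a temperature change of
    delta_t_c degrees Celsius. The ~6 °C doubling temperature is a
    typical value, not a universal constant."""
    return 2 ** (delta_t_c / doubling_temp_c)

# Cooling from a +25 °C summer night down to a -10 °C sensor:
print(f"~{dark_current_ratio(35):.0f}x less dark current")  # about 57x
```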
Cameras come in monochrome and color versions.
Color cameras place tiny color filters over each pixel, typically in an RGGB 2×2 pattern. Each pixel records only one color, and a process called “debayering” estimates the missing two color components from neighboring pixels.
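To make the mosaic concrete, here is a sketch that pulls the raw color samples out of an RGGB frame; real debayering would then interpolate the two missing colors at every pixel, which this sketch skips:

```python
import numpy as np

def split_rggb(bayer):
    """Extract the raw color samples from an RGGB mosaic.

    Each 2x2 tile is laid out as:  R G
                                   G B
    Assumes even image dimensions. Each returned plane is
    half-resolution; full debayering would instead interpolate
    the two missing colors at every pixel.
    """
    r  = bayer[0::2, 0::2]
    g1 = bayer[0::2, 1::2]
    g2 = bayer[1::2, 0::2]
    b  = bayer[1::2, 1::2]
    g  = (g1.astype(float) + g2) / 2   # average the two green samples
    return r, g, b
```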
Monochrome sensors lack these color filters and record only brightness per pixel, offering higher resolution and sensitivity. To capture color with a monochrome camera, we use filters.
With a monochrome camera, filters provide the color information. The sensor records brightness only, so we place different filters in front of it to isolate specific wavelengths of light, later combining the separate exposures into a full-color image.
A filter wheel holds multiple filters and rotates them in front of the sensor under software control.
For broadband targets like galaxies and star clusters, we use Luminance, Red, Green, and Blue (LRGB) filters. The Luminance filter transmits most visible light to capture high-resolution brightness data, while R, G, and B filters record the color channels.
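A minimal sketch of the LRGB idea: the color frames set the hue ratios and the luminance frame sets the brightness. Real tools do this in a perceptual color space; the crude normalization here is only for illustration:

```python
import numpy as np

def lrgb_combine(L, R, G, B, eps=1e-6):
    """Combine mono L, R, G, B frames (same shape, floats) into RGB.

    The R, G, B frames contribute only their ratios (the hue);
    the luminance frame supplies the brightness.
    """
    total = R + G + B + eps
    rgb = np.stack([R, G, B], axis=-1) / total[..., None]   # hue only
    return rgb * L[..., None]                               # apply luminance
```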
For narrowband targets like emission nebulae, we use filters that isolate light from specific ionized gases:
- Hydrogen-alpha (Hα): 656.3 nm
- Doubly ionized oxygen (OIII): 500.7 nm
- Singly ionized sulfur (SII): 672.4 nm
These gases emit strongly at these discrete wavelengths, making the filters excellent at rejecting unwanted light pollution and moonlight. While all three wavelengths are technically in the visible spectrum, they are far too faint for our eyes to perceive their colors directly. In post-processing, we map these wavelengths to visible colors to create false-color images, such as the Hubble SHO palette.
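For instance, the SHO mapping itself is just a channel assignment. A minimal sketch, assuming three aligned, already-stacked channel arrays:

```python
import numpy as np

def hubble_sho(sii, ha, oiii):
    """Hubble palette: SII -> red, H-alpha -> green, OIII -> blue."""
    return np.stack([sii, ha, oiii], axis=-1)
```

In practice each channel is stretched and balanced individually first, since Hα is usually far stronger than the other two.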
The total integration time — the sum of all exposures — is the main factor in revealing faint details. Longer total time improves the signal-to-noise ratio, making subtle structures stand out. Individual sub-exposure length is a balance: longer subs reduce read noise impact but risk star saturation or tracking issues; shorter subs are safer but require more frames.
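A simplified SNR model makes the trade-off concrete: for a fixed total time, shorter subs pay the per-frame read-noise penalty more often. The rates below are illustrative only; real values depend on your sky, optics, and camera:

```python
import numpy as np

def stack_snr(total_s, sub_s, target_rate, sky_rate, read_noise):
    """Approximate per-pixel SNR of a stack (dark current ignored).

    target_rate, sky_rate: signal in electrons/sec/pixel;
    read_noise: electrons per sub-exposure.
    """
    n_subs = total_s / sub_s
    signal = target_rate * total_s
    noise = np.sqrt(signal + sky_rate * total_s + n_subs * read_noise**2)
    return signal / noise

# Two hours total on a hypothetical faint target under moderate skies:
for sub in (30, 120, 300):
    snr = stack_snr(7200, sub, target_rate=0.5, sky_rate=2.0, read_noise=3.5)
    print(f"{sub:>4} s subs: SNR ~ {snr:.1f}")
```

Once the sky noise in each sub swamps the read noise, making subs longer gains very little.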
Stacking is just the first step. Post-processing in tools like PixInsight, Siril, DeepSkyStacker, or AstroPixelProcessor involves stretching the histogram to reveal faint detail, reducing noise, balancing color, and enhancing contrast. This stage is where raw data transforms into a polished astrophotograph, and small adjustments can have a huge impact on the final result.
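The heart of a stretch is a non-linear curve that lifts faint values while preserving the black and white points. One example is the midtones transfer function behind tools like PixInsight's HistogramTransformation; a minimal sketch assuming data normalized to [0, 1]:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps 0 -> 0, m -> 0.5, 1 -> 1.
    A smaller midtones point m lifts faint detail more aggressively."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Example: linear data whose faint detail sits near 0.02 of full scale;
# choosing m = 0.02 pulls that level up to mid-gray.
linear = np.clip(np.random.default_rng(1).gamma(2.0, 0.01, (100, 100)), 0, 1)
stretched = mtf(linear, m=0.02)
```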
Ready to try your hand at taking some images?