Astronomical Distance Scales
Created | Updated May 24, 2015
When you look up into the night sky, most of the light you see comes from stars. But what are they? What do we know about them, and how do we know it? Hopefully, by the time you finish reading this, you'll know some of the answers to these questions.
Properties of Stars
Every amateur astronomer has heard of stellar magnitudes, the system for quantifying the brightness of stars. But what is it actually a measure of? How do we decide what magnitude a star is? To answer this, we need to know a little about human perception. The senses respond logarithmically to a stimulus: equal steps in perceived intensity correspond to equal ratios, not equal increases, in the intensity of the stimulus. Taking the eye as an example, a difference of one magnitude corresponds to a brightness ratio of about 2.5.
The origin of the magnitude system itself goes back to the Greek astronomer and mathematician, Hipparchus, who classified stars on a numerical scale from 1 (brightest) to 6 (faintest). The system was standardised by the British astronomer, Norman Pogson, in 1856, so that a difference of five magnitudes corresponds to a factor of exactly 100 in brightness: a 6th magnitude star is 100 times fainter than a 1st magnitude one.
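On the Pogson scale, the brightness ratio between any two stars follows directly from their magnitude difference. A short sketch of the arithmetic:

```python
# Brightness ratio implied by a magnitude difference on the Pogson scale.
# A difference of 5 magnitudes is defined as a factor of exactly 100,
# so one magnitude corresponds to 100**(1/5), roughly 2.512.

def brightness_ratio(mag_faint, mag_bright):
    """Return how many times brighter the brighter star is."""
    return 100 ** ((mag_faint - mag_bright) / 5)

print(brightness_ratio(6, 1))   # a 6th magnitude star is 100x fainter than a 1st
print(brightness_ratio(27, 6))  # Hubble's limit compared with the naked-eye limit
```

The second call shows just how far beyond the eye the Hubble Space Telescope reaches: a factor of hundreds of millions in brightness.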
The faintest stars that the average human eye can see in dark sky conditions are about magnitude 6 to 7. For comparison, the Hubble Space Telescope can detect stars as faint as magnitude 27.
The system is complicated by the fact that stars can have different magnitudes when observed at different wavelengths, but we still need to have some kind of a reference, because the scale is relative. The solution is to define the star Vega (alpha Lyrae) as having magnitude zero in any wavelength range.
So we know how bright a star appears. But how bright is it really? This will depend on the distance between it and the observer. So how do we measure that? There are several methods available, each one able to measure out to greater distances than the one before it.
For nearby stars, the method of parallax is used. As the Earth orbits the Sun, the apparent positions of nearby stars relative to the background of more distant stars change slightly, returning to their original positions after a year. To find the distance, measurements of the star's position are made six months apart to give an angle, then using some trigonometry the distance in parsecs can be determined.
The parsec is defined as:
The distance at which one astronomical unit subtends one second of arc
-Oxford Dictionary of Astronomy
In other words, if the measured angle is one arc second then the star is one parsec away. One parsec is roughly equal to 3.26 light years. This method was used in 1838 by the German astronomer Friedrich Bessel to accurately determine the distance to the star 61 Cygni. The distance he calculated is not much different from the modern-day preferred value.
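The definition above makes the calculation itself very simple: the distance in parsecs is just the reciprocal of the parallax angle in arc seconds. A sketch, using a rough modern parallax for 61 Cygni of about 0.286 arc seconds:

```python
# Distance from trigonometric parallax. By the definition of the parsec,
# d (in parsecs) = 1 / p (parallax angle in arc seconds).

PARSEC_IN_LIGHT_YEARS = 3.26  # approximate conversion used in the text

def parallax_distance_pc(parallax_arcsec):
    return 1.0 / parallax_arcsec

# 61 Cygni's parallax is roughly 0.286 arc seconds:
d_pc = parallax_distance_pc(0.286)
print(d_pc)                          # about 3.5 parsecs
print(d_pc * PARSEC_IN_LIGHT_YEARS)  # about 11.4 light years
```

Bessel's own figure for 61 Cygni was within about ten per cent of this.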
Unfortunately, the trigonometric parallax method is only reliable up to distances of about 100 parsecs. Beyond this the method of spectroscopic parallax is used. The name is slightly misleading, as the technique has nothing to do with parallax whatsoever. Instead, it uses the fact that the further away a star is, the fainter it appears. If all stars had the same intrinsic brightness, we could determine how distant they are just by accurately measuring their apparent magnitude.
Unfortunately, they are not all the same brightness. Instead we can compare the apparent magnitude of the star with its absolute magnitude, the brightness the star would appear to have if it were at a distance of 10 parsecs. This is determined by looking at the spectrum of the star to find out how hot it is. Using the mathematical relationship between absolute and apparent magnitudes we can then determine the distance to the star.
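The mathematical relationship in question is the distance modulus, m − M = 5 log₁₀(d/10), with d in parsecs. Rearranged for distance, it looks like this:

```python
import math

# The distance modulus relates apparent magnitude m, absolute magnitude M
# (the brightness the star would have at 10 parsecs) and distance d in
# parsecs: m - M = 5 * log10(d / 10). Rearranged to find the distance:

def distance_pc(apparent_mag, absolute_mag):
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A star whose apparent magnitude equals its absolute magnitude
# must, by definition, be exactly 10 parsecs away:
print(distance_pc(4.8, 4.8))  # 10.0
```

Each difference of 5 between m and M moves the star ten times further away, which is why the method runs out of steam once individual stars can no longer be picked out.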
This technique can be used at distances up to about 10,000,000 parsecs, otherwise stated as 10 megaparsecs (Mpc). It enables us to calculate distances to nearby galaxies if individual stars within them can be picked out.
Beyond 10 Mpc
What about beyond that? There are several more distance measurements available to us, each with its own limitations and inaccuracies. One uses a particular type of variable star called Cepheid variables. The magnitude of these stars changes over a constant period. The length of this period depends on the star's absolute magnitude; so, if the period is measured, the absolute magnitude can be found and the distance calculated using the same relationship as was used for the spectroscopic parallax method.
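The Cepheid method can be sketched in two steps: read the absolute magnitude off a period-luminosity relation, then apply the same distance modulus as before. The coefficients below are illustrative only; the exact values depend on the calibration and the waveband observed.

```python
import math

# Sketch of the Cepheid method. The period-luminosity relation used here
# (M = -2.43 * (log10(P) - 1) - 4.05, P in days) is an assumed, illustrative
# calibration; published values differ between studies and wavebands.

def cepheid_absolute_mag(period_days):
    return -2.43 * (math.log10(period_days) - 1) - 4.05

def cepheid_distance_pc(apparent_mag, period_days):
    # Same distance-modulus relationship as spectroscopic parallax:
    absolute_mag = cepheid_absolute_mag(period_days)
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Under this relation a 10-day Cepheid has an absolute magnitude of -4.05,
# tens of thousands of times brighter than the Sun, which is what makes
# Cepheids visible in other galaxies.
print(cepheid_absolute_mag(10))
```

Because Cepheids are so luminous, measuring a period and an apparent magnitude is often all that is needed to place a whole galaxy on the distance scale.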
The last method uses the Doppler effect, the same effect that makes an ambulance siren change tone as it races past an observer. Distant galaxies are rushing away from us at a speed that is directly proportional to their distance from us: the further away they are, the faster they are moving. This causes the light coming from them to be Doppler shifted in a similar way to the sound waves from the ambulance. The speed (v) and the distance (d) of a distant galaxy are related by v = H₀d, where H₀ is the Hubble constant, named after the American astronomer, Edwin Hubble, who discovered the relationship. This is known as Hubble's Law.
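Rearranging Hubble's Law gives the distance directly from the measured recession speed. A minimal sketch, assuming a round value of 70 km/s per megaparsec for H₀ (its exact value is still debated):

```python
# Hubble's Law: v = H0 * d, so d = v / H0.
# H0 is assumed here to be a round 70 km/s per megaparsec.

H0 = 70.0  # km/s/Mpc

def hubble_distance_mpc(recession_speed_km_s):
    return recession_speed_km_s / H0

# A galaxy receding at 7,000 km/s (measured from the Doppler shift
# of lines in its spectrum) comes out at about 100 Mpc:
print(hubble_distance_mpc(7000))
```

Note that any uncertainty in H₀ feeds straight into every distance computed this way, which is part of the calibration problem described below.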
Unfortunately, each of the above methods for measuring distance depends in some way on the method before it in the distance scale. Each system has to be calibrated; the easiest way to do this is by using two methods on the same object to make sure that they give the same result. By the time we come to measure distances using Hubble's Law, the uncertainties have multiplied to produce a large overall error.
Once we know the distances to various objects, we know how long it has taken the light from those objects to reach us. Light from the Moon leaves its surface 1.25 seconds before we see it; therefore, the Moon is 1.25 light seconds away. The Sun is a little over 8 light minutes from the Earth (1 astronomical unit) and the nearest star (Proxima Centauri) is 4.2 light years away. Light from the most distant objects can take billions of years to reach us; some of the objects we see when we look into the night sky have long since reached the ends of their lives.