Stellar Magnitudes
When looking at the sky, some stars appear brighter than others. The ancient Greek astronomer Hipparchus decided to classify stars by how bright they appeared after he spotted a "new star" in the constellation Scorpio. Because no one had previously devised a systematic method of identifying stars, Hipparchus could not be completely sure that the star he had seen was really a new one. He therefore recorded the apparent brightness of each star, and also devised a system of celestial latitude and longitude to map their positions. The Ancient Greeks viewed the heavens as perfect and unchanging, so the sighting of a "new star" was a matter of great importance.
Hipparchus's system classed the twenty brightest stars as being of the "first magnitude", and the faintest stars visible to the naked eye as being of the "sixth magnitude".
In 1856 the English astronomer N. R. Pogson noticed that an average first magnitude star was in fact about 100 times as bright as an average sixth magnitude star.
The equation for apparent magnitude m is m = -2.5 log10(f) + constant, where f is the flux received from the star. This is the system suggested by Pogson: a difference of five magnitudes corresponds to exactly a factor of 100 in flux, and the constant simply fixes the zero point of the scale.
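As a rough illustration, the Python sketch below shows how the Pogson relation turns a flux ratio into a magnitude difference; the function name and the example flux values are purely illustrative, and the constant drops out when two stars are compared.

```python
import math

def magnitude_difference(flux_a, flux_b):
    """Magnitude of star A minus magnitude of star B, given their fluxes.

    From m = -2.5 log10(f) + constant, the constant cancels when two
    stars are compared, leaving m_a - m_b = -2.5 log10(f_a / f_b).
    """
    return -2.5 * math.log10(flux_a / flux_b)

# A star delivering 100 times the flux of another comes out 5 magnitudes
# brighter - that is, its magnitude is 5 *lower*, since the scale runs
# backwards (brighter objects get smaller numbers).
print(magnitude_difference(100.0, 1.0))  # -5.0
print(magnitude_difference(1.0, 100.0))  # 5.0
```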
The problem with apparent magnitudes is that there is no way to tell the difference between a luminous star that is far away and an intrinsically fainter star that is nearby. Astronomers therefore use absolute magnitude, M, to compare the intrinsic brightnesses of stars, as suggested by the Danish astronomer E. Hertzsprung.
The absolute magnitude of a star is defined as the apparent magnitude the star would have if it were placed 10 parsecs away from the Sun.
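The usual way to connect the two magnitudes is the distance modulus, m - M = 5 log10(d / 10 pc), which follows from the inverse-square law for flux. The Python sketch below assumes that relation; the function name is just illustrative.

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """Absolute magnitude M from apparent magnitude m and distance in parsecs.

    Moving a star from distance d to 10 pc changes the received flux by a
    factor of (d / 10)**2 (inverse-square law), which in magnitudes is
    5 * log10(d / 10). Hence M = m - 5 * log10(d / 10).
    """
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

# A hypothetical star of apparent magnitude 5 seen from 100 pc: at 10 pc it
# would appear 100 times brighter, i.e. 5 magnitudes brighter, so M = 0.
print(absolute_magnitude(5.0, 100.0))  # 0.0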
Because the scale is defined so that a difference of five magnitudes corresponds to a factor of 100 in flux, and because fainter stars are given larger numbers, any star brighter than a standard first magnitude star must be assigned a magnitude less than 1. The very brightest stars, for example Sirius at an apparent magnitude of about -1.5, end up with negative magnitudes, which makes the magnitude system seem unnatural.
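As a worked example, taking Vega (apparent magnitude roughly 0) as the traditional zero-magnitude reference, any star delivering a few times Vega's flux must come out with a negative magnitude. This is a minimal Python sketch, with the factor of roughly 3.8 for Sirius taken as an approximate figure rather than a precise measurement.

```python
import math

def apparent_magnitude(flux_ratio_to_reference, reference_mag=0.0):
    """Apparent magnitude of a star whose flux is a given multiple of the
    flux of a reference star of magnitude reference_mag (roughly Vega)."""
    return reference_mag - 2.5 * math.log10(flux_ratio_to_reference)

# Sirius delivers roughly 3.8 times the flux of a zero-magnitude reference
# star, so its apparent magnitude comes out negative.
print(round(apparent_magnitude(3.8), 2))  # about -1.45
```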