At Saturday’s Digital Photography SIG meeting, we got to discussing this subject, and I thought I’d post the essentials here since the topic is so basic and important for all the newbies to understand.
On my 5 MP Sony camera:
o I can take (color) photos that capture (e.g.) 2592 pixels wide and 1944 pixels high.
o So, 2592 x 1944 = 5.04 MP camera.
o Now, each (color) pixel contains 24 bits of information (R = 8 bits, G = 8 bits, B = 8 bits). That lets each pixel be one of 2^24 ≈ 16.8 million unique colors! (A black-and-white photo needs far fewer bits, typically just 8 per pixel.)
o So, the camera has to deal with 2592 x 1944 x 24 = 120.9 Mbits of information in each photo.
o Or, 120.9/8 = 15.1 MBytes of information.
o With a Fine Quality JPEG setting, my camera records a photo file of about 1.9 MB. (The photo usually looks great at this setting. However, I don’t know which of the 12 JPEG quality levels the Fine setting corresponds to. Maybe #12?)
o Hence, the compression amounts to (15.1 – 1.9)/15.1 x 100 = 87.4% of the data (!)
o Where did that 87.4% of the data go when the file was recorded? It isn’t pixels that get thrown away; JPEG runs the image through a math transform that keeps the visually important information and discards the fine detail your eye is least likely to miss. When the file is opened later, the formula reconstructs a close approximation of the original photo. (Remember: the poorer the quality setting, the greater the compression and the more data is lost on each save-and-reopen round trip. This is why JPEG is called a “lossy” format.)
o Some practical uses for this information: estimating how many photos your hard drive (or memory stick) can hold, and understanding the trade-off among resolution, quality, and file size.
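The arithmetic above can be checked with a short script. This is just a sketch of the numbers from this post; the 512 MB memory-stick size at the end is a made-up example, not a figure from my camera.

```python
# Worked example of the raw-size and compression arithmetic above.
width, height = 2592, 1944          # pixels from the 5 MP Sony camera
bits_per_pixel = 24                 # 8 bits each for R, G, B

pixels = width * height             # 5,038,848, i.e. about 5.04 MP
raw_bits = pixels * bits_per_pixel  # about 120.9 Mbits per photo
raw_mbytes = raw_bits / 8 / 1_000_000   # about 15.1 MB uncompressed

jpeg_mbytes = 1.9                   # typical Fine Quality file size
compression_pct = (raw_mbytes - jpeg_mbytes) / raw_mbytes * 100

print(f"{pixels / 1e6:.2f} MP")                    # 5.04 MP
print(f"{raw_mbytes:.1f} MB raw")                  # 15.1 MB raw
print(f"{compression_pct:.1f}% compressed away")   # 87.4% compressed away

# Hypothetical 512 MB memory stick: how many Fine Quality photos fit?
card_mbytes = 512
print(f"~{int(card_mbytes / jpeg_mbytes)} photos on a 512 MB stick")
```

Running it reproduces the figures in the list: about 5.04 MP, 15.1 MB raw, and 87.4% of the data removed by compression.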
Here’s a hot link to the web if you want to explore this further:
Maybe some of our more experienced folks can clean this up as we start to comment on it. I thought this subject was important enough that it should be posted for all to see and learn from.