Image Compression


There's no such thing as a lossless photo or movie, relative to reality.
Reality (whatever that is...) has an incredible spectrum of colors and lightness (frequency & amplitude), and a seemingly infinite resolution of motion and detail.

What a camera can capture of reality is terribly sad, but good enough for a human brain.
Colors. A digital camera can register only 3 colors (Red, Green, and Blue), and most save only 256 levels (8 bits) of light intensity per color, although many sensors can register more levels. There are cameras that can save at higher than JPEG quality: the RAW format, which can store 1024 levels of light intensity per color, or even more (10 bits or more). (Non-RAW cameras should be prohibited!) 1024+ levels are absolutely necessary to take good pictures, but after color and luminance corrections, 256 levels per color channel are enough to show a good quality image in most situations.
Focus. While reality is sharp, an ordinary camera lens can focus on only one distance, and everything outside the plane of sharpness is out of focus: blurred. The "plenoptic camera" or "light field camera" might improve this.
Pixels. And for the final destruction of reality, the image is built up from pixels: squares that crush all detail down to one blurred average value. Anyhow, humans can't handle much information, so there's no problem.
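The "crushing" that pixels do is just block averaging. A minimal sketch in Python (hypothetical 1-D samples, not from any real camera): the bigger the block one pixel covers, the more a sharp edge blurs into an average value.

```python
# Hypothetical sketch: how a pixel grid "crushes" fine detail.
# Each output pixel is the average of a block of finer-grained samples.

def downsample(samples, block):
    """Average consecutive groups of `block` samples into one value."""
    return [sum(samples[i:i + block]) // block
            for i in range(0, len(samples), block)]

# A sharp edge in "reality": 6 dark samples followed by 10 bright ones.
reality = [0] * 6 + [255] * 10

print(downsample(reality, 4))   # [0, 127, 255, 255] -- edge starts to blur
print(downsample(reality, 8))   # [63, 255]
print(downsample(reality, 16))  # [159] -- the whole edge becomes one average
```

The sharp 0-to-255 transition is unrecoverable once it lands inside one pixel.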


And what (little) a digital camera can capture of it, at 16x16 pixels.

Color spaces/models
There are different types of color systems, based on light (additive color mixing) or paint/ink (subtractive color mixing). Scientific registration of light needs a large gamut and lots of exact values, but to view a movie, you only need the colors that matter.
An RGB image is made of three color channels: Red, Green, and Blue. When all 3 channels are merged, you get the full color image. By mixing the 3 RGB color channels, 256^3 = 16777216 tones can be created.
This 16x16 photo has 256 pixels and 256 different colors (although some pixels may look alike, no two pixels here are the same).
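That merging of three 8-bit channels into one 24-bit color can be sketched with a little bit-shifting (an illustrative sketch, not how any particular file format stores pixels):

```python
# Pack three 8-bit channels into one 24-bit RGB value, and unpack again.

def merge(r, g, b):
    """Pack three 0-255 channel values into a single 24-bit integer."""
    return (r << 16) | (g << 8) | b

def split(rgb):
    """Unpack a 24-bit value back into its (R, G, B) channels."""
    return (rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF

print(256 ** 3)            # 16777216 possible tones
print(merge(255, 255, 0))  # yellow as one number: 16776960
print(split(16776960))     # back to channels: (255, 255, 0)
```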

Black+Black+Black = Black. Red+Green+Blue = White.
Red+Green = Yellow. Red+Blue = Magenta. Green+Blue = Cyan.

Another way to construct a color photo is: Lab.
Lab has a Lightness component "L", an "a" component (green-red axis), and a "b" component (blue-yellow axis). Lab has a large gamut, encompassing all colors in the RGB gamut.
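To make the L/a/b split concrete, here is a minimal sketch of converting one 8-bit sRGB pixel to Lab, using the standard sRGB -> XYZ -> CIELAB formulas with the D65 white point (constants are the published standard values, rounded):

```python
# Convert one 8-bit sRGB pixel to CIELAB (D65 white point).

def srgb_to_lab(r8, g8, b8):
    # Undo the sRGB gamma curve to get linear light.
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = lin(r8), lin(g8), lin(b8)

    # Linear RGB to CIE XYZ (sRGB primaries).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # XYZ to Lab, relative to the D65 white point.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = srgb_to_lab(255, 255, 255)
print(round(L), round(a), round(b))  # white: 100 0 0
```

Note how pure white comes out as L = 100 with a and b near zero: all the "color" lives in a and b, all the lightness in L.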

There are other color spaces too, like:
- CMYK (print inks)
- Index (GIF, PNG; contains only the colors actually used)
- CIE Lab (CIELAB) was specifically designed to encompass all colors the average human can see. This is the most accurate color space (but is too complex for everyday use).
- sRGB (standard RGB). sRGB is an RGB color space proposed by HP and Microsoft because it approximates the color gamut of the most common computer display devices. sRGB's color gamut encompasses just 35% of the visible colors specified by CIE.
- Adobe RGB 1998 has a larger gamut than sRGB. Its working space encompasses roughly 50% of the visible colors specified by CIE, improving upon sRGB's gamut primarily in cyan-greens. But compared to sRGB, it has slightly fewer tones within the sRGB area (the sRGB gamut fits inside the Adobe RGB gamut).

There are actually too many kinds: CIE 1931 XYZ, CIELUV, CIELAB, CIE 1964, HSI, HLS (Hue, Lightness, Saturation), HSV or HSB (Hue, Saturation, Value or Brightness), YUV (used in the PAL system of television broadcasting), YIQ (used in NTSC), YCbCr, YPbPr, ...
Read more on Wikipedia.

Most cameras add noise to the image, especially at high ISO (light sensitivity). Noise is a problem: we don't want to see noise. Pure noise is almost impossible to compress losslessly, and a little noise is hard to remove and compress without losing real details too.

What (little) a digital camera can capture, at 16x16 pixels.

+ a little noise

And then there's motion... which we usually break down to 24, 25 or 30 fps (frames per second). We experience a fast sequence of images as motion, but we don't actually save any motion information. Real-time computer animation rendering can contain motion information, as well as much other information like weight and stiffness. Most movie compression methods search for what could be motion differences between two or more frames, so compressed movie data does contain motion information.
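The core idea behind searching for differences between frames can be sketched in a few lines (a toy 1-D "video", not any real codec): instead of storing every full frame, store only the pixels that changed.

```python
# Toy sketch of inter-frame compression: store only the changed pixels.

def frame_diff(prev, cur):
    """Return a list of (index, new_value) for pixels that changed."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, cur)) if p != c]

frame1 = [10, 10, 10, 200, 10, 10]
frame2 = [10, 10, 10, 10, 200, 10]   # the bright pixel moved one step right

print(frame_diff(frame1, frame2))  # [(3, 10), (4, 200)]
```

Two stored values instead of six: the object's movement is implicitly encoded in which pixels changed.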

Destruction of Values
When a movie camera films a still object while slowly moving from, say, left to right, the image sensor destroys the values of "reality" 25 times per second, every frame in a different way! And then some compression method has to deal with all that non-information. The object being filmed did not change one bit!!

One question is: how much movement do we want to consider to be movement? An object can move 1/256 of a pixel now, but do we need to see this? Moving 1.0 or more whole pixels does not destroy the values. Using a higher resolution (more pixels) and accepting only whole-pixel movements could improve compression a lot. A sharper image resulting in a smaller file size... ain't that funny?!
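The whole-pixel point can be demonstrated with a toy 1-D shift (a sketch using linear interpolation and clamped edges; real resampling filters are fancier): a whole-pixel shift just relocates the original values, while a half-pixel shift invents new in-between values on every edge.

```python
# Sketch: whole-pixel shifts preserve values, subpixel shifts alter them.

def shift(row, dx):
    """Shift samples right by dx pixels with linear interpolation (edges clamped)."""
    n = len(row)
    out = []
    for i in range(n):
        s = max(0.0, min(n - 1.0, i - dx))  # source position, clamped
        lo = int(s)
        hi = min(n - 1, lo + 1)
        t = s - lo                          # fractional part
        out.append(round((1 - t) * row[lo] + t * row[hi]))
    return out

row = [0, 0, 255, 255, 0, 0]
print(shift(row, 1))    # [0, 0, 0, 255, 255, 0] -- same values, just moved
print(shift(row, 0.5))  # [0, 0, 128, 255, 128, 0] -- interpolation invents 128s
```

After the whole-pixel shift, a frame difference against the original would be tiny; after the half-pixel shift, every edge pixel has a brand-new value the compressor must encode.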

Don't mind the limited mind
Luckily the human mind is very limited, so an incredible amount of detail can be trashed or altered. If most of the hundreds of pixels per frame are 1/16777216 off-tone, will you be able to see that, in 1/24 of a second? I think not!

Giesbert Nijhuis

Back to top

Back to index