I've been quite familiar with the technology of bitmapped images, and the mathematical gist of their formats, since about 1995. For example, the differences between gif, jpeg, png. There's the RGB color model, but also HSL, HSV, and CMYK.
An image is made up of a rectangular grid of pixels. Each pixel has 3 values, for Red, Green, Blue; together they specify the pixel's color, and all the pixels form the image. In the early 2000s, the alpha channel became important. The alpha channel is a fourth value per pixel, specifying its opacity. It allows you to have semi-transparent images. Transparency is important for images on computers because, for example, you might want an image partially transparent so users can see what's behind it.
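Here's a minimal sketch of how the alpha channel works, with channel values normalized to 0 to 1 (alpha of 1 means fully opaque). The blend formula is the standard "over" compositing operator:

```python
# Alpha compositing ("over" operator): blend one channel of a semi-transparent
# foreground pixel onto a background pixel. Values are floats in [0, 1].
def blend(fg, bg, alpha):
    return alpha * fg + (1 - alpha) * bg

# A 50% transparent red pixel over a white background shows as pink:
red = (1.0, 0.0, 0.0)
white = (1.0, 1.0, 1.0)
pink = tuple(blend(f, b, 0.5) for f, b in zip(red, white))
print(pink)  # (1.0, 0.5, 0.5)
```

With alpha 1 you get the foreground unchanged; with alpha 0 you see only the background.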
One element of digital images i never understood is gamma.
I went to Wikipedia's gamma article, but it is unreadable. But then i found this:
Skipping to the “What is gamma?” section, i found it begins with this paragraph:
The luminance generated by a physical device is generally not a linear function of the applied signal. A conventional CRT has a power-law response to voltage: luminance produced at the face of the display is approximately proportional to the applied voltage raised to the 2.5 power. The numerical value of the exponent of this power function is colloquially known as gamma. This nonlinearity must be compensated in order to achieve correct reproduction of luminance.
That explains it! An explanation that's simple and potent.
So gamma is NOT an intrinsic quality of an image. Rather, it is a value to compensate for the quirks of various display devices.
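The idea fits in a few lines. This is a sketch, assuming signal and luminance are normalized to 0 to 1, using the 2.5 exponent from the quoted paragraph:

```python
# The CRT power law: displayed luminance ≈ (applied signal) ** gamma.
# To compensate, the encoder applies the inverse power ("gamma correction"),
# so the display's physics undoes it and the round trip comes out linear.
GAMMA = 2.5

def crt_response(signal):      # what the display device does
    return signal ** GAMMA

def gamma_correct(luminance):  # what the encoder does to compensate
    return luminance ** (1 / GAMMA)

# The compensation cancels the display nonlinearity:
for L in (0.1, 0.5, 0.9):
    assert abs(crt_response(gamma_correct(L)) - L) < 1e-12
```

Without the compensation, a mid luminance of 0.5 sent straight to the tube would display as 0.5 ** 2.5 ≈ 0.18, far too dark.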
Reading other parts of the article by Charles, i find it excellent. Apparently it is written by a scientist; it explains things precisely, relying on a physics background. In contrast, the hundreds of articles about color i've read in the past 20 years are mostly wishy-washy, or filled with technical arcana indicating the writer doesn't really understand the subject. Going over to his home page www.poynton.com, he turns out to be a notable specialist in digital image formats.
Reading the whole gamma faq article, it is truly illuminating. It reminds me how complex the science of color is, and the technologies related to color, in particular digital image formats and image processing. I have experienced this complexity personally over the past decade, trying to get images right from cameras, video, computer displays, screenshots, scanners. Endless frustration trying to adjust brightness or calibration knobs.
After reading Charles's article, the Wikipedia article about gamma becomes understandable.
Another article on gamma is the PNG (Portable Network Graphics) Specification, Version 1.2, section “13. Appendix: Gamma Tutorial”, at www.libpng.org
The complexity of gamma has 2 main causes: the nonlinear response of display devices to the applied signal, and the nonlinear way human vision perceives brightness.
These are the main reasons, but the real complexity is a lot more than that, involving human understanding of nature (the physics of color), the psychology of perception, and technology and engineering limitations (camera and display devices, image and video formats). It starts the moment you create a digital image with a camera or camcorder, which captures light intensities through a lens onto some light-sensitive medium (CCD, photographic film), which eventually must be converted to some file format for storage. And when you display it, the software needs to read the data, interpret it appropriately, and eventually convert the bits to voltages on your screen.
One surprising fact i learned is that video formats such as mpeg, video broadcast signals such as NTSC, and image formats (jpeg, png) actually store the adjusted (gamma corrected) light intensities, not the unprocessed light intensities.
The reason also seems sensible. For example, for TV, it is much more economical to do the gamma correction processing once and broadcast gamma corrected signals than to have every TV receiver do the gamma correction.
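To illustrate what those stored, gamma corrected values look like: in practice, jpeg and png pixel values are typically assumed to follow the sRGB transfer curve, which is roughly a 2.4 power with a small linear segment near black. A sketch, with values normalized to 0 to 1:

```python
# The sRGB transfer function (encode: linear light -> stored pixel value;
# decode: stored value -> linear light). File pixel values live on the
# encoded side of this pair of functions.
def srgb_encode(linear):
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# Mid linear gray (0.5) is stored as a much larger code value:
print(round(srgb_encode(0.5), 3))  # 0.735
assert abs(srgb_decode(srgb_encode(0.5)) - 0.5) < 1e-9
```

So a pixel value of 128 in an 8-bit file does not mean half the light of 255; it means roughly half the *perceived* brightness.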
Another thing i wondered is whether there are image formats or display systems based on a triangular or hexagonal grid (i.e. like a honeycomb). We know that an image is just a rectangular array of pixels, and monitors are more or less that too.
(Flat panel displays are all rectangular grids. A CRT's display unit is basically lines. Its shadow mask may be a triangular grid, but the CRT's nature with respect to a grid is still an array of lines.)
Note that packing dots in a rectangular array is not optimal with respect to density. The optimal arrangement is a triangular grid. 〔☛ Circle packing〕 So, with respect to resolution for a given display area, a triangular grid is superior to a rectangular one for a display technology. I wonder, then, whether there are display devices based on a triangular grid, or image formats based on one. I'm sure they exist in some specialized niche.
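The density claim is easy to check with a little arithmetic. Packing density here means the fraction of the plane covered by equal circles:

```python
import math

# Square grid: each circle of radius r occupies a 2r × 2r cell,
# so the covered fraction is (pi r^2) / (4 r^2) = pi/4.
square_density = math.pi / 4                 # ≈ 0.785

# Triangular (hexagonal) grid: the densest possible circle packing,
# with density pi / (2 * sqrt(3)).
hex_density = math.pi / (2 * math.sqrt(3))   # ≈ 0.907

# A triangular grid fits about 15% more dots into the same area:
print(round(hex_density / square_density, 3))  # 1.155
```

The ratio is exactly 2/√3, independent of the dot size.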
What made me look into gamma today is when reading this article:
The article details a defect that almost all image processing software has. When you scale an image, the algorithm used in these programs has a flaw, so the resulting scaled image is not optimal. The site gives one particular example input image; when you scale it, the result is very bad. I tested it with ImageMagick, and verified that it also has this defect.
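A minimal sketch of the defect, assuming a simple 2.2 power transfer curve for illustration (real formats use the slightly different sRGB curve): most software averages the stored, gamma encoded pixel values directly, instead of converting to linear light first.

```python
GAMMA = 2.2

def to_linear(v):   # stored pixel value -> linear light
    return v ** GAMMA

def to_encoded(v):  # linear light -> stored pixel value
    return v ** (1 / GAMMA)

black, white = 0.0, 1.0

# Wrong (what most software does): average the encoded values directly.
naive = (black + white) / 2                   # 0.5 encoded
print(round(to_linear(naive), 3))             # 0.218 linear: much too dark

# Right: decode to linear light, average there, re-encode.
correct = to_encoded((to_linear(black) + to_linear(white)) / 2)
print(round(correct, 3))                      # 0.730 encoded = 0.5 linear
```

Averaging black and white should give a gray of 50% linear luminance, but the naive average yields only about 22%, which is why test images like the one on that site fall apart when scaled.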