You know we live in the future because every screen in sight is a high-density display packed with high-resolution, full-color images. Of course, this came at a price, and it's easy to overlook the myriad improvements in how we represent all those beautiful pixels digitally and how we push them onto the screen. Nobody thinks about it, except the folks working behind the screen to store and light up those pixels. Hey, chances are that could be you someday. It's time to learn more about image coding!
[Scott W Harden] has put together a concise primer on representing images in memory. It focuses on the basics of image storage: blue usually comes before green and red in memory (BGR ordering), an alpha (transparency) channel may or may not be present, and bit depths vary widely. Getting these details right up front is crucial for microcontroller projects, where the choice of image format is usually constrained by the memory available for the frame buffer and by the particular screen chosen as the device's display.
Speaking of display details, [Scott] also shares some notes on mapping memory to the screen. When the number of bytes in a row of pixel data doesn't line up neatly with the screen's dimensions, padding the rows can reduce the processing overhead needed to push those pixels to the screen. He also has a trick involving "premultiplied alpha", which bakes the transparency calculation into the image data itself rather than performing it every time the screen is updated. A benchmark rendering one million frames in C# shows the kind of savings you can expect.
Decades of trial and error led us to adopt these schemes, and looking back is practically an archaeological dig, as one hacker discovered while trying to recover a set of digital images from the floppy disks of a 1990s photo-processing service.