Reading those raw digital values from all the pixels and transforming them into a high-quality, full-color photo requires quite a bit more algorithmic work. First, since each pixel senses only a single color (red, green, or blue), an algorithm needs to estimate the other two color values at that location, a step known as demosaicing. Second, the raw data coming off the sensor is very noisy and needs to be cleaned up. Third, the sensed colors do not exactly match those in the real world and need to be corrected. Software image-processing algorithms to do all of this have been around for decades, built on hand-crafted logic and code tuned to try to invert the issues just described. But is this optimal? Designing these algorithms is complicated: it involves tuning knobs on lots of different software blocks that all have to work together, where changing one part affects the others, and vice versa. And despite years of iteration, these hand-designed algorithms make many heuristic design trade-offs in order to keep the code manageable for the many engineers who work on it. The result is a decent, but not incredible, image.
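
To make the demosaicing step concrete, here is a minimal sketch in Python of classic bilinear interpolation on a Bayer mosaic. It is illustrative only: the function name `demosaic_bilinear` and the RGGB layout are assumptions for this example, and real camera pipelines use far more sophisticated, edge-aware interpolation alongside denoising and color correction.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw, pattern="RGGB"):
    """Toy bilinear demosaic of a 2-D Bayer mosaic into an RGB image.

    `raw` is a 2-D float array of sensor values; `pattern` gives the
    repeating 2x2 Bayer layout (assumed RGGB here). Real ISPs replace
    this simple averaging with edge-aware heuristics.
    """
    h, w = raw.shape
    # Boolean masks marking which sensor sites carry each color.
    masks = {c: np.zeros((h, w), dtype=bool) for c in "RGB"}
    for i, c in enumerate(pattern):
        masks[c][i // 2::2, i % 2::2] = True

    # Interpolation kernels: green sites form a quincunx (checkerboard)
    # grid, while red and blue each sit on a rectangular quarter-density
    # grid, so they use different averaging footprints.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

    rgb = np.zeros((h, w, 3))
    for ch, (c, k) in enumerate([("R", k_rb), ("G", k_g), ("B", k_rb)]):
        # Keep only this color's measured samples, zero elsewhere.
        plane = np.where(masks[c], raw, 0.0)
        # Convolving the sparse plane fills each missing site with the
        # average of its measured neighbors; at measured sites the
        # kernel reproduces the original sample exactly.
        rgb[..., ch] = convolve(plane, k, mode="mirror")
    return rgb

# Usage: a random stand-in for raw sensor data.
mosaic = np.random.rand(8, 8)
image = demosaic_bilinear(mosaic)  # shape (8, 8, 3)
```

Even this stripped-down version shows why the blocks interact: the interpolation kernels bake in assumptions about the noise and color statistics of the raw data, so changing the denoising or color-correction stages upstream or downstream shifts what the "right" kernels would be.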