My question comes from my inference that many IT-8 calibration and interpolation algorithms assume 8 bit/channel data and therefore calculate in 8 bit/channel math. Funneling 16 bit/channel image data through an 8 bit/channel IT-8 conversion would not make sense to me.
Does the HDR IT-8 application use 16 bit/channel or 8 bit/channel data? I assume 16 bit/channel. Or do the data values get reduced to 8 bit anyway during the calibration calculations that produce the profile?
Does the conversion/interpolation algorithm (when interpreting the raw image data in the 16 bit/channel file) use 16 bit/channel interpolation in its code, or has it been reduced to 8 bit/channel interpolation?
Frank
IT-8 for HDR: 16 bit or 8 bit based calib. and conversion
Makes perfect sense to me. My concern is using a conversion that can only express X bits of red with data that expresses 2X bits of red. Even if the interpolation is linear between the IT-8 patches, I still have only 1/256th of the possible choices for red. It probably doesn't make any practical difference in most images, and it seems similar to the discussions about scanning in 16 bit versus 8 bit per channel: it usually doesn't matter much unless you have to make large corrections on a poorly exposed or badly color-shifted image. Still...
Frank
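A minimal sketch of the concern Frank describes: if a patch-to-patch linear interpolation is computed in 8 bit/channel math, a 16 bit/channel input collapses to at most 256 distinct output levels, while the same interpolation in 16-bit math preserves nearly the full range. The patch values and both correction functions below are hypothetical, for illustration only, and are not how any particular IT-8 implementation actually works.

```python
import numpy as np

# Hypothetical IT-8 patch data (illustration only): measured scanner values
# and the corresponding reference values, both normalized to [0, 1].
raw_patches = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
ref_patches = np.array([0.0, 0.18, 0.46, 0.72, 1.0])

def correct_16bit(raw16):
    """Linear interpolation between patches, done at full 16-bit precision."""
    x = raw16.astype(np.float64) / 65535.0
    y = np.interp(x, raw_patches, ref_patches)
    return np.round(y * 65535.0).astype(np.uint16)

def correct_8bit(raw16):
    """Same interpolation, but the data is first reduced to 8 bit/channel,
    so the output can take at most 256 distinct values."""
    x8 = np.round(raw16.astype(np.float64) / 257.0).astype(np.uint8)
    y8 = np.round(np.interp(x8 / 255.0, raw_patches, ref_patches) * 255.0)
    return y8.astype(np.uint16) * 257  # expand the 8-bit result back to 16-bit range

raw = np.arange(65536, dtype=np.uint16)     # every possible 16-bit red value
print(np.unique(correct_16bit(raw)).size)   # tens of thousands of distinct levels
print(np.unique(correct_8bit(raw)).size)    # at most 256 levels
```

Running this, the 16-bit path yields tens of thousands of distinct output levels while the 8-bit path yields at most 256, which is the 1/256th of the choices described above.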