IT-8 for HDR: 16 bit or 8 bit based calib. and conversion
PostPosted: Tue May 06, 2003 7:31 pm
My question stems from my inference that many IT-8 calibration and interpolation algorithms assume 8-bit/channel data and therefore do their math in 8 bits per channel. Funneling 16-bit/channel image data through an 8-bit IT-8 conversion would not make sense to me.
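To give a sense of scale for the concern: here is a minimal Python sketch (my own illustration, not the application's actual code) of the worst-case round-trip error if 16-bit/channel values were quantized to 8 bits somewhere in the pipeline and re-expanded afterwards.

```python
# Sketch of the precision concern: pushing 16-bit/channel data through an
# 8-bit stage quantizes each value to one of only 256 levels, so groups of
# adjacent 16-bit codes collapse onto a single 8-bit code.

def to_8bit(v16: int) -> int:
    """Quantize a 16-bit channel value (0..65535) to 8 bits (0..255), rounded."""
    return (v16 * 255 + 32767) // 65535

def to_16bit(v8: int) -> int:
    """Expand an 8-bit channel value back to 16 bits (255 * 257 == 65535)."""
    return v8 * 257

# Worst-case round-trip error over all 65536 possible 16-bit codes:
max_err = max(abs(v - to_16bit(to_8bit(v))) for v in range(65536))
print(max_err)  # prints 128 -- half the 8-bit step of 257 codes
```

That error of up to 128 out of 65535 (roughly half a percent per channel) is exactly what I would hope the IT-8 conversion avoids by staying in 16-bit math throughout.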
Does the HDR IT-8 application use 16-bit/channel or 8-bit/channel data? I assume 16-bit/channel. Or do the data values get reduced to 8 bits anyway during the calibration calculations that produce the profile?
Does the conversion/interpolation algorithm (when interpreting the raw image data in the 16-bit/channel file) interpolate in 16 bits per channel in its code, or has the interpolation been reduced to an 8-bit/channel basis?
Frank