Description
I've encountered odd behavior when processing 32-bit input TIFF stacks in BigStitcher, which makes me wonder whether this bit depth is fully supported. However, I can't find any reference to such a limitation in the wiki, the forum, or the original paper.
Briefly, to replicate the issue I define a new dataset based on a single 32-bit TIFF tile (stack) and re-save it as HDF5. Here is what happens after re-saving, depending on how I tweak the conditions:
Behavior 1: When the option to "load raw data virtually" IS checked, the tile is preserved as 32-bit, with floating-point pixel values. BigDataViewer cannot display the full intensity range, but the underlying data in the .h5 file is float.
Behavior 2: When the option to "load raw data virtually" IS NOT checked, the tile is converted from 32-bit to 16-bit by truncating to integer. Thus, if the input pixel values are < 65535 (as happens when working with processed 16-bit camera stacks), there is no loss, or only a slight loss, of information from dropping the fractional part. If, however, the input pixel values extend above 65535, those values are clipped at 65535, with a dramatic loss of information.
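For illustration only, here is a minimal NumPy sketch of the effect I observe in behavior 2 (this is not BigStitcher's actual conversion code, just a reproduction of the observed result): float values are truncated to integers and capped at the 16-bit maximum.

```python
import numpy as np

# Example float32 tile with values below and above the 16-bit maximum.
tile_f32 = np.array([123.7, 40000.2, 70000.9, 150000.0], dtype=np.float32)

# What I observe after re-saving without "load raw data virtually":
# values are truncated to integers and clipped at 65535.
tile_u16 = np.clip(np.trunc(tile_f32), 0, 65535).astype(np.uint16)

print(tile_u16)  # [  123 40000 65535 65535]
```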
What are the limitations of BigStitcher regarding bit depth? If 16-bit is the limit, it might be advisable to clarify this in the documentation and perhaps show a warning during file parsing.
Define Dataset using: Automatic Loader
Pattern represents: tiles
Re-save as ... HDF5
Compression: deflate
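To check which of the two behaviors occurred, I inspect the pixel type stored in the resulting .h5 file. A rough h5py sketch is below; the dataset path assumes the BigDataViewer-style HDF5 layout (first timepoint, first setup, resolution level 0), and the file name is a placeholder, so adjust both as needed.

```python
import h5py

# Placeholder file name; replace with the actual re-saved dataset.
with h5py.File("dataset.h5", "r") as f:
    # Assumed BigDataViewer layout: timepoint t00000, setup s00, level 0.
    cells = f["t00000/s00/0/cells"]
    print(cells.dtype)  # float32 -> behavior 1; int16/uint16 -> behavior 2
    print(cells.shape)
```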