Output image is 8-bit from 16-bit image input, according to file and ImageMagick compare.
Steps to reproduce:
git clone <this repo>
./build.sh
# Check inputs
file 8b.png
# 8b.png: PNG image data, 1920 x 1200, 8-bit/color RGB, non-interlaced
file 12b.png
# 12b.png: PNG image data, 1920 x 1200, 16-bit/color RGB, non-interlaced
# Convert images
./build/fpnge 8b.png 8b-out.png 100
./build/fpnge 12b.png 12b-out.png 100
# Compare:
compare -metric AE 8b.png 8b-out.png compare.png # gives 0
compare -metric AE 12b.png 12b-out.png compare.png # gives 2.304e+06, i.e. all 1920x1200 = 2,304,000 pixels differ (NumPy cross-check below)
file 12b-out.png
# 12b-out.png: PNG image data, 1920 x 1200, 8-bit/color RGB, non-interlaced
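For a cross-check of the AE result above without ImageMagick, the same count can be computed with OpenCV and NumPy. A minimal sketch, assuming both files decode to the same 1920x1200x3 shape:

import cv2
import numpy as np

a = cv2.imread("12b.png", cv2.IMREAD_UNCHANGED)      # uint16 BGR
b = cv2.imread("12b-out.png", cv2.IMREAD_UNCHANGED)
# AE counts pixels that differ in at least one channel
print((a != b).any(axis=-1).sum())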
Note: 12b-out.png claims to be 8-bit/color. Interestingly, OpenCV still reads it as 16-bit, but with wrong values (see the decode example under Extra extra below).
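To confirm the 8-bit claim really comes from the file header rather than a file(1) heuristic, the bit depth can be read straight out of the PNG IHDR chunk. A minimal sketch; the offsets follow from the PNG spec (8-byte signature, 4-byte chunk length, 4-byte "IHDR" tag, 4-byte width, 4-byte height, then one bit-depth byte and one color-type byte):

import struct

def png_ihdr(path):
    with open(path, "rb") as f:
        header = f.read(26)
    width, height = struct.unpack(">II", header[16:24])
    return width, height, header[24], header[25]  # ..., bit depth, color type (2 = truecolor)

print(png_ihdr("12b.png"))      # (1920, 1200, 16, 2)
print(png_ihdr("12b-out.png"))  # bit depth 8, matching what file reports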
Extra Info:
CPU: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
OS: Debian GNU/Linux 11 (bullseye) (in docker)
Compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110 g++ (Debian 10.2.1-6) 10.2.1 20210110
Extra extra
I stumbled on this via the Python binding here. Below is an example showing the errors:
import cv2
import fpnge
import numpy as np
import pprint

# Random 16-bit RGB test image (1 x 2 x 3)
blank_image = np.random.randint(0, high=np.iinfo(np.uint16).max, size=(1, 2, 3), dtype=np.uint16)
image_bytes = fpnge.binding.encode_view(blank_image.data, blank_image.shape[1], blank_image.shape[0], blank_image.shape[2], blank_image.dtype.itemsize * 8)
image_check: cv2.Mat = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
assert image_check.dtype == blank_image.dtype  # passes
assert image_check.shape == blank_image.shape  # passes
assert (image_check == blank_image).all()      # fails: values differ
# If you print these:
pprint.pprint(image_check)
# array([[[51929, 35260,  2880],
#         [ 6872, 23392, 15276]]], dtype=uint16)
pprint.pprint(blank_image)
# array([[[16395, 48265, 55754],
#         [44091, 24667, 55322]]], dtype=uint16)
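Comparing the two arrays suggests what goes wrong: every decoded sample is the byte-swapped original, with the channel axis reversed because OpenCV returns BGR. For example, 16395 is 0x400B, and swapping the bytes gives 0x0B40 = 2880. PNG stores 16-bit samples big-endian, so this looks like an endianness problem in the 16-bit path. A quick check of that hypothesis against the values above (my own analysis, not taken from the fpnge source):

import numpy as np

blank_image = np.array([[[16395, 48265, 55754],
                         [44091, 24667, 55322]]], dtype=np.uint16)
image_check = np.array([[[51929, 35260,  2880],
                         [ 6872, 23392, 15276]]], dtype=np.uint16)
# Swap the two bytes of each 16-bit sample, then reverse channels (RGB -> BGR)
assert (blank_image.byteswap()[..., ::-1] == image_check).all()  # holds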