Here I go again, after a couple of years of silence... Someday I'll tell the tale of what has kept me busy.

In the meantime, here is an interesting bit that I learned today. DirectX defines various levels of support across many variables, and in Windows 8, some new scalar types were added to HLSL that could provide better performance - the min* types. min12int, however, looked a bit odd to me - it doesn't have an unsigned counterpart, and it's not obvious why it has that precision. Well, it all became clear when I found this tidbit in the GLSL-to-HLSL reference (emphasis mine):

min12int: minimum 12-bit signed integer
This type is for 10Level9 (9_x feature levels) in which integers are represented by floating point numbers. This is the precision you can get when you emulate an integer with a 16-bit floating point number.
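The arithmetic checks out: a 16-bit float has 10 explicit mantissa bits plus an implicit leading bit, so it can represent every integer with magnitude up to 2^11 = 2048 exactly - and -2048..2047 is precisely the range of a 12-bit signed integer. Here's a quick sketch that demonstrates this by round-tripping integers through half precision using Python's stdlib struct module (the 'e' format is IEEE 754 binary16):

```python
import struct

def to_half_and_back(x):
    """Round-trip a value through IEEE 754 half precision (binary16)."""
    return struct.unpack('<e', struct.pack('<e', float(x)))[0]

# Half has a 10-bit mantissa plus an implicit leading bit, so every
# integer with magnitude up to 2**11 = 2048 survives the round trip.
assert all(to_half_and_back(i) == i for i in range(-2048, 2049))

# Past 2**11 the spacing between representable values becomes 2,
# so odd integers are lost: 2049 rounds to 2048.
assert to_half_and_back(2049) == 2048

print("every integer in -2048..2047 is exact in a 16-bit float")
```

So on 9_x feature levels, where an "int" is really a half float under the hood, 12 bits of signed integer precision is the honest guarantee - which also explains the missing unsigned counterpart, since the sign bit comes for free with the float representation.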

And now you know too.

Enjoy!