Someone suggested I ask this question in the development forums, and this forum seemed the closest match to where the answer is likely to be found.
I see this question on boards from other websites, but nobody seems to want to ask the people who make the actual CPUs. Are integers faster than floats, as they were when your company first started creating processors? Are integers helpful for graphics using OpenGL, Vulkan, or DirectX? If the goal were to scan a human being in three dimensions and display the scan on a monitor for medical purposes, and all the measurements were in microns, would it be better to store them as integers or as floats?
This information is easy enough to find, but understanding it can be challenging. For example, Appendix C of the Intel Optimization Reference Manual (document 248966) contains instruction latency and reciprocal throughput data for many recent Intel processors. Even more data is available from Agner Fog's comprehensive testing (e.g., http://www.agner.org/optimize/instruction_tables.pdf).
There is a huge amount of data in these resources, but the short answer is that, in most cases, floating-point arithmetic has slightly higher latency than integer arithmetic but the same or better throughput (for operands of the same bit width).
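To make the latency-versus-throughput distinction concrete, here is a minimal C sketch (the loop counts and the choice of four accumulators are my own, not taken from either reference). A single dependent chain of floating-point adds can only run at one add per add-latency, while several independent accumulators give the out-of-order core work it can overlap, approaching the throughput limit instead. Compile with optimization but without reassociation (e.g., gcc -O2, no -ffast-math) so the compiler preserves the dependence chain:

```c
#include <stdio.h>
#include <time.h>

#define N 100000000L

int main(void)
{
    clock_t t0, t1;

    /* Latency-bound: each add depends on the previous result, so the
       loop runs at roughly one add per FP-add-latency cycles. */
    double chain = 0.0;
    t0 = clock();
    for (long i = 0; i < N; i++)
        chain += 1.0;
    t1 = clock();
    printf("dependent chain: %.2f s (sum = %.0f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, chain);

    /* Throughput-bound: four independent accumulators let the
       out-of-order core overlap adds, so the same number of
       additions can approach the FP adder's issue rate. */
    double a0 = 0.0, a1 = 0.0, a2 = 0.0, a3 = 0.0;
    t0 = clock();
    for (long i = 0; i < N; i += 4) {
        a0 += 1.0;
        a1 += 1.0;
        a2 += 1.0;
        a3 += 1.0;
    }
    t1 = clock();
    printf("4 accumulators:  %.2f s (sum = %.0f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, a0 + a1 + a2 + a3);
    return 0;
}
```

On most recent cores the second loop should run noticeably faster even though it performs exactly the same number of additions; that gap is the latency/throughput difference the tables quantify.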
There are zillions of caveats required here, among them:
In cases where you can use packed byte (8-bit) or word (16-bit) integer values, SIMD instructions operate on twice (or four times) as many elements per instruction as with 32-bit values, which can increase arithmetic throughput correspondingly (see the sketch below). Using packed 8-bit or 16-bit values also (typically) reduces the amount of data moved through the memory hierarchy, which can increase effective throughput further.
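To illustrate that caveat, here is a hedged sketch using x86 AVX2 intrinsics (my own example, assuming an AVX2-capable CPU and a compiler invocation like gcc -O2 -mavx2): one 256-bit add processes sixteen 16-bit elements at once, while the same sixteen values held as 32-bit elements need two adds of eight elements each.

```c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The same sixteen values stored at two different widths. */
    int16_t a16[16], b16[16], r16[16];
    int32_t a32[16], b32[16], r32[16];
    for (int i = 0; i < 16; i++) {
        a16[i] = a32[i] = (int16_t)i;
        b16[i] = b32[i] = (int16_t)(100 + i);
    }

    /* 16-bit lanes: all sixteen sums in a single 256-bit add. */
    __m256i va = _mm256_loadu_si256((const __m256i *)a16);
    __m256i vb = _mm256_loadu_si256((const __m256i *)b16);
    _mm256_storeu_si256((__m256i *)r16, _mm256_add_epi16(va, vb));

    /* 32-bit lanes: the same work takes two 256-bit adds,
       eight elements at a time. */
    for (int i = 0; i < 16; i += 8) {
        __m256i xa = _mm256_loadu_si256((const __m256i *)(a32 + i));
        __m256i xb = _mm256_loadu_si256((const __m256i *)(b32 + i));
        _mm256_storeu_si256((__m256i *)(r32 + i), _mm256_add_epi32(xa, xb));
    }

    for (int i = 0; i < 16; i++)
        printf("%d %d\n", r16[i], r32[i]);
    return 0;
}
```

Halving the element width here doubles the elements processed per instruction, and the narrower arrays also occupy half the space in cache and memory, which is the data-traffic benefit mentioned above.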