Has anybody come across an implementation, or made one themselves, that addresses the occlusion problems that arise during hand tracking? I came across a paper (Tracking a Hand Manipulating an Object, Hamer et al.) that discusses an approach to this, but I have been unable to track down the source code, and I'm fairly sure it's not for RealSense. I'd love to know if this exists, or even if a simpler version that could be built upon exists — I really need it for an assistive technology research project. Thanks!
The problem is that your hand casts a shadow, so the projected IR light pattern never reaches the occluded part of the object, and the camera has nothing to measure depth from in that region.
Using multiple cameras might help you here, since you would be observing the object from another perspective. The catch is that these cameras are active, which means their projectors will interfere with each other. You could either synchronize their laser emissions/readings so they don't fire at the same time, or use the second camera purely as a passive stereo pair, which doesn't interfere with the active one — but passive stereo needs good texture/features on the object to perform the stereo matching.
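To illustrate why passive stereo needs texture, here is a toy 1-D block-matching sketch (not RealSense-specific — just a hypothetical sum-of-absolute-differences matcher on synthetic scanlines). On a textured scanline the correct disparity is recovered; on a flat, textureless one every candidate match looks identical and the result is meaningless:

```python
TRUE_DISP = 4  # known ground-truth shift between the two views
WIN = 3        # half-window for the SAD comparison

def make_right(left, disp):
    # Right view of a fronto-parallel scene: right[x] sees left[x + disp].
    return [left[min(i + disp, len(left) - 1)] for i in range(len(left))]

def match_disparity(left, right, x, max_disp=8, win=WIN):
    # Find the shift d minimizing sum-of-absolute-differences (SAD)
    # between a window around left[x] and a window around right[x - d].
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        cost = sum(abs(left[x + k] - right[x - d + k])
                   for k in range(-win, win + 1))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Textured scanline: intensities vary, so each window is distinctive.
textured = [(i * 37) % 97 for i in range(64)]
# Textureless scanline: constant intensity, all windows look the same.
flat = [50] * 64

right_tex = make_right(textured, TRUE_DISP)
right_flat = make_right(flat, TRUE_DISP)

print(match_disparity(textured, right_tex, 32))   # → 4 (true disparity)
print(match_disparity(flat, right_flat, 32))      # → 0 (ambiguous match)
```

Real pipelines (e.g. OpenCV's `StereoBM`/`StereoSGBM`) do the same thing per pixel over 2-D windows, and they fail in exactly the same way on featureless surfaces — which is why the RealSense projector adds an IR dot pattern in the first place.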