To rotate a 3D object, I can move the mouse cursor over the object, press and hold the left button, and then move the mouse around to rotate it. How can I do the same thing with my hand?
That may be a new solution, but I just want a mouse simulator. That should be in the SDK.
For my problem I need the screen coordinates of the hand cursor; then I can translate them into 3D world coordinates.
Another problem is how to lock/release a position and generate the relative position of the hand motion.
Can I get real-world 3D coordinates of my hands from the cameras? Where is the origin?
Yes, you can.
Get the raw depth stream and then use the projection utilities to get the real-world 3D coordinates of the pixels that represent the hand:
```cpp
/**
    @brief Map depth coordinates to world coordinates for a few pixels.
    @param[in]  npoints   The number of pixels to be mapped.
    @param[in]  pos_uvz   The array of depth coordinates + depth value in the PXCPoint3DF32 structure.
    @param[out] pos3d     The array of world coordinates, in mm, to be returned.
    @return PXC_STATUS_NO_ERROR   Successful execution.
*/
virtual pxcStatus PXCAPI ProjectDepthToCamera(pxcI32 npoints, PXCPoint3DF32 *pos_uvz, PXCPoint3DF32 *pos3d)=0;
```
Take a look at pxcprojection.h for more info.
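A minimal sketch of calling it, assuming you already hold a valid `PXCProjection*` (how you create it depends on your capture pipeline) and have located the hand pixel in the depth image. The pixel and depth values below are placeholders, not real tracking output; note that the origin of the returned coordinates is the depth camera itself:

```cpp
// Map one depth pixel to a real-world 3D point via ProjectDepthToCamera.
// `projection` is assumed to be a valid PXCProjection* from your capture
// setup; (u, v) and z stand in for the hand pixel you actually tracked
// (e.g. the palm center).
int u = 320, v = 240;        // depth-image pixel of the hand
pxcF32 z = 500.0f;           // depth value at that pixel, in mm

PXCPoint3DF32 pos_uvz;       // input: depth coordinates + depth value
pos_uvz.x = (pxcF32)u;
pos_uvz.y = (pxcF32)v;
pos_uvz.z = z;

PXCPoint3DF32 pos3d;         // output: world coordinates, in mm
pxcStatus sts = projection->ProjectDepthToCamera(1, &pos_uvz, &pos3d);
if (sts >= PXC_STATUS_NO_ERROR) {
    // pos3d is in the camera coordinate system: the origin sits at the
    // depth camera, and the units are millimeters.
}
```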
I can capture the world coordinates of the hand from a Java example in the SDK. It is odd that Intel did not provide this sample code in C++; I have to convert it from Java to C++.
My problem is to map or transform the 3D world coordinates to my 2D screen coordinates.
Or, how can I match my 3D model world to the 3D hand world? Intel may need to provide a solution to this basic problem. It is a fundamental part of using a 3D camera in virtual reality: once the two worlds are matched, all operations become easy to understand. Moreover, why not provide a cursor at the palm center of the hand in the API?
Hi Chang-Li,
You can have a look at the hands_viewer samples (either C++ or C#) to see how the joint information can be retrieved. Look for the QueryTrackedJoint method; I would go for JOINT_WRIST.
You can then retrieve the image coordinates of the joint by calling:

```cpp
int wristX = (int)jointData.positionImage.x;
int wristY = (int)jointData.positionImage.y;
```

This data is in pixels in the coordinates of the depth image (640x480). Now scale it to your screen's resolution:

```cpp
int screenX = (wristX * screenWidth) / 640;
int screenY = (wristY * screenHeight) / 480;
```

This should give the correct screen location.
Now you can use a gesture event (say, "fist") to trigger a mouse click.
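The SDK does not appear to include a mouse simulator itself, but on Windows you can synthesize the click with the Win32 SendInput API once your gesture callback fires. A sketch (the `ClickAt` helper and the gesture wiring around it are up to your application):

```cpp
#include <windows.h>

// Hypothetical helper: move the system cursor to (screenX, screenY) and
// synthesize a left-button click there using Win32 SendInput. Call this
// from your gesture handler when the "fist" event fires.
void ClickAt(int screenX, int screenY)
{
    SetCursorPos(screenX, screenY);

    INPUT inputs[2] = {};
    inputs[0].type = INPUT_MOUSE;
    inputs[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    inputs[1].type = INPUT_MOUSE;
    inputs[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;
    SendInput(2, inputs, sizeof(INPUT));
}
```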
I can get the image-center coordinates now, but I cannot find an API to handle mouse drag & move.
A mouse drag & move involves:
1. processing the WM_LBUTTONDOWN message
2. processing the WM_MOUSEMOVE message
3. processing the WM_LBUTTONUP message
What gesture can generate the LBUTTONDOWN? There is no cursor available to select an object. Without these basic APIs it is hard to control objects in both 2D and 3D. (A drag sketch follows below.)
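A drag can be synthesized the same way as the click above: press the button at the start position, keep moving the cursor while the hand moves, and release at the end. A sketch using SendInput, under the assumption that closed-hand and open-hand gestures mark the start and end of the drag (the helper names are hypothetical):

```cpp
#include <windows.h>

// Hypothetical drag helpers built on Win32 SendInput. The idea: map a
// "grab" gesture (e.g. closed hand) to BeginDrag, feed DragTo each frame
// with the tracked screen position, and map the "release" gesture
// (open hand) to EndDrag.

static void SendMouse(DWORD flags)
{
    INPUT in = {};
    in.type = INPUT_MOUSE;
    in.mi.dwFlags = flags;
    SendInput(1, &in, sizeof(INPUT));
}

void BeginDrag(int screenX, int screenY)   // hand closes: button down
{
    SetCursorPos(screenX, screenY);
    SendMouse(MOUSEEVENTF_LEFTDOWN);       // target gets WM_LBUTTONDOWN
}

void DragTo(int screenX, int screenY)      // hand moves: cursor follows
{
    SetCursorPos(screenX, screenY);        // target gets WM_MOUSEMOVE
}

void EndDrag()                             // hand opens: button up
{
    SendMouse(MOUSEEVENTF_LEFTUP);         // target gets WM_LBUTTONUP
}
```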
