
Why is the facial expression recorded with the D415 unclear and deformed?

fwang22
Beginner
2,061 Views

When I used the D415 to record changes in facial expression, I found that the recorded face was not clear. What is the reason for this? Too few points on the face, or a device problem? Does it need further algorithmic processing? Thanks.

 


 

28 Replies
MartyG
Honored Contributor III
1,194 Views

Could you tell us more please about the method that you are using to record the facial expression, and supply an image if possible. Are you using a program that can track face details such as Nuitrack or Facerig, or a system that you have written yourself?

 

The best results with face tracking tend to come from systems that are animating the face of a pre-made model. When the face is being created purely from data points, the results can be more unpredictable depending on the facial movements being made during scanning, or environmental conditions such as lighting.

 

There is an example of face creation with Kinect scanning that went very wrong, and it was thought that the facial movements being made during scanning were responsible.

 

1.png

 

 

https://www.youtube.com/watch?v=NpM6NBNbCmw&feature=youtu.be&t=313

 

fwang22
Beginner
1,194 Views

I used the D415 to record facial expressions and extracted color images and depth images. The depth images were then converted to a point cloud using the depthToCloud.m function.

But when I look at the three-dimensional images of the face, there is no clear facial shape. My guess is that it is due to either a lack of points on the face or a device-setting problem when collecting the samples.

Below is a reconstruction of my stereo face.

 1.png

 

 

MartyG
Honored Contributor III
1,194 Views

If you are converting a depth image to a 3D point cloud with RealSense SDK 2.0, then the camera's calibration parameters need to be taken into account (the depth stream's intrinsics, plus the depth-to-color extrinsics if you combine depth with color), otherwise you will likely get errors in your results. I see that you must be using MATLAB, and that you asked about point clouds in MATLAB in 2018.

 

https://forums.intel.com/s/question/0D50P0000490X0mSAE/convert-depth-image-to-point-cloud?start=30&tstart=0

 

I do not know whether the depthToCloud.m example calculates those parameters. As it originated as a downloadable Kinect function for MATLAB (on the link below), rather than a program written specially for RealSense SDK 2.0, it may not take them into account.

 

https://rgbd-dataset.cs.washington.edu/software.html

fwang22
Beginner
1,194 Views

Yes, I used MATLAB to do the conversion and did not modify the parameters, because I don't know whether I am computing the right parameters.

 

As for modifying the parameters, could I ask for your help?

 

Thanks!

 

 

MartyG
Honored Contributor III
1,194 Views

Since you last asked about this last year, there was a discussion in January 2019 on the RealSense GitHub about getting the extrinsics.

 

https://github.com/IntelRealSense/librealsense/issues/3080

 

It talks about recompiling the wrapper to fix a bug, though it looks as though the SDK fixed the error in January, so that bug-fix should no longer be necessary.

 

https://github.com/IntelRealSense/librealsense/commit/84131590ab0f2afc651f268b89051505ad533d0b

fwang22
Beginner
1,194 Views

Sorry, I don't know much about connecting the camera to MATLAB. I want to learn more about it, but I don't have time.

I want to get these essential parameters quickly, and I have to figure out a way to do it.

Thank you for your help!

 

 

MartyG
Honored Contributor III
1,194 Views

I do not have personal experience with using MATLAB with the camera, unfortunately. The best place to ask about a quick method for getting the extrinsics for MATLAB will likely be the RealSense GitHub. On the link below, you can click on the 'New Issue' button to post a question there.

 

https://github.com/IntelRealSense/librealsense/issues

 

fwang22
Beginner
1,194 Views

Hi MartyG,

 

 

Sorry to ask you another question. I now know how to get the parameters, but I am still weak at using the API. Could you help me?

I don't know how to use get-video-stream-intrinsics to get my results, and I don't quite understand parts of it: https://github.com/IntelRealSense/librealsense/wiki/API-How-To#get-video-stream-intrinsics

 

Thanks!

 

 

fwang22
Beginner
1,194 Views

I've got the intrinsic parameters, but they are for a resolution of 1280x720, and I need 640x480. How do I get them for that resolution?

MartyG
Honored Contributor III
1,194 Views

It's no trouble at all. :)

 

This section of the documentation explains about intrinsics:

 

https://github.com/IntelRealSense/librealsense/wiki/Projection-in-RealSense-SDK-2.0#intrinsic-camera-parameters

 

To retrieve the intrinsics from a stream, it recommends looking at this example of scripting:

 

https://github.com/IntelRealSense/librealsense/blob/5e73f7bb906a3cbec8ae43e888f182cc56c18692/examples/sensor-control/api_how_to.h#L209

 

There is also a user-contributed script for getting the video stream intrinsics.

 

https://github.com/IntelRealSense/librealsense/issues/1271#issuecomment-370330583

 

MartyG
Honored Contributor III
1,194 Views

You can set the resolution and FPS of a stream in a cfg.enable_stream instruction. You may have seen this instruction a lot on the documentation page where you found the 'get video stream intrinsics' information.

 

The link below contains an example script for setting the color and depth streams.

 

https://forums.intel.com/s/question/0D50P0000490VWMSA2/initializing-rgb-and-depth-buffer-at-max-resolution-

 

Below I will break down the parts of an example instruction to explain how to customize it for your needs.

 

cfg.enable_stream( RS2_STREAM_COLOR, 1920, 1080, RS2_FORMAT_RGB8, 30 );

 

*******

 

STEP ONE

First, define the type of stream that you want to configure. This would be RS2_STREAM_COLOR for RGB, RS2_STREAM_DEPTH for depth and RS2_STREAM_INFRARED for IR.

 

STEP TWO

Next, the first two numbers in the bracket define the resolution that you want. It should be a resolution that is supported by the RealSense SDK at the FPS speed that you are going to use.

 

For example, if you wanted a color stream of 640x480, you could set it to:

 

cfg.enable_stream( RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30 );

 

STEP THREE

Then define the format that you want the stream to use. In our color stream example, we have set the format to RGB8.

 

cfg.enable_stream( RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30 );

 

STEP FOUR

Finally in the last number in the bracket, set the FPS speed that you want the stream to have. For example, '30' for 30 FPS.

 

cfg.enable_stream( RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30 );

 

A list of supported resolution and FPS speed combinations, and stream format types, can be found on pages 54 to 56 of the current edition of the data sheet document for the 400 Series cameras.

 

https://www.intel.co.uk/content/www/uk/en/support/articles/000026827/emerging-technologies/intel-realsense-technology.html
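Putting the steps above together, here is a configuration sketch (it assumes librealsense2 is installed and a camera is connected; the stream and format choices are just examples) that sets both streams to 640x480 and then reads the depth intrinsics for that resolution — which is also how to get 640x480 intrinsics instead of the 1280x720 defaults:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Configure 640x480 color and depth streams at 30 FPS. Z16 is the
    // native 16-bit depth format; check the D400 data sheet for valid
    // resolution/FPS combinations.
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30);
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);

    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start(cfg);

    // The intrinsics reported here belong to the resolution configured
    // above, i.e. 640x480 rather than the default 1280x720.
    auto depth_profile = profile.get_stream(RS2_STREAM_DEPTH)
                                .as<rs2::video_stream_profile>();
    rs2_intrinsics intrin = depth_profile.get_intrinsics();
    (void)intrin;  // fx, fy, ppx, ppy, distortion model and coefficients
    return 0;
}
```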

 

fwang22
Beginner
1,194 Views

I'm a little confused. Which extrinsic parameters are needed to convert the depth to a point cloud? Is it the extrinsics from depth to color, or from infrared to infrared?

Thanks!

 

 

MartyG
Honored Contributor III
1,194 Views

If you are converting 2D data to 3D point cloud data (such as generating 3D coordinates from a 2D image file), a common way to do it is to use an instruction called rs2_deproject_pixel_to_point.

 

https://github.com/IntelRealSense/librealsense/issues/1413

fwang22
Beginner
1,194 Views

Does this also apply to the depth frames I have already collected?

 

MartyG
Honored Contributor III
1,194 Views

rs2_deproject_pixel_to_point should be applicable to files that have previously been created with a RealSense 400 Series camera even if they were created a long time ago, yes.

fwang22
Beginner
1,194 Views

OK, thank you so much! I'll try it.

fwang22
Beginner
1,194 Views

I understand the rest now, but how to read frames from my folder is a problem. Sorry, I know this is a very simple question.

 

MartyG
Honored Contributor III
1,194 Views

My apologies for the delay in responding, as I wanted to research this question very carefully.

 

I have some uncertainty about this one. If you were using a .bag format file, then I would suggest using enable_device_from_file in the pipeline configuration to tell it to use a file as the data source instead of a live stream, like the example below:

 

************

 

rs2::config cfg;

cfg.enable_device_from_file(<filename>);

pipe.start(cfg); // Load from file

 

***********

 

So if you had a .bag format file called test.bag then the script would be:

 

rs2::config cfg;

cfg.enable_device_from_file("test.bag");

pipe.start(cfg); // Load from file

 

If you are using image files instead of a .bag format file, then I am uncertain how to proceed with using them as the data source. I will tag a couple of Intel support team members into the discussion to kindly request their input. @ElizaD_Intel @AlexandruO_Intel

fwang22
Beginner
1,132 Views

When converting from a bag file to a point cloud, does the depth need to be consistent with the color intrinsic parameters, as mentioned here? https://github.com/IntelRealSense/librealsense/issues/1413
