What does MFX_WRN_PARTIAL_ACCELERATION mean in the described configuration?
Will HW encoding be used, or will the whole pipeline be executed on the CPU?
Encoding pipeline consists of two operations:
1. VPP transcoding YUV12 --> NV12
2. Encoding NV12 --> h.264
Direct3D11 Acceleration device is used.
MFXVideoENCODE_Init() returns MFX_ERR_NONE
MFXVideoVPP_Init() returns MFX_WRN_PARTIAL_ACCELERATION
- Tags:
- Development Tools
- Graphics
- Intel® Media SDK
- Intel® Media Server Studio
- Media Processing
- Optimization
The PARTIAL_ACCELERATION warning means the pipeline is accelerated with general-purpose GPU operations.
As you may know, Media SDK uses the hardware codec in the GPU of an Intel processor, which is dedicated hardware for video compression/decompression. PARTIAL_ACCELERATION doesn't use this dedicated hardware; it uses the general cores of the GPU, also called EUs (execution units).
PARTIAL_ACCELERATION is also called GACC (GPU-Accelerated).
Mark
The answer:
#1. No. Whether HW mode is used is decided at run time, when MSDK talks to the driver layer. In this case, VPP returning MFX_WRN_PARTIAL_ACCELERATION means the format you submitted is not supported by the hardware, so it silently falls back to partial acceleration. You can check the following document at page 48 for detailed information:
https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.pdf
You can also check Table 2 of the above document at page 11 for all the color formats we support. I suspect there is a typo, since we don't have "YUV12" in the table, so please check for yourself.
#2. Yes, this could happen.
Although not related to your question, there is another mistake in your post. MSDK is a hardware-based library, so all the key operations are asynchronous; your pipeline description should be:
VPP(f1) --> SYNC(f1) --> ENCODE(f2) --> SYNC(f2)
This is critical, because without the first sync operation the second (encode) operation might wait forever for the surface to become available.
Mark
Misspelling, sorry.
YV12 is the incoming format for the VPP operation
(a planar format with 4:2:0 sampling).
I have created a separate question about the asynchronous pipeline:
https://software.intel.com/en-us/forums/intel-media-sdk/topic/780983
Let's continue.
"the formats you submitted is not supported by the hardware"
OK.
"so it falls back to partial accelerated silently,"
OK.
> What does this mean?
E.g., will MFX_MEMTYPE_SYSTEM_MEMORY be used instead of MFX_MEMTYPE_FROM_VPPOUT?
"you can check the following document at page 48 for detailed information"
OK.
> Which function on p. 48 do you mean?
> Which sentence would clarify our question?
MFX_MEMTYPE_SYSTEM_MEMORY means the application will use system memory instead of video memory.
Page 48 describes the function MFXVideoVPP_Init(); it documents the return value MFX_WRN_PARTIAL_ACCELERATION that you were asking about in the first place.
Does this answer your question?
Mark
Nope.
There is still no answer to the original questions:
What does MFX_WRN_PARTIAL_ACCELERATION mean in the described configuration?
Will HW encoding be used, or will the whole pipeline be executed on the CPU?
(see the pipeline configuration above)
MFX_WRN_PARTIAL_ACCELERATION means the pipeline still runs on the GPU, but on the general-purpose part of the GPU. This is different from the normal codec path, which runs on the GPU components dedicated to video coding; Intel calls these "Quick Sync Video", or QSV for short.
In your case MFX_WRN_PARTIAL_ACCELERATION is returned by the VPP functions, which is normal. Most encoder functions, however, should not return MFX_WRN_PARTIAL_ACCELERATION, since encoding should always happen on the QSV components.
Mark
Direct3D9 supports the YV12 color format in VPP operations.
So it asks the external allocator for video memory only,
and our asynchronous pipeline looks like:
YV12 → VPP → video_memory → ENCODE → SYNC → out
The two operations, VPP and ENCODE, exchange their results through video memory.
This much is clear.
On the other hand, Direct3D11 doesn't support YV12, and the VPP operation is initialized with the warning MFX_WRN_PARTIAL_ACCELERATION.
It requests that the external allocator allocate both video memory and system memory (MFX_MEMTYPE_SYSTEM_MEMORY).
You have said that the pipeline is still created and that we can use only one SYNC() operation.
So, will we have something like this:
YV12 → VPP (on the EU part of the GPU) → system_memory → video_memory → ENCODE (GPU) → SYNC → out ??
How will this additional copy operation affect performance?
Should we prefer Direct3D9 to Direct3D11?
We use DirectX for video memory sharing. I can't remember which framework supports which format, but if what you say is true, then you may need to use D3D9.
In the case of system memory, could you just output to system memory from VPP and use the same system memory as input to the encoder?
Discussing memory allocation is out of the scope of MSDK. Do you want a new feature, or a solution?
Mark
Liu, Mark (Intel):
could you just output to system memory from VPP and the same system memory as input to encoder?
Yes, I can.
But what is the reason for doing this?
Why should I use system memory instead of video memory?
Our main goal is HW (hardware) acceleration support.
What does the documentation say?
mediasdk-man.pdf (page 23):
"The hardware acceleration support in application consists of video memory support and acceleration device support."
I use the Direct3D11 acceleration device, which expects video memory as a parameter.
Let's read the Intel forum:
https://software.intel.com/en-us/forums/intel-media-sdk/topic/401351
Larsson, Petter (Intel):
"If the input surface is on system memory and HW codec is selected, then Media SDK will automatically perform copy from system memory to D3D memory before scheduling the task."
Are these words still correct at the present time?
Yes, Petter's answer is still correct. The key point is that the type of acceleration is not decided by the input/output surface; it is decided by the operation requested.
In your case, you are trying to do a VPP operation. The SDK checks the request and doesn't find dedicated hardware for it, but realizes it can do it with the general GPU hardware, so it returns MFX_WRN_PARTIAL_ACCELERATION. This has nothing to do with the surface you are using.
On the other hand, the GPU always works with video memory directly, so if a system-memory surface is used the SDK has to copy from system memory to video memory. This costs time, and that's why I suggested using the same surface as both the output of VPP and the input of the encoder in the pipeline.
There might be some confusion about the pipeline diagram. To my knowledge, in the pipeline VPP --> surface --> encoder, the SDK encoder module decides internally whether to copy, based on the surface type. This is aligned with Petter's comment: if the surface is in system memory, the copy is done automatically by the encoder; if the surface is in video memory, there is no copying inside the encoder.
Mark