I am using DirectShow and a Hauppauge video capture card to preview video on an E3825 unit. It is running Win7 32-bit with display driver version 220.127.116.111 (driver date 6/26/2014).
My problem is that when I use DirectShow to build a filter graph, if I use any of the renderers that directly consume the YUY2 data from the capture board, then the video on the screen is wrong (see picture below),
and sometimes I get an error message, so this seems to be an error with the display driver.
Working video and graph... but it takes 20% of the CPU to run because of the AVI Decompressor.
I also tried this on the same hardware running Win8.1 64-bit; the video was still wrong, but I did not get the error message. I also tried this on a module using the N2930 and had the same problem.
Next I tried the same capture board in two other PCs and it worked fine. One PC had an Intel CPU but an ATI graphics card; the other had an AMD processor with built-in graphics.
So my questions:
1. Is there a newer driver I should try?
2. Any other tests I should run?
3. Does anyone else have this problem?
4. I have asked my module company and have tried updating to the latest BIOS and firmware, but I still have the same issue. I have raised the issue with their support, but is there anyone else at Intel I should ask?
Welcome back to the Intel Embedded Community. We have received your question and will contact you soon with additional information.
Could you please provide the following information:
1) Is this issue present if you use the Windows 7 inbox drivers?
2) Have you tried using a different source for the video stream? Please test using a different format such as NV12.
I will be waiting for your reply.
What do you mean by Windows 7 inbox drivers? Do you mean the default Win7 drivers that you get before you load the Intel drivers? I tried this, and the Standard VGA Graphics Adapter was installed, but it was too limited to do much, as none of the renderers directly took in the YUY2 data (all needed the AVI Decompressor), and they all ran slowly and dropped frames (no hardware acceleration). Are there some non-embedded Intel graphics drivers for Win7 32-bit I should try instead? All my board vendor had were 64-bit drivers.
2. In regards to the video stream, YUY2 is the only format my board produces. I did try a test where I played an MPEG-2 file. It produced YUY2 data that was given to the renderer, and that worked fine. (See my answer below to Kirk.)
There is a newer hotfix release (build 1127) that is available to our direct Tier 1 customers on our VIP site. My guess is that you would need to get this from your system provider due to the nature of hotfix releases. There is a general maintenance release planned for the end of Q2 that should release here on the EDC. However, it is doubtful the operation of your capture card will be affected by the driver update. From the picture, it appears the capture card is mishandling the transfer of the data into the frame buffer. Not knowing the Hauppauge code, they may be assuming that the hardware overlay can support a wider color compression than it does in reality. If the code is using hardware overlay, they would need to understand that this is being done in the driver through the sprite planes, as there is no hardware overlay plane in the BYT. I believe that would cause the weird data offset you are experiencing, as the wrong interpretation of the data widths starts causing issues. The NV12 test suggestion is a good one, and if the picture looks right, then the color-space handling of the sprite plane for overlay IS the issue and the Hauppauge code IS making bad assumptions.
I will ask my system supplier for the latest driver... but I have some questions on your statement about the wider color compression.
First, the results I got depended on the video renderer I used (I'm not sure how to refer to them, so let me know if you need more info).
Video Renderer - Uses the AVI Decompressor to convert YUY2 to RGB - this works.
VMR Input Video Renderer - Takes in YUY2 data; shows the image I put above.
Enhanced Video Renderer - Gives the error "This pin cannot use the supplied media type."
Video Mixing Renderer 7 - Shows the image above but also blinks a number of times, as if it is showing frames from different points in time.
Video Mixing Renderer 9 - Shows just black, no image. Also, when I try to stop the graph, it hangs the graph app for a long time.
Below is the pin info for the YUY2 data from the capture card and also from an MPEG-2 file (which plays fine). I have no idea which difference is causing the problem.
So any ideas?
So if the E3825 cannot handle some types of YUY2 data that the older overlay hardware could, why does the pin connection process succeed? It seems it should report that it cannot handle that type of compressed data during the DirectShow connection process.
I asked my system supplier to check for the hotfix driver you mention, and his reply was:
"I was unable to find the "hot fix", build 1127 Intel was referring to. (You can mention I searched the Intel Business Portal. perhaps they can be specific about the name of the download they are referring to.) "
So can you give me any more info that I can pass on to my system supplier about where to get hotfix build 1127 for the E3825?
The hotfix is not on the Intel Business Portal, but is a kit from the Intel Validation Internet Portal. If your vendor doesn't have access to the Validation Portal, they should speak to their Intel Rep to request access.
Have you been able to test the application using the NV12 color space?
Since my board only outputs YUY2 data, I have no idea how I am supposed to test the application using the NV12 color space. Any recommendations?
What DirectShow filter can I use to convert YUY2 data to NV12 data?
I have run a test where I build a graph like this:
Capture -----> AVI Decompressor ----> Color Space Converter ---> Video Mixing Renderer 7 (or 9), and this works. But again, the renderer is now receiving RGB32 data, not YUY2 data.
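For a sense of why that conversion path costs CPU: every frame is rewritten, and the RGB32 output is twice the size of the YUY2 input. A minimal sketch of the arithmetic (the helper name is mine, not a DirectShow API):

```cpp
#include <cstdint>

// Bytes in one uncompressed video frame: width * height * bytes per pixel.
// YUY2 packs 2 bytes per pixel; RGB32 uses 4, so the converted stream
// carries twice the data per frame.
uint32_t frameBytes(uint32_t width, uint32_t height, uint32_t bytesPerPixel) {
    return width * height * bytesPerPixel;
}
```

At 720 x 480 that is about 691 KB per YUY2 frame versus 1.38 MB per RGB32 frame, touched 30 times a second.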
- Unfortunately, I'm not aware of any DirectShow filter that converts YUY2 to NV12. I think you could ask about how to code your own filter in the DirectShow community.
- Have you tested your application using a different hardware source? For example a USB camera; if I'm not mistaken, most USB webcams output the YUY2 format.
I did a test using a USB camera as you suggested, and it worked fine with the YUY2 data it output. Looking at the info for the connection to the renderer, the capture card and USB camera are very similar, but I don't understand this well enough to know which item matters. (The first image is the capture card, the second the USB camera.) So at this point I'm unsure what to try next, as the capture card works with other GPUs and the E3825 works with other sources. Any suggestions on how to determine what the issue is?
Have you tried to consult with the manufacturer of your video capture card? They could provide you with more information about how they encode their information. I think Kirk gave you a hint about it in a previous communication:
From the picture it appears the capture card is mishandling the transfer of the data into the frame buffer. Not knowing the Hauppauge code, they may be assuming that the hardware overlay can support a wider color compression than in reality. If the code is using hardware overlay, they would need to understand that is being done in the driver through the Sprite Planes as there is not any hardware overlay plane in the BYT
It could be possible that your card is working with other GPUs because they support hardware overlay.
Have you tried using different settings on your renderer? For example check if using windowed mode or windowless mode has any effect on the output display.
Could you please export the data from the properties of the webcam/card/mpeg video to Excel? I think that would make it easier to spot if there are any significant differences.
I have done some more testing and found a color space converter filter from LEADTOOLS (I tried other converter filters with the same results), and if I put that between the capture filter and the renderer, then it all works. The color space converter does not seem to be changing the data (it's still YUY2), so it must be something else.
Looking at the pin info, the data is still YUY2 and 720 x 480; the only differences seem to be in rcSource, rcTarget, and dwBitRate.
The rcSource and rcTarget values seem valid per the docs I have read, but the dwBitRate is wrong. However, I have found cases where some filters set dwBitRate to 0 and things still worked. So I am guessing something else, not shown in the pin info, is causing the issue.
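For reference, dwBitRate on an uncompressed pin should just be pixels per frame times bits per pixel times frame rate. A sketch using a trimmed stand-in for DirectShow's VIDEOINFOHEADER (the real structure lives in amvideo.h and has more members; this struct and the helper are illustrative only):

```cpp
#include <cstdint>
#include <cstdlib>

// Trimmed stand-in for the VIDEOINFOHEADER fields discussed here.
struct VideoInfoLite {
    int32_t  biWidth;     // e.g. 720
    int32_t  biHeight;    // 480, or -480 for top-down storage
    uint32_t biBitCount;  // 16 for YUY2
    uint32_t dwBitRate;   // bits per second, as reported on the pin
};

// What dwBitRate should be for an uncompressed stream:
// pixels per frame * bits per pixel * frames per second.
uint32_t expectedBitRate(const VideoInfoLite& v, uint32_t fps) {
    uint32_t rows = uint32_t(std::abs(v.biHeight));
    return uint32_t(v.biWidth) * rows * v.biBitCount * fps;
}
```

At 720 x 480, 16-bpp YUY2, 30 fps that comes to about 166 Mbit/s, which gives a quick sanity check against whatever value the pin reports.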
Another interesting thing: if I put the Lead deinterlacer after the capture filter and then rendered the output of the deinterlacer, it automatically inserted the Color Space Converter (the deinterlacer seems to output the data in the same format it got from the capture filter). So it seems there is some flaw in the capture filter's pin negotiation process that makes the renderer think it has a matching connection, yet when the graph runs the capture filter sends it something else, which then causes the display driver to stop working.
Can you tell from the above info what is the difference that is causing the issue?
If not, I need to find some tool or something so I can watch the negotiation process and see why it works for the Color Space Converter but not for the capture filter. Once I have that, I can give the info to the card maker.
Or, as an alternative, I could just always use a color space converter, but I think it adds more overhead, as it copies data from the input buffer to the output buffer.
I apologize for the lack of replies on this, but I had technical problems accessing the community. I will investigate this issue and update you with more information as soon as possible.
Unfortunately, we have not been able to find any tool to inspect the negotiation process.
On some forums, people recommend using code instead of graph-editing tools to get better visibility into the media negotiation, but I don't know if that is an option for you.
I think you should ask on forums for DirectShow developers whether there is a tool to check the media type negotiation, for example: https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/home?forum=windowsdirectshowdevelopmen...
Here are some additional suggestions:
Have you checked if there are new drivers for your capture card? If there are, please install them and check if they help with this issue.
And here are some sample applications: https://software.intel.com/en-us/media-client-solutions-support/code-samples
You can try them and see if they help the performance of your application.
The Intel® Media SDK has its own support forum: https://software.intel.com/en-us/forums/intel-media-sdk
And finally, if none of these suggestions work for you, I think it is important that you contact the vendor of your capture card, even if you don't have the media type negotiation data.
I have an idea of what is going wrong, but I am still working on how to fix it. I found a post on another forum that said:
"When the Video Mixing Renderers are using an overlay surface or a texture surface, those surfaces often have rather unusual alignment requirements (like a multiple of 256 bytes). If the VMR is directly connected to a capture filter, what it will do is configure the stream initially for the fully packed format (i.e., a 640x480 DIB). Then, as soon as streaming starts, it will send a change-of-format notice to the capture pin, changing the format to, for example, 768x480. The capture filter knows that it doesn't really want a new format; it's merely saying 'OK, I'm switching to the overlay surface now and the stride is 768 pixels'. The capture filter is required to accept this - the VMR doesn't even check whether it failed."
So when the graph is built, the renderer is OK with the 720 x 480 the capture filter is creating, but when I run the graph, if the VMR7 renderer is used, it requests a 768 x 480 size (I put in some debug printfs and see this happening), and the capture filter says no, but VMR7 ignores that and just runs, causing all the issues. If I use the EVR renderer, it also requests the change when the graph is started, but it gives an error ("This pin cannot use the supplied media type.") when the capture filter says no, and the whole process quits with no weird stuff on the screen.
If I stick a filter between the capture filter and the renderer, and it correctly handles the size change request, then it all works.
I am asking the chip company if the capture filter can be changed to support the 768 x 480 request from the renderer.
In the end, until I found the other post, I would never have guessed the multiple-of-256-bytes requirement, the size change at run time, or the fact that the VMR renderer ignores some response data from the capture filter. (I hope this helps the next person working on this kind of issue.)
Here is the link to the other forum post:
https://www.osronline.com/showthread.cfm?link=256115 OSR's ntdev List: AVStream: using pre-allocated frame buffers in custom allocator
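The 768 figure falls straight out of the 256-byte alignment quoted in that post: a YUY2 line of 720 pixels is 1440 bytes, and rounding up to a 256-byte boundary gives 1536 bytes, i.e. 768 pixels. A sketch of the arithmetic (the 256-byte alignment is the post's number, not something I can confirm for this hardware):

```cpp
#include <cstdint>

// Round a byte count up to the next multiple of `align` (a power of two).
uint32_t alignUp(uint32_t bytes, uint32_t align) {
    return (bytes + align - 1) & ~(align - 1);
}

// Stride in pixels after padding each line out to `alignBytes`.
// YUY2 packs 2 bytes per pixel, so 720 px = 1440 bytes; rounded up to a
// 256-byte boundary that is 1536 bytes = 768 pixels.
uint32_t paddedStridePixels(uint32_t widthPx, uint32_t bytesPerPx,
                            uint32_t alignBytes) {
    return alignUp(widthPx * bytesPerPx, alignBytes) / bytesPerPx;
}
```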
All of this brings up another question that I hope you can give me some info on.
If the capture filter can only produce 720 x 480 data but the renderer has to have 768 x 480, what really needs to happen?
1. Does the capture filter just need to put its 720 x 480 data into a 768 x 480 block of memory, with each line of data having 48 unused pixels at the end?
2. OR does the capture filter have to upscale the data so there are 768 pixels of data in each line?
If it is the first, then I can see changing the driver to support this, but if it is the second, then I do not see making that change (the chip has no upscale function).
Hi, I think it would be OK to pad out to the 768 buffer width with 00 or FF data although that is just a guess at this point.
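Assuming option 1 is right (pad, don't scale), the per-frame repack would look roughly like this; the function name is mine, and the pad value is arbitrary per Kirk's guess:

```cpp
#include <cstdint>
#include <vector>
#include <algorithm>

// Repack a tightly packed image (srcStride bytes per row) into a buffer
// whose rows are dstStride bytes apart, filling the unused tail of each
// row with `pad` bytes - option 1: no scaling, just padding out to the
// renderer's stride. Illustrative only, not a DirectShow API.
std::vector<uint8_t> padToStride(const std::vector<uint8_t>& src,
                                 uint32_t rows, uint32_t srcStride,
                                 uint32_t dstStride, uint8_t pad) {
    std::vector<uint8_t> dst(rows * dstStride, pad);
    for (uint32_t y = 0; y < rows; ++y)
        std::copy(src.begin() + y * srcStride,
                  src.begin() + (y + 1) * srcStride,
                  dst.begin() + y * dstStride);
    return dst;
}
```

In the real driver the equivalent would be DMAing each 1440-byte line to the start of the next 1536-byte slot rather than copying in software.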
I was looking at the information earlier, and one thing jumped out at me from the non-working Hauppauge information vs. the working webcam: the biHeight is negative (-480) rather than positive (480) in the Hauppauge case (I assume). I wonder if a negative height causes the renderer some issues?
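For what it's worth, in the BITMAPINFOHEADER convention a positive biHeight means bottom-up storage (the first row in memory is the bottom scan line) and a negative biHeight means top-down. A small sketch of how the sign changes where a given picture row lives in the buffer (helper name is mine):

```cpp
#include <cstdint>
#include <cstdlib>

// Byte offset of picture row `y` (0 = top of the visible image) within
// the buffer, per the BITMAPINFOHEADER sign convention: negative biHeight
// = top-down (natural order), positive = bottom-up (flipped).
uint32_t rowOffset(int32_t biHeight, uint32_t strideBytes, uint32_t y) {
    uint32_t rows = uint32_t(std::abs(biHeight));
    if (biHeight < 0)
        return y * strideBytes;            // top-down
    return (rows - 1 - y) * strideBytes;   // bottom-up
}
```

So a renderer that ignored the sign would read the frame upside down, or worse, start from the wrong end of the buffer.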
The general Win7 driver has been updated (on VIP) to release 1210 (MR2), and that one will be up on other download sites soon because it is a general maintenance release. Ask for it!
Hope this helps.
I have been able to add some debug printfs to the driver and have found what I think is the problem. (Just to be clear, all I am getting now in the on-screen video is a black window. It has been this way for the last few months, since I updated the graphics driver. So what I am trying to fix now is just this black screen in the preview window.)
The driver is based on AVStream and directly DMAs the data into the buffers provided by the downstream filter.
When I connect my driver to a Color Space Converter or Deinterlacer, the buffers I get seem to have valid physical addresses, and it all works.
My pointers come from the KSSTREAM_POINTER structure.
For example, the first address in the list looks like this:
Low 32 bits of the address = 0x77dff000, high 32 bits of the address = 0, buffer length = 4096.
As required, the driver supports scatter/gather DMA, so it gets a list of addresses and how much space there is at each address, and sends the data as needed to those addresses.
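For illustration, the shape of such a list over one frame buffer looks like this (the struct and the list builder are hypothetical stand-ins, not the actual KS/AVStream types):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical scatter/gather element: one physically contiguous fragment.
struct SgEntry {
    uint64_t physAddr;  // physical start of the fragment
    uint32_t length;    // bytes in the fragment
};

// Split a transfer of `total` bytes starting at physical address `base`
// into page-sized fragments - the shape of list a scatter/gather-capable
// driver walks. (A real list need not be physically contiguous like this
// one; the addresses here are purely illustrative.)
std::vector<SgEntry> buildSgList(uint64_t base, uint32_t total,
                                 uint32_t pageSize = 4096) {
    std::vector<SgEntry> list;
    for (uint32_t off = 0; off < total; off += pageSize) {
        uint32_t len = (total - off < pageSize) ? (total - off) : pageSize;
        list.push_back({base + off, len});
    }
    return list;
}
```

A 737280-byte YUY2 frame at 768 x 480 stride splits into exactly 180 4-KB fragments, which matches the 4096-byte entries the driver sees.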
When I connect my capture filter to the VMR renderer, the buffers I get seem to have invalid physical addresses.
For example, from the VMR I get:
Low 32 bits of the address = 0x80800000, high 32 bits of the address = 0, buffer length = 737280.
Because of these invalid addresses, my DMA data is going to the wrong place, so nothing is really written into RAM and nothing shows on the screen.
As a double-check, I ran RAMMap from Sysinternals, and it showed that all of my RAM addresses are below 2 GB.
So any idea why I am getting invalid physical RAM addresses? (I am also surprised/concerned that it is all one large block.) Is this because the Intel driver does not really have an overlay? Any ideas on how to get valid physical addresses?
As one more double-check, the KSSTREAM_POINTER structure also has a virtual address for the buffer, so as a test I added code that used RtlZeroMemory to zero out part of the buffer. The area of the display corresponding to the zeroed part of the buffer changed from black to green, proving the virtual addresses are good. So the problem just seems to be not having valid physical addresses.
Looking in Device Manager, I figured out that one of the Display Adapter's memory ranges is 0x80000000 - 0x8FFFFFFF.
So the memory addresses I am getting are valid display adapter addresses.
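A trivial check against that range (the bounds are the ones my Device Manager reports for this adapter; the helper name is mine):

```cpp
#include <cstdint>

// Display adapter memory range reported by Device Manager on this unit:
// 0x80000000 - 0x8FFFFFFF. A physical address in this window is graphics
// aperture memory, not ordinary system RAM.
bool inDisplayAperture(uint64_t phys) {
    return phys >= 0x80000000ull && phys <= 0x8FFFFFFFull;
}
```

The VMR buffer (0x80800000) lands inside this window, while the working buffers from the Color Space Converter (0x77dff000) sit in ordinary system RAM below 2 GB.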
So is DMAing to display RAM the same as DMAing to standard RAM addresses? Do I just set up my DMA engine with the addresses and let it go, like I do for standard RAM?
I did another test where I changed the driver to DMA into a common buffer and then used RtlCopyMemory to copy the data from the common buffer into the buffer provided by AVStream. So this shows me that my driver is setting everything up correctly, but I have some problem DMAing into the AVStream buffers that come from the VMR renderer filter and carry physical addresses in the display memory range.
So can a PCIe bus master DMA into display memory? Do I need to do something else to DMA into display memory? If so, why would AVStream give me buffers that cannot be used for DMA?
Any suggestions on what to try next?
Please note that if I put a filter between my capture filter and the VMR renderer, I get addresses in system memory, and I can DMA into that memory just fine.