This call fails because the DXVideoRender init doesn't like the way video_info was initialized:
umcRes = InitMJPEGVideoDecoder(&mjpeg_dec, video_info);
I realize I need to initialize things differently and perform a color transform on the MJPEG bitmap, but the steps required are not obvious. It would also be nice to have a sample that supports audio, preferably using DirectSound.
Thanks in advance.
Not sure if this is related to the problem you're having or not, but I have found you need to be careful in selecting the input color format. For example, in the DXVideoRender::Init() method you will find a listing of all the available input color formats (YV12, YUY2, UYVY, NV12, RGB565). If the output of your decoder is some other color format, then the renderer init will fail.
I used the provided simple_player application as an example to show how to configure everything.
Not exactly. I'm trying to construct something simpler than "simple_player". I'm running under Vista and am trying to sort out the mechanics of getting a DX-based render surface working. The simple_player example is too layered to be useful, since I will eventually reduce things to my own set of classes. I'm aware of all the color space transforms that are at play.
Basically, I'm looking for a working starting point that uses a DX surface and is based on the MJPEG sample in the docs. Something not too simple, yet not overly obfuscated by other layers of code. In other words, I'm looking to save time.
I noticed some of your other posts ... looks like you may have gotten there. Care to share a zipped project?
By the way, simple_player fails to bring up a video render window under Vista on my setup. I haven't yet had time to dig into that, though I've already spotted a number of basic issues with the code.
I ended up working with the AVSync classes (in the pipeline directory) that simple_player uses. I started by using simple_player to verify functionality, and then began reworking AVSync to provide additional functionality. It is large, but it seems to handle most decoder/splitter/reader/render combinations provided as part of the core, io, and codec directories. I do think I've gotten to where I need to be, but by using AVSync as a base. Unfortunately I have not come up with any real extensions of my own, although I was able to merge in a modified version of Pushkar's UDP network stream reader. I think this fairly extensive sample ended up being easier to extend than creating a smaller decode loop from scratch (at least in my case).
Looks like by the time someone posts a working source code example, I'll have this finished; I think I'm about three-fifths of the way there. Some of the UMC code layers are very good, but it's time consuming to figure out how to leverage them. I've already done a complete rewrite of the ippdemo app to streamline the code (using GDI+) so it can be leveraged in other projects more easily, and I added a batching facility so you can run several APIs in succession. I think my version is far more useful and much easier to follow. The UMC bits are challenging to work with, and I'm not seeing as many prospects for simplification.
I've been looking for ways to integrate with DirectShow.
Ah yes, the DirectShow component. In the next week or so I'm planning to look into merging the DX render class into a more useful window. I really like the fact that there is an existing DX render, but it needs work to be generally useful. From what I've seen, this has been Intel's intention with the samples: to provide a working example of the difficult parts and leave the productization up to the developer. I like their approach, since everyone has a different idea of product and usefulness.
Getting the DirectX Render output into a movable, resizable, overlayed window was actually much easier than I thought it would be. Following are the steps I took:
1) Create a Win32 window with a WM_PAINT handler that always fills the window with a color-key color (I used 255,0,255).
2) Set the AVsync::CommonCtl::usVideoRanderFlags to 4 (for FLAG_VREN_USECOLORKEY as defined in umc_structures.h). This was the hardest part to figure out.
3) Put a copy of the window's HWND in the AVsync::CommonCtl::renContextHwnd and the COLORREF of the windows color key in the AVsync::renContextColorKey (such as RGB(255, 0, 255)).
If I remember correctly, that's all I had to do to get my window working with the AVsync-based classes. If you're not working from AVsync, it should be possible to see how it sets up the DX render and then plug in your own window information.
In my window class I also stored a pointer to a VideoRender in case I needed to work with it within the window procedure, but I ended up not needing it. If you need more, let me know and I'll try to package up the window code for you.
Thanks, that helps a chunk ... I guess there's no chance you could post some source code or a working project? I was hoping to avoid the trials and tribulations. What can I say ... I'm sure you can relate.
I don't have a project I can upload, since it is heavily dependent on my own low-level libraries, but I have tried to extract the important parts. There may still be some references to a WindowCore class that is not included, but you should get the idea from the attached code. I've also included the AVsync initialization, since I'm using that class as the core of my UMC development.
As you can see, it's very simple code - the DX render does most of the work.
Hope that helps,