A few days ago I installed the Linux Media SDK successfully with the X11 backend. Now I no longer need X11, so I want to reinstall the SDK with the DRM backend.
I log in remotely over SSH. When I run install_media.sh, I see this error:
INFO... Install on Ubuntu ...
INFO... Installing New Driver...
INFO... The default media driver is renderless API, do you want to use X11 backend?
press 'y' to use X11 backend, otherwise by default(drm backend, renderless)
INFO... Renderless DRM backend enabled!
Error... Cannot get Device id, please follow the release notes of MSDK and install them manually!
I found that in the install script DEVICEID is defined as:
DEVICEID=`lspci -nn | grep VGA | grep Intel | cut -d: -f 4 | cut -d']' -f 1`
My system is as follows:
root@ubuntu:/home/wdg/intel# lspci -nn
...
00:02.0 Display controller [0380]: Intel Corporation Ivy Bridge Graphics Controller [8086:0152] (rev 09)
...
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Turks [Radeon HD 7500 Series] [1002:675d]
It seems that the OS does not load the Intel HD Graphics, right?
CPU info: Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
Can somebody help me?
Thanks!
I installed the SDK manually and compiled the samples.
When I run the encode sample:
./sample_encode_drm h264 -i /home/wdg/video/beijing420p.yuv -o /home/wdg/video/intel.drm.h264 -w 720 -h 576 -hw
libva info: VA-API version 0.34.0
libva info: va_getDriverName() returns -1
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
What can I do?
I tried modifying the source code as follows.
In sample_common/vaapi_utils_drm.cpp, line 23:
m_fd = open("/dev/dri/card0", O_RDWR);
and changed it to:
m_fd = open("/dev/dri/card1", O_RDWR);
Now sample_encode_drm works fine!
But I don't think this is a good method, so is there any other suggestion?
BTW, I don't know libva well, so this confused me for several hours.
Every system handles multiple graphics cards a little bit differently. The easiest approach is probably to change your BIOS settings so that Intel Integrated Graphics (IGFX) is the primary display. Otherwise, you have to query DRM at runtime. With IGFX as primary display /dev/dri/card0 should always be the right card. There may be other ways to do it, but I've had some success with querying the /dev/dri directory this way:
For each card in the directory
1. fd=open /dev/dri/card%d (plug in 0,1,etc. here)
2. query with drmGetVersion(fd)
3. Check the returned struct for name='i915' and desc='Intel Graphics'
You would then need to point Media SDK at the right card number; a rough sketch of this check follows below.
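For reference, here is a minimal sketch of that loop (not from the SDK or the samples; it uses libdrm's xf86drm.h, probes an assumed four card nodes, and needs -ldrm when linking):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <xf86drm.h>

/* Probe /dev/dri/card0..card3 and return an fd for the first node whose
   kernel driver reports itself as i915 (the Intel graphics driver). */
int open_intel_card(void)
{
    char path[32];
    for (int i = 0; i < 4; i++) {
        snprintf(path, sizeof(path), "/dev/dri/card%d", i);
        int fd = open(path, O_RDWR);
        if (fd < 0)
            continue;
        drmVersionPtr ver = drmGetVersion(fd);
        if (ver) {
            /* checking the driver name is usually enough; ver->desc can be
               checked as well, as described in the steps above */
            int is_intel = (strcmp(ver->name, "i915") == 0);
            drmFreeVersion(ver);
            if (is_intel) {
                printf("Intel GPU found at %s\n", path);
                return fd;   /* keep this fd for the VAAPI/Media SDK init */
            }
        }
        close(fd);
    }
    return -1;  /* no Intel card found */
}

In the samples, this fd could replace the one opened in sample_common/vaapi_utils_drm.cpp instead of hard-coding card0 or card1.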
Dear Jeffrey,
Thank you for the reply. Querying all the cards in /dev/dri/ is a good idea, and I'll try it.
Another question: I found many allocators (for example, SysMemFrameAllocator) in the sample code. If I use SysMemFrameAllocator, does that mean the SDK will use system memory for encoding (while libva uses graphics memory)?
Yes, you're correct. For Linux, VAAPI plays the same role as D3D/DXVA on Windows -- access to the GPU. Media SDK for Linux Servers does not have a software implementation, so especially for transcodes you'll want GPU/VAAPI surfaces. If you do not need access to the surfaces during the transcode (that is, your pipeline is constructed entirely within Media SDK), you may also use opaque surfaces, where Media SDK handles some of the details automatically for you.
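To make the difference concrete, here is a minimal sketch (not taken from the samples, and the helper name is made up) of how the choice of surface type shows up in the encoder's mfxVideoParam:

#include <mfxvideo.h>   /* Media SDK dispatcher header */

/* Choose where the raw input frames live before they reach the encoder. */
void set_encoder_io_pattern(mfxVideoParam *par, int use_video_memory)
{
    if (use_video_memory) {
        /* Frames stay in GPU/VAAPI surfaces -- avoids copies through
           system memory, preferred for pure transcode pipelines. */
        par->IOPattern = MFX_IOPATTERN_IN_VIDEO_MEMORY;
    } else {
        /* Frames are plain buffers in system memory (what the samples'
           SysMemFrameAllocator provides); the SDK copies them to the GPU
           internally. */
        par->IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY;
    }
    /* A third option, MFX_IOPATTERN_IN_OPAQUE_MEMORY, lets Media SDK
       manage the surfaces itself when the whole pipeline stays inside
       the SDK, as described above. */
}

With video memory or opaque surfaces you would also attach the corresponding allocator or mfxExtOpaqueSurfaceAlloc buffer, as the samples do.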