- GPU: GT1, 850MHz
- Driver: v18.104.22.1681
- RAM: 2GB
- OS: Win7 build 7600
Thanks in advance.
Thanks for your feedback.
I tried building the modified sample_decode as 64-bit, and it seems to work.
(Note that the modified sample_decode is based on http://software.intel.com/en-us/forums/showthread.php?t=84292&p=1#155568 provided in your previous post. :))
However, I have observed a scenario I would like your comment on.
When I use more threads to simulate a video-wall case, I found that system memory usage increased surprisingly; I had assumed it would not use much system memory, since the decoding is done in hardware.
The source is HD content, 1920x1080 H.264.
I found that it costs about 100MB of system memory per decoding thread; when I extend to 16 decoding threads, it costs about 800MB.
Is this reasonable? Or is there a way to reduce the system memory consumption?
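For reference, here is a rough back-of-envelope I did to make sense of the 100MB-per-thread figure. It assumes the decoder allocates uncompressed NV12 output surfaces in system memory; the surface format and count are my assumptions, not something I have confirmed from the SDK:

```python
# Back-of-envelope: how many 1080p NV12 surfaces would account for
# ~100MB of system memory per decoding thread?
# NV12 stores 1 byte/pixel luma + 0.5 byte/pixel chroma = 1.5 bytes/pixel.
width, height = 1920, 1080
bytes_per_surface = int(width * height * 1.5)
mb_per_surface = bytes_per_surface / (1024 * 1024)   # ~2.97 MB per frame

observed_mb_per_thread = 100                         # figure measured above
surfaces_per_thread = observed_mb_per_thread / mb_per_surface

print(f"{mb_per_surface:.2f} MB per NV12 surface")
print(f"~{surfaces_per_thread:.0f} surfaces would account for 100 MB/thread")
```

If that estimate is in the right ballpark, 100MB per thread would correspond to roughly 30+ uncompressed surfaces per decoder instance, which is why I am wondering whether the surface pool size (or where the surfaces are allocated) can be reduced.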