Greetings, I am trying to do some processing on video streamed from an edi-cam server, which is written in Node.js. I have been trying to wrap my head around its structure and syntax, hoping to modify it, but it's a bit too much for me at the moment. My main goal is to capture the video stream in Python and process it with the OpenCV library. Looking online, there are two main solutions that should work, but they don't for me; I think my URL is incomplete.
1) Use OpenCV's VideoCapture class:

import cv2

cap = cv2.VideoCapture('http://<edison-ip>:8080/...')  # I don't know the full stream path
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('stream', frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
2) Using urllib:

import cv2
import numpy as np
import urllib2  # urllib.request in Python 3

stream = urllib2.urlopen('http://<edison-ip>:8080/...')  # again, full stream path unknown
buf = ''
while True:
    buf += stream.read(1024)
    a = buf.find('\xff\xd8')  # JPEG start-of-image marker
    b = buf.find('\xff\xd9')  # JPEG end-of-image marker
    if a != -1 and b != -1:
        jpg = buf[a:b + 2]
        buf = buf[b + 2:]
        # note: cv2.CV_LOAD_IMAGE_COLOR no longer exists in OpenCV 3; use cv2.IMREAD_COLOR
        i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow('stream', i)
        if cv2.waitKey(1) == 27:
            break
Looking at what the downloaded data actually is, it's just the source code of the web page, and it doesn't change; there is no video stream in it. Other examples have addresses like..
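One thing that would have saved me some head-scratching: checking whether the response even looks like image data before hunting for JPEG markers. A quick sketch of such a check (the function name and classification logic are mine, not from any of the examples):

```python
# Rough classification of a raw HTTP response body: does it look like
# JPEG/MJPEG data, or like an HTML page? Works on byte strings.
def classify_response(data):
    if data[:2] == b'\xff\xd8':
        return 'jpeg'        # starts with the JPEG start-of-image marker
    if b'\xff\xd8' in data:
        return 'mjpeg-like'  # marker buried inside, e.g. a multipart stream
    if data.lstrip()[:1] == b'<':
        return 'html'        # looks like markup, not video data
    return 'unknown'

print(classify_response(b'<!DOCTYPE html><html>...</html>'))  # html
```

In my case this would have reported 'html' immediately instead of the marker search silently finding nothing.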
From my limited understanding of edi-cam, it uses ffmpeg to encode the video, then sets up a server on the Edison and a client in the browser, written in Node.js so as to be non-blocking. I see that three ports are used: one listening on the hardware port for video (8020), one listening for clients (8040), and one serving clients the video page (8080). It is not clear to me what else is needed in the URL to access the MPEG stream. Is there a way to scan the server for any stream URLs?
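I don't know of a way to ask the server for its stream URLs directly, but I could at least confirm which of those ports accept a connection before trying URLs against them. A rough sketch (the host is a placeholder, and the port list is just the three I found in the server code):

```python
import socket

# Probe which ports on a host accept a TCP connection.
# This only tells me a service is listening, not what URL path it serves.
def open_ports(host, ports, timeout=1.0):
    found = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            found.append(port)
        except socket.error:
            pass  # closed, filtered, or timed out
        finally:
            s.close()
    return found

# e.g. open_ports('192.168.0.10', [8020, 8040, 8080])
```

That would at least narrow down which port the video client should be talking to.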
I am using Python 2.7 and OpenCV 3.0. I am not sure if I have ffmpeg on my PC. Running Windows 10.
I am quite new to this stuff; the only similar thing I have done was to set up a simple TCP server/client using the Edison to send sensor data.
I have seen a clean solution to this problem on the Raspberry Pi, which uses its raspivid tool to read and encode the camera feed and netcat to set up a TCP pipe from the Pi: https://www.youtube.com/watch?v=sYGdge3T30o (Wirelessly streaming a video from a Raspberry to a remote laptop).
I would imagine all that is needed is to capture video from the camera, encode it, send it over TCP, receive it, and read a frame. Any advice on getting edi-cam to work, or on making another solution work? I like the edi-cam solution because it performs very well.
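To make the netcat idea concrete on the receiving side, the laptop only has to accept a TCP connection and hand the raw bytes onward. A minimal sketch of that receive loop (the decoding step, e.g. piping the bytes into ffmpeg or OpenCV, is deliberately left out; the port and chunk size are arbitrary):

```python
import socket

# Set up a listening socket on the laptop; the Pi/Edison side would
# connect to this port with netcat (or a plain socket).
def make_listener(port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('0.0.0.0', port))
    srv.listen(1)
    return srv

# Accept one sender and yield the raw encoded-video bytes as they arrive.
def receive_chunks(srv, chunk_size=4096):
    conn, _ = srv.accept()
    try:
        while True:
            chunk = conn.recv(chunk_size)
            if not chunk:
                break  # sender closed the connection
            yield chunk
    finally:
        conn.close()

# e.g. for chunk in receive_chunks(make_listener(5000)): feed chunk to a decoder
```

This mirrors what netcat piped into a player does in the Raspberry Pi video, just with the receiving end in Python so the frames could eventually reach OpenCV.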
To know the necessary parameters for the OpenCV functions, I'd recommend you look at the OpenCV documentation: http://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html . Here you can see the VideoCapture class and the options you have for using it. Was this what you were referring to?
There are some guides that illustrate live video streaming on the Edison. The approach might not be exactly the one you're looking for, but they are helpful as an alternative. You can check https://github.com/drejkim/edi-cam .
I need help on this. I am using C++ with OpenCV. The video is being streamed from an edi-cam server (https://github.com/drejkim/edi-cam).
The C++ code that I wrote is as follows:
This reply seems to be a duplicate of the thread you posted here: https://communities.intel.com/thread/106958 . We'll post a reply to you in the aforementioned link soon.