Video Streaming with Flask

I'm sure by now you know that I have released a book and a couple of videos on Flask in cooperation with O'Reilly Media. While the coverage of the Flask framework in these is fairly complete, there are a small number of features that for one reason or another did not get mentioned much, so I thought it would be a good idea to write articles about them here.

This article is dedicated to streaming, an interesting feature that gives Flask applications the ability to provide large responses efficiently partitioned in small chunks, potentially over a long period of time. To illustrate the topic I'm going to show you how to build a live video streaming server!

NOTE: there is now a follow-up to this article, Flask Video Streaming Revisited, in which I describe some improvements to the streaming server introduced here.

What is Streaming?

Streaming is a technique in which the server provides the response to a request in chunks. I can think of a couple of reasons why this might be useful:

  • Very large responses. Having to assemble a response in memory only to return it to the client can be inefficient for very large responses. An alternative would be to write the response to disk and then return the file with flask.send_file(), but that adds I/O to the mix. Providing the response in small portions is a much better solution, assuming the data can be generated in chunks.
  • Real time data. For some applications a request may need to return data that comes from a real time source. A pretty good example of this is a real time video or audio feed. A lot of security cameras use this technique to stream video to web browsers.

Implementing Streaming With Flask

Flask provides native support for streaming responses through the use of generator functions. A generator is a special function that can be interrupted and resumed. Consider the following function:

def gen():
    yield 1
    yield 2
    yield 3

This is a function that runs in three steps, each returning a value. Describing how generator functions are implemented is outside the scope of this article, but if you are a bit curious the following shell session will give you an idea of how generators are used:

>>> x = gen()
>>> x
<generator object gen at 0x7f06f3059c30>
>>> next(x)
1
>>> next(x)
2
>>> next(x)
3
>>> next(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

You can see in this simple example that a generator function can return multiple results in sequence. Flask uses this characteristic of generator functions to implement streaming.

The example below shows how, using streaming, it is possible to generate a large data table without having to assemble the entire table in memory:

from flask import Response, render_template
from app.models import Stock

def generate_stock_table():
    yield render_template('stock_header.html')
    for stock in Stock.query.all():
        yield render_template('stock_row.html', stock=stock)
    yield render_template('stock_footer.html')

@app.route('/stock-table')
def stock_table():
    return Response(generate_stock_table())

In this example you can see how Flask works with generator functions. A route that returns a streamed response needs to return a Response object that is initialized with the generator function. Flask then takes care of invoking the generator and sending all the partial results as chunks to the client.

For this particular example, if you assume that Stock.query.all() returns the results of a database query as an iterable, then you can generate a potentially large table one row at a time. Regardless of the number of elements returned by the query, the memory consumption of the Python process does not keep growing, because the complete response string is never assembled in memory.
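One detail to be aware of is that Flask invokes the generator outside of the request context, so if the generator needs access to the request it has to be wrapped with flask.stream_with_context. Here is a minimal sketch; the /stream-args route and the q argument are made up for illustration only:

from flask import Response, request, stream_with_context

@app.route('/stream-args')
def stream_args():
    def generate():
        # without stream_with_context, touching `request` in here would fail
        yield 'You asked for: {}\n'.format(request.args.get('q', 'nothing'))
    return Response(stream_with_context(generate()))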

Multipart Responses

The table example above generates a traditional page in small portions, with all the parts concatenated into the final document. This is a good example of how to generate large responses, but something a little bit more exciting is to work with real time data.

An interesting use of streaming is to have each chunk replace the previous one in the page, as this enables streams to "play" or animate in the browser window. With this technique you can have each chunk in the stream be an image, and that gives you a cool video feed that runs in the browser!

The secret to implementing in-place updates is to use a multipart response. Multipart responses consist of a header that includes one of the multipart content types, followed by the parts, separated by a boundary marker, each with its own part-specific content type.

There are several multipart content types for different needs. For the purpose of having a stream where each part replaces the previous part the multipart/x-mixed-replace content type must be used. To help you get an idea of how this looks, here is the structure of a multipart video stream:

HTTP/1.1 200 OK
Content-Type: multipart/x-mixed-replace; boundary=frame

--frame
Content-Type: image/jpeg

<jpeg data here>
--frame
Content-Type: image/jpeg

<jpeg data here>
...

As you see above, the structure is pretty simple. The main Content-Type header is set to multipart/x-mixed-replace and a boundary string is defined. Each part is then prefixed by a line containing two dashes followed by the boundary string. The parts have their own Content-Type header, and each part can optionally include a Content-Length header with the length in bytes of the part payload, but at least for images browsers are able to deal with the stream without the length.
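To make the format concrete, here is a small helper that builds one part of the stream exactly as shown above, including the optional Content-Length header. This is just an illustrative sketch; format_frame() is not part of the application built below, and frame is assumed to hold the JPEG data for the part as a bytes object:

def format_frame(frame):
    # frame: the JPEG payload for one part, as bytes
    return (b'--frame\r\n'
            b'Content-Type: image/jpeg\r\n'
            b'Content-Length: ' + str(len(frame)).encode() + b'\r\n'
            b'\r\n' + frame + b'\r\n')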

Building a Live Video Streaming Server

There's been enough theory in this article, now it is time to build a complete application that streams live video to web browsers.

There are many ways to stream video to browsers, and each method has its benefits and disadvantages. The method that works well with the streaming feature of Flask is to stream a sequence of independent JPEG pictures. This is called Motion JPEG, and is used by many IP security cameras. This method has low latency, but quality is not the best, since JPEG compression is not very efficient for motion video.

Below you can see a surprisingly simple, yet complete web application that can serve a Motion JPEG stream:

#!/usr/bin/env python
from flask import Flask, render_template, Response
from camera import Camera

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)

This application imports a Camera class that is in charge of providing the sequence of frames. Putting the camera control portion in a separate module is a good idea in this case; that way the web application remains clean, simple and generic.

The application has two routes. The / route serves the main page, which is defined in the index.html template. Below you can see the contents of this template file:

<html>
  <head>
    <title>Video Streaming Demonstration</title>
  </head>
  <body>
    <h1>Video Streaming Demonstration</h1>
    <img src="{{ url_for('video_feed') }}">
  </body>
</html>

This is a simple HTML page with just a heading and an image tag. Note that the image tag's src attribute points to the second route of this application, and this is where the magic happens.

The /video_feed route returns the streaming response. Because this stream returns the images that are to be displayed in the web page, the URL to this route is in the src attribute of the image tag. The browser will automatically keep the image element updated by displaying the stream of JPEG images in it, since multipart responses are supported in most/all browsers (let me know if you find a browser that doesn't like this).

The generator function used in the /video_feed route is called gen(), and takes as an argument an instance of the Camera class. The mimetype argument is set as shown above, with the multipart/x-mixed-replace content type and a boundary set to the string "frame".

The gen() function enters a loop where it continuously returns frames from the camera as response chunks. The function asks the camera to provide a frame by calling the camera.get_frame() method, and then it yields with this frame formatted as a response chunk with a content type of image/jpeg, as shown above.

Obtaining Frames from a Video Camera

Now all that is left is to implement the Camera class, which will have to connect to the camera hardware and download live video frames from it. The nice thing about encapsulating the hardware dependent part of this application in a class is that this class can have different implementations for different people, but the rest of the application remains the same. You can think of this class as a device driver, which provides a uniform implementation regardless of the actual hardware device in use.

The other advantage of having the Camera class separated from the rest of the application is that it is easy to fool the application into thinking there is a camera when in reality there is not, since the camera class can be implemented to emulate a camera without real hardware. In fact, while I was working on this application, the easiest way for me to test the streaming was to do that and not have to worry about the hardware until I had everything else running. Below you can see the simple emulated camera implementation that I used:

from time import time

class Camera(object):
    def __init__(self):
        self.frames = [open(f + '.jpg', 'rb').read() for f in ['1', '2', '3']]

    def get_frame(self):
        return self.frames[int(time()) % 3]

This implementation reads three images from disk called 1.jpg, 2.jpg and 3.jpg and then returns them one after another repeatedly, at a rate of one frame per second. The get_frame() method uses the current time in seconds to determine which of the three frames to return at any given moment. Pretty simple, right?

To run this emulated camera I needed to create the three frames. Using GIMP I've made the following images:

[Images: the three numbered frames, saved as 1.jpg, 2.jpg and 3.jpg]

Because the camera is emulated, this application runs in any environment, so you can run it right now! I have this application all ready to go on GitHub. If you are familiar with git you can clone it with the following command:

$ git clone https://github.com/miguelgrinberg/flask-video-streaming.git

If you prefer to download it, then you can get a zip file here.
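The next step is to set up a Python environment for the application. The exact commands depend on your platform, but a typical sequence (assuming Python 3 with its standard venv module on a Unix-like system) might look like this:

$ cd flask-video-streaming
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install flask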

Once you have the virtual environment set up and Flask installed in it, you can run the application as follows:

$ python app.py

After you start the application enter http://localhost:5000 in your web browser and you will see the emulated video stream playing the 1, 2 and 3 images over and over. Pretty cool, right?

Once I had everything working I fired up my Raspberry Pi with its camera module and implemented a new Camera class that converts the Pi into a video streaming server, using the picamera package to control the hardware. I will not discuss this camera implementation here, but you can find it in the source code in file camera_pi.py.
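To give you an idea of what that implementation involves, below is a heavily simplified sketch of a picamera-based Camera class. This is not the actual camera_pi.py from the repository, which captures frames more efficiently using a background thread; it is only meant to show the general shape of the hardware-dependent code:

import io
import time
import picamera

class Camera(object):
    def __init__(self):
        self.camera = picamera.PiCamera()
        self.camera.resolution = (320, 240)
        time.sleep(2)  # give the camera sensor some time to warm up

    def get_frame(self):
        # capture a single JPEG frame into an in-memory stream
        stream = io.BytesIO()
        self.camera.capture(stream, format='jpeg', use_video_port=True)
        return stream.getvalue()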

If you have a Raspberry Pi and a camera module you can edit app.py to import the Camera class from this module and then you will be able to live stream the Pi camera, like I'm doing in the following screenshot:

[Screenshot: the live stream from the Raspberry Pi camera playing in a web browser]

If you want to make this streaming application work with a different camera, then all you need to do is write another implementation of the Camera class. If you end up writing one I would appreciate it if you contribute it to my GitHub project.
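As an example, a bare-bones Camera implementation for a regular USB webcam could use OpenCV, roughly as sketched below. This assumes the opencv-python package is installed and that the webcam is the first video device on the system; it is an illustration, not a tested driver:

import cv2

class Camera(object):
    def __init__(self):
        # open the first video device attached to the system
        self.video = cv2.VideoCapture(0)

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, frame = self.video.read()
        if not success:
            raise RuntimeError('Could not read a frame from the webcam')
        # encode the raw frame as JPEG and return the resulting bytes
        ret, jpeg = cv2.imencode('.jpg', frame)
        return jpeg.tobytes()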

Limitations of Streaming

When the Flask application serves regular requests the request cycle is short. The web worker receives the request, invokes the handler function and finally returns the response. Once the response is sent back to the client the worker is free and ready to take on another request.

When a request that uses streaming is received, the worker remains attached to the client for the duration of the stream. When working with long, never ending streams such as a video stream from a camera, a worker will stay locked to the client until the client disconnects. This effectively means that unless specific measures are taken, the application can only serve as many clients as there are web workers. When working with the Flask application in debug mode that means just one, so you will not be able to connect a second browser window to watch the stream from two places at the same time.
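As a partial workaround during development, the Flask development server can be started in threaded mode, which gives each connected client its own thread (something I also suggest in the comments below). A sketch of the change, assuming the app.py shown above:

if __name__ == '__main__':
    app.run(host='0.0.0.0', threaded=True)

Keep in mind this only goes so far: each viewer still ties up a thread for as long as it watches the stream.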

There are ways to overcome this important limitation. The best solution in my opinion is to use a coroutine based web server such as gevent, which Flask fully supports. With the use of coroutines gevent is able to handle multiple clients on a single worker thread, as gevent modifies the Python I/O functions to issue context switches as necessary.
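As a sketch of what that could look like, the application might be served with gevent's WSGI server instead of app.run(). This is only an outline; it assumes gevent is installed, that the Flask application above lives in app.py, and that the monkey patching runs before anything else is imported:

# run_gevent.py -- illustrative only, not part of the repository
from gevent import monkey; monkey.patch_all()  # must run before other imports

from gevent.pywsgi import WSGIServer
from app import app  # the Flask application defined above

WSGIServer(('0.0.0.0', 5000), app).serve_forever()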

Conclusion

In case you missed it above, the code that supports this article is in this GitHub repository: https://github.com/miguelgrinberg/flask-video-streaming/tree/v1. Here you can find a generic implementation of video streaming that does not require a camera, and also an implementation for the Raspberry Pi camera module. The follow-up article describes some improvements I made after this article was originally published.

I hope this article shed some light on the topic of streaming. I concentrated on video streaming because that is an area in which I have some experience, but streaming has many more uses besides video. For example, this technique can be used to keep a connection between the client and the server alive for a long time, allowing the server to push new information the moment it becomes available. These days the WebSocket protocol is a more efficient way to achieve this, but WebSocket is fairly new and works only in modern browsers, while streaming will work on pretty much any browser you can think of.

If you have any questions feel free to write them below. I plan to continue documenting more of the not well known Flask topics, so I hope you connect with me in some way to know when more articles are published. I hope to see you in the next one!

Miguel

455 comments
  • #226 yiwang said

    environment: OS X 10.11.6, opencv-python
    Here's my implementation of camera.py (it works):

    import numpy as np
    import cv2

    class Camera(object):
        def __init__(self):
            self.cap = cv2.VideoCapture(0)
            self.cap.set(3,240)
            self.cap.set(4,320)

        def get_frame(self):
            ret, frame = self.cap.read()
            formated_data = cv2.imencode('.jpeg', frame)[1]
            frame_bytes = formated_data.tobytes()
            return frame_bytes

        def __del__(self):
            self.cap.release()
    
  • #227 DAnthony said

    Hi Miguel, great article.
    Using gevent, I am trying to get the app to run in multiple browsers, but it looks like it's blocked by the yield in the gen method. I was trying to get it so the camera.html page would supply the feed (just one connection), and others could see it on the 'viewer' page. Once the 'camera' page has successfully loaded, I am unable to view any other page... including 'blank' (!)

    below is the app.py

    #!/usr/bin/env python
    # enable our printing

    from __future__ import print_function  # In python 2.7
    import sys

    import gevent
    from gevent.monkey import patch_all

    from os import environ
    from flask import Flask, render_template, Response
    from gevent import monkey; monkey.patch_all()

    # emulated camera
    from camera import Camera

    patch_all()
    app = Flask(__name__)
    gFrame = None

    @app.route('/')
    def index():
        """Video streaming home page."""
        return render_template('index.html')

    @app.route('/camera/')
    def camera():
        """Video Broadcast."""
        print('Camera', file=sys.stderr)
        return render_template('camera.html')

    @app.route('/viewer/')
    def viewer():
        """Viewer."""
        print('Viewer', file=sys.stderr)
        return render_template('viewer.html')

    @app.route('/blank/')
    def blank():
        """Viewer."""
        print('Blank', file=sys.stderr)
        return render_template('blank.html')

    @app.route('/video_feed')
    def video_feed():
        """Video streaming route. Put this in the src attribute of an img tag."""

        def gen(camera):
            """ Video streaming generator function. """
            while True:
                frame = camera.get_frame()
                gFrame = (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
                print('get gFrame' + gFrame, file=sys.stderr)
                yield gFrame

        return Response(gen(Camera()), mimetype='multipart/x-mixed-replace; boundary=frame')

    @app.route('/video_viewer')
    def video_viewer():
        """Video streaming route. Put this in the src attribute of an img tag."""
        if (gFrame != None):
            return Response(gFrame,
                mimetype='multipart/x-mixed-replace; boundary=frame')

    if __name__ == '__main__':
        monkey.patch_all()
        from gevent.wsgi import WSGIServer
        WSGIServer(('', 5000), app).serve_forever()

        '''
        HOST = environ.get('SERVER_HOST', 'localhost')
        try:
            PORT = int(environ.get('SERVER_PORT', '5555'))
        except ValueError:
            PORT = 5555
        app.run(host=HOST, port=PORT, debug=True, threaded=True)
        '''

    thanks for any help
    DAnthony

  • #228 Miguel Grinberg said

    @DAnthony: try moving the patch_all call above all other imports. It is possible that the patching is not effective because you have it below some imports.

  • #229 DAnthony said

    Tried moving the patch_all call above all other imports. Looking at the camera.py file, I am using opencv with VideoCapture and the read call. I believe this can be blocking:

    # begin camera.py

    from gevent import monkey; monkey.patch_all()

    from time import time

    import sys
    sys.path.append('C:\Python27\Lib\site-packages')  # set path to find opencv

    import numpy as np
    import cv2

    class Camera(object):
        """An emulated camera implementation that streams a repeated sequence of
        files 1.jpg, 2.jpg and 3.jpg at a rate of one frame per second."""

        def __init__(self):
            # uncomment for test images
            # self.frames = [open(f + '.jpg', 'rb').read() for f in ['1', '2', '3']]

            self.cap = cv2.VideoCapture(0)
            self.cap.set(3,720)
            self.cap.set(4,1280)

        def get_frame(self):
            # uncomment for test images
            # return self.frames[int(time()) % 3]

            ret, frame = self.cap.read()
            formatted_data = cv2.imencode(".jpeg", frame)[1]
            frame_bytes = formatted_data.tobytes()
            return frame_bytes

        def __del__(self):
            self.cap.release()

    # end camera.py

    I am able to get the video in one browser. I also have tried a new thread, using your camera_pi.py as a starting template, but the page hangs with no image in any page. This seems to be the preferred opencv direction too, creating a new thread for the VideoCapture and read().

    # camera.py with threading starts here

    from gevent import monkey; monkey.patch_all()

    from time import time
    import io
    import threading

    import sys
    sys.path.append('C:\Python27\Lib\site-packages')  # set path to find opencv

    import numpy as np
    import cv2

    class Camera(object):

        thread = None  # background thread that reads frames from camera
        frame_bytes = None  # current frame_bytes is stored here by background thread

        def initialize(self):
            if Camera.thread is None:
                Camera.thread = threading.Thread(target=self._thread)
                Camera.thread.start()

                # wait until frames start to be available
                while self.frame_bytes is None:
                    time.sleep(0)

        def get_frame(self):
            self.initialize()
            return self.frame_bytes

        def _thread(cls):
            camera = cv2.VideoCapture(0)
            camera.set(3,720)
            camera.set(4,1280)

            time.sleep(2)

            while True:
                ret, frame = camera.read()
                formatted_data = cv2.imencode(".jpeg", frame)[1]
                cls.frame_bytes = formatted_data.tobytes()

        def __del__(self):
            self.camera.release()

    # camera.py with threading stops here

    Any more suggestions? Do you know if this has been successfully implemented with opencv and from a desktop webcam?

    thanks
    DAnthony

  • #230 Miguel Grinberg said

    @DAnthony: I am not aware of any async implementation with OpenCV. It may be that OpenCV is not compatible with an async framework, so you will have to move it to another process to avoid it blocking the async loop.

  • #231 Max Hartvigsen said

    I've successfully set up a webpage which streams my IP cams at home. The webpages are set up using python and flask.

    The IP cameras work by using a simple img tag in my html document, e.g.: http://login:password@192.168.0.80:9000/stream.jpg

    However I would like to import the stream into a python/flask script, since it gives me more flexibility (I can add more features etc.) and also the url with the password would not be visible in the html code.

    I've read through quite a few tutorials on this subject, but have not been able to adapt them, partly because my string has a .jpg extension which I assume means it streams like a mjpg? I guess the first challenge is to build a driver that reads the stream with opencv? Any advice to get me started? (needless to say I'm quite new to this :) )

  • #232 Miguel Grinberg said

    @Max: what you want to do is different from this project. Here I am taking individual jpeg images from a camera and assembling them into a mjpeg stream. In your case, you already have a mjpeg stream coming from your cameras, so there is really little point in using Python, which is a fairly slow language, to decompose that stream back into individual images, only to then reassemble a new mjpeg stream. What makes more sense for your situation is to put a reverse proxy in your host that reads from the camera(s) and delivers the stream directly to clients. Nginx should be able to do this, and it would be much more efficient than the Python solution.

  • #233 Raamish said

    Hi Miguel, excellent blog post here.
    I had one question: how would we go about it if we need to constantly send motion jpeg from Android to my flask server?
    I've searched a lot about it but I can't seem to find a decent answer.
    Thanks a ton for your help.

  • #234 Miguel Grinberg said

    @Raamish: If you want the phone to push the mjpeg stream to your server then the project featured in this article is not going to help you. For a client pushing a stream to a server a good solution is to use WebSocket.

  • #235 Fred Perkins said

    Thanks for this. I'm late to the party and am a little baffled. I can't figure out how the video frame rate is set.

  • #236 Miguel Grinberg said

    @Fred: Yeah, that is actually tricky. In the example in this article using the 1, 2 and 3 frames, the nominal frame rate is one frame per second, but effectively, you may be getting a faster or slower frame rate. Basically, any time the client is ready to take a new frame, it will get the frame that plays at that time. If the client asks faster than once per second, then it will get duplicated frames; if it asks at a slower rate, then frames will be skipped. A real camera works in the same way: the camera will produce frames at a certain rate (depending on many factors, including the speed of the Python script reading those frames from the camera), but that may be different from the viewing rate of a particular client.

  • #237 Aakash said

    Thank you so much helped a lot :)
    Great Job Your work is used in so many stack overflow answers :)

  • #238 Jean-Marc said

    Thanks a lot for this tutorial. For my project, in addition to the image streaming (video_feed), I want to show telemetry data in real time: it is for a robot application. To make things easier for development, the images are not from a live cam feed, but from a local folder. And for the telemetry, for the moment I limit myself to showing the name of each frame. The different approaches I tried were not successful: I added a <div id="telemetry_feed"></div> in index.html and a js to update the div content from telemetry_feed.html. The problem is that the div content does not get updated. Please, I would really appreciate some pointers to get this working. Thank you.

  • #239 Miguel Grinberg said

    @Jean-Marc: not sure how I can help. This sounds like something you would ask in stack overflow, including all the relevant details. As far as pointers, for the telemetry updates I would probably use jquery, or one of the big frameworks such as Angular.

  • #240 MagicMake said

    Dear Miguel thanks a lot first of all!
    I'm trying to grab the stream from a different python script (now locally, later in the same network). My Code for that is taken from SO:

    import cv2
    from urllib.request import Request, urlopen
    import numpy as np

    req = Request('http://127.0.0.1:5000/video_feed')
    stream = urlopen(req)
    bytes = b''
    while True:
        bytes += stream.read(1024)
        a = bytes.find('\xff\xd8')
        b = bytes.find('\xff\xd9')
        if a != -1 and b != -1:
            jpg = bytes[a:b+2]
            bytes = bytes[b+2:]
            i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.CV_LOAD_IMAGE_COLOR)
            cv2.imshow('i', i)
            if cv2.waitKey(1) == 27:
                exit(0)

    Unfortunately it fails at urlopen:
    "RemoteDiconnected: Remote end closed connection without response"

    Do you have an idea, where this might be coming from or how to fix it?
    Kind regards!

  • #241 Miguel Grinberg said

    @MagicMake: Does the Flask server show any errors, any indication that the request was received?

  • #242 Terence said

    Hi Miguel,

    Thanks for sharing this technique. I reproduced everything but when it comes to using it with multiprocessing, I noticed visible CPU performance decline. I documented the case in https://stackoverflow.com/questions/45525030/video-streaming-with-flask-multiprocessing-cpu-usage-capped-to-single-core-lev .

    Terence

  • #243 simon said

    I have created multiple pages which render opencv. But whenever I navigate to another page which streams opencv, it takes me a minute to get to the specific page. I have tried to add the frame.destroyAllWindows() function but it still doesn't work.

  • #244 Miguel Grinberg said

    @simon: Do you know why it takes so long? Is it OpenCV doing something? I can't really tell you what's wrong without seeing the code. I have added OpenCV streaming to this project, by the way; have you seen how I implemented it?

  • #245 simon said

    here is my main.py

    from flask import Flask, render_template, Response
    from camera import VideoCamera

    app = Flask(__name__)

    @app.route('/')
    def index():
        return render_template('index.html')

    @app.route('/profile')
    def profile():
        return render_template('profile.html')

    @app.route('/basic_table')
    def basic_table():
        return render_template('basic-table.html')

    @app.route('/vid')
    def stream():
        return render_template('stream.html')

    @app.route('/add_dataSet')
    def add_dataSet():
        return render_template('data_set.html')

    def gen(camera):
        while True:
            frame = camera.get_frame()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

    @app.route('/video_feed')
    def video_feed():
        return Response(gen(VideoCamera()),
                        mimetype='multipart/x-mixed-replace; boundary=frame')

    if __name__ == '__main__':
        app.run(debug=True)

    and this is the content of stream.html and data_set.html:

    {% extends 'menus.html' %}

    {% block bgtitlte %}
    <h4 class="page-title">OpenCV Stream</h4> </div>
    1. Dashboard
    2. OpenCV page
    {% endblock %}

    {% block whitebox %}
    <center>
    <img src="{{ url_for('video_feed') }}">
    </center>
    {% endblock %}

    So whenever I go to stream.html it works great, but whenever I go to data_set.html it takes too long, about an hour or even forever, to load, so I need to stop the browser load and then navigate to data_set.html again, and then it loads faster. By the way, I just followed your tutorial, but it does not load another html file that streams opencv.

  • #246 Miguel Grinberg said

    @simon: You are using the Flask development server with default settings. You can have only one client. I suspect that is what's causing this issue. Either add threaded=True to add threading support, or else use a production web server such as gunicorn with a sufficient number of worker threads configured.

  • #247 simon said

    It works when I added threading, but whenever I switch again to another page with an opencv stream the program stops. How am I going to use gunicorn with this kind of setup?

  • #248 Miguel Grinberg said

    @simon: what do you mean by "the program stops"? Do you get an error? It's hard to understand what you are experiencing without details.

  • #249 simon said

    Sir Miguel, I have another question. I want to integrate this face recognition code into the flask video stream:

    import cv2
    import numpy as np

    # classifier
    faceDetect = cv2.CascadeClassifier('haarcascade_frontalface_default.xml');
    cam = cv2.VideoCapture(0);

    while(True):
        ret,img = cam.read();
        gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY);
        faces = faceDetect.detectMultiScale(gray,1.3,5);

        #draw rectangle
        for(x,y,w,h) in faces:
            cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
        cv2.imshow("Face",img);
        if(cv2.waitKey(1)==ord('q')):
            break;

    cam.release()
    cv2.destroyAllWindows()

    How am I going to achieve that one? Thank you very much for your quick response :)

  • #250 Miguel Grinberg said

    @simon: did you see the camera driver I wrote for OpenCV? I think it would be fairly easy to integrate your code with it; have a look at it on the GitHub repository.
