Flask Video Streaming Revisited
Almost three years ago I wrote an article on this blog titled Video Streaming with Flask, in which I presented a very modest streaming server that used a Flask generator view function to stream a Motion-JPEG stream to web browsers. My intention with that article was to show a simple, yet practical use of streaming responses, a not very well known feature in Flask.
That article is extremely popular, not because it teaches how to implement streaming responses, but because a lot of people want to implement streaming video servers. Unfortunately, my focus when I wrote the article was not on creating a robust video server, so I frequently get questions and requests for advice from those who want to use the video server for a real application and quickly find its limitations. So today I'm going to revisit my streaming video server and describe a few improvements I've made to it.
Recap: Using Flask's Streaming for Video
I recommend you read the original article to familiarize yourself with my project. In short, this is a Flask server that uses a streaming response to provide a stream of video frames captured from a camera in Motion JPEG format. This format is very simple and not the most efficient, but has the advantage that all browsers support it natively, with no client-side scripting required. It is a fairly common format used by security cameras for that reason. To demonstrate the server, I implemented a camera driver for a Raspberry Pi with its camera module. For those who didn't have a Pi with a camera at hand, I also wrote an emulated camera driver that streams a sequence of jpeg images stored on disk.
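As a quick refresher, the heart of that server is a generator view function that emits JPEG frames wrapped in a multipart/x-mixed-replace response. The sketch below condenses the view function from the original article; the stand-in Camera class is my addition so that the snippet runs on its own:

```python
from flask import Flask, Response

app = Flask(__name__)


class Camera(object):
    """Stand-in camera that always returns the same canned frame."""
    def get_frame(self):
        return b'\xff\xd8 fake jpeg data \xff\xd9'


def gen(camera):
    """Yield video frames, each wrapped in its own multipart part."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')


@app.route('/video_feed')
def video_feed():
    """Stream frames to the browser as Motion JPEG."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
```

The browser keeps the connection open and replaces the displayed image every time a new part arrives, which is what makes this work without any JavaScript.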
Running the Camera Only When There Are Viewers
One aspect of the original streaming server that people did not like is that the background thread that captures video frames from the Raspberry Pi camera starts when the first client connects to the stream, but then it never stops. A more efficient way to handle this background thread is to only have it running while there are viewers, so that the camera can be turned off when nobody is connected.
I implemented this improvement a while ago. The idea is that every time a frame is accessed by a client the current time of that access is recorded. The camera thread checks this timestamp and if it finds it is more than ten seconds old it exits. With this change, when the server runs for ten seconds without any clients it will shut its camera off and stop all background activity. As soon as a client connects again the thread is restarted.
Here is a brief description of the changes:
class Camera(object):
    # ...
    last_access = 0  # time of last client access to the camera

    # ...

    def get_frame(self):
        Camera.last_access = time.time()
        # ...

    @classmethod
    def _thread(cls):
        with picamera.PiCamera() as camera:
            # ...
            for foo in camera.capture_continuous(stream, 'jpeg',
                                                 use_video_port=True):
                # ...

                # if there haven't been any clients asking for frames in
                # the last 10 seconds stop the thread
                if time.time() - cls.last_access > 10:
                    break
        cls.thread = None
Simplifying the Camera Class
A common problem that a lot of people mentioned to me is that it is hard to add support for other cameras. The Camera class that I implemented for the Raspberry Pi is fairly complex because it uses a background capture thread to talk to the camera hardware.
To make this easier, I decided to move the generic functionality that does all the background processing of frames to a base class, leaving only the task of getting the frames from the camera to implement in subclasses. The new BaseCamera class in module base_camera.py implements this base class. Here is what this generic thread looks like:
class BaseCamera(object):
    thread = None  # background thread that reads frames from camera
    frame = None  # current frame is stored here by background thread
    last_access = 0  # time of last client access to the camera

    # ...

    @staticmethod
    def frames():
        """Generator that returns frames from the camera."""
        raise RuntimeError('Must be implemented by subclasses.')

    @classmethod
    def _thread(cls):
        """Camera background thread."""
        print('Starting camera thread.')
        frames_iterator = cls.frames()
        for frame in frames_iterator:
            BaseCamera.frame = frame

            # if there haven't been any clients asking for frames in
            # the last 10 seconds then stop the thread
            if time.time() - BaseCamera.last_access > 10:
                frames_iterator.close()
                print('Stopping camera thread due to inactivity.')
                break
        BaseCamera.thread = None
This new version of the Raspberry Pi's camera thread has been made generic with the use of yet another generator. The thread expects the frames() method (which is a static method) to be a generator implemented in subclasses that are specific to different cameras. Each item returned by the iterator must be a video frame, in jpeg format.
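The client-facing side of the base class starts the background thread on demand and hands out the latest frame. Here is a condensed sketch of how that can be done; the actual base_camera.py in the repository also wires in the event signaling described further down, so treat this as illustrative rather than the exact code:

```python
import threading
import time


class BaseCamera(object):
    thread = None    # background thread that reads frames from the camera
    frame = None     # current frame is stored here by the background thread
    last_access = 0  # time of last client access to the camera

    def __init__(self):
        """Start the background camera thread if it isn't running yet."""
        if BaseCamera.thread is None:
            BaseCamera.last_access = time.time()
            BaseCamera.thread = threading.Thread(target=self._thread)
            BaseCamera.thread.start()
            # wait until the first frame is available
            while BaseCamera.frame is None:
                time.sleep(0)

    def get_frame(self):
        """Return the most recent frame captured by the thread."""
        BaseCamera.last_access = time.time()
        return BaseCamera.frame

    @staticmethod
    def frames():
        """Generator that returns frames from the camera."""
        raise RuntimeError('Must be implemented by subclasses.')

    @classmethod
    def _thread(cls):
        """Camera background thread (simplified, no event signaling)."""
        frames_iterator = cls.frames()
        for frame in frames_iterator:
            BaseCamera.frame = frame
            # stop if no client asked for a frame in the last 10 seconds
            if time.time() - BaseCamera.last_access > 10:
                frames_iterator.close()
                break
        BaseCamera.thread = None
```

Because the thread and the current frame are class attributes, all clients share a single capture thread no matter how many Camera instances the Flask views create.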
Here is how the emulated camera that returns static images can be adapted to work with this base class:
import time

from base_camera import BaseCamera


class Camera(BaseCamera):
    """An emulated camera implementation that streams a repeated sequence of
    files 1.jpg, 2.jpg and 3.jpg at a rate of one frame per second."""
    imgs = [open(f + '.jpg', 'rb').read() for f in ['1', '2', '3']]

    @staticmethod
    def frames():
        while True:
            time.sleep(1)
            yield Camera.imgs[int(time.time()) % 3]
Note how in this version the frames() generator forces a frame rate of one frame per second by simply sleeping that amount of time between frames.
The camera subclass for the Raspberry Pi camera also becomes much simpler with this redesign:
import io
import time

import picamera

from base_camera import BaseCamera


class Camera(BaseCamera):
    @staticmethod
    def frames():
        with picamera.PiCamera() as camera:
            # let camera warm up
            time.sleep(2)

            stream = io.BytesIO()
            for foo in camera.capture_continuous(stream, 'jpeg',
                                                 use_video_port=True):
                # return current frame
                stream.seek(0)
                yield stream.read()

                # reset stream for next frame
                stream.seek(0)
                stream.truncate()
OpenCV Camera Driver
A fair number of users complained that they did not have access to a Raspberry Pi equipped with a camera module, so they could not try this server with anything other than the emulated camera. Now that adding camera drivers is much easier, I wanted to also have a camera based on OpenCV, which supports most USB webcams and laptop cameras. Here is a simple camera driver for it:
import cv2

from base_camera import BaseCamera


class Camera(BaseCamera):
    @staticmethod
    def frames():
        camera = cv2.VideoCapture(0)
        if not camera.isOpened():
            raise RuntimeError('Could not start camera.')

        while True:
            # read current frame
            _, img = camera.read()

            # encode as a jpeg image and return it
            yield cv2.imencode('.jpg', img)[1].tobytes()
With this class, the first video camera reported by your system will be used. If you are using a laptop, this is likely your internal camera. If you are going to use this driver, you need to install the OpenCV bindings for Python:
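If the first device is not the one you want, the capture index can easily be made configurable. Note that OPENCV_CAMERA_SOURCE below is a name I'm assuming for illustration, not a variable the server currently reads:

```python
import os


def camera_source(default=0):
    """Return the OpenCV capture index, optionally overridden by the
    (hypothetical) OPENCV_CAMERA_SOURCE environment variable."""
    try:
        return int(os.environ['OPENCV_CAMERA_SOURCE'])
    except (KeyError, ValueError):
        return default
```

The driver would then call cv2.VideoCapture(camera_source()) instead of hardcoding device 0.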
$ pip install opencv-python
The project now supports three different camera drivers: emulated, Raspberry Pi and OpenCV. To make it easier to select which driver to use without having to edit the code, the Flask server looks for a CAMERA environment variable to know which class to import. This variable can be set to pi or opencv, and if it isn't set, then the emulated camera is used by default.
The way this is implemented is fairly generic: whatever the value of the CAMERA environment variable is, the server will expect the driver to be in a module named camera_$CAMERA.py. The server will import this module and then look for a Camera class in it. The logic is actually quite simple:
from importlib import import_module
import os

# import camera driver
if os.environ.get('CAMERA'):
    Camera = import_module('camera_' + os.environ['CAMERA']).Camera
else:
    from camera import Camera
For example, to start an OpenCV session from bash, you can do this:
$ CAMERA=opencv python app.py
From a Windows command prompt you can do the same as follows:
$ set CAMERA=opencv
$ python app.py
Another observation that was made a few times is that the server consumes a lot of CPU. The reason for this is that there is no synchronization between the background thread capturing frames and the generator feeding those frames to the client. Both run as fast as they can, without regard for the speed of the other.
In general it makes sense for the background thread to run as fast as possible, because you want the frame rate to be as high as possible for each client. But you definitely do not want the generator that delivers frames to a client to ever run at a faster rate than the camera is producing frames, because that would mean duplicate frames will be sent to the client. While these duplicates do not cause any problems, they increase CPU and network usage without any benefit.
So there needs to be a mechanism by which the generator only delivers original frames to the client, and if the delivery loop inside the generator is faster than the frame rate of the camera thread, then the generator should wait until a new frame is available, so that it paces itself to match the camera rate. On the other side, if the delivery loop runs at a slower rate than the camera thread, then it should never get behind when processing frames, and instead it should skip frames to always deliver the most current frame. Sounds complicated, right?
What I wanted as a solution here is to have the camera thread signal the generators that are running when a new frame is available. The generators can then block while they wait for the signal before they deliver the next frame. In looking through synchronization primitives, I've found that threading.Event is the one that matches this behavior. So basically, each generator should have an event object, and then the camera thread should signal all the active event objects to inform all the running generators when a new frame is available. The generators deliver the frame and reset their event objects, and then go back to wait on them again for the next frame.
To avoid having to add event handling logic in the generator, I decided to implement a customized event class that uses the thread id of the caller to automatically create and manage a separate event for each client thread. This is somewhat complex, to be honest, but the idea came from how Flask's context local variables are implemented. The new event class is called CameraEvent, and has wait(), set() and clear() methods. With the support of this class, the rate control mechanism can be added to BaseCamera as follows:
class CameraEvent(object):
    # ...

class BaseCamera(object):
    # ...
    event = CameraEvent()

    # ...

    def get_frame(self):
        """Return the current camera frame."""
        BaseCamera.last_access = time.time()

        # wait for a signal from the camera thread
        BaseCamera.event.wait()
        BaseCamera.event.clear()

        return BaseCamera.frame

    @classmethod
    def _thread(cls):
        # ...
        for frame in frames_iterator:
            BaseCamera.frame = frame
            BaseCamera.event.set()  # send signal to clients
            # ...
The magic in the CameraEvent class enables multiple clients to wait individually for a new frame. The wait() method uses the current thread id to allocate an individual event object for each client and wait on it. The clear() method resets the event associated with the caller's thread id, so that each generator thread can run at its own speed. The set() method, called by the camera thread, sends a signal to the event objects allocated for all clients, and also removes any events that aren't being serviced by their owners, because that means the clients associated with those events have closed the connection and are gone. You can see the implementation of the CameraEvent class in the GitHub repository.
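If you don't want to dig through the repository, here is a simplified sketch of how such a class can be built with one threading.Event per client thread. This is my reconstruction of the idea, so the actual implementation and its timeout value may differ:

```python
import threading
import time


class CameraEvent(object):
    """An Event-like class that signals all active clients when a new
    frame is available."""
    def __init__(self):
        self.events = {}

    def wait(self):
        """Invoked from each client's thread to wait for the next frame."""
        ident = threading.get_ident()
        if ident not in self.events:
            # this is a new client: give it its own event and a timestamp
            self.events[ident] = [threading.Event(), time.time()]
        return self.events[ident][0].wait()

    def set(self):
        """Invoked by the camera thread when a new frame is available."""
        now = time.time()
        remove = None
        for ident, event in self.events.items():
            if not event[0].is_set():
                # client is waiting: wake it up and record the time
                event[0].set()
                event[1] = now
            elif now - event[1] > 5:
                # the event stayed set for over 5 seconds, so the client
                # is probably gone; schedule its event for removal
                remove = ident
        if remove:
            del self.events[remove]

    def clear(self):
        """Invoked from each client's thread after a frame was processed."""
        self.events[threading.get_ident()][0].clear()
```

Keying the dictionary on the thread id is what lets the generator code stay oblivious to all of this: each client simply calls wait() and clear(), and gets its own private event under the hood.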
To give you an idea of the magnitude of the performance improvement, consider that the emulated camera driver consumed about 96% CPU before this change because it was constantly sending duplicate frames at a rate much higher than the one frame per second being produced. After these changes, the same stream consumes about 3% CPU. In both cases there was a single client viewing the stream. The OpenCV driver went from about 45% CPU down to 12% for a single client, with each new client adding about 3%.
Production Web Server
Lastly, I think if you plan to use this server for real, you should use a more robust web server than the one that comes with Flask. A very good choice is to use Gunicorn:
$ pip install gunicorn
With Gunicorn, you can run the server as follows (remember to set the CAMERA environment variable to the selected camera driver first):
$ gunicorn --threads 5 --workers 1 --bind 0.0.0.0:5000 app:app
The --threads 5 option tells Gunicorn to handle at most five concurrent requests. That means that with this number you can have up to five clients watching the stream simultaneously. The --workers 1 option limits the server to a single process. This is required because only one process can connect to a camera to capture frames.
You can increase the number of threads some, but if you find that you need a large number, it will probably be more efficient to use an asynchronous framework instead of threads. Gunicorn can be configured to work with the two frameworks that are compatible with Flask: gevent and eventlet. To make the video streaming server work with these frameworks, there is one small addition to the camera background thread:
class BaseCamera(object):
    # ...
    @classmethod
    def _thread(cls):
        # ...
        for frame in frames_iterator:
            BaseCamera.frame = frame
            BaseCamera.event.set()  # send signal to clients
            time.sleep(0)
        # ...
The only change here is the addition of a sleep(0) call in the camera capture loop. This is required for both eventlet and gevent, because they use cooperative multitasking. The way these frameworks achieve concurrency is by having each task release the CPU, either by calling a function that does network I/O or explicitly. Since there is no I/O here, the sleep call is what achieves the CPU release.
Now you can run Gunicorn with the gevent or eventlet workers as follows:
$ CAMERA=opencv gunicorn --worker-class gevent --workers 1 --bind 0.0.0.0:5000 app:app
The --worker-class gevent option configures Gunicorn to use the gevent framework (you must install it with pip install gevent). If you prefer, --worker-class eventlet is also available. The --workers 1 option limits the server to a single process as above. The eventlet and gevent workers in Gunicorn allocate a thousand concurrent clients by default, so that should be much more than what a server of this kind is able to support anyway.
All the changes described above are incorporated in the GitHub repository. I hope you get a better experience with these improvements.
Before concluding, I want to provide quick answers to other questions I have received about this server:
- How to force the server to run at a fixed frame rate? Configure your camera to deliver frames at that rate, then sleep enough time during each iteration of the camera capture loop to also run at that rate.
- How to increase the frame rate? The server as described here delivers frames as fast as possible. If you need better frame rates, you can try configuring your camera for a smaller frame size.
- How to add sound? That's really difficult. The Motion JPEG format does not support audio. You are going to need to stream the audio separately, and then add an audio player to the HTML page. Even if you manage to do all this, synchronization between audio and video is not going to be very accurate.
- How to save the stream to disk on the server? Just save the sequence of JPEG files in the camera thread. For this you may want to remove the automatic mechanism that ends the background thread when there are no viewers.
- How to add playback controls to the video player? Motion JPEG was not made for interactive operation by the user, but if you are set on doing this, with a little bit of trickery it may be possible to implement playback controls. If the server saves all jpeg images, then a pause can be implemented by having the server deliver the same frame over and over. When the user resumes playback, the server will have to deliver "old" images that are loaded from disk, since now the user would be in DVR mode instead of watching the stream live. This could be a very interesting project!
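To illustrate the first answer above, here is one way to pace any frames() generator down to a fixed rate. This is a sketch; the wrapper name and interface are my own, not part of the project:

```python
import time


def paced(frames, target_fps):
    """Yield items from a frame iterator at no more than target_fps per
    second, sleeping as needed between frames."""
    period = 1.0 / target_fps
    next_time = time.time() + period
    for frame in frames:
        yield frame
        # sleep away whatever is left of this frame's time slot
        delay = next_time - time.time()
        if delay > 0:
            time.sleep(delay)
        next_time += period
```

A camera subclass could then yield from paced(raw_frames(), 10) to cap the stream at roughly ten frames per second, regardless of how fast the underlying capture loop runs.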
That is all for now. If you have other questions please let me know!
#26 Islam Saad said 2017-10-15T10:30:40Z
By the way, this is python code i use for H264 video streaming, and i want to merge it with flask
camera = picamera.PiCamera()
camera.resolution = (640, 480)
camera.framerate = 24
server_socket = socket.socket()
# Accept a single connection and make a file-like object out of it
connection = server_socket.accept().makefile('wb')
#27 JACK LIM said 2017-10-15T16:04:26Z
HI Miguel Grinberg ! May i ask if i using the drone as the camera it is possible and what should i modify for the python file ??
#28 Miguel Grinberg said 2017-10-15T16:57:29Z
@Islam: The solution I presented in this article is specific to Motion JPEG. It is not adaptable to H.264. The generator based streaming cannot be used to stream other formats besides MJPEG.
#29 Miguel Grinberg said 2017-10-15T17:01:07Z
@JACK: it largely depends on what kind of video stream your drone offers. My guess is that you'll get a H.264 stream, which is very different to the Motion JPEG I'm using here, but if you can somehow get JPEG images from your drone, it should work just fine with this.
#30 Radu said 2017-10-18T17:05:34Z
This is great but i have a question, i'm just starting with python and flask and i cannot for the love of God figure how to stop the thread and record a video with the picamera everytime a PIR sensor detects motion and restart the stream after this do you have any ideea how i can make this work
#31 Miguel Grinberg said 2017-10-18T18:12:49Z
@Radu: The code that stops the camera due to inactivity is in the BaseCamera class: https://github.com/miguelgrinberg/flask-video-streaming/blob/master/base_camera.py#L101-L103.
To stop the camera for a different reason, you have to expand that code with new stopping/restarting logic.
#32 radu nanescu said 2017-10-19T15:46:06Z
@Miguel Holy moly you pointed directly there thanks a bunch!
#33 Jug said 2017-10-22T09:01:33Z
I found out that my problem was in my virtual environment on Raspberry Pi. I created a new virtual environment and now everything works.
#34 DH said 2017-11-07T16:51:57Z
Great post - thanks a ton for putting this together.
If anyone runs in to an error along the lines of:
numpy.ndarray object has no attribute 'tobytes'
Its due to an numpy library update issue when Python went from 2.x to 3.x. The solution is to make sure you have the latest numpy library. In Raspbian/Debian:
sudo pip install numpy --upgrade
#35 Neeraj Gupta said 2017-12-07T04:53:26Z
@Miguel - I have an IP camera and i am using gstreamer command for video streaming but the problem is gstreamer open its own window for the video streaming. I want to show the video stream on a browser.
Can we do it using flask framework ?
#36 Miguel Grinberg said 2017-12-07T05:45:37Z
@Neeraj: Not familiar enough with gstreamer to tell you how to do it, but I don't see why this would not work if you write a driver class for it that extract frames from your gstreamer pipeline. It's probably hard, but doable.
#37 Cecil said 2017-12-21T10:05:33Z
Thanks for the great article(s)! I'm using a modified version of the OpenCV camera as I'm using an IP camera. The issue I'm seeing the performance is very slow. Outside of Flask, video playback is near realtime. Here is my code from my Camera.py:
@staticmethod
def frames():
    bytes = ''
    while True:
        bytes += liveStream.read(1024)
        a = bytes.find('\xff\xd8')
        b = bytes.find('\xff\xd9')
        if a != -1 and b != -1:
            jpg = bytes[a:b + 2]
            bytes = bytes[b + 2:]
            mjpeg = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
            #
            (h, w) = mjpeg.shape[:2]
            center = (w / 2, h / 2)
            M = cv2.getRotationMatrix2D(center, 90, 1)
            rh, rw = h * 1, w * 1
            r = np.deg2rad(90)
            rw, rh = (abs(np.sin(r)*rh) + abs(np.cos(r)*rw), abs(np.sin(r)*rw) + abs(np.cos(r)*rh))
            (tx, ty) = ((rw - w)/2, (rh - h)/2)
            M[0,2] += tx
            M[1,2] += ty
            rotated = cv2.warpAffine(mjpeg, M, dsize=(int(rw), int(rh)))
            #
            #ret, jpeg = cv2.imencode('.jpg', rotated)
            ret, jpeg = cv2.imencode('.jpg', mjpeg)
            return jpeg.tobytes()
            #yield cv2.imencode('.jpg', mjpeg).tobytes()
If I use "yield", I cannot get any image to display. That is why it is set the way it is. Do you have any thoughts as to what the issue is?
#38 Miguel Grinberg said 2017-12-23T18:34:24Z
@Cecil: you seem to be doing a lot of CPU work for each frame with the rotation and then encoding back into a jpeg. That affects your frame rate. If you can figure out a way to not have to do all these processing on each frame your frame rate will improve.
#39 george said 2017-12-31T08:10:41Z
Hi Miguel! Nice work I would like to ask if how am I going to resize the window size of opencv inside flask?
#40 Miguel Grinberg said 2018-01-01T06:27:16Z
@george: Not sure I understand the question. OpenCV should not open any window, at least it doesn't for me.
#41 Dayle said 2018-01-13T01:06:01Z
Thanks a ton for putting this post together. As fate would have it, it is exactly what I was looking for and it runs flawlessly. I have to admit, I'm still struggling to understand how all the pieces work together. It's not super obvious to someone new to Python and a complete novice at Flask.
I just bought your latest book and look forward to reading it.
#42 abdullah said 2018-02-04T07:24:13Z
Hi miguel how am I able to do face recognition using picamera and stream it in flask? any idea?
#43 Miguel Grinberg said 2018-02-04T07:35:25Z
@abdullah: Not sure if the Pi is powerful enough for this, but OpenCV should help with the face recognition part.
#44 abdullah said 2018-02-04T08:02:21Z
I was able to stream picamera on flask but I cannot make it stream the face recognition here is my code:
from opencv_py.base_camera import BaseCamera
with picamera.PiCamera() as camera:
# let camera warm up
    faceCascPath = "haarcascade_frontalface_default.xml"
    eyeCascadePath = "haarcascade_eye.xml"
    faceCascade = cv2.CascadeClassifier(faceCascPath)
    eyeCascade = cv2.CascadeClassifier(eyeCascadePath)
    camera.vflip = True
    time.sleep(2)
    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
        buff = numpy.fromstring(stream.getvalue(), dtype=numpy.uint8)
        image = cv2.imdecode(buff, 1)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.1, 5, flags=cv2.CASCADE_SCALE_IMAGE)
        for (x, y, w, h) in faces:
            face = gray[y:y+h, x:x+w]
            eyes = eyeCascade.detectMultiScale(face, 1.1, 5, flags=cv2.CASCADE_SCALE_IMAGE)
            if len(eyes) == 2:
                cv2.rectangle(image, (x, y), (x+w, y+h), (255, 0, 0), 2)
        # return current frame
        stream.seek(0)
        yield stream.read()
        # reset stream for next frame
        stream.seek(0)
        stream.truncate(0)
and here is my reference for the code I added to your code please have a look at it sir miguel thanks :)
#45 abdullah said 2018-02-04T08:17:52Z
Sir Miguel! just an update I got it working I deleted the yield stream.read() and change it to yield cv2.imencode('.jpg', image).tobytes() but it is slower than yield stream.read() any recommendations?
#46 Miguel Grinberg said 2018-02-05T04:45:36Z
@abdullah: not much that can be done, you are encoding jpeg images in real time, and that takes time.
#47 Kenneth said 2018-02-13T08:50:24Z
Hi Miguel! Loved reading the post and thank you for putting it together. I am trying to use openCV to read and show the image in a new window on my PC.
My code for the client is the following (client.py):
cap = cv2.VideoCapture()
cap.open('https://RPI_IP:5000/video_feed') #RPI_IP is IP address but removed for posting here
ret, frame = cap.read()
if ret == True:
However, I get the following output on my PC:
[mpjpeg @ 000002032e5d24a0] Expected boundary '--' not found, instead found a line of 40 bytes
[mpjpeg @ 000002032e5d24a0] Expected boundary '--' not found, instead found a line of 83 bytes
[mpjpeg @ 000002032e5d24a0] Expected boundary '--' not found, instead found a line of 16 bytes
[mpjpeg @ 000002032e5d24a0] Expected boundary '--' not found, instead found a line of 34 bytes
[mpjpeg @ 000002032e5d24a0] Expected boundary '--' not found, instead found a line of 82 bytes
[mpjpeg @ 000002032e5d24a0] Expected boundary '--' not found, instead found a line of 127 bytes
[mpjpeg @ 000002032e5d24a0] Expected boundary '--' not found, instead found a line of 19 bytes
The Raspberry Pi running the Flask web server outputs:
IOError: [Errno 32] Broken pipe
I read that the expected boundary error is due to the protocol. I am quite new to Flask and OpenCV but am eager to find out what my problem is. Several people had the same issues as I, but could not fix them. Do you have any thoughts to this issue?
#48 Miguel Grinberg said 2018-02-14T06:26:58Z
@Kenneth: Does opencv support reading a motion jpeg stream? I don't see any mention of this in the VideoCapture class documentation.
#49 PyEldar said 2018-02-16T08:00:11Z
I have a question about the background thread and it is more about python threading but i think you can have an answer.
Python threads are regular OS(POSIX threads on linux) so the OS is scheduling the threads and i would like to know if there is any chance that the OS would be slowing down/stopping the background thread which is getting camera frames? Can the OS really anyhow affect the background thread?
Thank you for help
#50 Miguel Grinberg said 2018-02-16T17:41:19Z
@PyEldar: Threads in Python work in a strange way due to the global interpreter lock. So while the OS is going to decide when to give control to your different threads according to its own scheduling algorithms, the Python GIL is going to take away some control over this from the OS.