Creating a Django endpoint to receive encoded frames, decode them, and return a response, while the web browser continuously updates the displayed video frame (a WebSocket variant using Django Channels is shown under EDIT below).
import base64
import json

import cv2
import numpy as np
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt


@csrf_exempt
def process_frames(request):
    if request.method == 'POST':
        data = json.loads(request.body)
        # Extract the encoded frames and other data from the JSON packet
        # (assumes each frame arrives as a base64-encoded JPEG string)
        encoded_frames = data['frames']
        # Process other data as needed
        # Decode the frames: cv2.imdecode() needs a numpy byte buffer,
        # so base64-decode first and wrap the bytes with np.frombuffer()
        decoded_frames = []
        for encoded_frame in encoded_frames:
            buffer = np.frombuffer(base64.b64decode(encoded_frame), dtype=np.uint8)
            frame = cv2.imdecode(buffer, cv2.IMREAD_COLOR)
            decoded_frames.append(frame)
        # Perform any necessary operations with the frames
        # Return a response for each frame
        response = {'status': 'success'}
        return JsonResponse(response)
    return JsonResponse({'status': 'error', 'message': 'POST required'}, status=405)
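To see the payload shape the endpoint expects, here is a minimal sketch of the producer side; the requests library, the localhost endpoint URL, and one webcam frame per POST are assumptions. Frames are JPEG-encoded with cv2.imencode() and then base64-encoded so they survive JSON transport:

import base64
import json

import cv2
import requests  # assumed HTTP client; any equivalent works

ENDPOINT = 'http://localhost:8000/process_frames'  # placeholder URL

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
if ok:
    # JPEG-encode the frame, then base64-encode the bytes for JSON
    success, buffer = cv2.imencode('.jpg', frame)
    if success:
        payload = {'frames': [base64.b64encode(buffer.tobytes()).decode('ascii')]}
        response = requests.post(
            ENDPOINT,
            data=json.dumps(payload),
            headers={'Content-Type': 'application/json'},
        )
        print(response.json())
cap.release()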
For video rendering you can use either plain HTML in the browser or React (JS). Both have their pros and cons.
<!DOCTYPE html>
<html>
<head>
    <title>Integrating inside HTML</title>
</head>
<body>
    <!-- A single JPEG frame is an image, not a playable video source,
         so an <img> element is updated here instead of a <video> element -->
    <img id="videoFrame" alt="Video stream" />
    <script>
        const img = document.getElementById('videoFrame');

        function updateVideoFrame(frame) {
            // Assumes the endpoint returns the frame as a base64-encoded JPEG
            img.src = 'data:image/jpeg;base64,' + frame;
        }

        // Poll the Django endpoint to receive the latest frame
        setInterval(() => {
            fetch('/process_frames', { method: 'POST' })
                .then(response => response.json())
                .then(data => {
                    if (data.status === 'success') {
                        updateVideoFrame(data.frame);
                    }
                })
                .catch(error => {
                    console.error('Error:', error);
                });
        }, 40); // Adjust the interval for the desired frame rate (25 fps = 40 ms delay)
    </script>
</body>
</html>
Integrating inside React (JS)
import React, { useEffect, useState } from 'react';

const VideoPlayer = () => {
    const [frame, setFrame] = useState(null);

    useEffect(() => {
        const fetchFrame = async () => {
            try {
                const response = await fetch('/process_frames', { method: 'POST' });
                const data = await response.json();
                if (data.status === 'success') {
                    setFrame(data.frame);
                }
            } catch (error) {
                console.error('Error:', error);
            }
        };

        // Fetch frames at the desired frame rate (25 fps = 40 ms delay)
        const intervalId = setInterval(fetchFrame, 40);
        return () => {
            clearInterval(intervalId);
        };
    }, []);

    // A single JPEG frame is rendered with an <img> element; assumes the
    // endpoint returns the frame as a base64-encoded JPEG string
    return frame ? <img src={`data:image/jpeg;base64,${frame}`} alt="Video stream" /> : null;
};

export default VideoPlayer;
EDIT
Django endpoint using Django Channels
# This is a template code for using Django Channels
import base64
import json

import cv2
import numpy as np
from channels.generic.websocket import WebsocketConsumer


class FrameProcessingConsumer(WebsocketConsumer):
    def receive(self, text_data=None, bytes_data=None):
        if bytes_data:
            # Extract the encoded frames and other data from the JSON packet
            # (assumes each frame arrives as a base64-encoded JPEG string)
            data = json.loads(bytes_data.decode())
            encoded_frames = data['frames']
            # Process other data as needed
            # Decode the frames: cv2.imdecode() needs a numpy byte buffer
            decoded_frames = []
            for encoded_frame in encoded_frames:
                buffer = np.frombuffer(base64.b64decode(encoded_frame), dtype=np.uint8)
                frame = cv2.imdecode(buffer, cv2.IMREAD_COLOR)
                decoded_frames.append(frame)
            # Perform any necessary operations with the frames
            # Return a response for each frame
            response = {'status': 'success'}
            self.send(json.dumps(response))
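To make the consumer reachable, it has to be registered in the Channels routing configuration. A minimal sketch, assuming a project-root asgi.py, a yourapp/yourproject layout, and a ws/frames/ URL pattern of your choosing:

# asgi.py -- minimal routing sketch; paths and URL pattern are assumptions
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'yourproject.settings')  # adjust

from django.core.asgi import get_asgi_application
from django.urls import re_path
from channels.routing import ProtocolTypeRouter, URLRouter

from yourapp.consumers import FrameProcessingConsumer  # adjust to your app layout

websocket_urlpatterns = [
    re_path(r'ws/frames/$', FrameProcessingConsumer.as_asgi()),
]

application = ProtocolTypeRouter({
    'http': get_asgi_application(),
    'websocket': URLRouter(websocket_urlpatterns),
})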
Make changes as per your requirements.
Hope this helps…
You seem to be dealing with a video stream in MJPEG format (Motion JPEG). It is a sequence of JPEG frames without any inter-frame compression.
Frontend only
You can typically capture the MJPEG stream directly from the frontend, as in the sketch below. But if your clients access the third-party IP camera directly, without a caching layer, you might effectively DDoS it with very little traffic. I managed to slow down my localhost webcam MJPEG server with just a handful of receivers.
Also, you directly expose your third-party source to your users, and the third party can monitor your users.
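A minimal sketch of the frontend-only approach; browsers render an MJPEG stream natively when it is used as an image source (the camera URL is a placeholder):

<!-- An MJPEG stream can be embedded directly as an image source -->
<img src="http://camera.example.com/stream.mjpg" alt="IP camera stream" />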
Backend-frontend
Passing the stream through your own backend is more costly in resources: you make only one request per frame to the third-party server, but you then have to serve the stream to all of your clients yourself. A minimal proxy sketch follows.
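A sketch of such a proxy as a Django view, assuming the requests library and a placeholder camera URL. Note that as written each client request opens its own upstream connection; a production version would share one upstream stream across clients:

import requests  # assumed HTTP client
from django.http import StreamingHttpResponse

CAMERA_URL = 'http://camera.example.com/stream.mjpg'  # placeholder

def mjpeg_proxy(request):
    # Open the upstream MJPEG stream without buffering the whole body
    upstream = requests.get(CAMERA_URL, stream=True)
    content_type = upstream.headers.get(
        'Content-Type', 'multipart/x-mixed-replace; boundary=frame')
    # Relay the multipart MJPEG body to the client chunk by chunk
    return StreamingHttpResponse(
        upstream.iter_content(chunk_size=8192),
        content_type=content_type,
    )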
Resolution
If your frontend is going to be used by you only, go for the backend-free solution. If you have enough resources for the backend, you expect more clients, and you don’t want to expose your clients to the third party, serve the MJPEG in the backend.
As for the technical part, there are plenty of out-of-the-box solutions.
Based on my experience, server-sent events (SSE) are the best fit here, since the communication is unidirectional and nothing needs to be sent to the backend. Here's what could be done:
I highly suggest reducing complexity as much as possible, which means removing Django from the picture in this scenario (otherwise, you would have to consume the event-stream content there and relay it to the client with some additional configuration).
- Ensure that the 3rd-party app allows serving text/event-stream as a response type and has CORS enabled.
- In React, install @microsoft/fetch-event-source; it has all the built-in features of the native Fetch API with enhanced usability for consuming event-stream content.
- Within your React component, add the logic from the fetch-event-source package within a useEffect hook, as in the sketch below (others have done an amazing job detailing the steps). Just make sure that the Content-Type header value is set to text/event-stream.
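A minimal sketch of that useEffect wiring; the stream URL is a placeholder, and the assumption that each event carries one base64-encoded JPEG frame depends entirely on what the 3rd-party app actually emits:

import React, { useEffect, useState } from 'react';
import { fetchEventSource } from '@microsoft/fetch-event-source';

const StreamViewer = () => {
    const [frame, setFrame] = useState(null);

    useEffect(() => {
        const controller = new AbortController();
        fetchEventSource('http://third-party.example.com/stream', { // placeholder URL
            headers: { Accept: 'text/event-stream' },
            signal: controller.signal,
            onmessage(event) {
                // Assumes each event's data field is a base64-encoded JPEG frame
                setFrame(event.data);
            },
            onerror(err) {
                console.error('Stream error:', err);
            },
        });
        // Cleanup: aborting the controller closes the event stream
        return () => controller.abort();
    }, []);

    return frame ? <img src={`data:image/jpeg;base64,${frame}`} alt="stream" /> : null;
};

export default StreamViewer;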
Tip: make sure you clean up your useEffect by returning a function that closes the event stream (as the sketch above does by aborting the controller) to avoid any performance issues or memory leaks.
Please let me know if you need further clarification.