With the boom of video doorbells from the likes of Ring, SkyBell and Nest, I came to the realization that I did not want to be cloud dependent for this type of service, for reasons of long-term reliability, privacy and cost.
Last year I finally found a wifi video doorbell which is cost effective and supports RTSP, and now ONVIF, streaming:
The RCA HSDB2A, which is made by Hikvision and has many clones (EZViz, Nelly, LaView). It has an unusual vertical aspect ratio, designed to watch for packages left on the ground...
It also runs on 5GHz wifi, which is a huge advantage. I have tried running IP cams on 2.4GHz before and it is a complete disaster for your wifi bandwidth; use a spectrum analyzer and you will see what I mean. The very high IO requirements completely saturate the wifi channels, which is a horrible design. 2.4GHz gets range but is too limited in bandwidth to support any kind of video stream reliably... unless you have a dedicated SSID and channel available for it.
The video is recorded locally on my NVR. I was able to process its stream on Home Assistant to do facial recognition and trigger automations on openLuup, like any other IP cam. This requires quite a bit of CPU power to do...
I also get snapshots on motion as push notifications through Pushover, like all of my other IP cams. Movement detection is switched on and off by openLuup, based on house mode.
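For reference, the Pushover side is just an HTTP POST with the snapshot attached. A minimal sketch using Pushover's messages API, assuming a snapshot already saved to disk; the token and user key are placeholders:

```python
import requests

# send a motion snapshot as a Pushover notification
# (token and user key are placeholders for your own keys)
with open("snapshot.jpg", "rb") as image:
    requests.post(
        "https://api.pushover.net/1/messages.json",
        data={
            "token": "APP_TOKEN",
            "user": "USER_KEY",
            "message": "Motion detected at the front door",
        },
        files={"attachment": ("snapshot.jpg", image, "image/jpeg")},
        timeout=10,
    )
```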
Sharing a few options for object recognition which can then be used as triggers for home automation. My two favorites so far:
- asmirnou/watsor - object detection for video surveillance.
- skvark/opencv-python - automated CI toolchain to produce precompiled opencv-python, opencv-python-headless, opencv-contrib-python and opencv-contrib-python-headless packages.
I have optimized my facial recognition scheme and discovered a few things:
My wifi doorbell, the RCA HSDB2A, was overloaded by having to provide too many concurrent rtsp streams, which was making the streams themselves unreliable:
- stream to the QNAP NVR
- stream to home assistant (regular)
- stream to home assistant (facial recognition)
I decided to use the proxy function of the QNAP NVR, so that only 2 streams are pulled from the doorbell and the NVR is the source for home assistant. This stabilized the system quite a bit.
The second optimization was finding out that, by default, home assistant processes images every 10s. It made me think that the processing was slow, but it turns out it was just not being triggered frequently enough. I reduced the interval to 2s and now I have a working automation that triggers an openLuup scene, opening a door lock with conditionals on house mode and geofence. Next I am looking to offload this processing from the CPU to an Intel NCS2 stick, so I might test some components other than dlib to make things run even faster.
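The handoff to openLuup can be done with the standard Luup HTTP request; a minimal sketch, assuming openLuup listening on its usual port 3480, with the host and scene number as placeholders:

```python
import requests

# run an openLuup scene through the standard Luup HTTP API
# (host and scene number are placeholders)
requests.get(
    "http://192.168.1.20:3480/data_request",
    params={
        "id": "action",
        "serviceId": "urn:micasaverde-com:serviceId:HomeAutomationGateway1",
        "action": "RunScene",
        "SceneNum": "5",
    },
    timeout=5,
)
```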
Sharing what I have learned and some modifications to components, with their benefits.
On home assistant/python3, facial recognition involves the following steps (a sketch of the whole pipeline follows the list):
1. Establish and maintain a camera stream (over the rtsp or http protocol).
2. Extract a single frame from the open stream in order to process it.
3. Pre-process, using the same steps as for a video frame, a predetermined number of pictures stored in memory as the known people to later compare with. In reality what is being compared are arrays of numbers generated by a model.
4. Run face detection and localization on the frame using one model.
5. Using the resulting location from step 4, extract the face from the picture and encode it into an array of numbers.
6. Run a classification or comparison between the pre-set faces and the face on the video, and output the "inference" or "prediction" to determine whether they are close enough to be the same person.
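Here is a minimal sketch of those six steps in python3 using the face_recognition wrapper; the stream URL, image path and 0.6 distance threshold are illustrative, not the exact values from my setup:

```python
import cv2
import numpy as np
import face_recognition

# step 1: open and keep a stream (URL is a placeholder)
cap = cv2.VideoCapture("rtsp://user:pass@192.168.1.10:554/stream1")

# step 3: pre-process the known faces once at startup;
# each face becomes a 128-number encoding
known_image = face_recognition.load_image_file("known/person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# step 2: extract a single frame from the open stream
ok, frame = cap.read()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # step 4: detect and localize faces on the frame
    locations = face_recognition.face_locations(rgb)
    # step 5: encode each detected face into an array of numbers
    for encoding in face_recognition.face_encodings(rgb, locations):
        # step 6: simple euclidean distance comparison with a threshold
        distance = np.linalg.norm(known_encoding - encoding)
        if distance < 0.6:  # threshold is illustrative
            print("match, distance =", round(float(distance), 3))

cap.release()
```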
Even though a few components have existed on home-assistant for years to do this, I ran into challenges which forced me to improve/optimize the process.

Home Assistant's camera does not establish and keep open a stream in the background. It can open one on demand through its UI, but doesn't keep it open. This forces the facial recognition component to re-establish a new stream to get a single frame every time it needs to process an image, causing up to 2s of delay, which is unacceptable for my application. I therefore rewrote the ffmpeg camera component to use opencv and maintain the stream within a python thread, and since I have a GPU, I decided to decode the video on the GPU to relieve the CPU. This also required playing with some subtleties to avoid uselessly decoding frames we won't process, while still removing them from the thread buffer (see the sketch below). Frame extraction was pretty challenging using ffmpeg, which is why I opted for opencv instead, as it handles the frame synchronization and alignment from the byte stream for us.

The pre-set pictures were not a problem and are part of every face component. I started with the dlib component, which had two models for ease of use. It makes use of the dlib library and the "face_recognition" wrapper, which has a python3 API, but the CNN model requires a GPU and, while it works well for me, turned out not to be the best, as explained in this article, and is also quite resource intensive: https://www.learnopencv.com/face-detection-opencv-dlib-and-deep-learning-c-python/
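The gist of the opencv rewrite looks like this; a sketch only, not the actual component code. grab() drains the stream buffer without decoding, and retrieve() decodes a frame only when one is requested:

```python
import threading
import cv2

class StreamCamera:
    """Keep an rtsp stream open in a background thread (sketch only,
    not the actual rewritten home-assistant component)."""

    def __init__(self, url):
        self._cap = cv2.VideoCapture(url)
        self._lock = threading.Lock()
        self._running = True
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        # grab() pulls frames off the stream buffer without decoding
        # them, so unwanted frames are discarded cheaply
        while self._running:
            with self._lock:
                self._cap.grab()

    def frame(self):
        # retrieve() decodes only the most recently grabbed frame,
        # on demand, instead of decoding everything
        with self._lock:
            ok, image = self._cap.retrieve()
        return image if ok else None

    def close(self):
        self._running = False
        self._cap.release()

# usage: keep one instance alive; frame() returns the freshest image
# cam = StreamCamera("rtsp://user:pass@192.168.1.10:554/stream1")
```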
So I opted to move to the opencv DNN model instead. Home Assistant has an openCV component, but it is a bit generic and I couldn't figure out how to make it work; in any case, it did not cover steps 5 and 6. For the face encoding step I struggled quite a bit, as it is directly connected to which option I would choose for step 6. From my investigation, I came to this: https://www.pyimagesearch.com/2018/09/24/opencv-face-recognition/
"*Use dlib’s embedding model (but not it’s k-NN for face recognition)
In my experience using both OpenCV’s face recognition model along with dlib’s face recognition model, I’ve found that dlib’s face embeddings are more discriminative, especially for smaller datasets.
Furthermore, I’ve found that dlib’s model is less dependent on:
Preprocessing such as face alignment
Using a more powerful machine learning model on top of extracted face embeddings
If you take a look at my original face recognition tutorial, you’ll notice that we utilized a simple k-NN algorithm for face recognition (with a small modification to throw out nearest neighbor votes whose distance was above a threshold).
The k-NN model worked extremely well, but as we know, more powerful machine learning models exist.
To improve accuracy further, you may want to use dlib’s embedding model, and then instead of applying k-NN, follow Step #2 from today’s post and train a more powerful classifier on the face embeddings.*"
The trouble, from my research, is that I can see some people have tried, but I have not seen posted anywhere a solution for translating the location array output of the opencv dnn model into the dlib rect object format that dlib needs for encoding. Well, I did just that... For now I am sticking with the simple euclidean distance calculation and a distance threshold to determine the face match, as it has been quite accurate for me, but the option of going for a much more complex classification algorithm is open... when I get to it.
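For anyone attempting the same, the core of the translation is just scaling the normalized box back to pixels and wrapping it in a dlib.rectangle. A sketch assuming openCV's stock res10 SSD face detector; the model file paths are placeholders:

```python
import cv2
import dlib
import numpy as np

# openCV's stock res10 SSD face detector (paths are placeholders)
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

def dnn_faces_as_dlib_rects(image, conf_threshold=0.5):
    """Run the opencv dnn detector and return dlib.rectangle objects
    that the dlib encoder accepts."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()
    rects = []
    for i in range(detections.shape[2]):
        if detections[0, 0, i, 2] < conf_threshold:
            continue
        # the model outputs normalized [x1, y1, x2, y2]; scale back to
        # pixels and wrap in the rect format dlib expects for encoding
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] *
                          np.array([w, h, w, h])).astype(int)
        rects.append(dlib.rectangle(int(x1), int(y1), int(x2), int(y2)))
    return rects
```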
So in summary, the outcome is modifications to:
A. the ffmpeg camera component (rafale77/home-assistant), switched to opencv to enable background maintenance of a stream, with one rewritten file;
B. the dlib face recognition component (rafale77/home-assistant), changed to support the opencv face detection model;
C. the face_recognition wrapper (rafale77/face_recognition), modified to do the same, enabling conversion between dlib and opencv;
D. the face_recognition_models library (rafale77/face_recognition_models), with the new model added, involving adding a couple of files and init.py.
Overall these changes significantly improved speed and decreased CPU and GPU utilization compared to any of the original dlib components.
At the moment, CUDA use for this inference is broken in openCV with the latest CUDA release, so I have not yet switched the face detection over to the GPU (it worked fine using the dlib cnn model), but a fix may already have been posted, so I will recompile openCV shortly...
Edit: Sure enough, openCV is fixed. I am running the face detection on the GPU now.
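For reference, with an openCV build compiled with CUDA support, pointing the DNN inference at the GPU takes two calls on the detector net (same res10 detector as in the sketch above; paths remain placeholders):

```python
import cv2

# requires an openCV build compiled with CUDA support
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
```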
At the moment I'm using the Surveillance Station software on my Synology, but I'm limited to 6 cameras (2 licenses included, and I bought a 4-pack a while ago).
But I have 8 cameras, so right now 2 of them are not in the NVR!
I checked out motionEye a while back, but that software is very slow and all my camera feeds were lagging...
any other solution? 😉
Something fun to do if you have a camera located on your driveway: Home Assistant's OpenALPR Local component.
This component enables recognition of a license plate, which in turn could open the garage door...
Its documentation has instructions on how to integrate license plates with OpenALPR Local into Home Assistant.
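As a rough illustration of the idea (not the component's own code), the openalpr python binding can check a driveway snapshot against a list of known plates; the paths, plate and confidence threshold below are placeholders:

```python
from openalpr import Alpr

# paths are the usual linux install defaults; adjust for your system
alpr = Alpr("us", "/etc/openalpr/openalpr.conf",
            "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("could not load OpenALPR")

KNOWN_PLATES = {"ABC1234"}  # placeholder plate list

results = alpr.recognize_file("driveway.jpg")
for candidate in results["results"]:
    # each result carries the best plate reading and a confidence score
    if candidate["plate"] in KNOWN_PLATES and candidate["confidence"] > 85:
        print("known car detected - open the garage door")

alpr.unload()
```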
Sharing an excellent skill I use to stream locally from my IP cams to Echo Shows: Monocle.
Your video stream does not need to go to the cloud. This skill just forwards the local stream address to the Echo device when the camera name is called. It does require them to host the address and camera information (credentials) on their server, though. I personally block all my IP cameras from accessing the internet at the router.
CCTV on Openluup
CatmanV2 last edited by
Can we? Simply? Specifically Foscams, which used to run 'fine' on Vera but are not exactly high importance to me.
But since they are there....
therealdb last edited by
What's your need? I probably have some lua code to send a snapshot via telegram. (Right now I have transitioned to a different solution sending me a video as a gif, but it's not lua.)
akbooer last edited by akbooer
It's all there...
I_openLuupCamera1.xml implementation file:
A camera device created with this implementation file will create an associated child Motion Sensor device which is triggered when the camera’s own motion detection algorithm sends an email.
Out of the box, openLuup will start the SMTP server on port 2525. This can be changed in Lua Startup code with the following line:
luup.attr_set ("openLuup.SMTP.Port", 1234) -- use port 1234 instead
The camera’s device implementation file may be set on the openLuup device’s Attributes page, followed by a Luup reload. The only other significant parameters are the usual:
ip attribute, and the
Camera configuration is obviously device-specific. For my Foscam camera the important parameters are:
- Enable - ticked
- SMTP Server - the IP address of openLuup on your LAN eg. 172.16.42.156
- SMTP Port - 2525, or whatever other port number you configured in openLuup startup
- Need Authentication - No
- SMTP Username / Password - not used
- Sender Email - must include the form xxx@yyy, for example
- First Receiver -
My camera (FI9831P) also sends three snapshots as email attachments. These may be ignored, or can be written to a folder accessible from openLuup, depending on the configured email address for the trigger.
The Motion Sensor device will remain triggered for 30 seconds (or longer if the camera is re-triggered within that time). In keeping with the latest security sensor service file, in addition to the Tripped variable there is also an ArmedTripped variable, which is only set/reset when the device is armed. This makes AltUI device watch triggers easy to write when you want to respond only when the device is actually armed.
CatmanV2 last edited by
Cool. Thanks, gents