(Last Updated: August 21, 2020)

Object Recognition

  • Sharing a few options for object recognition, which can then be used as triggers for home automation.
    My two favorites so far:

  • Quiet weekend on the forum...
    So now that I have a reliable facial recognition completed, and tweaked/optimized a few minor things...
    I am off to the next project which is object recognition... watsor publishes to MQTT which would be a great opportunity to get openluup to connect to it.
    The purpose will be to replace all my camera motion detection events with more specific event triggers like people detection, car detection, deers, dogs, etc... which would eliminate a lot of the pure motion events!
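    The idea of turning detection messages into scene triggers can be sketched in a few lines. This is only an illustration: the payload shape (a "detections" list with "label" and "confidence" fields) is an assumption, not Watsor's documented format, so adjust the parsing to whatever your detector actually publishes over MQTT.

```python
import json

# Hypothetical payload shape -- Watsor's actual MQTT topics/fields may differ.
INTERESTING = {"person", "car", "dog"}

def should_trigger(payload: str, min_confidence: float = 0.6) -> bool:
    """Return True if any detected object warrants a home-automation trigger."""
    event = json.loads(payload)
    return any(
        d["label"] in INTERESTING and d["confidence"] >= min_confidence
        for d in event.get("detections", [])
    )

msg = '{"detections": [{"label": "person", "confidence": 0.92}]}'
print(should_trigger(msg))  # a high-confidence person detection fires the trigger
```

    A callback like this, hooked to an MQTT client's on-message handler, replaces "any motion" with "a person was seen", which is exactly the filtering described above.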

  • @rafale77 said in Object Recognition:

    Quiet weekend on the forum...

    That's because Openluup just works.....

    Seriously apart from my AltUI licence issue and some slightly odd motion sensor, it's all just fine.

    Can't think of anything else to do right now 🙂


  • And... it is done! My outdoors IP Cams no longer just detect motion... They detect people, packages, and cars and are now used as openLuup scene triggers!

  • So I got a bit frustrated with the resource drain from Watsor's usage of FFmpeg and have been investigating various models to implement on openCV. I ended up completely rewriting the Home Assistant openCV component to use an updated SSD512 model, and using my own FFmpeg component for decoding... which uses openCV:

  • I got to what I think is the final solution, having switched to the YOLO v4 model which is only 3 months old:

    I run it through openCV; the updated code is in the link in the previous post.

  • Rafael, is there a "write-up" on where to start with homeassistant to use your github code?

  • It is pretty straightforward if you know where your installation is. If you use a virtual environment, bare-metal, or VM installation, it should be easy to find the location of your Python packages. If you use containers, then I cannot help you.
    Once you know the location of your Home Assistant package, it is as easy as replacing the original file with my version and setting up the component the same way as the original documentation says. For these, you will need to download the models, which are publicly available at the links I posted, and then put them into a model/ folder in your Home Assistant configuration folder: ".homeassistant/model".
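    The folder setup step looks roughly like this. A sketch only: the ".homeassistant" path is the usual default but is an assumption here, so override CONFIG_DIR if your install lives elsewhere.

```shell
# Sketch: the config path is an assumption; override CONFIG_DIR for other installs.
CONFIG_DIR="${CONFIG_DIR:-$HOME/.homeassistant}"
mkdir -p "$CONFIG_DIR/model"
# Copy the downloaded model files into place, for example:
#   cp res10_300x300_ssd_iter_140000.caffemodel deploy.prototxt.txt "$CONFIG_DIR/model/"
ls -d "$CONFIG_DIR/model"
```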

    Edit: pretty thrilled with the possibilities: counting cars in the driveway/garage, counting people at the door to adjust the greeting message... etc...

  • In my pursuit of efficiency with Home Assistant, I have dug deeper into its image processing and camera code and found that image processing, not being the camera component's main intended purpose, is extremely inefficient and can generate 3x more CPU load than needed. Essentially it is doing way too much format encoding/decoding:
    from the camera's H264/H265 stream to raw, then to JPEG, then to bytes, then back to raw, then to an array for processing. I found it to be insane and decided to recode this:

    These two core component files need to be replaced:

    Then the ffmpeg integration component:

    And finally the two image processing components, opencv (object detection) and dlib (face recognition) which I also streamlined:


    For the last 3 files, comment out all the lines containing CUDA if you do not have a GPU.

    How to:

    Copy all 5 files I have posted above into your Home Assistant installation, replacing the originals.
    Configure your cameras as ffmpeg components as you normally would. If a camera uses H265 encoding, add "h265" to the "extra_command" option. No other extra command is needed or works.
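    A hypothetical camera entry might look like this. The "extra_command" option name is taken from the post above (the modified component; the stock ffmpeg platform calls it "extra_arguments"), and the RTSP URL and entity name are placeholders:

```yaml
# configuration.yaml -- hypothetical example, placeholder URL
camera:
  - platform: ffmpeg
    name: driveway
    input: rtsp://192.168.1.50:554/stream1
    extra_command: h265   # only needed for H265 cameras, per the note above
```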

    Get the model files and create a "model" folder in your homeassistant folder (it should be ~/.homeassistant).
    Here are the model files:
    You only need the res10_300x300_ssd_iter_140000.caffemodel and the deploy.prototxt.txt files.
    and why I picked this model for detection.

    Next the object detection model files:
    You only need the first two files (.cfg and .weights).
    Last, this file, which you will have to rename to cococlasses.txt. It is the list of objects the detector can identify, which you can use for your configuration below. By default it will detect "person".
    You should have a total of 5 files in the model folder if you want both face recognition and object detection. Now configure these components in YAML as per the official documentation and you are done. For object detection, the "classifiers" option is a comma-separated list, e.g. "person,car,dog". Home Assistant's image processing component lets you set the update frequency.
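    For illustration, a configuration could look like the sketch below. The "classifiers" option comes from the post; the platform name, "source" block, and use of "scan_interval" for the update frequency are assumptions based on the stock Home Assistant image_processing schema, so check them against the official docs:

```yaml
# Hypothetical example -- option names assume the rewritten components
image_processing:
  - platform: opencv
    source:
      - entity_id: camera.driveway
    classifiers: person,car,dog   # labels from cococlasses.txt
    scan_interval: 0.2            # seconds between processed frames (5 Hz)
```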
    These should work fine for a couple of cameras on a CPU, but the higher the processing frequency and the more cameras you have, the higher the load. I have mine update at 5Hz (a 0.2s interval) and I run these on a GPU, now with 7 cameras. No more camera motion trips from the wind blowing leaves around... etc... and yeah, it's completely integrated into openLuup.
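    A quick back-of-envelope check of that load, using the numbers above (7 cameras at 5 Hz), shows why a GPU helps: the detector has to finish each frame in well under 30 ms to keep up.

```python
# Back-of-envelope inference load, using the figures from the post.
cameras = 7
rate_hz = 5   # each camera processed at 5 Hz, i.e. a 0.2 s interval

total_inferences_per_s = cameras * rate_hz
budget_ms = 1000 / total_inferences_per_s  # time available per inference

print(total_inferences_per_s)   # 35 inferences every second
print(round(budget_ms, 1))      # ~28.6 ms per frame to keep up
```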

  • Ehm... noob here again... let's say:
    I have Home Assistant supervised (with the ability to add addons), and I have the option to easily spin up a VM. Where do I start?

    As you know, I am not a Linux guru and I would need every step documented 😞

  • Unfortunately, I can't help much in this case. Using the supervised installation means you have everything containerized, and I am a bit allergic to this approach as it complicates the interaction between containers and makes access/changes to various code very difficult. It is also an inefficient installation mode. These changes are not addons. They are core code changes which will work with any installation except the supervised/containerized ones.

  • @sender

    One idea I just had to simplify things is just to install home assistant from my branch by doing this from a venv:

    pip3 install git+https://github.com/rafale77/core.git

    But again, it does not help the containerized installation, which has the disadvantage of its benefit: a contained, controlled, small environment that you cannot mess with also means that you can't modify anything in it.

  • Hm that's a pity. I think most users have a "supervised" install due to simplicity.

    Also interesting:

  • Yeah, I look at things from an overall standpoint... Yes, it may appear simple on the surface, but it is actually very complicated just one layer underneath, so if one wants to do anything more with it, it becomes a hot mess. I use some containers, but only for very small and limited applications which benefit from being self-contained. For anything more than that... like Home Assistant, I just find it absurd.

    And yup, privacy... again why I am not running anything like this through a cloud API...
    Another example of a very complex solution under the surface to a simple problem. I see the cloud as philosophically the same as Docker containers... adding layers of complexity and liability for the sake of convenience.
    I am a big advocate of KISS, but from an overall standpoint, not just from the user-installation standpoint, which is only a short one-time extra effort vs. a lifetime of inefficiency and other risks.

  • @sender said in Object Recognition:

    Ehm... noob here again... let's say:
    I have Home Assistant supervised (with the ability to add addons), and I have the option to easily spin up a VM. Where do I start?

    As you know, I am not a Linux guru and I would need every step documented 😞

    Ok, from another standpoint. You know my setup... can I do something with a VM on Ubuntu and have it integrated into hass?

  • @sender

    Yes you can. I don't know what this entire obsession with containers is about on the Home Assistant forum. I found it actually much easier to install and manage with a VM or a virtual environment. By the way, there has been an attempt to deprecate the supervised installation, causing a huge thread on their forum. A lot of people use it, but it is so complicated and takes so much work to maintain that the devs wanted to take it away... also because IMHO it really made 0 sense to begin with.

    Disturbingly, the simplest, fastest, most flexible, and easiest installation is the one they recommend for developers:

    In my case, in an Ubuntu VM, I even skip the virtual environment, which is an unnecessary added complication.

    Make a copy of your Home Assistant configuration folder (if you don't want to start from scratch).
    Set up the VM with the same address as your previous installation.
    Install a virgin Ubuntu OS, then install python3.7 if it isn't already there; whether it is will depend on the version of Ubuntu you installed. I think any version before 20.04 will need it:

    sudo apt-get update && sudo apt-get install python3.7

    Copy your old .homeassistant folder into your user directory, again only if you want to keep your config.
    Install Home Assistant from my repo:

    python3.7 -m pip install git+https://github.com/rafale77/core.git

    and start it:

    hass --open-ui

    You can go through another tutorial to make it auto-start when the VM starts, but this should suffice for now.
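    For reference, the auto-start step usually comes down to a small systemd unit. This is a minimal sketch only; the user name and the path to the hass binary are assumptions, so adjust them to your system (find the path with "which hass"):

```ini
# /etc/systemd/system/home-assistant.service -- minimal sketch;
# User and ExecStart path are assumptions, adjust to your install.
[Unit]
Description=Home Assistant
After=network-online.target

[Service]
Type=simple
User=youruser
ExecStart=/home/youruser/.local/bin/hass
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

    Then enable it with "sudo systemctl enable --now home-assistant".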
    I would argue that this isn't any more complicated than the supervised installation. To me it is actually much simpler... It is literally 3 commands in the worst case for a new install, without any funky docker software to download.

    The Docker container, I think, helped people set up the autostart and install the right version of Python... a complicated solution to a simple problem. Adding another layer of management software, virtualization, restrictions, file-system management, CPU and memory inefficiency, etc... just to control 5 lines of startup code?

  • And... now that I have switched to PyTorch for facial detection/recognition, I am looking to see if YOLOv4 can be enhanced, and sure enough... a 7-day-old update to this project could be what I will try to implement in Home Assistant next:

    It combines YOLOv4 with some of the enhancements of YOLOv5 and seems to be better than both.
