mblindsey (@mblindsey)
TL;DR: The format of data in the storage directory will soon change. Make sure you are backing up the contents of that directory in its entirety, and that you preserve your backups for an extended period, particularly the backup you take right before upgrading to the build containing this change (the date of that is still to be determined, but soon). The old data format will remain readable (so you'll be able to read your pre-change backups) for the foreseeable future.
In support of a number of other changes in the works, I have found it necessary to change the storage format for Reactor objects in storage at the physical level.
Until now, plain, standard JSON has been used to store the data (everything under the storage directory). This has served well, but it has a few limitations, including no real support for native JavaScript objects like Date, Map, Set, and others. It is also unable to store data that contains "loops", that is, objects that reference themselves in some way.
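For anyone who hasn't run into these limitations directly, here's a quick demonstration in plain Node.js of the JSON shortcomings described above:

```javascript
// Date survives stringify only as a string; the type is lost on parse.
const d = new Date(0);
const round = JSON.parse(JSON.stringify({ when: d }));
console.log(round.when instanceof Date); // false -- it's now a plain string

// Map and Set silently serialize to "{}"; their contents are dropped.
console.log(JSON.stringify(new Map([["a", 1]]))); // {}
console.log(JSON.stringify(new Set([1, 2, 3])));  // {}

// Self-referencing ("looped") objects can't be serialized at all.
const obj = { name: "root" };
obj.self = obj;
try {
  JSON.stringify(obj);
} catch (e) {
  console.log(e instanceof TypeError); // true -- "circular structure" error
}
```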
I'm not sure exactly when, but in the not-too-distant future I will publish a build using the new data format. It will automatically convert existing JSON data to the new format. For the moment, it will save data in both the new format and the old JSON format, preferring the former when loading data from storage. I have been running my own home with this new format for several months, and have no issues with data loss or corruption.
A few other things to know:
If you are not already backing up your storage directory, you should be. At a minimum, back this directory up every time you make big changes to your Rules, Reactions, etc.
Your existing JSON-format backups will continue to be readable for the long term (years). The code that loads data from these files looks for the new file format first (which will have a .dval suffix), and if it's not found, will happily read (and convert) a .json file with the same basename (i.e., it looks for ruleid.dval first, and if it doesn't find it, it tries to load ruleid.json). I'll publish detailed instructions for restoring from old backups when the build is posted (it's easy).
The new .dval files are not directly human-readable or editable as easily as the old .json files. A new utility will be provided in the tools directory to convert .dval data to .json format, which you can then read or edit if you find that necessary. However, that may not work for all future data, as my intent is to make more native JavaScript objects directly storable, and many of those objects cannot be stored in JSON.
You may need to modify your backup tools/scripts to pick up the new files: if your backup configuration explicitly names .json files (rather than just specifying the entire storage directory), you will need to add .dval files to get a complete, accurate backup. I don't think this will be an issue for any of you; I imagine you're all just backing up the entire contents of storage regardless of format/name, which is the safest (and IMO most correct) way to go. If that's not what you're doing, consider changing your approach.
The current code stores the data in both the .dval form and the .json form to hedge against any real-world problems I don't encounter in my own use. Some future build will drop this redundancy (i.e. save only to .dval form). However, the read code for the .json form will remain in any case.
This applies only to persistent storage that Reactor creates and controls under the storage tree. All other JSON data files (e.g. device data for Controllers) are unaffected by this change and will remain in that form. YAML files are also unaffected by this change.
This thread is open for any questions or concerns.
Just another thought. I'm adding devices from my Home Assistant / Zigbee2MQTT integration. It works perfectly, but they always add under their IEEE address. Some of these devices have up to 10 entities associated, and the moment they are renamed to something sensible, each of those entities 'ceases to exist' in MSR. I like things tidy, and deleting each defunct entity takes three clicks.
Any chance of a 'bulk delete' option?
No biggy, as I've pretty much finished my Z-wave migration and I don't expect to be adding more than 2 new Zigbee devices.
Cheers
C
Build 21228 has been released. Docker images available from DockerHub as usual, and bare-metal packages here.
- Home Assistant up to version 2021.8.6 supported; the online version of the manual will now state the current supported versions;
- Fix an error in OWMWeatherController that could cause it to stop updating;
- Unify the approach to entity filtering on all hub interface classes (controllers); this works for device entities only; it may be extended to other entities later;
- Improve error detail in messages for EzloController during auth phase;
- Add isRuleSet() and isRuleEnabled() functions to expressions extensions;
- Implement set action for lock and passage capabilities (makes them more easily scriptable in some cases);
- Fix a place in the UI where 24-hour time was not being displayed.
Just for the record, in case anyone is following, I'm really rather impressed.
I have installed one of these:
https://www.amazon.co.uk/dp/B0B6P22YJC?ref=ppx_yo2ov_dt_b_fed_asin_title&th=1
That's connected (physically) to the VM running on my Synology, with a 2m USB extension.
The same host also runs openLuup, Mosquitto, and HA Bridge.
Yesterday I installed Zigbee2mqtt. That was a bit of a PITA but mostly because of ports and permissions.
Once it was up and running, and the correct boxes ticked, it was immediately visible in Home Assistant via the MQTT integration, and thence in Reactor.
I've only got two devices. I bought the cheapest sensor I could find, which is a door sensor. Dead easy to add to Zigbee2mqtt and, again, immediately visible in HA.
https://www.amazon.co.uk/dp/B0FPQLWRW1?ref=ppx_yo2ov_dt_b_fed_asin_title
The dongle is on the top floor of the house, and I wanted the sensor on the back door (just about as far apart as it's possible to get, short of going into the garage). When I moved the sensor downstairs, it dropped out pretty much instantly (which wasn't a huge surprise), so a quick bit of research found that smart plugs will act as routers, so...
https://www.amazon.co.uk/dp/B0FDQDPGBB?ref=ppx_yo2ov_dt_b_fed_asin_title
Took me about 30 seconds to connect. Updated the name. Instantly visible in Reactor with the new name pushed over from Zigbee2mqtt.
And lo, the door sensor now has a signal of 140 and works as far as I can tell perfectly and instantly (unlike my z-wave one).
A few more of those will be purchased and used to replace the Tuya wifi cloud devices and the (continually failing) Z-wave plugs (yeah, they were TKB so....)
Commended to the house. Thanks to everyone who got me on the right lines.
C
If you’re like me and still running HABridge to control your devices locally via Alexa, you might need to tweak your endpoints to call Reactor via HTTP. Here’s the best way to do it, IMO:
Insert the Reactor Canonical ID (e.g., zwavejs>71-1) into the MapID field, but make sure it’s URL-encoded like this: zwavejs%3E71-1.
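If you're scripting this rather than encoding by hand, standard URL encoding does the job; for example, in Node:

```javascript
// Encode a Reactor canonical ID for use in a URL path segment.
// The ">" separator becomes %3E; alphanumerics and "-" pass through.
console.log(encodeURIComponent("zwavejs>71-1")); // zwavejs%3E71-1
```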
Then, configure these endpoints as needed:
On: http://[ReactorIP]:8111/api/v1/entity/${device.mapId}/perform/power_switch.on
Off: http://[ReactorIP]:8111/api/v1/entity/${device.mapId}/perform/power_switch.off
Dim:
For lights: http://[ReactorIP]:8111/api/v1/entity/${device.mapId}/perform/dimming.set?level=${intensity.decimal_percent}
For roller shutters: http://[ReactorIP]:8111/api/v1/entity/${device.mapId}/perform/position.set?value=${intensity.decimal_percent}
Color: http://[ReactorIP]:8111/api/v1/entity/${device.mapId}/perform/rgb_color.set_rgb?red=${color.r}&green=${color.g}&blue=${color.b}
Just replace [ReactorIP] with your actual IP address. By using these placeholders, you can standardize your endpoints across all devices, making maintenance easier.
This setup works with any device mapped under MSR, regardless of the controller (ZWaveJS, Vera, HASS, OpenSprinkler, virtual, MQTT, DynamicEntities, etc.). If you need different calls, just go to the entities, get the action and parameters, and adjust accordingly. Enjoy super fast access to your devices via Alexa!
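If you want to generate these endpoints in bulk rather than typing them out, here's a small illustrative helper (not part of Reactor or HABridge; the function name and the example IP are mine, and port 8111 is taken from the endpoints above) that builds perform URLs in the same pattern:

```javascript
// Build a Reactor "perform" URL from host, canonical entity ID,
// action name, and optional action parameters.
function performUrl(host, entityId, action, params = {}) {
  const qs = new URLSearchParams(params).toString();
  const base = `http://${host}:8111/api/v1/entity/${encodeURIComponent(entityId)}/perform/${action}`;
  return qs ? `${base}?${qs}` : base;
}

console.log(performUrl("192.168.1.10", "zwavejs>71-1", "power_switch.on"));
// http://192.168.1.10:8111/api/v1/entity/zwavejs%3E71-1/perform/power_switch.on
console.log(performUrl("192.168.1.10", "zwavejs>71-1", "dimming.set", { level: 50 }));
// http://192.168.1.10:8111/api/v1/entity/zwavejs%3E71-1/perform/dimming.set?level=50
```

In HABridge itself you'd still use the ${device.mapId} and ${intensity.decimal_percent} placeholders shown above; this helper is just for generating or checking the URLs outside HABridge.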
If you're migrating from Vera, the old endpoints are (URL-encoded) in a file called device.db, in JSON format, under your config. You could write a script to map the old endpoints to the new ones, if you prefer to do it automatically. YMMV.
I have tried numerous ways to define a recurring annual period, for example from December 15 to January 15. No matter which method I try (after and before, between, after and/not after), Reactor reports "waiting for invalid date, invalid date". Some constructs also seem to cause Reactor to hang, time out, and restart. For example, "before January 15" is evaluated as true, but reports "waiting for invalid date, invalid date". Does anyone have a tried-and-true method to define a recurring annual period? I think the "between" that I used successfully in the past may have broken with one of the updates.
Good evening all,
For about the past week or so, I've been having problems with a specific rule in my home automation that controls when my home goes from Away mode to Home mode. One of the conditions it checks is my alarm panel changing from Armed Away to Disarmed. There seems to have been a firmware update on the panel that added an intermittent "pending" step, and I can't say for certain it happens 100% of the time.
Is there a way to write a condition that matches a change from one state to the next, and then to another? As in: the home alarm changes from armed_away to pending to disarmed.
Thanks.
No idea how easy this would be. During my migration away from Z-wave I've been replacing the Z-wave devices with Sonoff which has broken some of my automations.
Any chance of a 'Test Reaction' function to call out which ones are broken because an entity no longer exists? Without actually running the reaction?
Or does this exist already and I'm just not aware of how to do it? Obviously I can see entities that are no longer available, but not quite what I'm looking for.
I guess it's something of an edge case so no huge issue.
TIA!
C
I'm sure this has been asked, and answered, but damned if I can figure it out
Use case: I have a rear garden with lights. A door from the kitchen into the garden and a door from the garage.
Currently if I open the kitchen door the lights come on (yay) and a 3 minute delay starts.
After 3 minutes, no matter what else happens, the lights go off (Boo! But also yay!)
What I would like is for the 3 minute delay until the lights go off to start from the latest door open event.
That is, if I'm going from kitchen to garage, and back again, the lights stay on until there's three minutes of no activity.
I've tried 'hacking' with a virtual switch, but can't seem to stop the delay.
Any pointers?
TIA
C
Hello oh great ones.
After a couple of hours messing with ports and permissions, I have Zigbee2mqtt installed and running on my virtual Pi. I can connect to the front end and everything.
Odd one though, simply cannot get systemctl to work and the error is, well, unhelpful. The service file is this:
[Unit]
Description=zigbee2mqtt
After=network.target
[Service]
Environment=NODE_ENV=production
Type=notify
ExecStart=/usr/local/bin/node index.js
WorkingDirectory=/opt/zigbee2mqtt
StandardOutput=inherit
# Or use StandardOutput=null if you don't want Zigbee2MQTT messages filling syslog, for more options see systemd.exec(5)
StandardError=inherit
WatchdogSec=10s
Restart=always
RestartSec=10s
User=pi
[Install]
WantedBy=multi-user.target
Straight out of the docs, with the change to point to my local node install (which we know works, as it's the same one the very fine Reactor is using).
Running pnpm start manually in /opt/zigbee2mqtt works fine.
However:
catman@openluup:/etc/systemd/system$ sudo systemctl start zigbee2mqtt.service
Job for zigbee2mqtt.service failed because the control process exited with error code.
See "systemctl status zigbee2mqtt.service" and "journalctl -xe" for details.
Which I have:
catman@openluup:/etc/systemd/system$ sudo systemctl status zigbee2mqtt.service
● zigbee2mqtt.service - zigbee2mqtt
Loaded: loaded (/etc/systemd/system/zigbee2mqtt.service; disabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2025-12-16 12:32:42 GMT; 4s ago
Process: 3093 ExecStart=/usr/local/bin/node index.js (code=exited, status=217/USER)
Main PID: 3093 (code=exited, status=217/USER)
and
-- A start job for unit zigbee2mqtt.service has begun execution.
--
-- The job identifier is 17477.
Dec 16 12:35:16 openluup systemd[3178]: zigbee2mqtt.service: Failed to determine user credentials: No such process
Dec 16 12:35:16 openluup systemd[3178]: zigbee2mqtt.service: Failed at step USER spawning /usr/local/bin/node: No such process
-- Subject: Process /usr/local/bin/node could not be executed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The process /usr/local/bin/node could not be executed and failed.
--
-- The error number returned by this process is ERRNO.
Dec 16 12:35:16 openluup systemd[1]: zigbee2mqtt.service: Main process exited, code=exited, status=217/USER
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- An ExecStart= process belonging to unit zigbee2mqtt.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 217.
Dec 16 12:35:16 openluup systemd[1]: zigbee2mqtt.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit zigbee2mqtt.service has entered the 'failed' state with result 'exit-code'.
Dec 16 12:35:16 openluup systemd[1]: Failed to start zigbee2mqtt.
-- Subject: A start job for unit zigbee2mqtt.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zigbee2mqtt.service has finished with a failure.
Which strikes me as very odd.
Any blindingly obvious things I'm missing?
TIA!
C
Obviously a quiet forum, but perhaps it's time
I'm looking at rolling Zigbee into my system, in large part for the Aqara FP300 presence sensors, which seem to finally provide a solution to whether the wasp is actually in the box.
My current set up is as follows:
One Debian VM on Synology NAS running:
Z-wave Server
Open Luup
Multi-System Reactor
HA Bridge
Mosquitto MQTT broker
This machine has a UZB Z-wave stick connected via the USB port on the NAS
Another VM on the same NAS running HAOS
I've got some older Z-wave stuff that I keep around until it fails.
I have some Tuya stuff integrated in HA
My thought was to get either a SMLIGHT SLZB-06M
or an Aqara Hub M2
Integrate them via Zigbee2MQTT (running on the Debian machine) and then expose them in HA so I can continue to automate in MSR.
Thoughts on which of those devices would be preferable long term? Both are PoE capable, which is good. It also appears I could add a USB dongle to the NAS and expose it to the HAOS machine.
Any thoughts from the assembled experts here? TIA
C
Another question to the hive mind, prompted by the fact that I lost yet another Z-wave device over the weekend due to a power issue. It looks like Z-Way server is reporting another device as failed (although it's working fine), and the message queue is far too long IMHO. Also, the failed device has been removed in the expert interface, but it's still there in the 'normal' one. Sigh.
Currently I have z-wave, Tuya, thinking about Zigbee.... Does anyone use one single protocol for everything? Right now I'm feeling that as the z-wave stuff dies, I'm just gonna replace it with something else....
C