`sys_system.state` for the Controller instance.
toggledbits
-
Self test -
How to upgrade from an old version of MSR?
It should be fine. Take a backup of your `storage` directory in its entirety before the upgrade, along with `config` and any other directories where you may have customizations (perhaps also `ext` if you have add-in Controllers). Then go. Make sure you do an `npm run deps` in the install directory before starting the new version of Reactor, to upgrade package dependencies, or Reactor will likely not start. Post if you have any problems.
Edit: Oh! And very important... make sure you are running nodejs version 18 or higher. If you have to upgrade, install a current LTS (Long-Term Support) version (either 22 or 24). Stick to even-numbered releases.
-
Access control - allowing anonymous user to dashboard
Sorry, that's not possible in the current incarnation of access control... either everything is protected by user authentication (login), or it's not. I use a trivial password on my guest account ("guest") to work around it.
-
Upcoming Storage Change -- Got Back-ups?
TL;DR: The format of data in the `storage` directory will soon change. Make sure you are backing up the contents of that directory in its entirety, and that you preserve your backups for an extended period, particularly the backup you take right before upgrading to the build containing this change (the date of that is still to be determined, but soon). The old data format will remain readable (so you'll be able to read your pre-change backups) for the foreseeable future.
In support of a number of other changes in the works, I have found it necessary to change the storage format for Reactor objects in `storage` at the physical level. Until now, plain, standard JSON has been used to store the data (everything under the `storage` directory). This has served well, but has a few limitations, including no real support for native JavaScript objects like `Date`, `Map`, `Set`, and others. It is also unable to store data that contains "loops" — objects that reference themselves in some way.
I'm not sure exactly when, but in the not-too-distant future I will publish a build using the new data format. It will automatically convert existing JSON data to the new format. For the moment, it will save data in both the new format and the old JSON format, preferring the former when loading data from storage. I have been running my own home with this new format for several months, and have no issues with data loss or corruption.
A few other things to know:
- If you are not already backing up your `storage` directory, you should be. At a minimum, back this directory up every time you make big changes to your Rules, Reactions, etc.
- Your existing JSON-format backups will continue to be readable for the long term (years). The code that loads data from these files looks for the new file format first (which will have a `.dval` suffix), and if not found, will happily read (and convert) a same-basenamed `.json` file (i.e. it looks for `ruleid.dval` first, and if it doesn't find it, it tries to load `ruleid.json`). I'll publish detailed instructions for restoring from old backups when the build is posted (it's easy).
- The new `.dval` files are not directly human-readable or as easily editable as the old `.json` files. A new utility will be provided in the `tools` directory to convert `.dval` data to `.json` format, which you can then read or edit if you find that necessary. However, that may not work for all future data, as my intent is to make more native JavaScript objects directly storable, and many of those objects cannot be stored in JSON.
- You may need to modify your backup tools/scripts to pick up the new files: if you explicitly name `.json` files (rather than just specifying the entire `storage` directory) in your backup configuration, you will need to add `.dval` files to get a complete, accurate backup. I don't think this will be an issue for any of you; I imagine you're all just backing up the entire contents of `storage` regardless of format/name, and that is the safest (and IMO most correct) way to go (if that's not what you're doing, consider changing your approach).
- The current code stores the data in both the `.dval` form and the `.json` form to hedge against any real-world problems I don't encounter in my own use. Some future build will drop this redundancy (i.e. save only to `.dval` form). However, the read code for the `.json` form will remain in any case.
- This applies only to persistent storage that Reactor creates and controls under the `storage` tree. All other JSON data files (e.g. device data for Controllers) are unaffected by this change and will remain in that form. YAML files are also unaffected by this change.
This thread is open for any questions or concerns.
-
[Solved] function isRuleEnabled() issue
Please post a readable screen shot. I don't know what or how this one got posted, but it's tiny and low resolution and I can't see clearly what's in it.
-
[Reactor] Problem with Global Reactions and groups
Try 26011
-
Reactor (Multi-System/Multi-Hub) Announcements
Reactor build 26011
USERS OF AARCH64-TAGGED DOCKER IMAGES: Per this earlier post, this build will likely be the last with the `aarch64` tag. Please follow the post's guidance for changing to one of the newer tags appropriate for your hardware and OS (32-bit `armv7l` or 64-bit `arm64`).
- Reactions UI: Fix update of display after copy in-place.
- Don't store reaction history entries for sub-reactions.
- VirtualEntityController: better consistency in time-series configuration; update documentation.
- Fix an error in date display of time range conditions within certain parameters.
- HassController: Bless HA to 2026.1.0.
This is a "silent" release (it is not advertised in the Status page of Reactor).
-
Reactor (Multi-System/Multi-Hub) Announcements
Reactor build 25238
- Rules/Date-Time: Fix a regression (in 25315) where a Date-Time `between` condition that crosses midnight may determine incorrect state if the Rule is reloaded (e.g. by editing it or a Reactor restart) in the period after midnight but before the end time.
- Restarts on Win32 systems can now (finally) do a gradual, organized reload or shutdown.
- HassController: Bless HA to 2025.11.3.
This is a "silent" release (it is not advertised in the Status page of Reactor).
-
Possible feature request 2?
@CatmanV2 said in Possible feature request 2?:

Any chance of a 'bulk delete' option?

Each Controller instance implements the `sys_system.purge_dead_entities` action. This is also automatic for the Controller implementations I provide: when the controller connects/reconnects, dead entities older than 24 hours are purged.
-
Copying a global reaction
Got it. Next build likely for this weekend.
-
[HowTo] Using HABridge with Reactor
@CatmanV2 said in [HowTo] Using HABridge with Reactor:

(PS Cinema was occupied as the cat had just walked in there... )

Actually, what's not there is equally telling. The HTTP request itself probably returned a (failed) result code, so however you made that request, that's the tool that will log the result. The possible HTTP results for that endpoint are:
- 200: The request succeeded, which it did not, because the action itself would have caused more log info that we don't see here;
- 400: The request failed because the action failed. This is not it either, because that is also logged on the Reactor side, and we don't see it here;
- 404: The entity was not found. This isn't logged in Reactor; it's just a fast return.
It's a 404, because you did not give it a canonical ID for the entity in the request URL. A canonical ID includes both the entity ID and its parent controller's ID — different controllers can have an entity with the same ID, so you have to specify which controller's entity you are targeting. Canonical IDs take the form `controller-id>entity-id` (note the `>` between the two parts). The absence of this also explains why the URL encoder didn't make any changes, because that `>` must be escaped.
So for troubleshooting API calls, remember: if it doesn't work, look at the Reactor log for messages; if nothing is logged on the Reactor side, look at the logs/messages for the tool that made the request to Reactor.
Ref: API docs
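A quick illustration of the escaping point: the `>` in a canonical ID must be percent-encoded in a request URL. The canonical ID below is made up for the example:

```javascript
// "hass>light.cinema" is a hypothetical canonical ID: controller-id>entity-id.
// encodeURIComponent escapes the ">" as %3E for safe use in a URL.
const canonicalId = "hass>light.cinema";
const encoded = encodeURIComponent(canonicalId);
console.log(encoded);  // hass%3Elight.cinema
```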
-
[HowTo] Using HABridge with Reactor
Sigh. People, let's make 2026 the year we stop posting one line from the logs. Almost everything in the logs requires context. Posting it up-front increases the chances that, if you haven't already figured it out yourself from what's there, someone else can, without further back-and-forth in the thread.
Odds are there's a bunch of interesting stuff following the line you posted, @CatmanV2, that tells how the system is responding to the HTTP request. Let's have a look at it!
-
[Reactor] Bug when sending MQTT boolean payloads
Yeah, I think the underlying package has some kind of half-check somewhere, like `if (payload) { ... }`, to see if a payload is being sent, and that would fail for boolean false and other falsy values. But it doesn't matter: I don't assume proper conversion in the layers below me, and I missed it on the exception case in that action, so there's a good permanent fix (for the code... my brain, maybe not so much).
-
[Reactor] Bug when sending MQTT boolean payloads
All messages and payloads are text. You literally cannot send a boolean as a payload in MQTT. You have to convert all other types to a text representation.
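To illustrate the conversion requirement (and the falsy-value pitfall), here is a hypothetical helper, not MQTTController's actual code, that stringifies any payload before publishing:

```javascript
// false, 0, and "" are all falsy in JavaScript, so a naive `if (payload)`
// check can't distinguish "no payload" from a legitimate falsy value.
// Converting explicitly by type avoids that trap.
function toMqttPayload(payload) {
    if (payload === undefined || payload === null) return "";
    if (typeof payload === "object") return JSON.stringify(payload);
    return String(payload);  // booleans and numbers become their text form
}

console.log(toMqttPayload(false));        // "false" (preserved, not dropped)
console.log(toMqttPayload(0));            // "0"
console.log(toMqttPayload({ on: true })); // '{"on":true}'
```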
That said, I will look into why MQTTController isn't doing it automatically for this action. It should.
Sorry for the reply delay... I was at a training event for a week.
Edit/Update: found it. The `x_mqtt.publish` action is handled specially, and that code was missing the default case that would catch and convert non-string/non-object types. The underlying package handles other primitives like Number, but apparently not Boolean. Give me a day or two to settle back in and publish an updated MQTTController.
-
Time series documentation
In the version you are using, it is still required; that's the docs leading the released code by a step (sorry). The version you are using also doesn't enforce limits well, so it allows values that could produce no result, or store more data than needed.
For example, given your `retention` of 60 and `interval` of 5, there would be `r/i+1 = 13` samples in the series. But if you specify a `depth` of 2, then only the most recent `2d-1 = 3` samples are considered... the other 10 samples don't contribute to the result value. A `depth` of 7 would be the best match for the given `interval` and `retention` (just mathematically, ignoring your semantic goals).
-
Time series documentation
The previous post shows `depth` as null or not provided. Please specify both `depth` and `retention`, restart, wait at least one full interval, then copy-paste the attributes if a value hasn't been calculated.
-
Time series documentation
@tunnus OK. Let's start here: copy-paste the device attributes for that VEC entity (remember to use fenced code block formatting when pasting in your reply).
-
Time series documentation
You have to wait for the time series to fill before acceleration can be computed. `depth` is used in the calculation, as well as `retention`... that's a peculiarity of this aggregate (accel). There need to be at least `2*depth-1` samples collected before the calculation can occur. I updated the documentation in the online version just yesterday, so check here.
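The sample-count arithmetic can be sketched as follows; the function names are mine, but the formulas are the `r/i+1` series size and `2*depth-1` window described in this thread:

```javascript
// Number of samples a time series holds for a given retention and interval.
function seriesSamples(retention, interval) {
    return Math.floor(retention / interval) + 1;  // r/i + 1
}

// Number of most-recent samples the accel aggregate actually uses.
function samplesUsed(depth) {
    return 2 * depth - 1;  // 2d - 1
}

console.log(seriesSamples(60, 5)); // 13 samples stored
console.log(samplesUsed(2));       // 3 samples used, so 10 are wasted
console.log(samplesUsed(7));       // 13, so depth 7 uses the full series
```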
-
Reset a delay
@gwp1 said in Reset a delay:
This is where @toggledbits says "wow, that's over-complicated --- try this instead" because he usually does and is usually right LOL. (I tend to overthink/overcomplicate my automations....)
ROFL! Actually, your response is dead-nuts right, both for the consequence of @CatmanV2's approach and for what he needs to do to get what he wants. The only critique I have is that the first image is a bit daunting to look at, with all the extra conditions. But overall, as they say: this is the way...
-
Reset a delay
@CatmanV2 foul ball. Not showing your work. Can't guide you properly here. What you are asking for is actually default behavior, so either you've done something odd, or there's a bug. Either way, can't tell unless you post your work.




