
Multi-System Reactor

746 Topics 7.1k Posts
  • [Solved] Issue with Tasmota sensors and MQTTController later than 22024

    Locked
    0 Votes
    22 Posts
    1k Views
    akbooer

    Ah, OK, I wasn't clear. I am now. That's clearly a bug. Thanks.

  • Sensative Strips Guard not showing status in MSR

    Locked
    0 Votes
    9 Posts
    242 Views
    R

    Never mind. Under the Strips Guard 700 name I can see status changes in zwavejs.

  • Variable Missing but working

    Locked
    0 Votes
    2 Posts
    121 Views
    S

    I found a thread where it was stated that this was an MSR bug that allowed you to select variables from other rules. I assume those rules were created before build 21267. I have read the docs and understand that this method is not possible. I will change to use global expressions. Strange that the rules still work as is.

  • Summer Time: interval function

    Locked
    0 Votes
    10 Posts
    381 Views
    toggledbits

    Keep going. Tell me what your trip through the UI would look like in configuring such a thing.

  • SOLVED: Use of multiple Hubitat hubs with MSR

    Locked
    0 Votes
    13 Posts
    511 Views
    toggledbits

    Good sleuthing. That's an interesting resolution. In a way it makes sense, because I could see docker reserving the resource (a network interface and its desired address) whether the container is running or not, but it's not the obvious thing for it to do (IMO), especially given that the side effect is to let the host OS handle requests when the guest is down. That actually seems like it has potential security implications...

  • MSR/MQTT - detecting broker offline status

    Locked
    0 Votes
    2 Posts
    110 Views
    toggledbits

    Yeah, the underlying mqtt package has an odd rhythm to its events that I can do a better job handling, I think. I just put up a new version of MQTTController. Give that one a try.

  • Plugin - InfluxDB 2- RequestTimedOutError: Request timed out

    Locked
    0 Votes
    2 Posts
    1k Views
    toggledbits

    That error means that the Influx client library used by InfluxFeed is unable to connect to your InfluxDB server and service.

    You are sort of indicating that you changed something. It would be good to know what. If you've been shuffling containers around, or recreated/upgraded any containers, you've likely not gotten the network setup right for one or both containers and they can't see each other, or at least, the Reactor container cannot access the InfluxDB container.
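
    If the cause is container networking, a minimal sketch of one common fix follows, assuming both containers run on the same Docker host; the container names reactor and influxdb below are placeholders, not anything Reactor or InfluxDB define. The idea is to put both containers on a shared user-defined network so they can resolve each other by name.

    ```sh
    # Create a user-defined bridge network (the name "reactor-net" is arbitrary).
    docker network create reactor-net

    # Attach both containers; "reactor" and "influxdb" are placeholder container
    # names -- substitute the names shown by `docker ps`.
    docker network connect reactor-net reactor
    docker network connect reactor-net influxdb

    # Quick reachability check from inside the Reactor container (works if the
    # image includes busybox ping; otherwise try wget/curl against port 8086).
    docker exec reactor ping -c 1 influxdb
    ```

    With both containers on that network, the InfluxDB URL in the InfluxFeed configuration can reference the container name (for example http://influxdb:8086, 8086 being the InfluxDB 2 default port) instead of a host IP that may change when containers are recreated.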

  • MSR connecting to Ezlo Plus

    Locked Solved
    0 Votes
    5 Posts
    216 Views
    G

    @tom_d Edit the title of your original post using the meatball menu (three vertical dots) to the right of your signature, next to Reply and Quote.

  • Changes to iblinds showing in HASS but not in zwavejs

    Locked
    0 Votes
    14 Posts
    346 Views
    toggledbits

    Awesome! Enjoy your round.

  • MSR feature requests

    Locked
    0 Votes
    1 Post
    93 Views
    No one has replied
  • [Solved] Arm for Eastern Standard Time/Daylight Saving Time

    Locked
    0 Votes
    7 Posts
    241 Views
    G

    @toggledbits I saw your release notes and immediately dropped 22080 in place and went straight to the variable rules I'd built from @Alan_F's comment and disabled them in favor of the system's SunInfo dst flag. I didn't delete the others - I really want to fully grasp latching.

    I'm not holding my breath on the possible law - it was written back in 2021 and the House keeps putting it in committee.

    Fingers crossed!

  • [RESOLVED] Missing attributes for a device

    Locked
    0 Votes
    5 Posts
    206 Views
    toggledbits

    Not related to one specific controller, but rather to a specific capability that applies to the devices reported in the PR. All good. Happy to hear it's working for you now.

  • Urgent Help MSR stopped running

    Locked
    0 Votes
    38 Posts
    1k Views
    toggledbits

    Losing state can cause rules to run unexpectedly. For example, say you had a rule that triggers between sunrise and sunset. If state is lost mid-day due to disk-full corruption such as this, discarding the state at restart leaves the rule with no history, so it believes it has just transitioned into the sunrise (triggered) period and fires. That may or may not be a desirable side effect, depending entirely on how the user intends the rules to work, and I cannot predetermine whether re-running a rule would have side effects in the user's environment. Since this can occur unattended (such as while on vacation or at a remote vacation home), it's potentially very troublesome and should at least be corrected with the full awareness of the user, as you have in this case.

    My number one recommendation is to manage filesystem space well on the system. This is always an issue on all systems, and in Linux/Unix in particular, processes can die (due to errors) and files can become (logically) corrupt under disk-full conditions. It is something that needs to be managed. In your case, I would note which log files were deleted and investigate why they grew so large and were not being rotated, expired, or archived.

    It also appears you have a single root volume with everything on it. That means any subsystem running on the box can potentially fill the filesystem and cause other applications to misbehave. Best practices for system management often call for segmenting files/directories out. It is not uncommon, for example, to have /var, /usr, and /home on separate filesystems, so that any one of them filling up (/var often has this problem because it holds system logs and other fast-changing files) cannot corrupt or truncate files in the other directories. Taking this to an extreme, if hardening Reactor is mission-critical to you, it is possible to create a small volume and mount it as /var/reactor at boot time, and use that as the home for the storage directory (and config, but not Reactor's logs), thereby isolating and protecting Reactor's storage from everything else; see the sketch at the end of this post.

    This is all Linux system management territory, so not really appropriate to deep-dive into here, but if you're going to use and maintain these systems as part of your infrastructure, it is well worth spending the time to learn. A must, really, because when it goes wrong, and it will go wrong, your fluency in system management will directly determine how quickly, and how well (or not), you recover.

    I can take more mitigating steps to harden startup, and I will definitely do that. I will also see about adding Status page alerts for disk space problems. The way Reactor works, if the disk space problem is mitigated before a restart of Reactor (i.e. while Reactor is running), Reactor will (eventually) rewrite the files with state (which is cached in RAM during operation) -- at shutdown, Reactor writes the RAM state back to disk as a final assurance that they are in sync, just for this reason. But I cannot protect those files from every eventuality, and every other subsystem on the machine works pretty much the same way (rebooting after fixing a no-space condition is always recommended, because many daemons will just die when they can't write a file).

    And as I've said again and again and again, please read the documentation and look at your log files. If you see something in the log files that points to an obvious issue, handle it. If you don't know how to handle it, post the log file snippet, and remember that context is vital: posting 2-3 lines of a log file may provide little or no useful information (even when the error contains module names and line numbers). It often takes a dozen or more lines of prior context to really interpret how the system got to the crash point, so post at least a dozen lines preceding any error message you are asking about. Fortunately, in this case it was pretty obvious.

    Pardon errors. Tapping this out on a Bluetooth keyboard over my home VPN on sketchy Internet made this a bit of a chore. Onward.
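
    To make the dedicated /var/reactor volume suggested above concrete, here is a minimal sketch for a bare-metal Linux install. The device /dev/sdb1, the systemd service name, and the /opt/reactor path are placeholders, not Reactor's actual defaults; adapt them to your own layout.

    ```sh
    # Format a small dedicated partition (placeholder device; verify with lsblk first).
    sudo mkfs.ext4 /dev/sdb1

    # Create the mount point and mount it at every boot via /etc/fstab.
    sudo mkdir -p /var/reactor
    echo '/dev/sdb1  /var/reactor  ext4  defaults,noatime  0  2' | sudo tee -a /etc/fstab
    sudo mount /var/reactor

    # Stop Reactor (however you run it), relocate the storage directory onto the
    # new volume, and leave a symlink behind so the install still finds it.
    sudo systemctl stop reactor
    sudo mv /opt/reactor/storage /var/reactor/storage
    sudo ln -s /var/reactor/storage /opt/reactor/storage
    sudo systemctl start reactor
    ```

    Because this volume holds only Reactor's state (and optionally its config), a runaway log or any other subsystem filling the root filesystem can no longer truncate or corrupt those files.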

  • 0 Votes
    6 Posts
    277 Views
    toggledbits

    What you posted was great, and I was able to see the string data in the events, so clue attained, and that's a win. We'll fix it right when I get back.

  • [Solved] Hubitat>MSR and dedicated Maker API

    Locked
    0 Votes
    4 Posts
    261 Views
    wmarcolin

    @gwp1

    It would be interesting if the Maker API could do what Hubitat's Z-Wave Mesh Details does and expose the Extended Device Data, which has very useful information that could further augment Reactor rules.


  • Zen32 question

    Locked
    0 Votes
    12 Posts
    451 Views
    R

    Would that have cleaned up all the blue alerts on the Status page about devices being deleted and added? I was getting hundreds of those.

  • ZwaveJS crashing

    Locked
    0 Votes
    13 Posts
    431 Views
    toggledbits

    Got it. That's good. In any case, on the first point... I have no idea how zwavejs reports board/interface communication or other problems (I mean, I see how it's supposed to, and I have coded for that, but I have never seen a detected change, so I can't confirm that their mechanism is actually working as advertised). Hass and ZwaveJSController can only present the data they are given. Given that you don't seem to be seeing any change in the Hass entity, the same is true on the Reactor side, and given that my experience with an Aeotec Z-Stick Gen 5+ is similar, that's an issue for the zwavejs devs.

  • Who are my Home Assistant + ZWave-JS users?

    Locked
    0 Votes
    51 Posts
    3k Views
    toggledbits

    Build 22067

    A few more device tweaks; performance improvements; and an update mechanism for data files, so that I can post device updates without having to publish a build.

    This version is fully in sync with the fixes and improvements in the latest release, 22067.

  • [SOLVED] Expressions not auto-updating when dependencies change (22022)

    Locked
    0 Votes
    25 Posts
    1k Views
    A

    Option 2 seems like the way to go. I'll make that change and wait for nothing to happen ☺

  • MQTT interface... time for some testing. Where are my experts?

    Locked
    2 Votes
    55 Posts
    3k Views
    R

    Thanks for the tips. Figured out the DNS issue was a setting in Pi-hole: I had the interface settings set to respond only on eth0, and after changing it to allow local requests, the install script ran fine. Got me some learning today, thanks.
