Quality of Life Request: Update Button

Multi-System Reactor
28 Posts 13 Posters 2.2k Views 13 Watching
• Pabla
  #1

    Hey Patrick, along with my request to have a backup and restore button, it would be nice to also have an update button. Not an urgent request; it would just make updates quicker and easier.

• Black Cat
  #2

      +1
      Requested this way back when MSR was just getting on its feet.

      aka Zedrally

• CatmanV2
  #3

        +2 😉

        C

        The Ex-Vera abuser know as CatmanV2.....

• Cadwizzard
  #4

          Massive upvote on all 3. Just like back in the old Vera days 😄
          (Unsure how easy update would be with the various ways of implementing MSR inside/outside docker etc) - unless a standard install method is specified, and one of the features of that is 'update' capability.
          Docker-Compose please 😉

• tunnus
  #5

            +1 (could at least make it to the backlog)

            EDIT:
            -1, after a very good (& long) explanation 😄

            Using MSR on Docker (Synology NAS), having InfluxDB, Grafana & Home Assistant, Hubitat C-8, Zigbee2MQTT

• toggledbits
  #6

              @Cadwizzard said:

              Just like back in the old Vera days

              How soon we forget the tales of bricked Veras. Who among us didn't have a little sense that they were playing Russian roulette every time we hit that button?

              Unsure how easy update would be with the various ways of implementing MSR inside/outside docker etc

              OK. He hits it on the head here. Let me explain some of the complications and my reservations around this.

              The biggest pitfall is for docker users, IMO; that's the majority of you. The first thing you need to understand about docker is that the image and the container are separate objects in the system. The container is created from the image, but it's basically a copy, not linked in any meaningful way. The container can change, so that's good — I can download a release package and apply it to the container, restart, and the container will now be running the new release files. Unfortunately, that has no bearing on what happens to the image. Changing files in the container does nothing to the image. So let's take a scenario... @tunnus (Docker, Synology) downloads the image for Reactor 22274 and creates a container for it, so he's now running 22274. A little later, 22291 is released, so he hits the handy, flashy new "Upgrade" button and the container is upgraded in place. Perfect. Except, not... his image is still 22274. Stay with me now... In all likelihood, because of the "ease" of the automated upgrade, tunnus never needs to download a new image again (so he thinks), so he never bothers (it's a pain anyway on Synology, I'll agree). So build 22293 comes along, and then 22302, and then 22305, and then 22308, and he upgrades to all of them using the automated process, but the image is still sitting there on his NAS at 22274. The problem strikes if, for any reason (DSM major upgrade?), he decides to reset and rebuild the container, or delete it. He will get.... 22274. Because that's the image he has.
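
               For illustration, here is a quick way to see that split for yourself; the container name "reactor" and image name "toggledbits/reactor" are placeholders for this example, substitute your own:

                   # Image the existing container was created from
                   docker inspect --format '{{.Config.Image}}' reactor
                   # Images (and tags) actually stored on the host
                   docker image ls toggledbits/reactor

               An in-place upgrade of the running container changes neither of these; only pulling a new image does.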

              Can I make docker download the newer image as part of the upgrade process? No. Reactor is running inside the container, and the container, by definition, contains Reactor and keeps it from doing anything external to the container (except the limited data volume that's specifically created for the single purpose). So the running Reactor instance has no ability whatsoever to cause docker/DSM to pull later images. Pulling a new image and rebuilding the container is the real "right" way to upgrade, but it's not possible to automate it from within the container itself (and it's darned clunky in Synology's UI, unfortunately).
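
               For reference, that "right" way looks roughly like this for a plain docker setup; the tag, port mapping, and data-volume path below are placeholders, so reuse whatever your original docker run command (or compose file) specified:

                   docker pull toggledbits/reactor:latest
                   docker stop reactor && docker rm reactor
                   docker run -d --name reactor --restart unless-stopped \
                       -p 8111:8111 -v /path/to/reactor-data:/var/reactor \
                       toggledbits/reactor:latest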

              It's not hard to imagine that this problem would not bite him for months or years. But when it bites, it has the potential to bite hard. Imagine along Reactor's evolutionary path from 22274 to a future 24107 (released in 2024, all automated updates between, no image refreshes), there are changes that needed to be made to the data structures of rules, reactions, stored states, etc. (not at all hard to imagine, it actually happens all the time). It is easy, although sometimes a bit cumbersome, for me to provide forward compatibility: to make sure that newer versions of code read the old data and upgrade the structures, and the mechanisms for those upgrades remain in the code for some time. But there is no way under the sun for 22274, now running once again unexpectedly, to know what to do with data from the future 24107 build, and there's a chance it could do something really bad to it. Now tunnus has an old version running in his container with corrupt data. I hope he has a backup.

              I'll take the opportunity to say that this is a cautionary tale for all of you who stay on older builds. I keep the code that reads and upgrades the data, when needed, for a while, so that people who skip a few upgrades can safely do so and "jump in time" when they are ready to apply a new build, but I don't and won't keep that code forever; it becomes a maintenance nightmare and it's beyond my available time and sensibilities to test every possible combination of upgrades between versions. If you're running on a Reactor that's more than a year out (21307 or earlier), you're playing with fire as far as I'm concerned, and you should not expect a smooth upgrade when you get around to it. You may need to upgrade to an interim build still available, which works for bare-metal, but isn't an option for docker users. And before the "I can't have something like that in my home" people start in here, please know that I'm sorry that the free software I offer you and for which I provide ready, quick, and free ongoing support (and upgrades) isn't perfectly to your liking. If you don't like the way it works, you have alternatives, and I fully support your freedom to choose them.

              To continue with @Cadwizzard's point: this is equally or more egregious, unfortunately, for docker-compose users, because up to this point, the recommended way for stopping Reactor when using docker-compose is to run docker-compose down. This causes Reactor to stop, but also deletes the container. Any upgrades applied to the container are lost in that instant, because the container is discarded. When you later run docker-compose up -d, the container is re-created from the (old) image, and will be whatever version that image is. Maybe not a disaster, maybe it is. This could be addressed by retraining docker-compose users to use docker-compose stop rather than docker-compose down, but the distinction would need to be taught (and learned) as both are useful, and the infrequency of use of these commands would likely suffer from brain-drain over time (i.e. when to use which and what their side effects could be/will be lost on the user a few months from now). But it's such a subtle distinction that people will shoot themselves in the foot easily and regularly, I fear.
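
               Concretely, the distinction looks like this (standard docker-compose commands, run from the directory containing the compose file):

                   docker-compose stop      # stops the container; the container (and any in-place upgrade) survives
                   docker-compose start     # resumes that same container
                   docker-compose down      # stops AND removes the container
                   docker-compose up -d     # recreates it from whatever image is on disk
                   # the image-refresh path that sidesteps the problem entirely:
                   docker-compose pull && docker-compose up -d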

              Bare-metal is somewhat easier, because at least the process can be assured it's writing on the one and only (relevant) image, in the install directory, so that's a bit of relief. Unfortunately, a lot of people really don't understand Linux file permissions, their relations to users and groups, etc., and routinely goof up the permissions of files all over their system, including in the Reactor install directory. This isn't a problem for them after the first "fix," because thereafter they do the manual upgrades the same way, logged in as the same user (in some cases, even as root, which is a serious no-no), and so it works for them as that user in that case, good enough. But for an automated process running in an unprivileged environment, it can mean that some files aren't writable, and the upgrade only half-happens... the upgrade process crashes, some files are new and some are old, and the Reactor install is basically dead and broken. I can't fix the permissions from the running instance, because it's running as an unprivileged user (well, hopefully; woe unto those who run anything as root). The user then manually applies the upgrade to recover the system, which goes fine because of course he's running the privileged user with the right permissions. A bug/post for the upgrade process then gets reported, and I then spend hours or days going back and forth, digging through the user's 3,000+ files in a typical Reactor install, looking for the broken ones and teaching the user how to fix them. (Permissions and their potential brokenness is also an issue for doing automated backups/restores, since that was mentioned as well.)
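
               A rough sanity check for that situation (the install path and the account Reactor runs as are placeholders; adjust to your setup):

                   # list anything in the install tree not owned by the reactor user
                   find /home/reactor/reactor -not -user reactor -ls
                   # restore consistent ownership (run once, as a privileged user)
                   sudo chown -R reactor:reactor /home/reactor/reactor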

              Oh, and then there's Windows. I won't even start. I've already written a book (again).

              With regard to the suggestion of a standard install method: (a) there is no "one size fits all" — what works for Ubuntu doesn't work for Synology DSM or QNAP, and certainly not for Windows; and (b) the install methods that are recommended are all carefully documented; experience shows that I can write out every detail I can think of, and what actually happens on the user's system is 100% of that or some amount less, or the user has some condition in their system/environment that I could not/did not anticipate that causes problems. My preferred method for most users is docker (and specifically, with docker-compose), because the container strategy removes some of these risks, but that's not always the easiest for their environment (e.g. Synology has docker but no -compose), and the accepted mechanisms for upgrading containers in the docker world in general are ironically exactly the subject of complaints by OP and others here, despite the relative ubiquity and ease of these mechanisms.

               The point is, there is no panacea here. You run these systems, not me. You do things I have no knowledge of, and sometimes those things bite back. The majority of my time supporting this product is troubleshooting your environments, not my code (I'm not saying I'm perfect — I make mistakes and bugs are a reality, but they're not the majority of support issues here in terms of time spent). Anything and everything I do in the system is looked at not just through the lens of whether it's convenient for the user, but very much through the lens of supportability. There are lots of features I get asked to do, and as you've seen (even recently), there are some that I refuse to consider simply because it would make the system less supportable, in my view. As features get added, not only is the usability of the system required to improve, but its quality is expected to improve as well (fewer bugs, better support, etc.) — those are my expectations, which I'm sure you share. If one doesn't consider supportability (and that means both in user support and code maintenance/reliability/scalability), one ends up with a lot of features that nobody asked for, don't work, and aren't usable (I can point you in the direction of such a product as an alternative if you're really interested in that).

              There is a running, hidden upgrade process in the current build. I've been experimenting with this for a bit, getting to learn it, and discovering these issues. It's not that I won't consider making it available; I'm still studying it, and pondering the wisdom of it. Maybe sometimes I worry too much about things like this, I don't know. But when it goes wrong, there's nobody but you and me to fix it, and there's a lot of you and only one of me, so as I said in another recent conversation, handing out something that feels like a grenade with no pin sometimes doesn't seem like the best idea to me, and there are probably other things this system needs to do that I can better spend my time on. Maybe this is one of those things.

              I'll leave this one up to you guys. If you can tolerate these side effects, I'll release the feature. But know that if you break your install because (docker) you somehow delete the container and recreate it from an old image, or (bare-metal) your install has broken permissions or other issues that the upgrade process can't work through, my answer will be short: that's a risk you accepted, do a clean reinstall from a current image, restore your config/state backup, and start over.

              Author of Multi-system Reactor and Reactor, DelayLight, Switchboard, and about a dozen other plugins that run on Vera and openLuup.

• wmarcolin
  #7

                @toggledbits

                 One more lesson learned! Having an automatic process would really be nice, but your explanation makes the difficulty and the risks clear, and I no longer want it. As you said, the errors we generate ourselves are enough; I don't want to introduce more risk. I think almost everyone here has a wife, so better to stay with the safety of a working system.

                 Well, let's remove this from the wishlist. Could you share that list, so we know everything that's coming in the future? Also, one more item: display the status widget in a window/iframe inside the HE dashboard 🙂

• CatmanV2
  #8

                  Hey I'd love a button. But I'm bare metal and in honesty it takes me 90 seconds to upgrade so why do I need one?

                  Given the choice of support and progress vs. a button, I know which I'd choose.

                  C

                  The Ex-Vera abuser know as CatmanV2.....

• Pabla
  #9

                     @toggledbits this makes tons of sense as to why an update button is problematic, mainly for Docker users.

                     In terms of bare-metal users: say a user messed with their file permissions enough to cause issues when updating Reactor. Wouldn't they run into the same errors whether they updated manually or used the update button? I wouldn't mind an update button for bare-metal users, since from your explanation it seems a potential issue wouldn't come from the update process itself but from something else (like file permissions). Meaning they'd run into those errors even if they updated Reactor manually, like we do now.

                     Not arguing though; it's a fairly low-priority request from me. I can clearly see how an update button for Docker users could be a slow and silent death. As @CatmanV2 said, for bare metal the update process really only takes 90 seconds, haha.

• toggledbits
  #10

                      @pabla said in Quality of Life Request: Update Button:

                      wouldn't they run into the same errors even if they manually updated or used the update button?

                       Not necessarily... some users... I've seen it... will run into permission problems and their answer, not understanding the problem or how to fix it, is to use sudo tar xvf to just lay tracks over everything. This eliminates the permissions problem when unpacking the archive, but new files may become root-owned, which isn't right, although the code doesn't care as long as it's readable. If their umask allows world-readable files (and 022 is a common default that does exactly that), the Reactor runtime will never know permissions are broken, because every file it needs is readable without consideration of ownership. The un-tar'ing doesn't touch logs, config, etc., so any permissions there aren't relevant and aren't changed. And because some files that shouldn't be are now root-owned, the permissions problem has been made worse; unless the permissions are truly fixed the right way, sudo will continue to be the only way upgrades will succeed. It perpetuates and exacerbates.
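
                       The cleaner pattern, for what it's worth (the user name, install path, and archive name here are placeholders), is to repair ownership once and then always unpack as the unprivileged account that runs Reactor:

                           sudo chown -R reactor:reactor /home/reactor/reactor      # one-time repair of root-owned files
                           sudo -u reactor tar -C /home/reactor/reactor -xzf reactor-latest-generic.tar.gz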

                      I really get how painful the docker upgrades are on Synology. I'm guessing QNAP is probably not much different, and I think several people have been bitten by Portainer oddities regardless of platform.

                      The process just needs more thought. I could, for example, from the next build onward, prevent the system from starting if the config and data are from a newer version. The problem there is that it needs to be detected early in startup, and if the system can't use the data, it has to exit hard, because it can't run without any data at all, and it can't touch what it has. There would be no UI feedback other than "DISCONNECTED" (i.e. the behavior when Reactor can't start). A "click-to-upgrade" to fix it wouldn't be an option because the UI would not be running, so a manual upgrade would be required at that point. And maybe that's OK? Maybe that's such an extreme/infrequent circumstance that it should be that way? A manual upgrade once in a blue moon may not be so bad... I don't know... looking for feedback... trying to figure it out...

                      Author of Multi-system Reactor and Reactor, DelayLight, Switchboard, and about a dozen other plugins that run on Vera and openLuup.

• SweetGenius
  #11

                        I personally do not think the update process on Synology docker is that bad. A few more clicks than an easy button but not horrible. All my other docker containers are updated the same way. I like the docker image though. I am not familiar with the other platforms so I can’t comment on those update processes.

                        Synology Docker MSR, Hubitat, Home Assistant, Homebridge, ZwaveJS, MQTT, NUT controller.

• gwp1
  #12

                          I would upvote the backup and restore buttons but don't see a need for the update button. I'm bare metal and it's literally a two-minute process.

                          *Hubitat C-7 2.4.1.151
                          *Proxmox VE v8, Beelink MiniPC 12GBs, SSD

                          *HASS 2025.3.4
                          w/ ZST10-700 fw 7.18.3

                          *Prod MSR in docker/portainer
                          MSR: latest-25082-3c348de6
                          MQTTController: 24257
                          ZWave Controller: 25082

                          • Snowman
                            #13

                             I too am happy with the current process. Super fast for me. I run everything under Synology/Docker. I no longer have issues upgrading containers such as Reactor, HA, etc. since I switched to Portainer several months ago. So I'm not sure what those "Portainer" oddities are/were. Something I should keep an eye on? Or have I just been lucky?

                            Synology NAS, Docker, Zooz Z-Wave Stick 700, Z-Wave JS-UI, Reactor, Home Assistant, Grafana, and InfluxDB.

                            • Black Cat
                              #14

                              @snowman said in Quality of Life Request: Update Button:

                              Something I should keep an eye on? Or have I just been lucky?

                              I doubt it; it's me who is unlucky with Portainer & MSR...

                              aka Zedrally

                              • Andr
                                #15

                                Some input from a Windows user.
                                An update button would of course be a nice-to-have feature, but I also agree with several others here: a "normal" update, i.e. one that doesn't need new dependencies, only takes a moment to install.
                                Where I usually stumble is when an update of dependencies is needed. That has sometimes taken me hours of search-try-error-try-again before I get it to work.

                                My dream would be a "Windows installer" for MSR that checks dependencies, installs a system service, etc.
                                Over time I think that would be a safer/more stable approach, with fewer user errors.

                                With that said, I can really understand that @toggledbits needs to handle this "his way" to be able to support different environments (and users 😉).

                                • Alan_F
                                  #16

                                  I don't know if this helps for other Docker users, but not long after I got started with Docker I found Portainer, and I've been running it alongside Reactor and my other containers on my Raspberry Pi 4. With Portainer, there may not be a one-step update button, but I find it makes updates much easier.

                                  I just updated Reactor to the latest. All I had to do was go to the Portainer URL in my browser, then

                                  • Click on the Reactor container in the Containers list


                                  • Click "Recreate"

                                  • Toggle "Always pull new image" on the window that pops up


                                  • Click "Recreate"

                                  It isn't one click, but it can be done in a browser tab from any machine with network access to the Docker host. No VNC/SSH into the machine, no Docker commands to run from the command line.

                                  Portainer also has links to view the container logs and to open a command window in the container, which I use all the time. You can also use the "duplicate/edit" button to change or add environment variables while updating, which is how I added the NODE_PATH a few updates back.
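
                                  For what it's worth, plain Docker users without Portainer can do the same pull-and-recreate cycle from a shell. A rough sketch only — the image tag, container name, port, and volume path below are assumptions from one possible setup and must match however your container was originally created:

                                      # Pull the newest image (use whatever tag you normally run).
                                      docker pull toggledbits/reactor:latest

                                      # Replace the running container with one built from the new image.
                                      # Name, port, and volume mapping must match your original "docker run".
                                      docker stop reactor && docker rm reactor
                                      docker run -d --name reactor --restart unless-stopped \
                                          -p 8111:8111 \
                                          -v /home/pi/reactor:/var/reactor \
                                          toggledbits/reactor:latest

                                      # If the container came from a compose file, it is simply:
                                      #   docker compose pull && docker compose up -d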

                                  • Black Cat
                                    #17

                                    Thanks for the Portainer explanation; I'm certain I've had a spell cast on me. I'll try again once the Pi400 becomes available again.

                                    Lastly, I think the point has been lost: it's about QoL, not about how easy it is to do another way.
                                    From my perspective, if it isn't easy for 98% of the public to use, then it's too much trouble, and they might look at it and then discard it for another solution.
                                    The comments so far are from users who are in the 2% and are happy to tinker. I'm happy for you.
                                    If anyone wants to see how consumer-friendly software should be to set up, have a look at HomeSeer 4. Updates... no problem, with one click.

                                    aka Zedrally

                                    • gwp1
                                      #18

                                      @black-cat Isn't HomeSeer a walled garden like Hubitat, Ezlo, etc.? You buy their hub and live within their infrastructure.

                                      That's not MSR. MSR works on various OS/hardware and communicates with multiple hubs.

                                      Whilst I appreciate your POV, it's not an apples-to-apples comparison you're making here.

                                      *Hubitat C-7 2.4.1.151
                                      *Proxmox VE v8, Beelink MiniPC 12GBs, SSD

                                      *HASS 2025.3.4
                                      w/ ZST10-700 fw 7.18.3

                                      *Prod MSR in docker/portainer
                                      MSR: latest-25082-3c348de6
                                      MQTTController: 24257
                                      ZWave Controller: 25082

                                      • Black Cat
                                        #19

                                        @gwp1 said in Quality of Life Request: Update Button:

                                        You buy their hub and live within their infrastructure.

                                        Nup, you can use any old laptop or RasPi; it runs on Windows or Linux. I'd love to promote MSR to HomeSeer users, but it lacks the simplicity, hence my backing of the request.
                                        Realistically, I'm not going to see it happen, which is a shame, as Patrick has put a lot of time into development for the 2%.

                                        aka Zedrally

                                        • tunnus
                                          #20

                                          @sweetgenius I agree, the Synology Docker container upgrade process is not too bad.

                                          I frequently keep both the MSR and Synology UIs open in separate browser tabs and either do a quick upgrade using "reset" or a more careful one using "duplicate settings", retaining the old container as a backup/rollback option.

                                          Originally I favored a simple update button for MSR, but after Patrick's explanations I realized it's not that simple after all.

                                          Using MSR on Docker (Synology NAS), having InfluxDB, Grafana & Home Assistant, Hubitat C-8, Zigbee2MQTT
