On my bare-metal RPi install of MSR, I had a rule that ran every minute to check Internet status via a script in MSR called reactor_inet_check.sh.
I've moved to containerized MSR and see in the instructions that this cannot be run from the container.
The script cannot run within the Reactor docker container. If you are using Reactor in a docker container, the script needs to be run by cron or an equivalent facility on the host system (e.g. some systems, like Synology NAS, have separate task managers that may be used to schedule the repeated execution of tasks such as this).
I've put a script on my container host that calls the reactor_inet_check.sh script and it isn't erroring... but I still see the Internet status within MSR as null.
Before I go diving down the rabbit hole... should this work?
My cronjob on the proxmox host:
[screenshot]
The contents of msr_internet_check_caller.sh
[screenshot]
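For comparison, the host-side pieces generally look something like the sketch below. Everything here is illustrative: the paths, where you keep the script, and whatever arguments reactor_inet_check.sh expects should come from your setup and the Reactor manual, not from this sketch.

    # crontab entry on the Proxmox host (crontab -e as the user that owns the Reactor data directory)
    * * * * * /home/reactor/msr_internet_check_caller.sh >/dev/null 2>&1

    #!/bin/sh
    # msr_internet_check_caller.sh -- wrapper invoked by cron (illustrative paths)
    cd /home/reactor/reactor || exit 1     # directory the Reactor container mounts as its data volume (assumption)
    ./reactor_inet_check.sh                # add any arguments the Reactor docs call for

One thing worth verifying is that the directory the script runs against is the same one the container actually mounts; if the wrapper isn't operating on the data the container sees (or however the script reports status in your version), the check can appear to succeed on the host while MSR still shows null.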
And then MSR...
My first issue: I'm logged into the msr CT as reactor (I used the suggested username just to keep things simple as this is new space for me and I was high off my success of migrating HA over).
When I run
docker pull toggledbits/reactor:latest-amd64... it assigns root ownership to the reactor subdirectory where it's installed. I am absolutely logged in with the correct non-root user.
[screenshot]
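For what it's worth, if the directory has ended up root-owned, handing it back to the non-root user is a single command on the host (the path and user name here are examples; substitute whatever your install actually uses):

    # re-assert ownership of the Reactor install/data directory for the non-root user
    sudo chown -R reactor:reactor /home/reactor/reactor

Whether files get re-created as root afterwards depends on which user the container process itself runs as, so that is worth checking against the image documentation too.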
Second issue: I copied over the following folders:
[screenshot]
When MSR loads, all of my Global Expressions are missing.
Third issue:
All controllers connect wonderfully (Hubitat, etc)... except HA.
After changing ownership of the logs to reactor again I can see this when MSR calls HA:
Yes, I created a fresh new long-lived access token for the MSR containerized install and updated the reactor.yaml config file correctly.
Honestly, all-in-all, for my total lack of expertise here I'm very pleased that I only have these three issues. But they are def blockers atm.
My RPi bare-metal install of MSR hooked right up to the new HA and is humming along just fine (I used hostnames where possible and shuffled some IPs in other places so I wouldn't run into things later that were mapped incorrectly that I'd forgotten about).
Proxmox 8.3.2; MSR lives in an Ubuntu 24.04 Proxmox container; MSR is the latest Docker version.
What else can I provide to those smarter than me here?
Reactor (Multi-hub) latest-24366-3de60836
Running on Proxmox 8 VM
Ubuntu 22.04.5 LTS
Docker version 27.5.0, build a187fa5
Docker Compose version v2.32.3
Browsers being used on macOS Sequoia: Safari and Firefox. Also occurs with Safari on an iPhone 16 Pro (iOS 18.2.1).
This occurs on two different instances of MSR running at two different locations having the same environment detailed above.
When I select "Reactions -> Create Reaction" I get an error window with a red "Runtime Error:" banner. Note that I can edit and save existing Reactions.
--------------------<SNIP>--------------------
Runtime Error:
@http://192.168.119.137:8111/reactor/en-US/lib/js/reactor-ui-reactions.js:445:34
You may report this error, but do not screen shot it. Copy-paste the complete text. Remember to include a description of the operation you were performing in as much detail as possible. Report using the Reactor Bug Tracker (in your left navigation) or at the SmartHome Community.
--------------------</SNIP>--------------------
apt update, apt upgrade, and reboot have been performed, as well as:
docker system prune -a
docker compose down
docker compose up -d
Many thanks in advance,
-bh
Build 21228 has been released. Docker images available from DockerHub as usual, and bare-metal packages here.
- Home Assistant up to version 2021.8.6 supported; the online version of the manual will now state the current supported versions;
- Fix an error in OWMWeatherController that could cause it to stop updating;
- Unify the approach to entity filtering on all hub interface classes (controllers); this works for device entities only; it may be extended to other entities later;
- Improve error detail in messages for EzloController during auth phase;
- Add isRuleSet() and isRuleEnabled() functions to expressions extensions;
- Implement set action for lock and passage capabilities (makes them more easily scriptable in some cases);
- Fix a place in the UI where 24-hour time was not being displayed.

I may have posted this in the wrong section. MSR running on bare metal Debian bullseye. Both openLuup and MSR are on the same device (an Intel NUC) at IP 192.168.70.249. Any suggestions as to where I go to resolve?
TIA
Happy new year, everyone! Hope all are well!
Looking for some pointers on troubleshooting an issue that is slightly puzzling to me. While digging around on a different issue I noticed this happening regularly in the MSR logs:
[latest-24366]2025-01-10T19:50:07.630Z <Engine:NOTICE> Starting reaction Garden lights on when the doors are open<SET> (rule-lb2h69nb:S)
[latest-24366]2025-01-10T19:50:07.630Z <VeraController:INFO> VeraController#vera perform action power_switch.on on Switch#vera>device_20060 with [Object]{ }
[latest-24366]2025-01-10T19:50:07.630Z <VeraController:INFO> VeraController#vera perform action power_switch.set on Switch#vera>device_20060 with [Object]{ "state": true }
[latest-24366]2025-01-10T19:50:07.670Z <VeraController:NOTICE> VeraController#vera action power_switch.set([Object]{ "state": true }) on Switch#vera>device_20060 succeeded
[latest-24366]2025-01-10T19:50:07.671Z <Engine:INFO> Resuming reaction Garden lights on when the doors are open<SET> (rule-lb2h69nb:S) from step 1
[latest-24366]2025-01-10T19:50:07.672Z <Engine:NOTICE> Garden lights on when the doors are open<SET> delaying until 1736538787672<10/01/2025, 19:53:07>
[latest-24366]2025-01-10T19:50:19.595Z <Rule:INFO> Garden lights on when the doors are open (rule-lb2h69nb in Outside Lights) evaluated; rule state transition from SET to RESET!
[latest-24366]2025-01-10T19:52:16.506Z <Rule:INFO> Garden lights on when the doors are open (rule-lb2h69nb in Outside Lights) evaluated; rule state transition from RESET to SET!
[latest-24366]2025-01-10T19:52:16.515Z <Engine:INFO> [Engine]Engine#1 not enqueueing rule-lb2h69nb:S: already in queue with status 2
[latest-24366]2025-01-10T19:52:20.823Z <Rule:INFO> Garden lights on when the doors are open (rule-lb2h69nb in Outside Lights) evaluated; rule state transition from SET to RESET!
[latest-24366]2025-01-10T19:53:07.676Z <Engine:INFO> Resuming reaction Garden lights on when the doors are open<SET> (rule-lb2h69nb:S) from step 2
[latest-24366]2025-01-10T19:53:07.677Z <VeraController:INFO> VeraController#vera perform action power_switch.off on Switch#vera>device_20060 with [Object]{ }
[latest-24366]2025-01-10T19:53:07.678Z <VeraController:INFO> VeraController#vera perform action power_switch.set on Switch#vera>device_20060 with [Object]{ "state": false }
[latest-24366]2025-01-10T19:53:07.719Z <VeraController:NOTICE> VeraController#vera action power_switch.set([Object]{ "state": false }) on Switch#vera>device_20060 succeeded
[latest-24366]2025-01-10T19:53:07.720Z <Engine:INFO> Resuming reaction Garden lights on when the doors are open<SET> (rule-lb2h69nb:S) from step 3
[latest-24366]2025-01-10T19:53:07.721Z <Engine:INFO> Garden lights on when the doors are open<SET> all actions completed.
[latest-24366]2025-01-10T19:55:04.468Z <VeraController:ERR> VeraController#vera update request failed: [FetchError] network timeout at: http://192.168.70.249:3480/data_request?id=status&Timeout=15&DataVersion=416912953&MinimumDelay=50&output_format=json&_r=1736538886459 [-]
[latest-24366]2025-01-10T19:55:09.646Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20050: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20050) [-]
[latest-24366]2025-01-10T19:55:09.646Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.646Z <VeraController:CRIT> *Entity#vera>device_20050
[latest-24366]2025-01-10T19:55:09.656Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20570: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20570) [-]
[latest-24366]2025-01-10T19:55:09.656Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.656Z <VeraController:CRIT> *Entity#vera>device_20570
[latest-24366]2025-01-10T19:55:09.678Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20610: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20610) [-]
[latest-24366]2025-01-10T19:55:09.679Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.679Z <VeraController:CRIT> *Entity#vera>device_20610
[latest-24366]2025-01-10T19:55:09.744Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20631: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20631) [-]
[latest-24366]2025-01-10T19:55:09.744Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.744Z <VeraController:CRIT> *Entity#vera>device_20631
[latest-24366]2025-01-10T19:55:09.889Z <VeraController:NOTICE> VeraController#vera reload detected!
[latest-24366]2025-01-10T19:55:09.910Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20050: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20050) [-]
[latest-24366]2025-01-10T19:55:09.910Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.910Z <VeraController:CRIT> *Entity#vera>device_20050
[latest-24366]2025-01-10T19:55:09.935Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20570: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20570) [-]
[latest-24366]2025-01-10T19:55:09.936Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.936Z <VeraController:CRIT> *Entity#vera>device_20570
[latest-24366]2025-01-10T19:55:09.937Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20610: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20610) [-]
[latest-24366]2025-01-10T19:55:09.937Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.937Z <VeraController:CRIT> *Entity#vera>device_20610
[latest-24366]2025-01-10T19:55:09.939Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20631: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20631) [-]
[latest-24366]2025-01-10T19:55:09.939Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.939Z <VeraController:CRIT> *Entity#vera>device_20631
[latest-24366]2025-01-10T19:55:09.968Z <Controller:INFO> VeraController#vera 0 dead entities older than 86400000s purged
[latest-24366]2025-01-10T19:55:10.037Z <VeraController:NOTICE> VeraController#vera reload detected!

That repeats until something like this:
[latest-24366]2025-01-10T19:55:10.049Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20050: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20050) [-]
[latest-24366]2025-01-10T19:55:10.049Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:10.049Z <VeraController:CRIT> *Entity#vera>device_20050
[latest-24366]2025-01-10T19:55:10.053Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20570: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20570) [-]
[latest-24366]2025-01-10T19:55:10.053Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:10.053Z <VeraController:CRIT> *Entity#vera>device_20570
[latest-24366]2025-01-10T19:55:10.062Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20610: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20610) [-]
[latest-24366]2025-01-10T19:55:10.062Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:10.062Z <VeraController:CRIT> *Entity#vera>device_20610
[latest-24366]2025-01-10T19:55:10.112Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20631: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20631) [-]
[latest-24366]2025-01-10T19:55:10.112Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:10.113Z <VeraController:CRIT> *Entity#vera>device_20631
[latest-24366]2025-01-10T20:00:05.003Z <Engine:INFO> [Engine]Engine#1 master timer tick, local time "10/01/2025 20:00:05" (TZ offset 0 mins from UTC)
[latest-24366]2025-01-10T20:13:51.872Z <Rule:INFO> No motion in Cinema (rule-m4ocglke in Cinema Environment) evaluated; rule state transition from SET to RESET!
[latest-24366]2025-01-10T20:13:51.882Z <Rule:INFO> Cinema Heater On (rule-m4ocf1di in Cinema Environment) evaluated; rule state transition from RESET to SET!
[latest-24366]2025-01-10T20:13:51.888Z <Engine:INFO> Enqueueing "Cinema Heater On<SET>" (rule-m4ocf1di:S)

And the errors / reloads just stop.
From Openluup:
2025-01-10 19:49:56.379 luup_log:63: BroadLink_Mk2 debug: RM3 Mini - IR 1: urn:schemas-micasaverde-com:device:IrTransmitter:1 2025-01-10 19:50:00.085 luup_log:0: 14Mb, 1.7%cpu, 36.1days 2025-01-10 19:50:07.591 luup.variable_set:: 20160.urn:micasaverde-com:serviceId:EnergyMetering1.KWH was: 18.6793008 now: 18.6805008 #hooks:0 2025-01-10 19:50:07.591 luup.variable_set:: 20160.urn:micasaverde-com:serviceId:EnergyMetering1.KWHReading was: 1736538000 now: 1736538600 #hooks:0 2025-01-10 19:50:07.591 luup.variable_set:: 20160.urn:micasaverde-com:serviceId:EnergyMetering1.Watts was: 7.4 now: 7.3 #hooks:0 2025-01-10 19:50:07.591 luup.variable_set:: 20170.urn:micasaverde-com:serviceId:EnergyMetering1.KWH was: 32.2417984 now: 32.2470016 #hooks:0 2025-01-10 19:50:07.591 luup.variable_set:: 20170.urn:micasaverde-com:serviceId:EnergyMetering1.KWHReading was: 1736538000 now: 1736538600 #hooks:0 2025-01-10 19:50:07.591 luup.variable_set:: 20330.urn:micasaverde-com:serviceId:EnergyMetering1.KWHReading was: 1736538000 now: 1736538600 #hooks:0 2025-01-10 19:50:07.592 luup.variable_set:: 20770.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 0 now: 1 #hooks:0 2025-01-10 19:50:07.592 luup.variable_set:: 20770.urn:micasaverde-com:serviceId:SecuritySensor1.LastTrip was: 1736534850 now: 1736538607 #hooks:0 2025-01-10 19:50:07.593 openLuup.server:: request completed (3392 bytes, 1 chunks, 12875 ms) tcp{client}: 0x55c3299a9cf8 2025-01-10 19:50:07.618 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c3299a9cf8 2025-01-10 19:50:07.624 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329d0a5b8 2025-01-10 19:50:07.624 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912906&MinimumDelay=50&output_format=json&_r=1736538607623 HTTP/1.1 tcp{client}: 0x55c329d0a5b8 2025-01-10 19:50:07.632 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c3292ed678 2025-01-10 19:50:07.633 openLuup.server:: GET /data_request?newTargetValue=1&DeviceNum=20060&id=action&serviceId=urn%3Aupnp-org%3AserviceId%3ASwitchPower1&action=SetTarget&output_format=json&_r=1736538607631 HTTP/1.1 tcp{client}: 0x55c3 292ed678 2025-01-10 19:50:07.633 luup.call_action:: 20060.urn:upnp-org:serviceId:SwitchPower1.SetTarget 2025-01-10 19:50:07.633 luup.call_action:: action will be handled by parent: 37 2025-01-10 19:50:07.633 luup.variable_set:: 20060.urn:upnp-org:serviceId:SwitchPower1.Target was: 0 now: 1 #hooks:0 2025-01-10 19:50:07.669 openLuup.server:: request completed (35 bytes, 1 chunks, 35 ms) tcp{client}: 0x55c3292ed678 2025-01-10 19:50:07.673 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c3292ed678 2025-01-10 19:50:07.776 openLuup.server:: request completed (821 bytes, 1 chunks, 151 ms) tcp{client}: 0x55c329d0a5b8 2025-01-10 19:50:07.784 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329d0a5b8 2025-01-10 19:50:07.795 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c3287bc8f8 2025-01-10 19:50:07.796 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912907&MinimumDelay=50&output_format=json&_r=1736538607794 HTTP/1.1 tcp{client}: 0x55c3287bc8f8 2025-01-10 19:50:08.644 luup.variable_set:: 20060.urn:upnp-org:serviceId:SwitchPower1.Status was: 0 now: 1 #hooks:0 2025-01-10 19:50:08.950 openLuup.server:: request completed (821 bytes, 1 chunks, 1154 ms) tcp{client}: 
0x55c3287bc8f8 2025-01-10 19:50:08.958 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c3287bc8f8 2025-01-10 19:50:08.969 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c3297e95a8 2025-01-10 19:50:08.970 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912908&MinimumDelay=50&output_format=json&_r=1736538608969 HTTP/1.1 tcp{client}: 0x55c3297e95a8 2025-01-10 19:50:19.181 luup.variable_set:: 20770.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 1 now: 0 #hooks:0 2025-01-10 19:50:19.585 openLuup.server:: request completed (832 bytes, 1 chunks, 10615 ms) tcp{client}: 0x55c3297e95a8 2025-01-10 19:50:19.602 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c3297e95a8 2025-01-10 19:50:19.605 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328d298a8 2025-01-10 19:50:19.605 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912909&MinimumDelay=50&output_format=json&_r=1736538619604 HTTP/1.1 tcp{client}: 0x55c328d298a8 2025-01-10 19:50:34.950 openLuup.server:: request completed (593 bytes, 1 chunks, 15344 ms) tcp{client}: 0x55c328d298a8 2025-01-10 19:50:34.953 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c328d298a8 2025-01-10 19:50:34.965 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328c48a58 2025-01-10 19:50:34.966 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912909&MinimumDelay=50&output_format=json&_r=1736538634964 HTTP/1.1 tcp{client}: 0x55c328c48a58 2025-01-10 19:50:34.989 luup.variable_set:: 25019.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 0 now: 1 #hooks:0 2025-01-10 19:50:34.990 luup.variable_set:: 25019.urn:micasaverde-com:serviceId:SecuritySensor1.LastTrip was: 1736534437 now: 1736538634 #hooks:0 2025-01-10 19:50:35.094 openLuup.server:: request completed (975 bytes, 1 chunks, 127 ms) tcp{client}: 0x55c328c48a58 2025-01-10 19:50:35.101 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c328c48a58 2025-01-10 19:50:35.113 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c32985e298 2025-01-10 19:50:35.113 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912911&MinimumDelay=50&output_format=json&_r=1736538635111 HTTP/1.1 tcp{client}: 0x55c32985e298 2025-01-10 19:50:40.255 luup.variable_set:: 25021.urn:micasaverde-com:serviceId:LightSensor1.CurrentLevel was: 0 now: 30 #hooks:1 2025-01-10 19:50:40.256 scheduler.watch_callback:: 25021.urn:micasaverde-com:serviceId:LightSensor1.CurrentLevel called [20]DataWatcherCallback() function: 0x55c3288a8d20 2025-01-10 19:50:40.460 openLuup.server:: request completed (835 bytes, 1 chunks, 5346 ms) tcp{client}: 0x55c32985e298 2025-01-10 19:50:40.472 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c32985e298 2025-01-10 19:50:40.478 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329b28238 2025-01-10 19:50:40.479 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912912&MinimumDelay=50&output_format=json&_r=1736538640478 HTTP/1.1 tcp{client}: 0x55c329b28238 2025-01-10 19:50:44.471 luup.variable_set:: 25007.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 0 now: 1 #hooks:1 2025-01-10 19:50:44.472 luup.variable_set:: 
25007.urn:micasaverde-com:serviceId:SecuritySensor1.LastTrip was: 1736538400 now: 1736538644 #hooks:0 2025-01-10 19:50:44.472 scheduler.watch_callback:: 25007.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped called [20]DataWatcherCallback() function: 0x55c3288a8d20 2025-01-10 19:50:44.775 openLuup.server:: request completed (975 bytes, 1 chunks, 4296 ms) tcp{client}: 0x55c329b28238 2025-01-10 19:50:44.775 openLuup.server:: request completed (975 bytes, 1 chunks, 4296 ms) tcp{client}: 0x55c329b28238 2025-01-10 19:50:44.782 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329b28238 2025-01-10 19:50:44.793 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328f1e968 2025-01-10 19:50:44.793 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912914&MinimumDelay=50&output_format=json&_r=1736538644791 HTTP/1.1 tcp{client}: 0x55c328f1e968 2025-01-10 19:51:00.122 openLuup.server:: request completed (593 bytes, 1 chunks, 15328 ms) tcp{client}: 0x55c328f1e968 2025-01-10 19:51:00.125 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c328f1e968 2025-01-10 19:51:00.136 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c32995b318 2025-01-10 19:51:00.136 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912914&MinimumDelay=50&output_format=json&_r=1736538660134 HTTP/1.1 tcp{client}: 0x55c32995b318 2025-01-10 19:51:15.481 openLuup.server:: request completed (593 bytes, 1 chunks, 15344 ms) tcp{client}: 0x55c32995b318 2025-01-10 19:51:15.484 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c32995b318 2025-01-10 19:51:15.495 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c32998b068 2025-01-10 19:51:15.497 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912914&MinimumDelay=50&output_format=json&_r=1736538675493 HTTP/1.1 tcp{client}: 0x55c32998b068 2025-01-10 19:51:30.869 openLuup.server:: request completed (593 bytes, 1 chunks, 15371 ms) tcp{client}: 0x55c32998b068 2025-01-10 19:51:30.872 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c32998b068 2025-01-10 19:51:30.884 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c32905bda8 2025-01-10 19:51:30.885 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912914&MinimumDelay=50&output_format=json&_r=1736538690882 HTTP/1.1 tcp{client}: 0x55c32905bda8 2025-01-10 19:51:32.886 luup.variable_set:: 20380.urn:upnp-org:serviceId:TemperatureSensor1.CurrentTemperature was: 21 now: 22 #hooks:0 2025-01-10 19:51:33.090 openLuup.server:: request completed (841 bytes, 1 chunks, 2205 ms) tcp{client}: 0x55c32905bda8 2025-01-10 19:51:33.100 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c32905bda8 2025-01-10 19:51:33.112 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328de0d58 2025-01-10 19:51:33.112 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912915&MinimumDelay=50&output_format=json&_r=1736538693111 HTTP/1.1 tcp{client}: 0x55c328de0d58 2025-01-10 19:51:36.064 luup.variable_set:: 25007.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 1 now: 0 #hooks:1 2025-01-10 19:51:36.065 scheduler.watch_callback:: 25007.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped 
called [20]DataWatcherCallback() function: 0x55c3288a8d20 2025-01-10 19:51:36.369 openLuup.server:: request completed (832 bytes, 1 chunks, 3256 ms) tcp{client}: 0x55c328de0d58 2025-01-10 19:51:36.377 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c328de0d58 2025-01-10 19:51:36.387 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329054188 2025-01-10 19:51:36.388 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912916&MinimumDelay=50&output_format=json&_r=1736538696386 HTTP/1.1 tcp{client}: 0x55c329054188 2025-01-10 19:51:37.134 luup.variable_set:: 20380.urn:upnp-org:serviceId:TemperatureSensor1.CurrentTemperature was: 22 now: 21 #hooks:0 2025-01-10 19:51:37.540 openLuup.server:: request completed (841 bytes, 1 chunks, 1152 ms) tcp{client}: 0x55c329054188 2025-01-10 19:51:37.553 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329054188 2025-01-10 19:51:37.566 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328d97568 2025-01-10 19:51:37.566 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912917&MinimumDelay=50&output_format=json&_r=1736538697564 HTTP/1.1 tcp{client}: 0x55c328d97568 2025-01-10 19:51:41.367 luup.variable_set:: 20380.urn:upnp-org:serviceId:TemperatureSensor1.CurrentTemperature was: 21 now: 22 #hooks:0 2025-01-10 19:51:41.874 openLuup.server:: request completed (841 bytes, 1 chunks, 4307 ms) tcp{client}: 0x55c328d97568 2025-01-10 19:51:41.884 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c328d97568 2025-01-10 19:51:41.895 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329385678 2025-01-10 19:51:41.896 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912918&MinimumDelay=50&output_format=json&_r=1736538701894 HTTP/1.1 tcp{client}: 0x55c329385678 2025-01-10 19:51:57.168 openLuup.server:: request completed (593 bytes, 1 chunks, 15272 ms) tcp{client}: 0x55c329385678 2025-01-10 19:51:57.171 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329385678 2025-01-10 19:51:57.183 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329b092b8 2025-01-10 19:51:57.184 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912918&MinimumDelay=50&output_format=json&_r=1736538717182 HTTP/1.1 tcp{client}: 0x55c329b092b8 2025-01-10 19:52:00.124 luup_log:0: 14Mb, 1.6%cpu, 36.1days 2025-01-10 19:52:00.476 openLuup.server:: request completed (1841 bytes, 1 chunks, 3292 ms) tcp{client}: 0x55c329b092b8 2025-01-10 19:52:00.483 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329b092b8 2025-01-10 19:52:00.495 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c3297be088 2025-01-10 19:52:00.495 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912929&MinimumDelay=50&output_format=json&_r=1736538720494 HTTP/1.1 tcp{client}: 0x55c3297be088 2025-01-10 19:52:09.867 luup.variable_set:: 25021.urn:micasaverde-com:serviceId:LightSensor1.CurrentLevel was: 30 now: 0 #hooks:1 2025-01-10 19:52:09.868 scheduler.watch_callback:: 25021.urn:micasaverde-com:serviceId:LightSensor1.CurrentLevel called [20]DataWatcherCallback() function: 0x55c3288a8d20 2025-01-10 19:52:10.071 openLuup.server:: request completed (834 bytes, 1 
chunks, 9575 ms) tcp{client}: 0x55c3297be088 2025-01-10 19:52:10.079 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c3297be088 2025-01-10 19:52:10.088 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329c16a08 2025-01-10 19:52:10.089 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912930&MinimumDelay=50&output_format=json&_r=1736538730087 HTTP/1.1 tcp{client}: 0x55c329c16a08 2025-01-10 19:52:16.194 luup.variable_set:: 20770.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 0 now: 1 #hooks:0 2025-01-10 19:52:16.195 luup.variable_set:: 20770.urn:micasaverde-com:serviceId:SecuritySensor1.LastTrip was: 1736538607 now: 1736538736 #hooks:0 2025-01-10 19:52:16.498 openLuup.server:: request completed (976 bytes, 1 chunks, 6409 ms) tcp{client}: 0x55c329c16a08 2025-01-10 19:52:16.515 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329c16a08 2025-01-10 19:52:16.516 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328dbad18

Nothing I can see indicating that openLuup is reloading?
The .249 IP address is the internal IP of the NUC that hosts both openLuup and MSR.
Any thoughts as to how I can troubleshoot this? It's not a big deal, but would like to get to the bottom of it.
I should add that all the devices listed in entries like this:

[latest-24366]2025-01-10T19:55:09.744Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.744Z <VeraController:CRIT> *Entity#vera>device_20631

are the tamper switches on Fibaro FGMS001 multifunction detectors, of which I have 4, and they correspond exactly to the devices listed.
TIA
C
Hi
I was looking at an old rule and wanted to edit it to add another Constraint; however, I cannot seem to do it.
On this screen shot you can see an existing entry in the Constraints and on its pull down menu the "Changes" option is available.
[screenshot]
However, on the new line I just added, there is no "Changes" option in its pull-down menu.
[screenshot]
Here is the original now locked post about this topic.
https://smarthome.community/topic/395/contact-sensor-opened-1-minute-ago-how?_=1736354690742
If you look on the old screen shots on that post, I was using the "changes" operator. Like this:
[screenshot]
However, today when I edited this rule, the operators show as == and not as "changes" on all the entries in the Constraints area.
Also, the old entries now show -- and the value is blank, but the new line I just added says it is not valid, so I'm not sure how the old lines can be like that.
[screenshot]
So I am a bit confused what happened.
Thanks
@toggledbits I understand that you do not perform testing on Mac computers but thought I'd share the following with you in case something can be done.
I started seeing these errors with version 24302. I thought that upgrading to 24343 would fix the issue, but unfortunately not. I either have to close the browser or clear the cache for the errors to stop popping up, but they slowly come back.
I see these errors on the following browsers:
Safari 16.6.1 on macOS Big Sur
Safari 18.1.1 on macOS Sonoma
DuckDuckGo 1.118.0 on macOS Big Sur and Sonoma
Firefox 133.0.3 on macOS Big Sur
Chrome 131.0.6778 on macOS Big Sur

Here are the errors:
Safari while creating/updating an expression
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:543:91
makeExprMenu@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:537:28
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:92:64
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:89:68
each@http://192.168.0.13:8111/node_modules/jquery/dist/jquery.min.js:2:3133
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:89:35
@http://192.168.0.13:8111/client/MessageBus.js:98:44
forEach@[native code]
@http://192.168.0.13:8111/client/MessageBus.js:95:54
@http://192.168.0.13:8111/client/MessageBus.js:106:44
@http://192.168.0.13:8111/client/Observable.js:78:28
signalModified@http://192.168.0.13:8111/reactor/en-ca/lib/js/ee.js:146:21
signalModified@http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:40:29
reindexExpressions@http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:71:32
@http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:608:40
dispatch@http://192.168.0.13:8111/node_modules/jquery/dist/jquery.min.js:2:40040

DuckDuckGo while clicking on status
http://192.168.0.13:8111/reactor/en-ca/lib/js/reactor-ui-status.js:789:44
asyncFunctionResume@[native code]
saveGridLayout@[native code]
dispatchEvent@[native code]
_triggerEvent@http://192.168.0.13:8111/node_modules/gridstack/dist/gridstack.js:1401:30
_triggerAddEvent@http://192.168.0.13:8111/node_modules/gridstack/dist/gridstack.js:1383:31
makeWidget@http://192.168.0.13:8111/node_modules/gridstack/dist/gridstack.js:968:30
addWidget@http://192.168.0.13:8111/node_modules/gridstack/dist/gridstack.js:388:24
placeWidgetAdder@http://192.168.0.13:8111/reactor/en-ca/lib/js/reactor-ui-status.js:183:44

Firefox while updating a rule
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:543:91
makeExprMenu@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:537:28
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:92:64
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:89:68
each@http://192.168.0.13:8111/node_modules/jquery/dist/jquery.min.js:2:3133
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:89:35
@http://192.168.0.13:8111/client/MessageBus.js:98:44
forEach@[native code]
@http://192.168.0.13:8111/client/MessageBus.js:95:54
@http://192.168.0.13:8111/client/MessageBus.js:106:44
@http://192.168.0.13:8111/client/Observable.js:78:28
notifySaved@http://192.168.0.13:8111/reactor/en-ca/lib/js/ee.js:82:21
notifySaved@http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:47:26
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reactor-ui-rules.js:1460:39
forEach@[native code]
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reactor-ui-rules.js:1459:58

Chrome while creating/updating an expression
TypeError: Cannot read properties of undefined (reading 'getEditor')
    at RuleEditor.makeExprMenu (http://192.168.0.13:8111/reactor/en-ca/lib/js/rule-editor.js:1788:86)
    at Object.handler (http://192.168.0.13:8111/reactor/en-ca/lib/js/rule-editor.js:2174:54)
    at http://192.168.0.13:8111/client/MessageBus.js:98:44
    at Array.forEach (<anonymous>)
    at MessageBus._sendToBus (http://192.168.0.13:8111/client/MessageBus.js:95:54)
    at MessageBus.send (http://192.168.0.13:8111/client/MessageBus.js:106:44)
    at ExpressionEditor.publish (http://192.168.0.13:8111/client/Observable.js:78:28)
    at ExpressionEditor.signalModified (http://192.168.0.13:8111/reactor/en-ca/lib/js/ee.js:146:14)
    at ExpressionEditor.signalModified (http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:40:15)
    at ExpressionEditor.reindexExpressions (http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:71:18)

Not sure that it is the same issue, but I just got this on build 24302 when running a reaction for testing purposes. Despite the error message, the reaction ran properly.
Error: Command timeout (195 start_reaction)
at _ClientAPI._commandTimeout (http://192.168.2.163:8111/client/ClientAPI.js:552:136)
[screenshot]
Thanks to @toggledbits for adding a custom CSS. I've started doing a darker Reactor style.
Here's the file: https://gist.github.com/dbochicchio/825098ac13b7f8cac22012eae37ff7ce
A couple of things are still too bright and I'll eventually catch up. Just place it under your /config directory, naming the file customstyles.css. Then hard refresh your browser.
Hi
I'm having to rebuild my Linux Debian box as the SSD failed, and I have forgotten exactly what I did the first time to get it all set up.
I have Debian 12 up and running on the new SSD, I only have console no Desktop GUI.
I am trying to do the bare-metal install for MSR. However, I am not sure whether I am meant to install Node.js whilst logged in as the root user or as the non-root user with my name.
I used PuTTY, connected via SSH, logged in as root, and installed Node.js, but I think this was wrong: when I'm logged in as my user name and run a node -v command, it says node is not installed (or doesn't show any version number, anyway).
But when I'm logged in as root and run node -v, it does show me it's installed and displays the version number. Maybe it's a path issue for my username and it can't see that node is installed?
So now I am thinking I should have installed node whilst logged in as my user name and not as the root user.
This is how I installed Node.js whilst logged in as root:
[screenshot]
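For what it's worth, a Node.js package installed through apt lands system-wide (in /usr/bin), so it is normally visible to every user, root or not; if node -v only works as root, the install may have gone somewhere user-specific (for example an nvm install under /root). One common way to get a system-wide Node.js on Debian 12 is the NodeSource repository; the major version below is just an example, so pick whatever the MSR docs currently require:

    # run as root (or with sudo) once; node is then available to all users
    curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
    apt-get install -y nodejs

    # verify afterwards as your normal (non-root) user
    node -v
    which node    # should print /usr/bin/node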
Thanks in advance.
As the title says, here's my OpenAI Controller for Reactor:
GitHub: https://github.com/dbochicchio/reactor-openai (OpenAI Controller for Reactor)
It supports both OpenAI and Azure OpenAI endpoints. You'll need keys/endpoints, according to each service.
The controller supports multiple models, and each one can be mapped as an entity.
It's quite easy to use, and responses can be stored in variables for easy access, or sent to another action (Text To Speech, another endpoint, etc.).
[screenshot]
[screenshot]
Have fun with LLMs in your scenes!
In Home Assistant I have an integration where, if I add entities to it, I get the following error in MSR because certain entity values I'm using in expressions are null for a moment. This is a more or less cosmetic issue and happens very rarely, as I rarely modify that integration on the hass side.
[screenshot]
And the expression is
[screenshot]
Could I "wrap" hass-entity shown above somewhat differently to prevent this error from happening? Using build 24302.
Hello
I am trying to set up Multi-System Reactor to automate routines across multiple smart home devices & platforms (e.g., Home Assistant, SmartThings, and Hubitat). While I have successfully linked the systems, I am facing issues with:
- Delays in triggering actions on secondary devices.
- Inconsistent execution of complex logic conditions.
- Synchronization of states between devices when one system updates.
Is there a recommended way to optimize performance & confirm seamless state sharing across systems?
I have checked the https://smarthome.community/category/22/multi-system-reactor-msbi guide for reference but still need advice.
Any tips on debugging or log analysis to pinpoint where the issue arises would also be appreciated.
Thank you !
I've managed to use the MSR UI on iOS devices to some degree*: although UI elements (e.g. rule sets) are not visible in portrait mode, you could see them in landscape. Now with recent builds (24302) this does not work anymore; elements (rule sets, entities) are no longer visible in landscape mode either.
Does anyone have similar experiences? Using iOS 18 and Safari/Chrome browser.
( *Drag & drop of rule conditions have never worked on a mobile)
@toggledbits Since I upgraded ZWaveJSController from 24257 to 24293, I am seeing entries like "registering action set_volume, but action is not defined by the capability 143" every time I restart Reactor.
The Siren seems to be doing what it is supposed to do. The volume levels are fine. Should I worry about it?
Reactor version 24302
ZWaveJSController version 24293
Z-Wave JS UI version 9.27.4
zwave-js version 14.3.4
I have the following ACL defined:
groups:
  admin:
    users:
      - admin
    applications: true

api_acls:
  # This ACL allows users in the "admin" group to access the API
  - url: "/api"
    group: admin
    allow: true
    log: true
  # This ACL allows anyone/thing to access the /api/v1/alive API endpoint
  - url: "/api/v1/alive"
    allow: true

And I have authenticated to MSR as the "admin" user. However, I'm getting "access denied" when trying to access http://*******:8111/api/v1/log
So what am I missing? Is my ACL incorrectly defined?
Using build 24302 on Docker.
Quality of Life Request: Update Button
-
Massive upvote on all 3. Just like back in the old Vera days
(Unsure how easy update would be with the various ways of implementing MSR inside/outside docker etc) - unless a standard install method is specified, and one of the features of that is 'update' capability.
Docker-Compose please
-
@Cadwizzard said:
Just like back in the old Vera days
How soon we forget the tales of bricked Veras. Who among us didn't have a little sense that they were playing Russian roulette every time we hit that button?
Unsure how easy update would be with the various ways of implementing MSR inside/outside docker etc
OK. He hits it on the head here. Let me explain some of the complications and my reservations around this.
The biggest pitfall is for docker users, IMO; that's the majority of you. The first thing you need to understand about docker is that the image and the container are separate objects in the system. The container is created from the image, but it's basically a copy, not linked in any meaningful way. The container can change, so that's good — I can download a release package and apply it to the container, restart, and the container will now be running the new release files. Unfortunately, that has no bearing on what happens to the image. Changing files in the container does nothing to the image. So let's take a scenario... @tunnus (Docker, Synology) downloads the image for Reactor 22274 and creates a container for it, so he's now running 22274. A little later, 22291 is released, so he hits the handy, flashy new "Upgrade" button and the container is upgraded in place. Perfect. Except, not... his image is still 22274. Stay with me now... In all likelihood, because of the "ease" of the automated upgrade, tunnus never needs to download a new image again (so he thinks), so he never bothers (it's a pain anyway on Synology, I'll agree). So build 22293 comes along, and then 22302, and then 22305, and then 22308, and he upgrades to all of them using the automated process, but the image is still sitting there on his NAS at 22274. The problem strikes if, for any reason (DSM major upgrade?), he decides to reset and rebuild the container, or delete it. He will get.... 22274. Because that's the image he has.
Can I make docker download the newer image as part of the upgrade process? No. Reactor is running inside the container, and the container, by definition, contains Reactor and keeps it from doing anything external to the container (except the limited data volume that's specifically created for the single purpose). So the running Reactor instance has no ability whatsoever to cause docker/DSM to pull later images. Pulling a new image and rebuilding the container is the real "right" way to upgrade, but it's not possible to automate it from within the container itself (and it's darned clunky in Synology's UI, unfortunately).
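To make "the real right way" concrete for plain docker (no compose), the cycle is roughly the following; the image tag is the one mentioned earlier in this thread, the container name is illustrative, and the bracketed part stands in for whatever volume/port/env options you originally created the container with:

    docker pull toggledbits/reactor:latest-amd64     # refresh the on-disk image itself
    docker stop reactor && docker rm reactor         # discard the old container
    docker run -d --name reactor [your usual options] toggledbits/reactor:latest-amd64   # recreate from the new image

The config/data live on the mapped volume, so recreating the container this way doesn't lose them; it just guarantees the image on disk matches what's actually running.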
It's not hard to imagine that this problem would not bite him for months or years. But when it bites, it has the potential to bite hard. Imagine along Reactor's evolutionary path from 22274 to a future 24107 (released in 2024, all automated updates between, no image refreshes), there are changes that needed to be made to the data structures of rules, reactions, stored states, etc. (not at all hard to imagine, it actually happens all the time). It is easy, although sometimes a bit cumbersome, for me to provide forward compatibility: to make sure that newer versions of code read the old data and upgrade the structures, and the mechanisms for those upgrades remain in the code for some time. But there is no way under the sun for 22274, now running once again unexpectedly, to know what to do with data from the future 24107 build, and there's a chance it could do something really bad to it. Now tunnus has an old version running in his container with corrupt data. I hope he has a backup.
I'll take the opportunity to say that this is a cautionary tale for all of you who stay on older builds. I keep the code that reads and upgrades the data, when needed, for a while, so that people who skip a few upgrades can safely do so and "jump in time" when they are ready to apply a new build, but I don't and won't keep that code forever; it becomes a maintenance nightmare and it's beyond my available time and sensibilities to test every possible combination of upgrades between versions. If you're running on a Reactor that's more than a year out (21307 or earlier), you're playing with fire as far as I'm concerned, and you should not expect a smooth upgrade when you get around to it. You may need to upgrade to an interim build still available, which works for bare-metal, but isn't an option for docker users. And before the "I can't have something like that in my home" people start in here, please know that I'm sorry that the free software I offer you and for which I provide ready, quick, and free ongoing support (and upgrades) isn't perfectly to your liking. If you don't like the way it works, you have alternatives, and I fully support your freedom to choose them.
To continue with @Cadwizzard's point: this is equally or more egregious, unfortunately, for docker-compose users, because up to this point, the recommended way for stopping Reactor when using docker-compose is to run docker-compose down. This causes Reactor to stop, but also deletes the container. Any upgrades applied to the container are lost in that instant, because the container is discarded. When you later run docker-compose up -d, the container is re-created from the (old) image, and will be whatever version that image is. Maybe not a disaster, maybe it is. This could be addressed by retraining docker-compose users to use docker-compose stop rather than docker-compose down, but the distinction would need to be taught (and learned) as both are useful, and the infrequency of use of these commands would likely suffer from brain-drain over time (i.e. when to use which and what their side effects could be/will be lost on the user a few months from now). But it's such a subtle distinction that people will shoot themselves in the foot easily and regularly, I fear.

Bare-metal is somewhat easier, because at least the process can be assured it's writing on the one and only (relevant) image, in the install directory, so that's a bit of relief. Unfortunately, a lot of people really don't understand Linux file permissions, their relations to users and groups, etc., and routinely goof up the permissions of files all over their system, including in the Reactor install directory. This isn't a problem for them after the first "fix," because thereafter they do the manual upgrades the same way, logged in as the same user (in some cases, even as root, which is a serious no-no), and so it works for them as that user in that case, good enough. But for an automated process running in an unprivileged environment, it can mean that some files aren't writable, and the upgrade only half-happens... the upgrade process crashes, some files are new and some are old, and the Reactor install is basically dead and broken. I can't fix the permissions from the running instance, because it's running as an unprivileged user (well, hopefully; woe unto those who run anything as root). The user then manually applies the upgrade to recover the system, which goes fine because of course he's running the privileged user with the right permissions. A bug/post for the upgrade process then gets reported, and I then spend hours or days going back and forth, digging through the user's 3,000+ files in a typical Reactor install, looking for the broken ones and teaching the user how to fix them. (Permissions and their potential brokenness is also an issue for doing automated backups/restores, since that was mentioned as well.)
Oh, and then there's Windows. I won't even start. I've already written a book (again).
With regard to the suggestion of a standard install method: (a) there is no "one size fits all" — what works for Ubuntu doesn't work for Synology DSM or QNAP, and certainly not for Windows; and (b) the install methods that are recommended are all carefully documented; experience shows that I can write out every detail I can think of, and what actually happens on the user's system is 100% of that or some amount less, or the user has some condition in their system/environment that I could not/did not anticipate that causes problems. My preferred method for most users is docker (and specifically, with docker-compose), because the container strategy removes some of these risks, but that's not always the easiest for their environment (e.g. Synology has docker but no -compose), and the accepted mechanisms for upgrading containers in the docker world in general are ironically exactly the subject of complaints by OP and others here, despite the relative ubiquity and ease of these mechanisms.
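For the docker-compose case specifically, the accepted upgrade mechanism referred to above usually boils down to a short, repeatable sequence (shown with the v2 "docker compose" syntax; the older docker-compose binary behaves the same way):

    docker compose pull      # fetch the newer image referenced in the compose file
    docker compose up -d     # recreate the container from that image; mapped volumes are reattached
    docker image prune -f    # optional: discard superseded images

The point is that the pull keeps the on-disk image in step with what is actually running, which is exactly the gap an in-place upgrade inside the container would leave open.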
The point is, there is no panacea here. You run these systems, not me. You do things I have no knowledge of, and sometimes those things bite back. The majority of my time supporting this product is troubleshooting your environments, not my code (I'm not saying I'm perfect — I make mistakes and bugs are a reality, but they're not the majority of support issues here in terms of time spent). Anything and everything I do in the system is looked at not just through the lens of whether it's convenient for the user, but very much through the lens of supportability. There are lots of features I get asked to do, and as you've seen (even recently), there are some that I refuse to consider simply because it would make the system less supportable, in my view. As features get added, not only is the usability of the system required to improve, but its quality is expected to improve as well (fewer bugs, better support, etc.) — those are my expectations, which I'm sure you share. If one doesn't consider supportability (and that means both in user support and code maintenance/reliability/scalability), one ends up with a lot of features that nobody asked for, don't work, and aren't usable (I can point you in the direction of such a product as an alternative if you're really interested in that).
There is a running, hidden upgrade process in the current build. I've been experimenting with this for a bit, getting to learn it, and discovering these issues. It's not that I won't consider making it available; I'm still studying it, and pondering the wisdom of it. Maybe sometimes I worry too much about things like this, I don't know. But when it goes wrong, there's nobody but you and me to fix it, and there's a lot of you and only one of me, so as I said in another recent conversation, handing out something that feels like a grenade with no pin sometimes doesn't seem like the best idea to me, and there are probably other things this system needs to do that I can better spend my time on. Maybe this is one of those things.
I'll leave this one up to you guys. If you can tolerate these side effects, I'll release the feature. But know that if you break your install because (docker) you somehow delete the container and recreate it from an old image, or (bare-metal) your install has broken permissions or other issues that the upgrade process can't work through, my answer will be short: that's a risk you accepted, do a clean reinstall from a current image, restore your config/state backup, and start over.
-
One more class of knowledge! The desire to have an automatic process would really be very good, but your explanation makes the difficulty and risks clear, and I no longer want it. As you said, the errors we generate are enough; I don't want to add more risk. I think almost everyone here has a wife, so better to stay in the safety of a working system.
Well, let's remove this from the wishlist. And could you share that list, so we know everything that's coming in the future? Also, please add an item to display the status widget in a window/iframe inside the HE dashboard.
-
@toggledbits this makes tons of sense as to why an update button could be a problem, mainly for Docker users.
In terms of bare metal users, say a user messed with their file permissions enough that it would cause issues when updating Reactor: wouldn't they run into the same errors whether they updated manually or used the update button? I wouldn't mind an update button for bare metal users, since from your explanation it seems like a possible issue wouldn't come from the update process itself; it would come from something else (like file permissions, etc.). Meaning they'd run into those errors even if they manually updated Reactor like we do now.
Not arguing though, it's a fairly low-level request from me. I can clearly see why an update button for Docker users could be a slow and silent death. As @CatmanV2 said, for bare metal the update process really only takes 90 seconds ahah.
-
@pabla said in Quality of Life Request: Update Button:
wouldn't they run into the same errors even if they manually updated or used the update button?
Not necessarily... some users... I've seen it... will run into permission problems and their answer, not understanding the problem or how to fix it, is to use sudo tar xvf to just lay tracks over everything. This would eliminate the permissions problem when unpackaging the archive, but new files may become root-owned, which isn't right, but the code doesn't care as long as it's readable. If their umask allows world-readable files (and 022 is a common default that does exactly that), the Reactor runtime will never know permissions are broken, because every file it needs is readable without consideration of ownership. The un-tar'ing doesn't touch logs, config, etc., so any permissions there aren't relevant and aren't changed. And because some of the files are now root-owned that shouldn't be, the permissions problem has been made worse, and again, unless they are truly fixed the right way, sudo will continue to be the only way upgrades will succeed. It perpetuates and exacerbates.

I really get how painful the docker upgrades are on Synology. I'm guessing QNAP is probably not much different, and I think several people have been bitten by Portainer oddities regardless of platform.
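For completeness, "fixed the right way" on a bare-metal install usually amounts to re-asserting ownership once and then never using sudo for upgrades again; the path, user and package filename below are purely illustrative:

    # hand the whole install tree back to the unprivileged user that runs Reactor
    sudo chown -R reactor:reactor /home/reactor/reactor

    # subsequent upgrades are then applied as that user, without sudo
    su - reactor -c 'cd ~/reactor && tar xzf /path/to/reactor-latest-XXXXX.tar.gz'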
The process just needs more thought. I could, for example, from the next build onward, prevent the system from starting if the config and data are from a newer version. The problem there is that it needs to be detected early in startup, and if the system can't use the data, it has to exit hard, because it can't run without any data at all, and it can't touch what it has. There would be no UI feedback other than "DISCONNECTED" (i.e. the behavior when Reactor can't start). A "click-to-upgrade" to fix it wouldn't be an option because the UI would not be running, so a manual upgrade would be required at that point. And maybe that's OK? Maybe that's such an extreme/infrequent circumstance that it should be that way? A manual upgrade once in a blue moon may not be so bad... I don't know... looking for feedback... trying to figure it out...
-
I personally do not think the update process on Synology docker is that bad. A few more clicks than an easy button but not horrible. All my other docker containers are updated the same way. I like the docker image though. I am not familiar with the other platforms so I can’t comment on those update processes.
-
-
I too am happy with the current process. Super fast for me. I run everything under Synology/Docker. I no longer have issues upgrading containers such as Reactor, HA, etc. since I switched to Portainer several months ago. So not sure what those "Portainer" oddities are/were. Something I should keep an eye on? Or I have just been lucky?
-
Some input from a Windows user.
An update button would of course be a nice-to-have feature, but I also agree with several others here: a "normal" update, i.e. one that doesn't need new dependencies, just takes a short moment to install.

Where I usually stumble is when an update of dependencies is needed. That has sometimes taken me hours of search-try-error-try-again before getting it to work. My dream would be to have a "Windows installer" for MSR that checks dependencies, installs a system service, etc.

Over time I think that would be a safer/more stable way, with fewer user errors. With this said, I can really understand that @toggledbits needs to handle this "his way" to be able to support different environments (and users).
-
I don't know if this helps for other Docker users, but not long after I got started with Docker I found Portainer, and I've been running it alongside Reactor and my other containers on my Raspberry Pi 4. With Portainer, there may not be a one-step update button, but I find it makes updates much easier.
I just updated Reactor to the latest. All I had to do was go to the Portainer URL in my browser, then
- Click on the Reactor container in the Containers list
- Click "Recreate"
- Toggle "Always pull new image" on the window that pops up
- Click "Recreate"
It isn't one click, but it can be done in a browser tab from any machine with network access to the Docker host. No VNC/SSH into the machine, no Docker commands to run from the command line.
Portainer also has links to view the container logs and to open a command window in the container, which I use all the time. You can also use the "duplicate/edit" button to change or add environment variables while updating, which is how I added the NODE_PATH a few updates back.
-
Thanks for the Portainer explanation; I'm certain I've had a spell cast on me. I'll try again once the Pi400 becomes available once more.
Lastly, I think the point has been lost: it's about QOL, not about how easy it is to do in another way.
From my perspective, if it isn't easy to use by 98% of the public then it's too much trouble, and they might look at it then discard it for another solution.
The comments so far are from users who are in the 2% and are happy to tinker. I'm happy for you.
If anyone wants to see how consumer-friendly software should be to set up, then have a look at HomeSeer 4. Update... no problem, with 1 click.
-
@black-cat Isn't homeseer a walled garden like Hubitat, Ezlo, etc.? You buy their hub and live within their infrastructure.
That's not MSR. MSR works on various OS/hardware and communicates with multiple hubs.
Whilst I appreciate your POV, it's not an apples-to-apples comparison you're making here.
-
@gwp1 said in Quality of Life Request: Update Button:
You buy their hub and live within their infrastructure.
Nup, you can use any old laptop or RasPi. It runs on Windows or Linux. I'd love to promote MSR to HomeSeer users, but it lacks the simplicity, hence the backing of the request.
Realistically, I'm not going to see it happen, which is a shame, as Patrick has put a lot of time into development for the 2%.
-
@sweetgenius I agree, Synology Docker container upgrade process is not too bad.
I frequently keep both MSR and the Synology UI open in separate browser tabs and either do a quick upgrade using "reset" or a more careful upgrade using "duplicate settings", retaining the old container as a backup/rollback option.
Originally I favored a simple update button for MSR, but after Patrick's explanations I realized it's not that simple after all.