On my bare-metal RPi running MSR, I had a rule that ran every minute to check Internet status via a script shipped with MSR called reactor_inet_check.sh.
I've moved to containerized MSR and see in the instructions that this cannot be run from the container.
The script cannot run within the Reactor docker container. If you are using Reactor in a docker container, the script needs to be run by cron or an equivalent facility on the host system (e.g. some systems, like Synology NAS, have separate task managers that may be used to schedule the repeated execution of tasks such as this).
I've put a script on my container host that calls the reactor_inet_check.sh script, and it isn't erroring... but I still see the Internet status within MSR as null.
Before I go diving down the rabbit hole... should this work?
My cronjob on the proxmox host:
[screenshot: crontab entry]
The contents of msr_internet_check_caller.sh
[screenshot: msr_internet_check_caller.sh contents]
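In text form, it's roughly the following (treat the paths as placeholders; the real ones depend on where the container's data directory is bind-mounted on the host). The crontab entry runs the wrapper once a minute:

* * * * * /root/msr_internet_check_caller.sh >/dev/null 2>&1

and msr_internet_check_caller.sh is just a thin wrapper that runs the bundled script from the host side:

#!/bin/sh
# Call Reactor's bundled check script on the host. It lives in the tools
# subdirectory of the Reactor data directory that is bind-mounted into the
# container; this path is an assumption, adjust it to your mount.
/root/reactor/tools/reactor_inet_check.sh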
And then MSR...
My first issue: I'm logged into the msr CT as reactor (I used the suggested username just to keep things simple as this is new space for me and I was high off my success of migrating HA over).
When I run
docker pull toggledbits/reactor:latest-amd64... it assigns root ownership to the reactor subdirectory where it's installed. I am absolutely logged in with the correct non-root user.
[screenshot: directory listing showing root ownership]
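For what it's worth, my working theory for a fix (an assumption on my part, not something from the docs) is to take ownership back and pin the container to my user's IDs:

# on the host, as root; the username and path are examples from my setup
chown -R reactor:reactor /home/reactor/reactor
id reactor    # note the uid/gid it prints, e.g. 1000:1000

# then, in docker-compose.yml, run the reactor service as that user:
#   user: "1000:1000"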
Second issue: I copied over the following folders:
[screenshot: list of copied folders]
When MSR loads, all of my Global Expressions are missing.
Third issue:
All controllers connect wonderfully (Hubitat, etc)... except HA.
After changing ownership of the logs to reactor again I can see this when MSR calls HA:
Yes, I created a fresh long-lived access token for the containerized MSR install and updated the reactor.yaml config file accordingly.
Honestly, all-in-all, for my total lack of expertise here I'm very pleased that I only have these three issues. But they are def blockers atm.
My RPi bare-metal install of MSR hooked right up to the new HA and is humming along just fine. (I used hostnames where possible and shuffled some IPs in other places so I wouldn't later run into things I'd forgotten about that were mapped incorrectly.)
Proxmox 8.3.2
MSR lives in an Ubuntu 24.04 Proxmox container
MSR is the latest Docker version

What else can I provide to those smarter than me here?
Reactor (Multi-hub) latest-24366-3de60836
Running on Proxmox 8 VM
Ubuntu 22.04.5 LTS
Docker version 27.5.0, build a187fa5
Docker Compose version v2.32.3
Browsers being used on macOS Sequoia: Safari and Firefox. Also occurs with Safari on iPhone 16 Pro (iOS 18.2.1).
This occurs on two different instances of MSR running at two different locations having the same environment detailed above.
When I select "Reactions->Create Reaction" I get an error window with a red “Runtime Error:” banner. Note that I can edit and save existing Reactions.
--------------------<SNIP>--------------------
Runtime Error:
@http://192.168.119.137:8111/reactor/en-US/lib/js/reactor-ui-reactions.js:445:34
You may report this error, but do not screen shot it. Copy-paste the complete text. Remember to include a description of the operation you were performing in as much detail as possible. Report using the Reactor Bug Tracker (in your left navigation) or at the SmartHome Community.
--------------------</SNIP>--------------------
apt update, apt upgrade, and a reboot have been performed, as well as:
docker system prune -a
docker compose down
docker compose up -d
Many thanks in advance,
-bh
Build 21228 has been released. Docker images available from DockerHub as usual, and bare-metal packages here.
- Home Assistant up to version 2021.8.6 supported; the online version of the manual will now state the current supported versions.
- Fix an error in OWMWeatherController that could cause it to stop updating.
- Unify the approach to entity filtering on all hub interface classes (controllers); this works for device entities only; it may be extended to other entities later.
- Improve error detail in messages for EzloController during auth phase.
- Add isRuleSet() and isRuleEnabled() functions to expressions extensions.
- Implement set action for lock and passage capabilities (makes them more easily scriptable in some cases).
- Fix a place in the UI where 24-hour time was not being displayed.

I may have posted this in the wrong section. MSR running on bare metal Debian bullseye. Both openLuup and MSR are on the same device (an Intel NUC) at IP 192.168.70.249. Any suggestions as to where I go to resolve?
TIA
Happy new year, everyone! Hope all are well!
Looking for some pointers troubleshooting an issue that's slightly puzzling to me. When digging around on a different issue, I noticed this happening regularly in the MSR logs:
[latest-24366]2025-01-10T19:50:07.630Z <Engine:NOTICE> Starting reaction Garden lights on when the doors are open<SET> (rule-lb2h69nb:S)
[latest-24366]2025-01-10T19:50:07.630Z <VeraController:INFO> VeraController#vera perform action power_switch.on on Switch#vera>device_20060 with [Object]{ }
[latest-24366]2025-01-10T19:50:07.630Z <VeraController:INFO> VeraController#vera perform action power_switch.set on Switch#vera>device_20060 with [Object]{ "state": true }
[latest-24366]2025-01-10T19:50:07.670Z <VeraController:NOTICE> VeraController#vera action power_switch.set([Object]{ "state": true }) on Switch#vera>device_20060 succeeded
[latest-24366]2025-01-10T19:50:07.671Z <Engine:INFO> Resuming reaction Garden lights on when the doors are open<SET> (rule-lb2h69nb:S) from step 1
[latest-24366]2025-01-10T19:50:07.672Z <Engine:NOTICE> Garden lights on when the doors are open<SET> delaying until 1736538787672<10/01/2025, 19:53:07>
[latest-24366]2025-01-10T19:50:19.595Z <Rule:INFO> Garden lights on when the doors are open (rule-lb2h69nb in Outside Lights) evaluated; rule state transition from SET to RESET!
[latest-24366]2025-01-10T19:52:16.506Z <Rule:INFO> Garden lights on when the doors are open (rule-lb2h69nb in Outside Lights) evaluated; rule state transition from RESET to SET!
[latest-24366]2025-01-10T19:52:16.515Z <Engine:INFO> [Engine]Engine#1 not enqueueing rule-lb2h69nb:S: already in queue with status 2
[latest-24366]2025-01-10T19:52:20.823Z <Rule:INFO> Garden lights on when the doors are open (rule-lb2h69nb in Outside Lights) evaluated; rule state transition from SET to RESET!
[latest-24366]2025-01-10T19:53:07.676Z <Engine:INFO> Resuming reaction Garden lights on when the doors are open<SET> (rule-lb2h69nb:S) from step 2
[latest-24366]2025-01-10T19:53:07.677Z <VeraController:INFO> VeraController#vera perform action power_switch.off on Switch#vera>device_20060 with [Object]{ }
[latest-24366]2025-01-10T19:53:07.678Z <VeraController:INFO> VeraController#vera perform action power_switch.set on Switch#vera>device_20060 with [Object]{ "state": false }
[latest-24366]2025-01-10T19:53:07.719Z <VeraController:NOTICE> VeraController#vera action power_switch.set([Object]{ "state": false }) on Switch#vera>device_20060 succeeded
[latest-24366]2025-01-10T19:53:07.720Z <Engine:INFO> Resuming reaction Garden lights on when the doors are open<SET> (rule-lb2h69nb:S) from step 3
[latest-24366]2025-01-10T19:53:07.721Z <Engine:INFO> Garden lights on when the doors are open<SET> all actions completed.
[latest-24366]2025-01-10T19:55:04.468Z <VeraController:ERR> VeraController#vera update request failed: [FetchError] network timeout at: http://192.168.70.249:3480/data_request?id=status&Timeout=15&DataVersion=416912953&MinimumDelay=50&output_format=json&_r=1736538886459 [-]
[latest-24366]2025-01-10T19:55:09.646Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20050: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20050) [-]
[latest-24366]2025-01-10T19:55:09.646Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.646Z <VeraController:CRIT> *Entity#vera>device_20050
[latest-24366]2025-01-10T19:55:09.656Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20570: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20570) [-]
[latest-24366]2025-01-10T19:55:09.656Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.656Z <VeraController:CRIT> *Entity#vera>device_20570
[latest-24366]2025-01-10T19:55:09.678Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20610: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20610) [-]
[latest-24366]2025-01-10T19:55:09.679Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.679Z <VeraController:CRIT> *Entity#vera>device_20610
[latest-24366]2025-01-10T19:55:09.744Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20631: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20631) [-]
[latest-24366]2025-01-10T19:55:09.744Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.744Z <VeraController:CRIT> *Entity#vera>device_20631
[latest-24366]2025-01-10T19:55:09.889Z <VeraController:NOTICE> VeraController#vera reload detected!
[latest-24366]2025-01-10T19:55:09.910Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20050: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20050) [-]
[latest-24366]2025-01-10T19:55:09.910Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.910Z <VeraController:CRIT> *Entity#vera>device_20050
[latest-24366]2025-01-10T19:55:09.935Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20570: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20570) [-]
[latest-24366]2025-01-10T19:55:09.936Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.936Z <VeraController:CRIT> *Entity#vera>device_20570
[latest-24366]2025-01-10T19:55:09.937Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20610: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20610) [-]
[latest-24366]2025-01-10T19:55:09.937Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.937Z <VeraController:CRIT> *Entity#vera>device_20610
[latest-24366]2025-01-10T19:55:09.939Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20631: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20631) [-]
[latest-24366]2025-01-10T19:55:09.939Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.939Z <VeraController:CRIT> *Entity#vera>device_20631
[latest-24366]2025-01-10T19:55:09.968Z <Controller:INFO> VeraController#vera 0 dead entities older than 86400000s purged
[latest-24366]2025-01-10T19:55:10.037Z <VeraController:NOTICE> VeraController#vera reload detected!

That repeats until something like this:
[latest-24366]2025-01-10T19:55:10.049Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20050: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20050) [-]
[latest-24366]2025-01-10T19:55:10.049Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:10.049Z <VeraController:CRIT> *Entity#vera>device_20050
[latest-24366]2025-01-10T19:55:10.053Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20570: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20570) [-]
[latest-24366]2025-01-10T19:55:10.053Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:10.053Z <VeraController:CRIT> *Entity#vera>device_20570
[latest-24366]2025-01-10T19:55:10.062Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20610: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20610) [-]
[latest-24366]2025-01-10T19:55:10.062Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:10.062Z <VeraController:CRIT> *Entity#vera>device_20610
[latest-24366]2025-01-10T19:55:10.112Z <VeraController:WARN> VeraController#vera failed to apply attribute scene_activation.scene_id to Entity#vera>device_20631: [TypeError] Can't set NaN on attribute scene_activation.scene_id (vera>device_20631) [-]
[latest-24366]2025-01-10T19:55:10.112Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:10.113Z <VeraController:CRIT> *Entity#vera>device_20631
[latest-24366]2025-01-10T20:00:05.003Z <Engine:INFO> [Engine]Engine#1 master timer tick, local time "10/01/2025 20:00:05" (TZ offset 0 mins from UTC)
[latest-24366]2025-01-10T20:13:51.872Z <Rule:INFO> No motion in Cinema (rule-m4ocglke in Cinema Environment) evaluated; rule state transition from SET to RESET!
[latest-24366]2025-01-10T20:13:51.882Z <Rule:INFO> Cinema Heater On (rule-m4ocf1di in Cinema Environment) evaluated; rule state transition from RESET to SET!
[latest-24366]2025-01-10T20:13:51.888Z <Engine:INFO> Enqueueing "Cinema Heater On<SET>" (rule-m4ocf1di:S)

And the errors / reloads just stop.
From openLuup:
2025-01-10 19:49:56.379 luup_log:63: BroadLink_Mk2 debug: RM3 Mini - IR 1: urn:schemas-micasaverde-com:device:IrTransmitter:1
2025-01-10 19:50:00.085 luup_log:0: 14Mb, 1.7%cpu, 36.1days
2025-01-10 19:50:07.591 luup.variable_set:: 20160.urn:micasaverde-com:serviceId:EnergyMetering1.KWH was: 18.6793008 now: 18.6805008 #hooks:0
2025-01-10 19:50:07.591 luup.variable_set:: 20160.urn:micasaverde-com:serviceId:EnergyMetering1.KWHReading was: 1736538000 now: 1736538600 #hooks:0
2025-01-10 19:50:07.591 luup.variable_set:: 20160.urn:micasaverde-com:serviceId:EnergyMetering1.Watts was: 7.4 now: 7.3 #hooks:0
2025-01-10 19:50:07.591 luup.variable_set:: 20170.urn:micasaverde-com:serviceId:EnergyMetering1.KWH was: 32.2417984 now: 32.2470016 #hooks:0
2025-01-10 19:50:07.591 luup.variable_set:: 20170.urn:micasaverde-com:serviceId:EnergyMetering1.KWHReading was: 1736538000 now: 1736538600 #hooks:0
2025-01-10 19:50:07.591 luup.variable_set:: 20330.urn:micasaverde-com:serviceId:EnergyMetering1.KWHReading was: 1736538000 now: 1736538600 #hooks:0
2025-01-10 19:50:07.592 luup.variable_set:: 20770.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 0 now: 1 #hooks:0
2025-01-10 19:50:07.592 luup.variable_set:: 20770.urn:micasaverde-com:serviceId:SecuritySensor1.LastTrip was: 1736534850 now: 1736538607 #hooks:0
2025-01-10 19:50:07.593 openLuup.server:: request completed (3392 bytes, 1 chunks, 12875 ms) tcp{client}: 0x55c3299a9cf8
2025-01-10 19:50:07.618 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c3299a9cf8
2025-01-10 19:50:07.624 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329d0a5b8
2025-01-10 19:50:07.624 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912906&MinimumDelay=50&output_format=json&_r=1736538607623 HTTP/1.1 tcp{client}: 0x55c329d0a5b8
2025-01-10 19:50:07.632 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c3292ed678
2025-01-10 19:50:07.633 openLuup.server:: GET /data_request?newTargetValue=1&DeviceNum=20060&id=action&serviceId=urn%3Aupnp-org%3AserviceId%3ASwitchPower1&action=SetTarget&output_format=json&_r=1736538607631 HTTP/1.1 tcp{client}: 0x55c3292ed678
2025-01-10 19:50:07.633 luup.call_action:: 20060.urn:upnp-org:serviceId:SwitchPower1.SetTarget
2025-01-10 19:50:07.633 luup.call_action:: action will be handled by parent: 37
2025-01-10 19:50:07.633 luup.variable_set:: 20060.urn:upnp-org:serviceId:SwitchPower1.Target was: 0 now: 1 #hooks:0
2025-01-10 19:50:07.669 openLuup.server:: request completed (35 bytes, 1 chunks, 35 ms) tcp{client}: 0x55c3292ed678
2025-01-10 19:50:07.673 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c3292ed678
2025-01-10 19:50:07.776 openLuup.server:: request completed (821 bytes, 1 chunks, 151 ms) tcp{client}: 0x55c329d0a5b8
2025-01-10 19:50:07.784 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329d0a5b8
2025-01-10 19:50:07.795 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c3287bc8f8
2025-01-10 19:50:07.796 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912907&MinimumDelay=50&output_format=json&_r=1736538607794 HTTP/1.1 tcp{client}: 0x55c3287bc8f8
2025-01-10 19:50:08.644 luup.variable_set:: 20060.urn:upnp-org:serviceId:SwitchPower1.Status was: 0 now: 1 #hooks:0
2025-01-10 19:50:08.950 openLuup.server:: request completed (821 bytes, 1 chunks, 1154 ms) tcp{client}: 0x55c3287bc8f8
2025-01-10 19:50:08.958 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c3287bc8f8
2025-01-10 19:50:08.969 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c3297e95a8
2025-01-10 19:50:08.970 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912908&MinimumDelay=50&output_format=json&_r=1736538608969 HTTP/1.1 tcp{client}: 0x55c3297e95a8
2025-01-10 19:50:19.181 luup.variable_set:: 20770.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 1 now: 0 #hooks:0
2025-01-10 19:50:19.585 openLuup.server:: request completed (832 bytes, 1 chunks, 10615 ms) tcp{client}: 0x55c3297e95a8
2025-01-10 19:50:19.602 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c3297e95a8
2025-01-10 19:50:19.605 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328d298a8
2025-01-10 19:50:19.605 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912909&MinimumDelay=50&output_format=json&_r=1736538619604 HTTP/1.1 tcp{client}: 0x55c328d298a8
2025-01-10 19:50:34.950 openLuup.server:: request completed (593 bytes, 1 chunks, 15344 ms) tcp{client}: 0x55c328d298a8
2025-01-10 19:50:34.953 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c328d298a8
2025-01-10 19:50:34.965 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328c48a58
2025-01-10 19:50:34.966 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912909&MinimumDelay=50&output_format=json&_r=1736538634964 HTTP/1.1 tcp{client}: 0x55c328c48a58
2025-01-10 19:50:34.989 luup.variable_set:: 25019.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 0 now: 1 #hooks:0
2025-01-10 19:50:34.990 luup.variable_set:: 25019.urn:micasaverde-com:serviceId:SecuritySensor1.LastTrip was: 1736534437 now: 1736538634 #hooks:0
2025-01-10 19:50:35.094 openLuup.server:: request completed (975 bytes, 1 chunks, 127 ms) tcp{client}: 0x55c328c48a58
2025-01-10 19:50:35.101 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c328c48a58
2025-01-10 19:50:35.113 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c32985e298
2025-01-10 19:50:35.113 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912911&MinimumDelay=50&output_format=json&_r=1736538635111 HTTP/1.1 tcp{client}: 0x55c32985e298
2025-01-10 19:50:40.255 luup.variable_set:: 25021.urn:micasaverde-com:serviceId:LightSensor1.CurrentLevel was: 0 now: 30 #hooks:1
2025-01-10 19:50:40.256 scheduler.watch_callback:: 25021.urn:micasaverde-com:serviceId:LightSensor1.CurrentLevel called [20]DataWatcherCallback() function: 0x55c3288a8d20
2025-01-10 19:50:40.460 openLuup.server:: request completed (835 bytes, 1 chunks, 5346 ms) tcp{client}: 0x55c32985e298
2025-01-10 19:50:40.472 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c32985e298
2025-01-10 19:50:40.478 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329b28238
2025-01-10 19:50:40.479 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912912&MinimumDelay=50&output_format=json&_r=1736538640478 HTTP/1.1 tcp{client}: 0x55c329b28238
2025-01-10 19:50:44.471 luup.variable_set:: 25007.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 0 now: 1 #hooks:1
2025-01-10 19:50:44.472 luup.variable_set:: 25007.urn:micasaverde-com:serviceId:SecuritySensor1.LastTrip was: 1736538400 now: 1736538644 #hooks:0
2025-01-10 19:50:44.472 scheduler.watch_callback:: 25007.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped called [20]DataWatcherCallback() function: 0x55c3288a8d20
2025-01-10 19:50:44.775 openLuup.server:: request completed (975 bytes, 1 chunks, 4296 ms) tcp{client}: 0x55c329b28238
2025-01-10 19:50:44.782 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329b28238
2025-01-10 19:50:44.793 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328f1e968
2025-01-10 19:50:44.793 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912914&MinimumDelay=50&output_format=json&_r=1736538644791 HTTP/1.1 tcp{client}: 0x55c328f1e968
2025-01-10 19:51:00.122 openLuup.server:: request completed (593 bytes, 1 chunks, 15328 ms) tcp{client}: 0x55c328f1e968
2025-01-10 19:51:00.125 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c328f1e968
2025-01-10 19:51:00.136 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c32995b318
2025-01-10 19:51:00.136 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912914&MinimumDelay=50&output_format=json&_r=1736538660134 HTTP/1.1 tcp{client}: 0x55c32995b318
2025-01-10 19:51:15.481 openLuup.server:: request completed (593 bytes, 1 chunks, 15344 ms) tcp{client}: 0x55c32995b318
2025-01-10 19:51:15.484 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c32995b318
2025-01-10 19:51:15.495 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c32998b068
2025-01-10 19:51:15.497 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912914&MinimumDelay=50&output_format=json&_r=1736538675493 HTTP/1.1 tcp{client}: 0x55c32998b068
2025-01-10 19:51:30.869 openLuup.server:: request completed (593 bytes, 1 chunks, 15371 ms) tcp{client}: 0x55c32998b068
2025-01-10 19:51:30.872 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c32998b068
2025-01-10 19:51:30.884 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c32905bda8
2025-01-10 19:51:30.885 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912914&MinimumDelay=50&output_format=json&_r=1736538690882 HTTP/1.1 tcp{client}: 0x55c32905bda8
2025-01-10 19:51:32.886 luup.variable_set:: 20380.urn:upnp-org:serviceId:TemperatureSensor1.CurrentTemperature was: 21 now: 22 #hooks:0
2025-01-10 19:51:33.090 openLuup.server:: request completed (841 bytes, 1 chunks, 2205 ms) tcp{client}: 0x55c32905bda8
2025-01-10 19:51:33.100 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c32905bda8
2025-01-10 19:51:33.112 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328de0d58
2025-01-10 19:51:33.112 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912915&MinimumDelay=50&output_format=json&_r=1736538693111 HTTP/1.1 tcp{client}: 0x55c328de0d58
2025-01-10 19:51:36.064 luup.variable_set:: 25007.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 1 now: 0 #hooks:1
2025-01-10 19:51:36.065 scheduler.watch_callback:: 25007.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped called [20]DataWatcherCallback() function: 0x55c3288a8d20
2025-01-10 19:51:36.369 openLuup.server:: request completed (832 bytes, 1 chunks, 3256 ms) tcp{client}: 0x55c328de0d58
2025-01-10 19:51:36.377 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c328de0d58
2025-01-10 19:51:36.387 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329054188
2025-01-10 19:51:36.388 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912916&MinimumDelay=50&output_format=json&_r=1736538696386 HTTP/1.1 tcp{client}: 0x55c329054188
2025-01-10 19:51:37.134 luup.variable_set:: 20380.urn:upnp-org:serviceId:TemperatureSensor1.CurrentTemperature was: 22 now: 21 #hooks:0
2025-01-10 19:51:37.540 openLuup.server:: request completed (841 bytes, 1 chunks, 1152 ms) tcp{client}: 0x55c329054188
2025-01-10 19:51:37.553 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329054188
2025-01-10 19:51:37.566 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328d97568
2025-01-10 19:51:37.566 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912917&MinimumDelay=50&output_format=json&_r=1736538697564 HTTP/1.1 tcp{client}: 0x55c328d97568
2025-01-10 19:51:41.367 luup.variable_set:: 20380.urn:upnp-org:serviceId:TemperatureSensor1.CurrentTemperature was: 21 now: 22 #hooks:0
2025-01-10 19:51:41.874 openLuup.server:: request completed (841 bytes, 1 chunks, 4307 ms) tcp{client}: 0x55c328d97568
2025-01-10 19:51:41.884 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c328d97568
2025-01-10 19:51:41.895 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329385678
2025-01-10 19:51:41.896 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912918&MinimumDelay=50&output_format=json&_r=1736538701894 HTTP/1.1 tcp{client}: 0x55c329385678
2025-01-10 19:51:57.168 openLuup.server:: request completed (593 bytes, 1 chunks, 15272 ms) tcp{client}: 0x55c329385678
2025-01-10 19:51:57.171 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329385678
2025-01-10 19:51:57.183 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329b092b8
2025-01-10 19:51:57.184 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912918&MinimumDelay=50&output_format=json&_r=1736538717182 HTTP/1.1 tcp{client}: 0x55c329b092b8
2025-01-10 19:52:00.124 luup_log:0: 14Mb, 1.6%cpu, 36.1days
2025-01-10 19:52:00.476 openLuup.server:: request completed (1841 bytes, 1 chunks, 3292 ms) tcp{client}: 0x55c329b092b8
2025-01-10 19:52:00.483 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329b092b8
2025-01-10 19:52:00.495 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c3297be088
2025-01-10 19:52:00.495 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912929&MinimumDelay=50&output_format=json&_r=1736538720494 HTTP/1.1 tcp{client}: 0x55c3297be088
2025-01-10 19:52:09.867 luup.variable_set:: 25021.urn:micasaverde-com:serviceId:LightSensor1.CurrentLevel was: 30 now: 0 #hooks:1
2025-01-10 19:52:09.868 scheduler.watch_callback:: 25021.urn:micasaverde-com:serviceId:LightSensor1.CurrentLevel called [20]DataWatcherCallback() function: 0x55c3288a8d20
2025-01-10 19:52:10.071 openLuup.server:: request completed (834 bytes, 1 chunks, 9575 ms) tcp{client}: 0x55c3297be088
2025-01-10 19:52:10.079 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c3297be088
2025-01-10 19:52:10.088 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c329c16a08
2025-01-10 19:52:10.089 openLuup.server:: GET /data_request?id=status&Timeout=15&DataVersion=416912930&MinimumDelay=50&output_format=json&_r=1736538730087 HTTP/1.1 tcp{client}: 0x55c329c16a08
2025-01-10 19:52:16.194 luup.variable_set:: 20770.urn:micasaverde-com:serviceId:SecuritySensor1.Tripped was: 0 now: 1 #hooks:0
2025-01-10 19:52:16.195 luup.variable_set:: 20770.urn:micasaverde-com:serviceId:SecuritySensor1.LastTrip was: 1736538607 now: 1736538736 #hooks:0
2025-01-10 19:52:16.498 openLuup.server:: request completed (976 bytes, 1 chunks, 6409 ms) tcp{client}: 0x55c329c16a08
2025-01-10 19:52:16.515 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x55c329c16a08
2025-01-10 19:52:16.516 openLuup.io.server:: HTTP:3480 connection from 192.168.70.249 tcp{client}: 0x55c328dbad18

Nothing I can see indicating that openLuup is reloading?
The .249 IP address is the internal IP of the NUC that hosts both openLuup and MSR.
Any thoughts as to how I can troubleshoot this? It's not a big deal, but I'd like to get to the bottom of it.
I should add that all the devices listed in entries like this:
[latest-24366]2025-01-10T19:55:09.744Z <VeraController:INFO> VeraController#vera class scene_controller meta [Object]{ "source": "urn:micasaverde-com:serviceId:SceneController1/sl_SceneActivated", "expr": "int(value)" } orig final NaN
[latest-24366]2025-01-10T19:55:09.744Z <VeraController:CRIT> *Entity#vera>device_20631

...are the tamper switches on Fibaro FGMS001 multifunction detectors, of which I have four, and they correspond exactly to the devices listed.
TIA
C
Hi
I was looking at an old rule and wanted to edit it to add another Constraint; however, I cannot seem to do it.
In this screenshot you can see an existing entry in the Constraints, and in its pull-down menu the "Changes" option is available.
[screenshot: existing Constraint with the "Changes" option in its pull-down menu]
However, the new line I just added has no "Changes" option in its pull-down menu.
[screenshot: new Constraint line without the "Changes" option]
Here is the original, now-locked post about this topic:
https://smarthome.community/topic/395/contact-sensor-opened-1-minute-ago-how?_=1736354690742
If you look at the old screenshots in that post, I was using the "changes" operator, like this:
[screenshot: rule using the "changes" operator]
However, today when I edited this rule, the operators show as == and not as "changes" on all the entries in the Constraints area.
Also, the old entries now show -- as the operator and the value is blank. But the new line I just added says that is not valid, so I'm not sure how the old lines can be like that.
[screenshot: Constraint rows showing -- as the operator with a blank value]
So I am a bit confused what happened.
Thanks
@toggledbits I understand that you do not perform testing on Mac computers but thought I'd share the following with you in case something can be done.
I started seeing these errors with version 24302. I thought that upgrading to 24343 would have fixed the issue, but unfortunately not. I either have to close the browser or clear the cache for the errors to stop popping up, but they slowly come back.
I see these errors on the following browsers:
- Safari 16.6.1 on macOS Big Sur
- Safari 18.1.1 on macOS Sonoma
- DuckDuckGo 1.118.0 on macOS Big Sur and Sonoma
- Firefox 133.0.3 on macOS Big Sur
- Chrome 131.0.6778 on macOS Big Sur

Here are the errors.
Safari while creating/updating an expression
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:543:91
makeExprMenu@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:537:28
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:92:64
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:89:68
each@http://192.168.0.13:8111/node_modules/jquery/dist/jquery.min.js:2:3133
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:89:35
@http://192.168.0.13:8111/client/MessageBus.js:98:44
forEach@[native code]
@http://192.168.0.13:8111/client/MessageBus.js:95:54
@http://192.168.0.13:8111/client/MessageBus.js:106:44
@http://192.168.0.13:8111/client/Observable.js:78:28
signalModified@http://192.168.0.13:8111/reactor/en-ca/lib/js/ee.js:146:21
signalModified@http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:40:29
reindexExpressions@http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:71:32
@http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:608:40
dispatch@http://192.168.0.13:8111/node_modules/jquery/dist/jquery.min.js:2:40040

DuckDuckGo while clicking on status
http://192.168.0.13:8111/reactor/en-ca/lib/js/reactor-ui-status.js:789:44
asyncFunctionResume@[native code]
saveGridLayout@[native code]
dispatchEvent@[native code]
_triggerEvent@http://192.168.0.13:8111/node_modules/gridstack/dist/gridstack.js:1401:30
_triggerAddEvent@http://192.168.0.13:8111/node_modules/gridstack/dist/gridstack.js:1383:31
makeWidget@http://192.168.0.13:8111/node_modules/gridstack/dist/gridstack.js:968:30
addWidget@http://192.168.0.13:8111/node_modules/gridstack/dist/gridstack.js:388:24
placeWidgetAdder@http://192.168.0.13:8111/reactor/en-ca/lib/js/reactor-ui-status.js:183:44

Firefox while updating a rule
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:543:91
makeExprMenu@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:537:28
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:92:64
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:89:68
each@http://192.168.0.13:8111/node_modules/jquery/dist/jquery.min.js:2:3133
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reaction-editor.js:89:35
@http://192.168.0.13:8111/client/MessageBus.js:98:44
forEach@[native code]
@http://192.168.0.13:8111/client/MessageBus.js:95:54
@http://192.168.0.13:8111/client/MessageBus.js:106:44
@http://192.168.0.13:8111/client/Observable.js:78:28
notifySaved@http://192.168.0.13:8111/reactor/en-ca/lib/js/ee.js:82:21
notifySaved@http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:47:26
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reactor-ui-rules.js:1460:39
forEach@[native code]
@http://192.168.0.13:8111/reactor/en-ca/lib/js/reactor-ui-rules.js:1459:58

Chrome while creating/updating an expression
TypeError: Cannot read properties of undefined (reading 'getEditor')
    at RuleEditor.makeExprMenu (http://192.168.0.13:8111/reactor/en-ca/lib/js/rule-editor.js:1788:86)
    at Object.handler (http://192.168.0.13:8111/reactor/en-ca/lib/js/rule-editor.js:2174:54)
    at http://192.168.0.13:8111/client/MessageBus.js:98:44
    at Array.forEach (<anonymous>)
    at MessageBus._sendToBus (http://192.168.0.13:8111/client/MessageBus.js:95:54)
    at MessageBus.send (http://192.168.0.13:8111/client/MessageBus.js:106:44)
    at ExpressionEditor.publish (http://192.168.0.13:8111/client/Observable.js:78:28)
    at ExpressionEditor.signalModified (http://192.168.0.13:8111/reactor/en-ca/lib/js/ee.js:146:14)
    at ExpressionEditor.signalModified (http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:40:15)
    at ExpressionEditor.reindexExpressions (http://192.168.0.13:8111/reactor/en-ca/lib/js/expression-editor.js:71:18)

Not sure that it is the same issue, but I just got this on build 24302 when running a reaction for testing purposes. Despite the error message, the reaction ran properly.
Error: Command timeout (195 start_reaction)
at _ClientAPI._commandTimeout (http://192.168.2.163:8111/client/ClientAPI.js:552:136)
[screenshot: error dialog]
Thanks to @toggledbits for adding custom CSS support. I've started doing a darker Reactor style.
Here's the file: https://gist.github.com/dbochicchio/825098ac13b7f8cac22012eae37ff7ce
A couple of things are still too bright; I'll eventually catch up. Just place the file under your /config directory, naming it customstyles.css, then hard-refresh your browser.
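For a Docker install, something like this should fetch it into place (a sketch; the host-side destination is wherever your /config volume lives, and the URL is just the gist's Raw link):

curl -L -o /path/to/your/config/customstyles.css https://gist.github.com/dbochicchio/825098ac13b7f8cac22012eae37ff7ce/raw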
Hi
I'm having to rebuild my Debian Linux box, as the SSD failed, and I have forgotten exactly what I did the first time to get it all set up.
I have Debian 12 up and running on the new SSD; I only have a console, no desktop GUI.
I am trying to do the bare-metal install for MSR. However, I am not sure whether I am meant to install Node.js whilst logged in as the root user or as the non-root user with my name.
I used PuTTY to connect via SSH, logged in as root, and installed Node.js, but I think this was wrong: when I log in as my username and run a node -v command, it says node is not installed (or doesn't show any version number, anyway).
But when logged in as root, a node -v command does show it's installed and displays the version number. Maybe it's a PATH issue for my username, so it can't see that node is installed?
So now I am thinking I should have installed node whilst logged in as my username and not as the root user.
This is how I installed Node.js whilst logged in as root:
[screenshot: Node.js install commands run as root]
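For reference, the usual NodeSource recipe looks like the following (a sketch of the kind of thing in the screenshot, not necessarily exactly what I typed; the version in the URL is an example):

# as root
curl -fsSL https://deb.nodesource.com/setup_18.x | bash -
apt-get install -y nodejs

# then, as the normal user, check what resolves
which node && node -v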
Thanks in advance.
As the title says, here's my OpenAI Controller for Reactor:
GitHub: dbochicchio/reactor-openai (https://github.com/dbochicchio/reactor-openai)
It supports both OpenAI and Azure OpenAI endpoints. You'll need keys/endpoints for whichever service you use.
The controller supports multiple models, and each one can be mapped as an entity.
It's quite easy to use, and responses can be stored in variables for easy access, or sent to another action (text-to-speech, another endpoint, etc.).
[screenshots: OpenAI controller in the Reactor UI]
Have fun with LLMs in your scenes!
In Home Assistant I have an integration where, if I add entities to it, I get the following error in MSR, because certain entity values I'm using in expressions are null for a moment. This is a more or less cosmetic issue and happens very rarely, as I rarely modify that integration on the hass side.
[screenshot: MSR error message]
And the expression is
[screenshot: the expression]
Could I "wrap" hass-entity shown above somewhat differently to prevent this error from happening? Using build 24302.
Hello
I am trying to set up Multi-System Reactor to automate routines across multiple smart home devices & platforms (e.g., Home Assistant, SmartThings, and Hubitat). While I have successfully linked the systems, I am facing issues with:
- Delays in triggering actions on secondary devices.
- Inconsistent execution of complex logic conditions.
- Synchronization of states between devices when one system updates.
Is there a recommended way to optimize performance & confirm seamless state sharing across systems?
I have checked https://smarthome.community/category/22/multi-system-reactor-msbi guide for reference but still need advice.
Any tips on debugging or log analysis to pinpoint where the issue arises would also be appreciated.
Thank you !
I've managed to use the MSR UI on iOS devices to some degree*: although UI elements (e.g. rule sets) are not visible in portrait mode, you could see them in landscape. Now, with recent builds (24302), this no longer works; elements (rule sets, entities) are no longer visible in landscape mode either.
Does anyone have similar experiences? Using iOS 18 and Safari/Chrome browser.
(*Drag & drop of rule conditions has never worked on mobile.)
@toggledbits Since I upgraded ZWaveJSController from 24257 to 24293, I am seeing entries saying "registering action set_volume, but action is not defined by the capability 143" every time I restart Reactor.
The Siren seems to be doing what it is supposed to do. The volume levels are fine. Should I worry about it?
Reactor version 24302
ZWaveJSController version 24293
Z-Wave JS UI version 9.27.4
zwave-js version 14.3.4
I have the following ACL defined:
groups:
  admin:
    users:
      - admin
    applications: true

api_acls:
  # This ACL allows users in the "admin" group to access the API
  - url: "/api"
    group: admin
    allow: true
    log: true
  # This ACL allows anyone/thing to access the /api/v1/alive API endpoint
  - url: "/api/v1/alive"
    allow: true

And I have authenticated to MSR as the "admin" user. However, I'm getting "access denied" when trying to access http://*******:8111/api/v1/log
So what am I missing? Is my ACL incorrectly defined?
Using build 24302 on Docker.
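In case it helps to reproduce, a quick illustration from the command line (the hostname is a placeholder; in the browser I'm logged in as "admin", and the same URLs behave as described):

curl -i http://msr-host:8111/api/v1/alive   # allowed by the second ACL
curl -i http://msr-host:8111/api/v1/log     # "access denied"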
[Solved] latest-22328 restart fails
-
Solution: an update to openLuup's MQTT implementation, handling acknowledgement packets for QoS > 0, solved this issue.
EDIT: Seems related to any restart, without any configuration changes. If I revert to 22310 I can restart Reactor from the UI and with systemd within seconds, but with 22328 it fails to restart, both from the UI and systemd. I have to stop the service and then start it again.
I tested commenting out my http (not https) baseurl in the config on my bare-metal Ubuntu install of latest-22328 and triggered a restart from the UI, but Reactor would not start after that.
If I uncomment the key and restart the service, Reactor comes back to life.
Is my setup an exceptional circumstance, or is this only applicable on new installs?
-
It's working for me, and I've done some fresh installs of 22328 while testing alternatives to Raspberry Pi (a couple of promising boards so far). I hate to say it, but your post is right on the line of "I tried X and it didn't work for me," so without more detail, I can't really guide you.
-
@toggledbits said in latest-22328 and baseurl [EDIT]: restarts fail:
It's working for me
This is at first all I wanted to know, if anyone else was having the same issue or it's just my setup.
Sorry for the lack of details; I did not know what else to provide, as the log is dead silent after shutting down...
I will debug further.
-
Looks like it's related to openLuup's MQTT server. The shutdown process hangs after sending the LWT to openLuup (MQTTController#mqtt in the log).
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.775Z <app:NOTICE> Closing Structure...
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.776Z <Structure:INFO> Structure#1 Stopping controllers...
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.776Z <Controller:NOTICE> VeraController#vera stopping
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.778Z <Controller:ERR> Controller VeraController#vera is off-line!
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.799Z <EzloController:NOTICE> EzloController#ezlo stopping
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.800Z <wsapi:WARN> client close from unknown connection? "192.168.1.2#5"
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.800Z <wsapi:WARN> client close from unknown connection? "192.168.1.238#4"
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.800Z <wsapi:WARN> client close from unknown connection? "192.168.1.238#3"
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.800Z <wsapi:WARN> client close from unknown connection? "192.168.1.238#2"
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.800Z <wsapi:WARN> client close from unknown connection? "192.168.1.2#1"
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.802Z <EzloController:NOTICE> EzloController#ezlo connection closed: 1000 closing
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.804Z <Controller:ERR> Controller EzloController#ezlo is off-line!
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.804Z <Controller:NOTICE> EzloController#ezlo stopping
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.806Z <DynamicGroupController:null> DynamicGroupController#groups stopping
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.806Z <Controller:NOTICE> DynamicGroupController#groups stopping
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.808Z <Controller:ERR> Controller DynamicGroupController#groups is off-line!
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.809Z <HassController:NOTICE> HassController#hass stopping
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.810Z <HassController:NOTICE> HassController#hass websocket closing, 1000
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.810Z <Controller:NOTICE> HassController#hass stopping
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.811Z <Controller:ERR> Controller HassController#hass is off-line!
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.823Z <Controller:NOTICE> OWMWeatherController#weather stopping
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.824Z <Controller:ERR> Controller OWMWeatherController#weather is off-line!
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.825Z <Controller:NOTICE> SystemController#reactor_system stopping
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.826Z <Controller:ERR> Controller SystemController#reactor_system is off-line!
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.827Z <MQTTController:NOTICE> MQTTController#mosquitto-mqtt stopping, sending LWT
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.870Z <MQTTController:NOTICE> LWT sent; closing broker connection
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.870Z <Controller:NOTICE> MQTTController#mosquitto-mqtt stopping
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.871Z <Controller:ERR> Controller MQTTController#mosquitto-mqtt is off-line!
Nov 26 10:45:58 homebridge node[686832]: [latest-22328]2022-11-26T09:45:58.874Z <MQTTController:NOTICE> MQTTController#mqtt stopping, sending LWT
Nov 26 10:46:03 homebridge node[686832]: [latest-22328]2022-11-26T09:46:03.452Z <httpapi:INFO> HTTP server closed.
If I disable that controller Reactor restarts fine as usual.
@toggledbits Is this the moment openLuup MQTT support ends in Reactor or can I change something to make it work again?
@akbooer I've updated openLuup from 22.9.3 to 22.11.22, but no luck with this issue.
The thing is, it's running great when it finally comes up again after the long wait of a forced restart with systemd, but it doesn't feel right to force-kill the process even though the shutdown process is almost finished.
-
@crille said in latest-22328 restart fails:
Looks like it's related to openLuups MQTT server. The shutdown process hangs after sending LWT to openLuup (MQTTController#mqtt in log).
I think we've seen an issue like that previously. I don't think I see that line in the log you posted?
There may be a problem with retained messages (ie. LWT) and wildcard subscriptions in the openLuup server... I'd have to check.
-
I use a popular package to handle the MQTT broker connection, so I can't see the innards of communications and confirm, but from the rhythm of the log output, it appears that the broker is not sending an ACK to the publish of the LWT; the publish() call appears to be sitting there waiting for it.
-
Does this mean 22328 requires an ACK but 22310 does not? Or has something else changed?
-
22310 still required an ACK, in a sense. It didn't wait for it. But because the ACK never arrived, the task in the mqtt package also never cleared (they don't have a time-out mechanism), so it just stayed in the queue forever... as did every other topic sent with a non-zero QoS. This actually causes a memory leak that could lead to exhaustion and a crash, because the ACK never comes, so the task is never removed from the queue, and those tasks remain and proliferate.

IMO, I think it's fine if @akbooer doesn't truly support QoS levels 1 and 2 in his MQTT implementation just for basic use, but not sending an ACK regardless isn't the right choice, in my view, and it's going to cause problems for a lot of clients that may not be immediately evident (like memory leaks/exhaustion).
-
@toggledbits said in latest-22328 restart fails:
I think it's fine if @akbooer doesn't truly support QoS levels 1 and 2 in his MQTT implementation just for basic use
It only supports QoS 0.
Nevertheless, the protocol should be respected. AFAIK all PUBLISH requests receive an ACK unless the connection goes down in between times.
-
This is the end of parse.PUBLISH():

-- ACKNOWLEDGEMENT
-- The receiver of a PUBLISH Packet MUST respond according to Table 3.4 - Expected Publish Packet
-- response as determined by the QoS in the PUBLISH packet [MQTT-3.3.4-1]
--[[ Table 3.4 - Expected Publish Packet response
     QoS Level   Expected Response
     QoS 0       None
     QoS 1       PUBACK Packet
     QoS 2       PUBREC Packet
--]]
local ack -- None, because we only handle QoS 0
return ack, nil, TopicName, ApplicationMessage, RETAIN
end
Comments to the contrary, it appears it returns ack, which is declared but nil... so... no ACK?
-
@toggledbits does Reactor publish its LWT message with QoS > 0, even though the MQTTController config is at qos: 0? Otherwise the expected response would be none.
-
Yes, it uses QoS 1 (and retain true) because it's a "vital" message. The qos you can set in config is for the echo/entity publish functionality; it does not affect other messages. Still, this only requires that the broker acknowledge its receipt (3.3.4), not any delivery, and does not even enforce that QoS on subscribers (3.8.3).
-
"The receiver of a PUBLISH Packet MUST respond according to Table 3.4 - Expected Publish Packet response as determined by the QoS in the PUBLISH Packet."
So even though the server only supports QoS 0, it's obligated to send a PUBACK for a QoS 1 packet, as described in Table 3.4, correct?
-
Yes, PUBACK for QoS 1, and PUBREC for QoS 2. I don't think that would be a big problem for @akbooer when he gets the time, because all of the information contained in the response can be sourced from the request (i.e. topic, packet identifier, etc.). And otherwise the treatment of the PUBLISH packet can be the same (no further changes beyond sending those ACKs). Not actually having guaranteed delivery behind that is, in my view, an acceptable variance. On the sending side (repeat to subscribers), even though a subscriber may request QoS 1 or 2 for packets from the broker, it still must accept QoS 0 packets (because the requested QoS is a maximum, not an absolute), so everything outbound at QoS 0 isn't likely going to cause problems, especially in this world.
To summarize: if he just provides the PUBACK and PUBREC responses to fix that layer of the protocol, that's good enough. No further actions required above that layer. Not fully compliant (no guaranteed delivery), but at that point, few if any would ever notice.
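For anyone who wants to poke at that layer from the outside, the mosquitto command-line client makes it easy to watch (a sketch; assumes the mosquitto-clients package is installed and the broker listens on the default port 1883, with openLuupIP as a placeholder):

# publish at QoS 1 with debug output; -d prints the outgoing PUBLISH and
# the broker's PUBACK, or shows the client waiting if none ever arrives
mosquitto_pub -h openLuupIP -p 1883 -q 1 -t test/qos1 -m "hello" -d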
The other question I don't have an answer for (or I've forgotten; and haven't dug through code to figure out)... @akbooer, does it support retain? If so, what's the storage mechanism, and is it persistent?
-
Well, I must be reading the MQTT 3.1.1 spec all wrong.
When I read:
"The SUBACK Packet sent by the Server to the Client MUST contain a return code for each Topic Filter/QoS pair. This return code MUST either show the maximum QoS that was granted for that Subscription or indicate that the subscription failed [MQTT-3.8.4-5]. The Server might grant a lower maximum QoS than the subscriber requested. The QoS of Payload Messages sent in response to a Subscription MUST be the minimum of the QoS of the originally published message and the maximum QoS granted by the Server. The server is permitted to send duplicate copies of a message to a subscriber in the case where the original message was published with QoS 1 and the maximum QoS granted was QoS 0 [MQTT-3.8.4-6]."
...I understood it to mean that, since I only ever grant QoS 0, then no message would require a PUBACK or a PUBREC.
I realise that this is my bad for writing my own MQTT broker, but it made sense in the context of openLuup, especially in support of Shelly devices which was the reason I did it in the first place. I apologise if this has led to some difficulties, and I'm starting to look at an MQTT validation suite to check out my implementation further. I also realize that Mosquitto is the de-facto standard, but it turns out that having an internal server confers some significant benefits in terms of the internal openLuup architecture.
However, if anyone can clarify the above QoS response issue further, I'm very happy to comply. If a simple fix is to send PUBACK or PUBREC, then I'll do it, but I want to know the reason why. I do understand that the original CONNECT request contains a LWT QoS, per this paragraph:
3.1.2.6 Will QoS
"Position: bits 4 and 3 of the Connect Flags."
"These two bits specify the QoS level to be used when publishing the Will Message."
"If the Will Flag is set to 0, then the Will QoS MUST be set to 0 (0x00) [MQTT-3.1.2-13]."
"If the Will Flag is set to 1, the value of Will QoS can be 0 (0x00), 1 (0x01), or 2 (0x02). It MUST NOT be 3 (0x03) [MQTT-3.1.2-14]."
...but I had assumed that QoS would be overridden by the actual level established in SUBSCRIBE / SUBACK. However, now that I write that, it seems the LWT may actually have a separate life from standard messages?
-
@toggledbits said in latest-22328 restart fails:
The other question I don't have an answer for (or I've forgotten; and haven't dug through code to figure out)... @akbooer, does it support retain? If so, what's the storage mechanism, and is it persistent?
Yes, it supports retained messages.
Is it persistent? Not across openLuup restarts.
The openLuup console page: openLuupIP:3480/console?page=mqtt shows all current subscriptions and also (at the bottom) retained messages.
-
@akbooer said in latest-22328 restart fails:
...I understood it to mean that, since I only ever grant QoS 0, then no message would require a PUBACK or a PUBREC.
This section is about your response to a SUBSCRIBE (SUBACK) and what QoS you use to PUBLISH. If you only grant QoS 0 on subscribe, then you will never expect a PUBACK or PUBREC, because you never publish anything to a subscriber other than QoS 0.
It in no way limits what a client may publish to the broker, which could include any QoS.
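For what it's worth, the PUBACK itself is trivial to construct: four bytes, with the packet identifier copied from the incoming PUBLISH. A minimal sketch in Lua (illustrative only, not openLuup code):

-- build the 4-byte PUBACK for a received QoS 1 PUBLISH, given the 16-bit
-- packet identifier from that PUBLISH's variable header
local function puback(pid)
    return string.char(0x40, 0x02, math.floor(pid / 256), pid % 256)
end

PUBREC for QoS 2 has the same shape, with fixed header byte 0x50.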