Hi @toggledbits.
After a couple of weeks, I noticed that my Remotec zrc90 isn't working as expected.
Scenes are working in ZWaveJS, but this device has a strange behavior: the scene changes, but then it's set back to null. In Reactor, it remains null:
battery_power.level=0.7 battery_power.since=1725817957361 x_debug.dt={"description":"Scene master 8 button remote","model":"BW8510/ZRC-90US","default_name":"Scene master 8 button remote","manufacturerId":21076,"productType":0,"productId":34064} x_zwave_values.Battery_isLow=false x_zwave_values.Battery_level=70 x_zwave_values.Central_Scene_scene_001=null x_zwave_values.Central_Scene_scene_002=null x_zwave_values.Central_Scene_scene_003=null x_zwave_values.Central_Scene_scene_004=null x_zwave_values.Central_Scene_scene_005=null x_zwave_values.Central_Scene_scene_006=null x_zwave_values.Central_Scene_scene_007=null x_zwave_values.Central_Scene_scene_008=null x_zwave_values.Central_Scene_slowRefresh=null x_zwave_values.Manufacturer_Specific_manufacturerId=21076 x_zwave_values.Manufacturer_Specific_productId=34064 x_zwave_values.Manufacturer_Specific_productType=1 x_zwave_values.Version_firmwareVersions=["1.1","1.1"] x_zwave_values.Version_hardwareVersion=3 x_zwave_values.Version_libraryType=2 x_zwave_values.Version_protocolVersion="4.5" x_zwave_values.Wake_Up_controllerNodeId=1 x_zwave_values.Wake_Up_wakeUpInterval=0 zwave_device.capabilities=[91,114,128,132,134] zwave_device.endpoint=0 zwave_device.failed=false zwave_device.generic_class="Remote Controller" zwave_device.impl_sig="24242:1:22315:1" zwave_device.is_beaming=false zwave_device.is_listening=false zwave_device.is_routing=false zwave_device.is_secure=false zwave_device.manufacturer_info=[21076,1,34064] zwave_device.max_data_rate=null zwave_device.node_id=154 zwave_device.specific_class="Simple Remote Control" zwave_device.status=2 zwave_device.status_text="awake" zwave_device.version_info=[null,"1.1"] zwave_device.wakeup_interval=0
Anything I could look at? Thanks.
Hi, @toggledbits!
I have a question about the execution behavior. See the code below, and I'll explain the situation.
12957c3e-ff06-46c9-929d-b53f936665df-image.png
This is a routine that, at a certain point, sends an instruction (a Shell Command) to shut down the desktop on which the VM hosting Reactor is located.
When this happens, the desktop is turned off; Hubitat then detects via a "ping" that the VM is down, waits 15 seconds, turns off the power to this desktop, and 15 seconds later turns the desktop with the Reactor VM back on.
After restarting the desktop, the VM is loaded and Reactor starts again, but the following problem occurs: I expected that when the rule resumed, the next step would be executed, namely the 900-second delay after the shutdown. Instead, the Shell Command is executed again, it goes into a loop, and the rule does not advance.
To break the loop, I first have to keep the VM from loading, change the desktop password, and then start the VM. In that case, Reactor generates an error when trying to execute the Shell Command because of the invalid password, and then finishes the routine, continuing with the 900-second delay step.
b58b0d4a-d6c1-4fe3-bab7-4222acea9607-image.png
Is my interpretation incorrect that, when it comes back, the routine should continue with the next step that has not yet been executed? Or does Reactor, because of the shutdown command, interpret that it hasn't finished this step and keep trying, and that is the correct reaction?
Thanks for clarifying.
Build 21228 has been released. Docker images available from DockerHub as usual, and bare-metal packages here.
- Home Assistant up to version 2021.8.6 supported; the online version of the manual will now state the current supported versions.
- Fix an error in OWMWeatherController that could cause it to stop updating.
- Unify the approach to entity filtering on all hub interface classes (controllers); this works for device entities only; it may be extended to other entities later.
- Improve error detail in messages for EzloController during auth phase.
- Add isRuleSet() and isRuleEnabled() functions to expressions extensions.
- Implement set action for lock and passage capabilities (makes them more easily scriptable in some cases).
- Fix a place in the UI where 24-hour time was not being displayed.
Hi @toggledbits,
I'm slowly moving my ZWave network from Vera to ZWaveJS. I successfully cloned my ZWave network using a spare Vera Edge (a new post for the community later, when I'm fully back from vacation) and I'm testing a couple of things before moving everything to ZWaveJS.
In the meantime, I have a couple of venetian blinds connected to Fibaro Roller Shutters 2 (FGR222), and I'm using some proprietary ZWave commands to control the tilt position, which right now I'm sending via Vera (with some code from the old place, messing with this):
af7f883c-f49e-419c-a2fe-8669572e3792-image.png
The ZWaveJS values are reported via this:
x_zwave_values.Manufacturer_Proprietary_fibaro_venetianBlindsPosition=0 x_zwave_values.Manufacturer_Proprietary_fibaro_venetianBlindsTilt=0
I hope there's a way to expose a separate device to control the tilt position directly, without doing the mess I'm doing now. Let me know if you need some files. Thanks.
As per @toggledbits request, new topic.
Position and cover commands not working and position/cover attributes are incorrect. Dimming is OK.
cover.state=null dimming.level=1 dimming.step=0.1 energy_sensor.units="kWh" energy_sensor.value=0.41 position.value=null power_sensor.units="W" power_sensor.value=0 power_switch.state=true x_debug.dt={"entity_class":"Cover","match":"deviceClass.generic.key=17;deviceClass.specific.key=6","capabilities":["cover","toggle","position"],"primary_attribute":"cover.state"} x_zwave_values.Meter_reset=null x_zwave_values.Meter_value_65537=0.41 x_zwave_values.Meter_value_66049=0 x_zwave_values.Multilevel_Switch_Down=null x_zwave_values.Multilevel_Switch_Up=null x_zwave_values.Multilevel_Switch_currentValue=99 x_zwave_values.Multilevel_Switch_duration="unknown" x_zwave_values.Multilevel_Switch_restorePrevious=null x_zwave_values.Multilevel_Switch_targetValue=99 x_zwave_values.Notification_Power_Management_Over_current_status=0 x_zwave_values.Notification_System_Hardware_status=0 x_zwave_values.Notification_alarmLevel=null x_zwave_values.Notification_alarmType=null zwave_device.capabilities=[38,50,113] zwave_device.endpoint=1 zwave_device.failed=null zwave_device.impl_sig="24225:1:22315:1" zwave_device.manufacturer_info=null zwave_device.node_id=148 zwave_device.version_info=null
Thanks!
Another one for you, @toggledbits.
I have two water sensors (same device, NAS-WS01Z), but one is reporting leak_detector.state=true even though no alarm is detected (I double-checked in the ZWaveJS UI):
battery_power.level=0.86 battery_power.since=null leak_detector.state=true x_debug.dt={"entity_class":"Notification Sensor","match":"deviceClass.generic.key=7"} x_zwave_values.Battery_isLow=false x_zwave_values.Battery_level=86 x_zwave_values.Binary_Sensor_Water=false x_zwave_values.Configuration_Alarm_Activity_Duration=5 x_zwave_values.Configuration_Alarm_Beep=1 x_zwave_values.Configuration_Alarm_Duration=120 x_zwave_values.Configuration_Alarm_Interval=null x_zwave_values.Configuration_Basic_Set_Level=255 x_zwave_values.Configuration_First_Alarm_Activity_Duration=null x_zwave_values.Configuration_Water_Detection=1 x_zwave_values.Manufacturer_Specific_manufacturerId=600 x_zwave_values.Manufacturer_Specific_productId=4229 x_zwave_values.Manufacturer_Specific_productType=3 x_zwave_values.Notification_Water_Alarm_Sensor_status=null x_zwave_values.Notification_alarmLevel=0 x_zwave_values.Notification_alarmType=0 x_zwave_values.Version_firmwareVersions=null x_zwave_values.Version_hardwareVersion=null x_zwave_values.Version_libraryType=null x_zwave_values.Version_protocolVersion=null x_zwave_values.Wake_Up_controllerNodeId=1 x_zwave_values.Wake_Up_wakeUpInterval=43200 zwave_device.capabilities=[48,112,113,114,128,132,134] zwave_device.endpoint=0 zwave_device.failed=false zwave_device.generic_class="Notification Sensor" zwave_device.impl_sig="24225:1:22315:1" zwave_device.is_beaming=false zwave_device.is_listening=false zwave_device.is_routing=true zwave_device.is_secure=false zwave_device.last_wakeup=1724143899220 zwave_device.manufacturer_info=[600,3,4229] zwave_device.max_data_rate=null zwave_device.node_id=114 zwave_device.specific_class="Notification Sensor" zwave_device.status=1 zwave_device.status_text="asleep" zwave_device.version_info=[null,null] zwave_device.wakeup_interval=43200
Here's the other one, which correctly reports the leak status:
battery_power.level=1 battery_power.since=null leak_detector.state=false x_debug.dt={"entity_class":"Notification Sensor","match":"deviceClass.generic.key=7"} x_zwave_values.Battery_isLow=false x_zwave_values.Battery_level=100 x_zwave_values.Binary_Sensor_Water=false x_zwave_values.Configuration_Alarm_Activity_Duration=5 x_zwave_values.Configuration_Alarm_Beep=1 x_zwave_values.Configuration_Alarm_Duration=120 x_zwave_values.Configuration_Alarm_Interval=1 x_zwave_values.Configuration_Basic_Set_Level=255 x_zwave_values.Configuration_First_Alarm_Activity_Duration=60 x_zwave_values.Configuration_Water_Detection=1 x_zwave_values.Manufacturer_Specific_manufacturerId=600 x_zwave_values.Manufacturer_Specific_productId=4229 x_zwave_values.Manufacturer_Specific_productType=3 x_zwave_values.Notification_Water_Alarm_Sensor_status=0 x_zwave_values.Notification_alarmLevel=null x_zwave_values.Notification_alarmType=null x_zwave_values.Version_firmwareVersions=["2.54"] x_zwave_values.Version_hardwareVersion=48 x_zwave_values.Version_libraryType=6 x_zwave_values.Version_protocolVersion="4.5" x_zwave_values.Wake_Up_controllerNodeId=1 x_zwave_values.Wake_Up_wakeUpInterval=43200 zwave_device.capabilities=[48,112,113,114,128,132,134] zwave_device.endpoint=0 zwave_device.failed=false zwave_device.generic_class="Notification Sensor" zwave_device.impl_sig="24225:1:22315:1" zwave_device.is_beaming=false zwave_device.is_listening=false zwave_device.is_routing=true zwave_device.is_secure=false zwave_device.last_wakeup=1724105239533 zwave_device.manufacturer_info=[600,3,4229] zwave_device.max_data_rate=null zwave_device.node_id=113 zwave_device.specific_class="Notification Sensor" zwave_device.status=1 zwave_device.status_text="asleep" zwave_device.version_info=[null,"2.54"] zwave_device.wakeup_interval=43200
Also, both seem to have no primary value. Thanks.
Good morning,
I have an MQTT service that needs a restart occasionally. The add-on (Smartbed MQTT) is for the smart base of my bed. It has a "safety light" that I can control from HAAS & MSR as a light entity, and it also moves the head of the bed to a preset at bedtime, then lays it back flat in the morning. The problem is that, from time to time, the light becomes "unavailable". Restarting the add-on from the Add-ons tab in HAAS always fixes it, and I should be able to detect when it happens: "light.tempur_pedic_safety_lights" is neither true nor false, i.e., unavailable.
What I don't know how to do is how to restart that service. Does anybody have experience in restarting add-ons from MSR?
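In case it helps, Supervisor add-ons can be restarted through Home Assistant's own service API, which could be wired to an MSR Shell Command reaction when the entity goes unavailable. A minimal sketch, assuming a long-lived access token and with the add-on slug shown as a placeholder (check the add-on's page in HAAS for the real one):
# Sketch: ask Home Assistant to restart the add-on via the hassio.addon_restart service.
# The token, host, and add-on slug below are placeholders/assumptions.
curl -X POST \
  -H "Authorization: Bearer YOUR_LONG_LIVED_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"addon": "xxxxxxxx_smartbed_mqtt"}' \
  http://homeassistant.local:8123/api/services/hassio/addon_restart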
Running:
Reactor (Multi-hub) latest-24212-3ce15e25
ZWaveJSController [0.1.24232]
HAAS:
RPi5-64 (8GB)
Core 2024.7.3
Supervisor 2024.08.0
Operating System 13.0
Frontend 20240710.0
Hi-
I have an android media player entity publishing from HA. I watch for changes in transport state and media title to trigger some actions.
Though those attributes report as expected, the rule is being throttled for possible flapping.
There is an attribute for media position that continually updates; I suspect it is causing the evaluations to run constantly.
The workaround I am seeking is to ignore those attributes in HA or MSR. Anyone know how, or have a better idea??
Thx
Btw, this problem has spanned versions of HA and Reactor, but I am current on both. For transparency, perhaps too current on HA, but the issue has survived several updates.
Referencing an expression inside a reaction is in the form of ${{ expression }}. When referenced inside my shell command to set the watering delay duration for my Rachio sprinkler system, it just does not work.
If I enter "86400" instead of referencing the expression lWateringDelayDuration, it works. Either I am doing something wrong or referencing an expression inside a shell command is not supported.
Reactor version: 24212
Local Expression
lWateringDelayDuration =
Setting Reaction using Shell command
curl -X PUT -H "Content-Type: application/json" -H "Authorization: Bearer xxxxxxxxxx" -d '{ "id" : "xxxxxxxxxx", "duration" : ${{ lWateringDelayDuration }} }' https://api.rach.io/1/public/device/rain_delay
Thanks in advance
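A quick way to check whether the substitution is happening at all might be to echo the body from the same Shell Command reaction first; a sketch, reusing the placeholders from the post above:
# Sketch: print the JSON body so the substituted value shows up in the reaction output/log.
# If the literal ${{ ... }} text appears instead of a number, the substitution isn't being applied;
# if the number appears, the problem is in the curl call itself (e.g. quoting) rather than the reference.
echo '{ "id" : "xxxxxxxxxx", "duration" : ${{ lWateringDelayDuration }} }'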
As per @toggledbits request, here's a new topic.
My Fibaro Door Window Sensor 2 (FGDW002) is always reporting as open, even if
x_zwave_values.Notification_Access_Control_Door_state=23 x_zwave_values.Notification_Access_Control_Door_state_simple=23
which means that the door is closed. It was working before and I could downgrade to test, if necessary. Thanks.
Hi @toggledbits,
I'm not sure if it's a bug or something, but I have a lot of Fibaro Double Switches (FGS223), as follows.
In the example, it's zwavejs>65-2:
energy_sensor.units="kWh" energy_sensor.value=0.21 power_sensor.units="W" power_sensor.value=0 power_switch.state=false x_debug.dt={"entity_class":"Switch","match":"deviceClass.generic.key=16","capabilities":["power_switch","toggle"],"primary_attribute":"power_switch.state"} x_zwave_values.Binary_Switch_currentValue=false x_zwave_values.Binary_Switch_targetValue=false x_zwave_values.Meter_reset=null x_zwave_values.Meter_value_65537=0.21 x_zwave_values.Meter_value_66049=0 zwave_device.capabilities=[37,50] zwave_device.endpoint=2 zwave_device.failed=null zwave_device.impl_sig="23326:1:22315:1" zwave_device.manufacturer_info=null zwave_device.node_id=65 zwave_device.version_info=null
When operating endpoint 2, endpoint 1 is triggered instead; endpoint 1 itself is fine. This is causing a lot of trouble, as you may imagine.
Also, endpoint 0 is not really a switch, and the associated actions are not doing anything at all. Maybe these could be removed. Also, I see battery_maintenance and power_source capabilities, all with null values.
battery_maintenance.charging=null battery_maintenance.rechargeable=false battery_maintenance.replace=false battery_maintenance.state=null heat_detector.state=false power_source.source=null power_switch.state=null x_debug.dt={"entity_class":"Switch","match":"deviceClass.generic.key=16","capabilities":["power_switch","toggle"],"primary_attribute":"power_switch.state","description":"Double Switch 2","model":"FGS223","default_name":"Double Switch 2","manufacturerId":271,"productType":515,"productId":4096} x_zwave_values.Central_Scene_scene_001=null x_zwave_values.Central_Scene_scene_002=null x_zwave_values.Central_Scene_slowRefresh=null x_zwave_values.Configuration_First_Channel_Energy_Reports_Threshold=100 x_zwave_values.Configuration_First_Channel_Operating_Mode=0 x_zwave_values.Configuration_First_Channel_Power_Reports_Minimum_Time_Between_Reports=10 x_zwave_values.Configuration_First_Channel_Power_Reports_Threshold=20 x_zwave_values.Configuration_First_Channel_Pulse_Time_for_Blink_Mode=5 x_zwave_values.Configuration_First_Channel_Reaction_to_Key_S1_for_Delay_Auto_ON_OFF_Modes=0 x_zwave_values.Configuration_First_Channel_Time_Parameter_for_Delay_Auto_ON_OFF_Modes=50 x_zwave_values.Configuration_General_Purpose_Alarm_Response=3 x_zwave_values.Configuration_Include_Consumption_By_Device_Itself_in_Reports=0 x_zwave_values.Configuration_Input_Button_Switch_Configuration=2 x_zwave_values.Configuration_Key_S1_Associations_Double_Click_Value_Sent=99 x_zwave_values.Configuration_Key_S1_Associations_Send_OFF_With_Single_Click_2=0 x_zwave_values.Configuration_Key_S1_Associations_Send_ON_With_Single_Click_1=0 x_zwave_values.Configuration_Key_S1_Associations_Send_When_Double_Clicking_8=0 x_zwave_values.Configuration_Key_S1_Associations_Send_When_Holding_and_Releasing_4=0 x_zwave_values.Configuration_Key_S1_Associations_Switch_OFF_Value_Sent=0 x_zwave_values.Configuration_Key_S1_Associations_Switch_ON_Value_Sent=255 x_zwave_values.Configuration_Key_S1_Send_Scenes_When_Held_Down_and_Released_8=1 x_zwave_values.Configuration_Key_S1_Send_Scenes_When_Pressed_1_Time_1=1 x_zwave_values.Configuration_Key_S1_Send_Scenes_When_Pressed_2_Times_2=1 x_zwave_values.Configuration_Key_S1_Send_Scenes_When_Pressed_3_Times_4=1 x_zwave_values.Configuration_Key_S2_Associations_Double_Click_Value_Sent=99 x_zwave_values.Configuration_Key_S2_Associations_Send_OFF_With_Single_Click_2=0 x_zwave_values.Configuration_Key_S2_Associations_Send_ON_With_Single_Click_1=0 x_zwave_values.Configuration_Key_S2_Associations_Send_When_Double_Clicking_8=0 x_zwave_values.Configuration_Key_S2_Associations_Send_When_Holding_and_Releasing_4=0 x_zwave_values.Configuration_Key_S2_Associations_Switch_OFF_Value_Sent=0 x_zwave_values.Configuration_Key_S2_Associations_Switch_ON_Value_Sent=255 x_zwave_values.Configuration_Key_S2_Send_Scenes_When_Held_Down_and_Released_8=1 x_zwave_values.Configuration_Key_S2_Send_Scenes_When_Pressed_1_Time_1=1 x_zwave_values.Configuration_Key_S2_Send_Scenes_When_Pressed_2_Times_2=1 x_zwave_values.Configuration_Key_S2_Send_Scenes_When_Pressed_3_Times_4=1 x_zwave_values.Configuration_Periodic_Active_Power_Reports=3600 x_zwave_values.Configuration_Periodic_Energy_Reports=3600 x_zwave_values.Configuration_Report_During_Blink_Mode=0 x_zwave_values.Configuration_Second_Channel_Energy_Reports_Threshold=100 x_zwave_values.Configuration_Second_Channel_Operating_Mode=0 x_zwave_values.Configuration_Second_Channel_Power_Reports_Minimum_Time_Between_Reports=10 x_zwave_values.Configuration_Second_Channel_Power_Reports_Threshold=20 
x_zwave_values.Configuration_Second_Channel_Pulse_Time_for_Blink_Mode=5 x_zwave_values.Configuration_Second_Channel_Reaction_to_Key_S2_for_Delay_Auto_ON_OFF_Modes=0 x_zwave_values.Configuration_Second_Channel_Time_Parameter_for_Delay_Auto_ON_OFF_Modes=50 x_zwave_values.Configuration_Send_Secure_Commands_to_2nd_Association_Group_1=1 x_zwave_values.Configuration_Send_Secure_Commands_to_3rd_Association_Group_2=1 x_zwave_values.Configuration_Send_Secure_Commands_to_4th_Association_Group_4=1 x_zwave_values.Configuration_Send_Secure_Commands_to_5th_Association_Group_8=1 x_zwave_values.Configuration_Smoke_CO_or_CO2_Alarm_Response=3 x_zwave_values.Configuration_State_After_Power_Failure=1 x_zwave_values.Configuration_Temperature_Alarm_Response=1 x_zwave_values.Configuration_Time_of_Alarm_State=600 x_zwave_values.Configuration_Water_Flood_Alarm_Response=2 x_zwave_values.Manufacturer_Specific_manufacturerId=271 x_zwave_values.Manufacturer_Specific_productId=4096 x_zwave_values.Manufacturer_Specific_productType=515 x_zwave_values.Notification_Heat_Alarm_Heat_sensor_status=0 x_zwave_values.Notification_Power_Management_Over_current_status=0 x_zwave_values.Notification_alarmLevel=null x_zwave_values.Notification_alarmType=null x_zwave_values.Protection_exclusiveControlNodeId=null x_zwave_values.Protection_local=0 x_zwave_values.Protection_rf=0 x_zwave_values.Protection_timeout=null x_zwave_values.Version_firmwareVersions=["3.2"] x_zwave_values.Version_hardwareVersion=3 x_zwave_values.Version_libraryType=3 x_zwave_values.Version_protocolVersion="4.5" zwave_device.capabilities=[91,112,113,114,117,134] zwave_device.endpoint=0 zwave_device.failed=false zwave_device.generic_class="Binary Switch" zwave_device.impl_sig="23326:1:22315:1" zwave_device.is_beaming=false zwave_device.is_listening=true zwave_device.is_routing=true zwave_device.is_secure=false zwave_device.manufacturer_info=[271,515,4096] zwave_device.max_data_rate=null zwave_device.node_id=65 zwave_device.specific_class="Binary Power Switch" zwave_device.status=4 zwave_device.status_text="alive" zwave_device.version_info=[null,"3.2"]
Thanks.
Good morning,
I'm having an issue controlling my Zooz Zen14 outdoor double outlet. I should be able to control each outlet individually, and this does work when I use Home Assistant (haas) from Reactor.
When I use zwavejs, I see 3 entries:
8305eccf-a99e-421f-ad18-1f08da9c8c9c-image.png
The first entry is for the overall device. I can turn both outlets on and off (in theory) by setting the power_switch state to on or off. This does turn them on and off when using zwavejs.
When I go to the individual outlets, performing the power_switch.on or power_switch.off actions turns them all (main, 1 and 2) on or off, and not just the individual outlets. When I perform the same action from haas, turning on outlet 1 will turn on the main switch and 1, but not 2.
I reviewed the logs for that node and I'm not seeing anything obvious.
:~/reactor/logs$ cat reactor.log.1 | grep ZWaveJSController#zwavejs | grep "node 216"
[latest-24212]2024-08-07T00:19:00.233Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs update node 216 value "0:37:targetValue:" data [Object]{ "source": "node", "event": "value updated", "nodeId": 216, "args": { "commandClassName": "Binary Switch", "commandClass": 37, "endpoint": 0, "property": "targetValue", "newValue": true, "prevValue": false, "propertyName": "targetValue" } }
[latest-24212]2024-08-07T00:19:00.235Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs update node 216 value "0:37:currentValue:" data [Object]{ "source": "node", "event": "value updated", "nodeId": 216, "args": { "commandClassName": "Binary Switch", "commandClass": 37, "property": "currentValue", "endpoint": 0, "newValue": true, "prevValue": false, "propertyName": "currentValue" } }
[latest-24212]2024-08-07T00:19:00.321Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs update node 216 value "0:37:currentValue:" data [Object]{ "source": "node", "event": "value updated", "nodeId": 216, "args": { "commandClassName": "Binary Switch", "commandClass": 37, "property": "currentValue", "endpoint": 0, "newValue": true, "prevValue": true, "propertyName": "currentValue" } }
[latest-24212]2024-08-07T00:19:00.322Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs update node 216 value "0:37:targetValue:" data [Object]{ "source": "node", "event": "value updated", "nodeId": 216, "args": { "commandClassName": "Binary Switch", "commandClass": 37, "property": "targetValue", "endpoint": 0, "newValue": true, "prevValue": true, "propertyName": "targetValue" } }
[latest-24212]2024-08-07T00:19:00.323Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs update node 216 value "0:37:duration:" data [Object]{ "source": "node", "event": "value updated", "nodeId": 216, "args": { "commandClassName": "Binary Switch", "commandClass": 37, "property": "duration", "endpoint": 0, "newValue": { "value": 0, "unit": "seconds" }, "prevValue": { "value": 0, "unit": "seconds" }, "propertyName": "duration" } }
[latest-24212]2024-08-07T00:19:02.189Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs update node 216 value "1:37:currentValue:" data [Object]{ "source": "node", "event": "value updated", "nodeId": 216, "args": { "commandClassName": "Binary Switch", "commandClass": 37, "property": "currentValue", "endpoint": 1, "newValue": true, "prevValue": false, "propertyName": "currentValue" } }
[latest-24212]2024-08-07T00:19:02.192Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs update node 216 value "1:37:targetValue:" data [Object]{ "source": "node", "event": "value updated", "nodeId": 216, "args": { "commandClassName": "Binary Switch", "commandClass": 37, "property": "targetValue", "endpoint": 1, "newValue": true, "prevValue": false, "propertyName": "targetValue" } }
[latest-24212]2024-08-07T00:19:02.193Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs update node 216 value "1:37:duration:" data [Object]{ "source": "node", "event": "value updated", "nodeId": 216, "args": { "commandClassName": "Binary Switch", "commandClass": 37, "property": "duration", "endpoint": 1, "newValue": { "value": 0, "unit": "seconds" }, "prevValue": { "value": 0, "unit": "seconds" }, "propertyName": "duration" } }
[latest-24212]2024-08-07T05:32:30.127Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs configuring node 216 endpoint 0 (entity "216-0")
[latest-24212]2024-08-07T05:32:30.127Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs configuring node 216 endpoint 1 (entity "216-1")
[latest-24212]2024-08-07T05:32:30.128Z <ZWaveJSController:INFO> ZWaveJSController#zwavejs configuring node 216 endpoint 2 (entity "216-2")
I'm running:
Reactor (Multi-hub) latest-24212-3ce15e25
ZWaveJSController [0.1.23326] (with zwavejs_data from 7/25/2024)
HA:
Core 2024.7.3
Supervisor 2024.08.0
Operating System 12.3
Frontend 20240710.0
I think this feature request could be accomplished with the use of two or more rules, but it would be great if there was a way to wait for an event or trigger to occur before continuing on in the reactions.
For example, I have a rule that will turn on some exterior lights if you arrive home after the porch lights have been turned off. Right now this rule turns the lights off at a random point 5-10 minutes after the person has entered the geofence. On some occasions this 5-10 minutes isn't long enough, say if you are unloading the car or something. I would like to kick off the reaction, but pause it partway through and wait for the door to close and lock, then continue it. Hubitat Rule Machine has a "Wait for event" option, but I really want to keep all my logic within MSR.
Hi,
Running the latest version 24212 on docker.
I want to run some appliances when my solar panels deliver over 1200 watts to the net; in this example, the dryer. So when the dryer is turned on, the rule starts (SET). First there is a 3-minute delay in case the dryer was turned on manually; that is the group "Check if auto start is needed". If the dryer is not running, the status is set to waiting, and the Repeat Until shown should wait as long as the power delivered is higher than -1200 for at least 120 seconds and the status is still waiting ("higher" because the value is negative). For some reason that Repeat Until, "DR Wait for solar high or manual start", appears to be skipped. Nothing in the logs. What am I missing?
Reactor Automation.png (g_DryerStatus should have been "waiting" and not "running" the moment the log was captured, so that should not be the issue. Unless it takes some time for a variable to actually change?)
Cheers Rene
I have got some warnings in Reactor that a rule is throttled because its update rate is over 120/minute; it happens now and then, with a week or two in between.
The rule monitors my house power consumption when charging my car, and if the consumption risks blowing a main fuse, it stops the charging.
I thought I was being clever when I added to Constraints that my car should be located at home and charging should be active, but apparently it wasn't clever enough...
I had understood that if the Constraints aren't true, then the triggers aren't evaluated.
I have a Shelly 3EM meter that measures three channels/phases and updates every second. So just for these three channels that is 180 updates per minute...
And now that I've started to take a closer look, I can see in my logs that this rule is evaluated on every value change on any of the three phases...
So ~11MB of logs only goes back around 16 hours...🙀
So what is the best solution?
Should I move my "Constraints" into a new trigger group, before my existing triggers?
Rule as it is now
e5486a1b-7c3c-4ffc-b259-beed64948f2e-image.png
Hi!
I'm wondering if it's possible now, or if it's something that could be solved in a future version, to remove all at once the entities that no longer exist and are crossed out. Currently, you have to mark them one by one, which takes a lot of time. Is there any way to make this more efficient?
I'm on MSR version 24212-3ce15e25.
Thanks!
/Fanan
I'm slowly migrating all my stuff to MQTT under MSR, so I have a central place to integrate everything (and, in the not-so-distant future, to remove virtual devices from my Vera and leave it running zwave only).
Anyway, here's my reactor-mqtt-contrib package:
Contrib MQTT templates for Reactor: https://github.com/dbochicchio/reactor-mqtt-contrib
Simply download yaml files (everything or just the ones you need) and you're good to go.
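For example, one way to grab everything at once might be the following sketch; the destination directory is an assumption and depends on where your Reactor/MQTTController setup keeps templates:
# Sketch: clone the repo and copy the templates you want into your Reactor config.
# Adjust both paths: the source depends on the repo layout, the destination on your install (placeholder below).
git clone https://github.com/dbochicchio/reactor-mqtt-contrib.git
cp reactor-mqtt-contrib/*.yaml /path/to/reactor/config/mqtt_templates/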
I have mapped my most useful devices, but I'll add others soon. Feel free to ask for specific templates, since I've worked a lot in the last weeks to understand and operate them.
The templates support both init and query, so you always have up-to-date devices at startup, plus the ability to poll them. Online status is supported as well, so you can detect disconnected devices with a simple expression.
Many, many thanks to @toggledbits for his dedication, support, and patience with me and my requests 🙂
Quality of Life Request: Update Button
-
Massive upvote on all 3. Just like back in the old Vera days
(Unsure how easy update would be with the various ways of implementing MSR inside/outside docker etc) - unless a standard install method is specified, and one of the features of that is 'update' capability.
Docker-Compose please -
@Cadwizzard said:
Just like back in the old Vera days
How soon we forget the tales of bricked Veras. Who among us didn't have a little sense that they were playing Russian roulette every time we hit that button?
Unsure how easy update would be with the various ways of implementing MSR inside/outside docker etc
OK. He hits it on the head here. Let me explain some of the complications and my reservations around this.
The biggest pitfall is for docker users, IMO; that's the majority of you. The first thing you need to understand about docker is that the image and the container are separate objects in the system. The container is created from the image, but it's basically a copy, not linked in any meaningful way. The container can change, so that's good — I can download a release package and apply it to the container, restart, and the container will now be running the new release files. Unfortunately, that has no bearing on what happens to the image. Changing files in the container does nothing to the image. So let's take a scenario... @tunnus (Docker, Synology) downloads the image for Reactor 22274 and creates a container for it, so he's now running 22274. A little later, 22291 is released, so he hits the handy, flashy new "Upgrade" button and the container is upgraded in place. Perfect. Except, not... his image is still 22274. Stay with me now... In all likelihood, because of the "ease" of the automated upgrade, tunnus never needs to download a new image again (so he thinks), so he never bothers (it's a pain anyway on Synology, I'll agree). So build 22293 comes along, and then 22302, and then 22305, and then 22308, and he upgrades to all of them using the automated process, but the image is still sitting there on his NAS at 22274. The problem strikes if, for any reason (DSM major upgrade?), he decides to reset and rebuild the container, or delete it. He will get.... 22274. Because that's the image he has.
Can I make docker download the newer image as part of the upgrade process? No. Reactor is running inside the container, and the container, by definition, contains Reactor and keeps it from doing anything external to the container (except the limited data volume that's specifically created for the single purpose). So the running Reactor instance has no ability whatsoever to cause docker/DSM to pull later images. Pulling a new image and rebuilding the container is the real "right" way to upgrade, but it's not possible to automate it from within the container itself (and it's darned clunky in Synology's UI, unfortunately).
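For reference, a sketch of that "right" way for a docker-compose setup; the service name here is an assumption, and Synology users would do the equivalent image pull and container rebuild through the DSM or Portainer UI:
# Sketch: refresh the image first, then recreate the container from it.
docker-compose pull reactor    # fetch the newer image from DockerHub
docker-compose up -d reactor   # recreate the container from that image; config/state stay on the mounted data volume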
It's not hard to imagine that this problem would not bite him for months or years. But when it bites, it has the potential to bite hard. Imagine along Reactor's evolutionary path from 22274 to a future 24107 (released in 2024, all automated updates between, no image refreshes), there are changes that needed to be made to the data structures of rules, reactions, stored states, etc. (not at all hard to imagine, it actually happens all the time). It is easy, although sometimes a bit cumbersome, for me to provide forward compatibility: to make sure that newer versions of code read the old data and upgrade the structures, and the mechanisms for those upgrades remain in the code for some time. But there is no way under the sun for 22274, now running once again unexpectedly, to know what to do with data from the future 24107 build, and there's a chance it could do something really bad to it. Now tunnus has an old version running in his container with corrupt data. I hope he has a backup.
I'll take the opportunity to say that this is a cautionary tale for all of you who stay on older builds. I keep the code that reads and upgrades the data, when needed, for a while, so that people who skip a few upgrades can safely do so and "jump in time" when they are ready to apply a new build, but I don't and won't keep that code forever; it becomes a maintenance nightmare and it's beyond my available time and sensibilities to test every possible combination of upgrades between versions. If you're running on a Reactor that's more than a year out (21307 or earlier), you're playing with fire as far as I'm concerned, and you should not expect a smooth upgrade when you get around to it. You may need to upgrade to an interim build still available, which works for bare-metal, but isn't an option for docker users. And before the "I can't have something like that in my home" people start in here, please know that I'm sorry that the free software I offer you and for which I provide ready, quick, and free ongoing support (and upgrades) isn't perfectly to your liking. If you don't like the way it works, you have alternatives, and I fully support your freedom to choose them.
To continue with @Cadwizzard's point: this is equally or more egregious, unfortunately, for docker-compose users, because up to this point, the recommended way of stopping Reactor when using docker-compose is to run docker-compose down. This causes Reactor to stop, but also deletes the container. Any upgrades applied to the container are lost in that instant, because the container is discarded. When you later run docker-compose up -d, the container is re-created from the (old) image, and will be whatever version that image is. Maybe not a disaster, maybe it is. This could be addressed by retraining docker-compose users to use docker-compose stop rather than docker-compose down, but the distinction would need to be taught (and learned) as both are useful, and the infrequency of use of these commands would likely suffer from brain-drain over time (i.e. when to use which and what their side effects could be/will be lost on the user a few months from now). It's such a subtle distinction that people will shoot themselves in the foot easily and regularly, I fear.
Bare-metal is somewhat easier, because at least the process can be assured it's writing on the one and only (relevant) image, in the install directory, so that's a bit of relief. Unfortunately, a lot of people really don't understand Linux file permissions, their relations to users and groups, etc., and routinely goof up the permissions of files all over their system, including in the Reactor install directory. This isn't a problem for them after the first "fix," because thereafter they do the manual upgrades the same way, logged in as the same user (in some cases, even as root, which is a serious no-no), and so it works for them as that user in that case, good enough. But for an automated process running in an unprivileged environment, it can mean that some files aren't writable, and the upgrade only half-happens... the upgrade process crashes, some files are new and some are old, and the Reactor install is basically dead and broken. I can't fix the permissions from the running instance, because it's running as an unprivileged user (well, hopefully; woe unto those who run anything as root). The user then manually applies the upgrade to recover the system, which goes fine because of course he's running as the privileged user with the right permissions. A bug/post for the upgrade process then gets reported, and I then spend hours or days going back and forth, digging through the user's 3,000+ files in a typical Reactor install, looking for the broken ones and teaching the user how to fix them. (Permissions and their potential brokenness is also an issue for doing automated backups/restores, since that was mentioned as well.)
Oh, and then there's Windows. I won't even start. I've already written a book (again).
With regard to the suggestion of a standard install method: (a) there is no "one size fits all" — what works for Ubuntu doesn't work for Synology DSM or QNAP, and certainly not for Windows; and (b) the install methods that are recommended are all carefully documented; experience shows that I can write out every detail I can think of, and what actually happens on the user's system is 100% of that or some amount less, or the user has some condition in their system/environment that I could not/did not anticipate that causes problems. My preferred method for most users is docker (and specifically, with docker-compose), because the container strategy removes some of these risks, but that's not always the easiest for their environment (e.g. Synology has docker but no -compose), and the accepted mechanisms for upgrading containers in the docker world in general are ironically exactly the subject of complaints by OP and others here, despite the relative ubiquity and ease of these mechanisms.
The point is, there is no panacea here. You run these systems, not me. You do things I have no knowledge of, and sometimes those things bite back. The majority of my time supporting this product is troubleshooting your environments, not my code (I'm not saying I'm perfect — I make mistakes and bugs are a reality, but they're not the majority of support issues here in terms of time spent). Anything and everything I do in the system is looked at not just through the lens of whether it's convenient for the user, but very much through the lens of supportability. There are lots of features I get asked to do, and as you've seen (even recently), there are some that I refuse to consider simply because it would make the system less supportable, in my view. As features get added, not only is the usability of the system required to improve, but its quality is expected to improve as well (fewer bugs, better support, etc.) — those are my expectations, which I'm sure you share. If one doesn't consider supportability (and that means both in user support and code maintenance/reliability/scalability), one ends up with a lot of features that nobody asked for, don't work, and aren't usable (I can point you in the direction of such a product as an alternative if you're really interested in that).
There is a running, hidden upgrade process in the current build. I've been experimenting with this for a bit, getting to learn it, and discovering these issues. It's not that I won't consider making it available; I'm still studying it, and pondering the wisdom of it. Maybe sometimes I worry too much about things like this, I don't know. But when it goes wrong, there's nobody but you and me to fix it, and there's a lot of you and only one of me, so as I said in another recent conversation, handing out something that feels like a grenade with no pin sometimes doesn't seem like the best idea to me, and there are probably other things this system needs to do that I can better spend my time on. Maybe this is one of those things.
I'll leave this one up to you guys. If you can tolerate these side effects, I'll release the feature. But know that if you break your install because (docker) you somehow delete the container and recreate it from an old image, or (bare-metal) your install has broken permissions or other issues that the upgrade process can't work through, my answer will be short: that's a risk you accepted, do a clean reinstall from a current image, restore your config/state backup, and start over.
-
One more piece of knowledge! The desire for an automatic process would really be very nice, but your explanation makes the difficulty and risks clear, and I no longer want it. As you said, the errors we generate are enough; I don't want to add more risk. I think almost everyone here has a wife, so better to stay in the safety of a working system.
Well, let's remove this from the wishlist. Could you share that list, so we know everything that's coming in the future? Also, please add an item: display the status widget in a window/iframe inside the HE dashboard.
-
@toggledbits this makes tons of sense as to why an update button is a problem, mainly for Docker users.
In terms of bare-metal users: say a user messed with their file permissions enough that it would cause issues when updating Reactor; wouldn't they run into the same errors whether they updated manually or used the update button? I wouldn't mind an update button for bare-metal users, since from your explanation it seems the possible issue wouldn't come from the update process itself, it would come from something else (like file permissions, etc.). Meaning they'd run into those errors even if they manually updated Reactor like we do now.
Not arguing though, it's a fairly low-level request from me. I can clearly see why an update button for Docker users could be a slow and silent death. As @CatmanV2 says, for bare metal the update process really only takes 90 seconds, haha.
-
@pabla said in Quality of Life Request: Update Button:
wouldn't they run into the same errors even if they manually updated or used the update button?
Not necessarily... some users... I've seen it... will run into permission problems and their answer, not understanding the problem or how to fix it, is to use sudo tar xvf to just lay tracks over everything. This would eliminate the permissions problem unpackaging the archive, but new files may become root-owned, which isn't right, but the code doesn't care as long as it's readable. If their umask allows world-readable files (and 022 is a common default that does exactly that), the Reactor runtime will never know permissions are broken, because every file it needs is readable without consideration of ownership. The un-tar'ing doesn't touch logs, config, etc., so any permissions there aren't relevant and aren't changed. And because some of the files are now root-owned that shouldn't be, the permissions problem has been made worse, and again, unless they are truly fixed the right way, sudo will continue to be the only way upgrades will succeed. It perpetuates and exacerbates.
I really get how painful the docker upgrades are on Synology. I'm guessing QNAP is probably not much different, and I think several people have been bitten by Portainer oddities regardless of platform.
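For the bare-metal permissions scenario above, a hedged sketch of what fixing ownership "the right way" might look like; the install path, service user, and archive name are assumptions, not anything from this thread:
# Sketch: give the install tree back to the unprivileged user Reactor runs as,
# then unpack future upgrades as that user, without sudo.
sudo chown -R reactor:reactor /home/reactor/reactor
su - reactor -c "tar xzf reactor-latest-XXXXX.tar.gz -C /home/reactor/reactor"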
The process just needs more thought. I could, for example, from the next build onward, prevent the system from starting if the config and data are from a newer version. The problem there is that it needs to be detected early in startup, and if the system can't use the data, it has to exit hard, because it can't run without any data at all, and it can't touch what it has. There would be no UI feedback other than "DISCONNECTED" (i.e. the behavior when Reactor can't start). A "click-to-upgrade" to fix it wouldn't be an option because the UI would not be running, so a manual upgrade would be required at that point. And maybe that's OK? Maybe that's such an extreme/infrequent circumstance that it should be that way? A manual upgrade once in a blue moon may not be so bad... I don't know... looking for feedback... trying to figure it out...
-
I personally do not think the update process on Synology docker is that bad. A few more clicks than an easy button but not horrible. All my other docker containers are updated the same way. I like the docker image though. I am not familiar with the other platforms so I can’t comment on those update processes.
-
I too am happy with the current process. Super fast for me. I run everything under Synology/Docker. I no longer have issues upgrading containers such as Reactor, HA, etc. since I switched to Portainer several months ago. So not sure what those "Portainer" oddities are/were. Something I should keep an eye on? Or I have just been lucky?
-
Some input from a Windows user.
An update button would of course be a nice-to-have feature, but I also agree with several others here. A "normal" update, i.e. one that doesn't need new dependencies, just takes a short moment to install.
Where I usually stumble is when an update of dependencies is needed. That has sometimes taken me hours of search-try-error-try-again before getting it to work. My dream would be a "Windows installer" for MSR that checks dependencies, installs a system service, etc.
Over time I think that would be a safer/more stable way, with fewer user errors. With this said, I can really understand that @toggledbits needs to handle this "his way" to be able to support different environments (and users).
-
I don't know if this helps for other Docker users, but not long after I got started with Docker I found Portainer, and I've been running it alongside Reactor and my other containers on my Raspberry Pi 4. With Portainer, there may not be a one-step update button, but I find it makes updates much easier.
I just updated Reactor to the latest. All I had to do was go to the Portainer URL in my browser, then
- Click on the Reactor container in the Containers list
- Click "Recreate"
- Toggle "Always pull new image" on the window that pops up
- Click "Recreate"
It isn't one click, but it can be done in a browser tab from any machine with network access to the Docker host. No VNC/SSH into the machine, no Docker commands to run from the command line.
Portainer also has links to view the container logs and to open a command window in the container, which I use all the time. You can also use the "duplicate/edit" button to change or add environment variables while updating, which is how I added the NODE_PATH a few updates back.
-
Thanks for the Portainer explanation; I'm certain I've had a spell cast on me. I'll try again once the Pi400 becomes available once more.
Lastly, I think the point has been lost: it's about QOL, not about how easy it is to do in another way.
From my perspective if it isn't easy to use by 98% of the public then it's too much trouble and they might look at it then discard it for another solution.
The comments so far are from users who are in the 2% and are happy to tinker. I'm happy for you.
If anyone wants to see how consumer-friendly software should be to set up, then have a look at HomeSeer 4. Update... no problem, with 1 click. -
@black-cat Isn't homeseer a walled garden like Hubitat, Ezlo, etc.? You buy their hub and live within their infrastructure.
That's not MSR. MSR works on various OS/hardware and communicates with multiple hubs.
Whilst I appreciate your POV, it's not an apples-to-apples comparison you're making here.
-
@gwp1 said in Quality of Life Request: Update Button:
You buy their hub and live within their infrastructure.
Nup, you can use any old laptop or RasPi. It runs on Windows or Linux. I'd love to promote MSR to Homeseer users, but it lacks the simplicity, hence my backing of the request.
Realistically, I'm not going to see it happen, which is a shame, as Patrick has put a lot of time into development for the 2%. -
@sweetgenius I agree, Synology Docker container upgrade process is not too bad.
I frequently keep both MSR and the Synology UI open in separate browser tabs and either do a quick upgrade using "reset" or a more careful upgrade using "duplicate settings", retaining the old container as a backup/rollback option.
Originally I favored a simple update button for MSR, but after Patrick's explanations I realized it's not that simple after all.