Discussion Forum to share and further the development of home control and automation, independent of platforms.
Donato (@Donato)
Condition for trend
Multi-System Reactor
Set reaction triggering wrong z-wave device
Multi-System Reactor
Can you run MSR on Home Assistant OS ?
cw-kid
Looking at using Home Assistant for the first time, either on a Home Assistant Green (their own hardware) or a cheap second-hand mini PC. It sounds like Home Assistant OS is Linux-based, using Docker for HA etc. Would I also be able to install things like MSR on their OS? On the same box? Thanks.
Multi-System Reactor
RPi Alternative: Orange Pi 4 LTS (3GB RAM/16GB eMMC)
toggledbits
The last of four boards I'm trying in this batch is the Orange Pi 4 LTS. I purchased the 3GB RAM + 16GB eMMC model from Amazon for $83, making it the most costly of the four boards tried, but still well under my US$100 limit.

This board is powered by a Rockchip RK3399-T processor, ARM-compatible with dual Cortex-A72 cores and quad Cortex-A53 cores at 1.6GHz (1.8GHz for the 4GB model). Compare this to the RPi 3B+ with four Cortex-A53 and the RPi 4B with four Cortex-A72: this board is a hybrid that I would expect to land in the performance middle between the two RPi models. It's available in 3GB and 4GB DDR4 RAM configurations, with and without 16GB eMMC storage. It has a MicroSDHC slot, gigabit Ethernet, WiFi and BT, two USB 2.0 type A ports, one USB 3.0 type C port, a mini PCIe ribbon-cable connector (requires an add-on board for a standard connector), two each RPi-compatible camera and LCD ports, and HDMI type A. It can be powered (5VDC/3A) via USB-C or a DC type C (3.8mm OD/1.1mm ID) center-positive jack, an odd and perhaps unwelcome departure from the more common type A (5.5mm/2.1mm). A serial port for console/debug can be connected using a (not included) 3.3V USB-TTL adapter via pin headers, like the Orange Pi Zero 2. The included dual-band antenna connects to the board via a U.FL connector, so it's easy to substitute another if you prefer. The manufacturer recommends use of a heat sink (one was included in the box). A metal cooling case is also offered by the manufacturer (a bundle with the metal case and a power supply is sold on Amazon for $90 as of this writing).

The Orange Pi 4 LTS is somewhat longer than the RPi 4B, and although the boards are the same width, the mounting hole placement is different in both length and (oddly) width. Between this and the differences in connector locations, neither board is a drop-in replacement for the other, and their respective cases are not interchangeable. The 26-pin header is a subset of the RPi 4B's 40-pin header, so some HATs for the RPi may work (although the mounting hole differences will make securing them "interesting"), and some HATs surely will not.

Models with eMMC storage have an OS installed and boot immediately with the SSH daemon running and ready for login. Mine was running Debian Bullseye, which would probably be fine for most users. It had clearly been on there a while, because it needed a lot of updates, but it's a current distro, so out of the box you're running something that will last. A different OS can be installed by downloading an image (once again I chose Ubuntu Jammy) and writing it to a MicroSD card, then booting the system from the SD card. You can either leave the system in that state (running the OS from the SD card), or copy the OS from the SD card to the eMMC. The latter is done by a script; the process is best described in the downloadable PDF User Manual. This took about 10 minutes and went smoothly, and I was able to boot the system without the SD card after the process completed.

I have lingering questions about the value of the eMMC storage. It's definitely faster than MicroSD or USB-based storage (I got 311MB/s average on a 4GB write, compared to MicroSD performance around 15MB/s), but it would take a long-term test of this product to determine whether the on-board eMMC option has the stamina to take the write counts typical of Linux systems, and whether its wear-leveling and error correction are sufficient to assure a long, error-free life. Given the high premium apparently being paid for including eMMC on the board, it should be fast and durable, but only time and experience (perhaps painful) will tell the latter. A careful configuration with flash-friendly filesystems could be used to reduce wear, but that's an advanced configuration/cookbook topic and beyond the scope of this writing. This question is also not unique to eMMC: MicroSD cards are also known to fail under high write cycles, so a "high endurance" product is recommended for any system using MicroSD as primary storage. The board has Mini PCIe capability, and that may be a storage alternative, but read on... Also bear in mind that the eMMC storage is fixed-size forever; it cannot be expanded, and 16GB can run out pretty quickly these days. Users of MicroSD cards for primary storage can upgrade to bigger cards, but when users of eMMC primary storage outgrow it, the only choice is to add a MicroSD card or other "external" storage to the system, move part of the filesystem to it, and then manage both storage devices and deal with the limitations and risks of both.

As I mentioned with the Orange Pi Zero 2, if you are going to use this board as a home automation controller/gateway or in a similar role, it should (IMO) have a battery-backed real-time clock (RTC), and Orange Pi offers an add-on module that connects directly to the 26-pin header on the board. An available expansion board provides a standard Mini PCIe interface and SIM card slot (hmm...), but it connects to the main board via a short ribbon cable, and its mounting holes have no complement on the main board, so it seems like it would be a fragile, dangly thing that's a nuisance to deal with.

I want to like this board more, and it's very capable, but I'm concerned about value. The limited options for eMMC (16GB or none), the question mark of the eMMC's longevity vs. cost, the strange DC power connector choice, the lack of 40-pin GPIO on a full-size (plus) board, the inconsistent hole placement, and the fragile Mini PCIe arrangement are all "cons" that devalue this board in my view. The price point is clearly driven by the additional capabilities of the board (camera support, ports, six-core CPU, extra RAM, on-board eMMC storage), but unfortunately, a great many of these features may not be useful for home automation, and are therefore potentially a waste of money. In terms of overall value, the Libre "Le Potato" still seems the better choice to me, with the Orange Pi Zero 2 a (very) close second, but I'll admit I'm focused on a particular application, and your needs may be better suited to what this board offers than mine.
Passmark Results:

    OrangePi 4 LTS
    Cortex-A72 (aarch64) 6 cores @ 1200 MHz | 2.9 GiB RAM
    Number of Processes: 6 | Test Iterations: 1 | Test Duration: Medium
    ----------------------------------------------------------
    CPU Mark: 583
      Integer Math                  12037 Million Operations/s
      Floating Point Math            2542 Million Operations/s
      Prime Numbers                   4.5 Million Primes/s
      Sorting                        3141 Thousand Strings/s
      Encryption                      153 MB/s
      Compression                    4049 KB/s
      CPU Single Threaded             154 Million Operations/s
      Physics                        80.5 Frames/s
      Extended Instructions (NEON)    244 Million Matrices/s
    Memory Mark: 498
      Database Operations             551 Thousand Operations/s
      Memory Read Cached             2524 MB/s
      Memory Read Uncached           2602 MB/s
      Memory Write                   3182 MB/s
      Available RAM                  1947 Megabytes
      Memory Latency                  119 Nanoseconds
      Memory Threaded                6243 MB/s
    ----------------------------------------------------------
    eMMC storage write: 311 MB/s average for a 4GB write
    MicroSD (Samsung 32GB class 10) storage write: 15 MB/s
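For anyone who wants to reproduce a rough sequential-write figure like the one quoted above, here is a minimal sketch (Python; the target path is an example, and this is an illustration rather than the exact method used for the numbers above):

    # Rough sequential-write throughput check: write about 4 GiB in 4 MiB chunks,
    # fsync at the end, and report the average rate. TARGET is an example path;
    # point it at a file on the eMMC or MicroSD filesystem you want to test.
    import os
    import time

    TARGET = "/mnt/emmc/write_test.bin"   # example path on the device under test
    CHUNK = 4 * 1024 * 1024               # 4 MiB per write
    TOTAL = 4 * 1024 ** 3                 # about 4 GiB total

    buf = os.urandom(CHUNK)
    start = time.monotonic()
    with open(TARGET, "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())              # make sure the data actually reaches the device
    elapsed = time.monotonic() - start
    print(f"wrote {written / 1e9:.1f} GB in {elapsed:.1f} s -> {written / elapsed / 1e6:.0f} MB/s")
    os.remove(TARGET)

Results vary with filesystem, caching, and chunk size, so treat any single number as a rough indication rather than a formal benchmark.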
SBC
RPi Alternative: Orange Pi Zero 2 (1GB)
toggledbits
SBC
RPi Alternative: Libre Computer AML-S905X-CC "Le Potato" (2GB RAM)
toggledbits
With Raspberry Pi boards continuing to be relatively scarce, I've been trying a few alternatives to see what may be usable and good. I had previously written about the Jetson Nano 2GB, which is great but a little pricey, so I'm trying to find sub-US$100 boards that will run Reactor. I've got four that I'm trying now, but one in particular goes right to work in the most predictable way and seems worth a mention immediately: the Libre Computer Board AML-S905X-CC 2GB (known as "Le Potato").

The form factor is very similar to that of the Raspberry Pi 3 B+, and it has a comparable CPU (ARM Cortex-A53, quad 64-bit cores at 1.5+GHz -- slightly higher clock speed). It's US$35 on Amazon and LoverPi in the (recommended) 2GB configuration, and easy to get. Startup is like the RPi: download one of the available OS images (Ubuntu, Raspbian, Debian, ARMbian, etc.) from their site, write the image to a MicroSD card, insert it into the slot, power up, and off you go. I tried the Ubuntu 22.04 image first and it comes right up. No problem getting nodejs 18.12.1 installed and running (with Reactor). There's no WiFi on board, but I don't see that as a minus for use as a controller/hub (which should be hard-wired, IMO). The 40-pin GPIO connector is compatible with typical RPi HATs (PoE, breakouts, etc.). There is an available eMMC (solid-state storage) module to use instead of MicroSD, which I would recommend for long-term use. It runs US$25 for 32GB (64GB and 128GB are available). The module is scarcely larger than the chip it carries, and has the smallest board-to-board connector I've ever seen.

Next up: ESPRESSObin 2GB (spoiler: it's... technical...)
SBC
HA and AI
CatmanV2
I've been having hours of (actually quite fun) interaction with AI (ChatGPT), making up dashboards and sensors for HA. It's OK (well, it's better than I am!) but it makes soooo many mistakes. It gets there in the end though, if you've half a clue (which I do, half the time). C
Home Assistant
How to upgrade from an old version of MSR?
cw-kid
Hello, I haven't updated my installation of MSR in a very long time. It's a bare-metal Linux install, currently on version 24366-3de60836. I see the latest version is now latest-26011-c621bbc7. I assume I cannot just jump from a very old version to the latest version? Or can I? Thanks
Multi-System Reactor
This trigger no longer working - complaining about the operator needing changing
cw-kid
Multi-System Reactor
Self test
CatmanV2
Having been messing around with some stuff, I worked out a way to self-trigger some tests that I wanted to do on the HA <> MSR integration. This got me wondering: is there an entity that changes state / is exposed when a configured controller goes offline? I can't see one, but thought it might be hidden or something? Cheers, C
Multi-System Reactor
Access control - allowing anonymous user to dashboard
tunnus
Using build 25328 and having the following users.yaml configuration:

    users:
      # This section defines your valid users.
      admin: *******

    groups:
      # This section defines your user groups. Optionally, it defines application
      # and API access restrictions (ACLs) for the group. Users may belong to
      # more than one group. Again, no required or special groups here.
      admin_group:
        users:
          - admin
        applications: true  # special form allows access to ALL applications
      guests:
        users: "*"
        applications:
          - dashboard

    api_acls:
      # This ACL allows users in the "admin" group to access the API
      - url: "/api"
        group: admin_group
        allow: true
        log: true
      # This ACL allows anyone/thing to access the /api/v1/alive API endpoint
      - url: "/api/v1/alive"
        allow: true

    session:
      timeout: 7200   # (seconds)
      rolling: true   # activity extends timeout when true

    # If log_acls is true, the selected ACL for every API access is logged.
    log_acls: true
    # If debug_acls is true, even more information about ACL selection is logged.
    debug_acls: true

My goal is to allow an anonymous user to access the dashboard, but MSR is still asking for a password when I try. Nothing in the logs relates to dashboard access. It's probably an error in the configuration, but I need help finding it. I tried putting url: "/dashboard" under api_acls, but that was a long shot and didn't work.
Multi-System Reactor
VEC Virtual Switch Auto Off
I use Virtual Entity Controller virtual switches, which I turn on via webhooks from other applications. Once a switch triggers and turns on, I can then activate associated rules. I would like each virtual switch to automatically turn off after a configurable time (e.g., 5 seconds, 10 seconds). Is there a better way to achieve this auto-off behavior than creating a separate rule for each switch that uses the 'Condition must be sustained for' option to turn it off? With a large number of these switches (and the associated turn-off rules), I'm checking to see if there is a simpler approach. If not, could this be a feature request to add an auto-off timer directly to the virtual switches? Thanks. Reactor (Multi-hub) latest-26011-c621bbc7, VirtualEntityController v25356, Synology Docker
Multi-System Reactor
Upcoming Storage Change -- Got Back-ups?
toggledbits
TL;DR: The format of data in the storage directory will soon change. Make sure you are backing up the contents of that directory in its entirety, and that you preserve your backups for an extended period, particularly the backup you take right before upgrading to the build containing this change (the date of that is still to be determined, but soon). The old data format will remain readable (so you'll be able to read your pre-change backups) for the foreseeable future.

In support of a number of other changes in the works, I have found it necessary to change the storage format for Reactor objects in storage at the physical level. Until now, plain, standard JSON has been used to store the data (everything under the storage directory). This has served well, but it has a few limitations, including no real support for native JavaScript objects like Date, Map, Set, and others. It also is unable to store data that contains "loops": objects that reference themselves in some way.

I'm not sure exactly when, but in the not-too-distant future I will publish a build using the new data format. It will automatically convert existing JSON data to the new format. For the moment, it will save data in both the new format and the old JSON format, preferring the former when loading data from storage. I have been running my own home with this new format for several months, and have had no issues with data loss or corruption.

A few other things to know:

If you are not already backing up your storage directory, you should be. At a minimum, back this directory up every time you make big changes to your Rules, Reactions, etc.

Your existing JSON-format backups will continue to be readable for the long term (years). The code that loads data from these files looks for the new file format first (which will have a .dval suffix), and if that's not found, it will happily read (and convert) a .json file with the same basename (i.e. it looks for ruleid.dval first, and if it doesn't find it, it tries to load ruleid.json). I'll publish detailed instructions for restoring from old backups when the build is posted (it's easy).

The new .dval files are not directly human-readable or editable as easily as the old .json files. A new utility will be provided in the tools directory to convert .dval data to .json format, which you can then read or edit if you find that necessary. However, that may not work for all future data, as my intent is to make more native JavaScript objects directly storable, and many of those objects cannot be stored in JSON.

You may need to modify your backup tools/scripts to pick up the new files: if you explicitly name .json files (rather than just specifying the entire storage directory) in your backup configuration, you will need to add .dval files to get a complete, accurate backup. I don't think this will be an issue for any of you; I imagine you're all just backing up the entire contents of storage regardless of format/name, and that is the safest (and IMO most correct) way to go (if that's not what you're doing, consider changing your approach).

The current code stores the data in both the .dval form and the .json form to hedge against any real-world problems I don't encounter in my own use. Some future build will drop this redundancy (i.e. save only to the .dval form). However, the read code for the .json form will remain in any case.

This applies only to persistent storage that Reactor creates and controls under the storage tree. All other JSON data files (e.g. device data for Controllers) are unaffected by this change and will remain in that form. YAML files are also unaffected by this change.

This thread is open for any questions or concerns.
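To illustrate the "back up the entire storage directory, regardless of file names" advice above, here is a minimal sketch (Python; the paths are placeholders, not taken from the post) that archives everything under storage, so both the existing .json files and the new .dval files are captured without listing extensions:

    # Minimal sketch: archive the whole Reactor storage tree so that both the
    # old .json files and the new .dval files are captured, with no filtering
    # by file extension. The paths below are examples; adjust to your install.
    import tarfile
    import time
    from pathlib import Path

    STORAGE = Path("/opt/reactor/storage")   # example: Reactor's storage directory
    DEST = Path("/backups") / f"reactor-storage-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"

    with tarfile.open(DEST, "w:gz") as tar:
        tar.add(str(STORAGE), arcname="storage")   # whole tree, every file, any suffix

    print(f"archived {sum(1 for p in STORAGE.rglob('*') if p.is_file())} files to {DEST}")

The same whole-directory approach applies whatever tool you use (rsync, tar, a NAS backup job, and so on).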
Multi-System Reactor
Oddness in Copy/Move of Reactions
Multi-System Reactor
[Solved] function isRuleEnabled() issue
Crille
Multi-System Reactor
[Reactor] Problem with Global Reactions and groups
therealdb
Multi-System Reactor
Possible feature request 2?
CatmanV2
Just another thought. Adding devices from my Home Assistant / Zigbee2MQTT integration works perfectly, but they always add as their IEEE address. Some of these devices have up to 10 entities associated, and the moment they are renamed to something sensible, each of those entities 'ceases to exist' in MSR. I like things tidy, and deleting each defunct entity needs 3 clicks. Any chance of a 'bulk delete' option? No biggie, as I've pretty much finished my Z-Wave migration and I don't expect to be adding more than 2 new Zigbee devices. Cheers C
Multi-System Reactor
Reactor (Multi-System/Multi-Hub) Announcements
toggledbits
Build 21228 has been released. Docker images available from DockerHub as usual, and bare-metal packages here.

Home Assistant up to version 2021.8.6 supported; the online version of the manual will now state the current supported versions.
Fix an error in OWMWeatherController that could cause it to stop updating.
Unify the approach to entity filtering on all hub interface classes (controllers); this works for device entities only; it may be extended to other entities later.
Improve error detail in messages for EzloController during the auth phase.
Add isRuleSet() and isRuleEnabled() functions to the expressions extensions.
Implement the set action for the lock and passage capabilities (makes them more easily scriptable in some cases).
Fix a place in the UI where 24-hour time was not being displayed.
Multi-System Reactor
Genuinely impressed with Zigbee and HA / Reactor
CatmanV2
Just for the record, in case anyone is following: I'm really rather impressed. I have installed one of these: https://www.amazon.co.uk/dp/B0B6P22YJC?ref=ppx_yo2ov_dt_b_fed_asin_title&th=1

That's connected (physically) to the VM running on my Synology, with a 2m USB extension. The same host also runs openLuup, Mosquitto, and HA Bridge. Yesterday I installed Zigbee2mqtt. That was a bit of a PITA, but mostly because of ports and permissions. Once it was up and running, and the correct boxes ticked, it was immediately visible in Home Assistant via the MQTT integration, and thence in Reactor.

I've only got two devices. I bought the cheapest sensor I could find, which is a door sensor. Dead easy to add to Zigbee2mqtt and, again, immediately visible in HA. https://www.amazon.co.uk/dp/B0FPQLWRW1?ref=ppx_yo2ov_dt_b_fed_asin_title

The dongle is on the top floor of the house, and I wanted the sensor on the back door (just about as far apart as it's possible to get, short of going into the garage). When I moved the sensor downstairs it dropped out pretty much instantly (which wasn't a huge surprise), so a quick bit of research found that smart plugs will act as routers, so... https://www.amazon.co.uk/dp/B0FDQDPGBB?ref=ppx_yo2ov_dt_b_fed_asin_title

Took me about 30 seconds to connect. Updated the name. Instantly visible in Reactor with the new name pushed over from Zigbee2mqtt. And lo, the door sensor now has a signal of 140 and works, as far as I can tell, perfectly and instantly (unlike my Z-Wave one). A few more of those will be purchased and used to replace the Tuya WiFi cloud devices and the (continually failing) Z-Wave plugs (yeah, they were TKB, so....)

Commended to the house. Thanks to everyone that got me on the right lines. C
Zigbee
Copying a global reaction
tunnus
With build 25328, if you copy a global reaction, the new reaction does not appear in the UI unless you do a refresh. I recall this used to work without needing a page refresh? Anyway, only a minor nuisance.
Multi-System Reactor
About
Posts: 39 | Topics: 2 | Shares: 0 | Groups: 0 | Followers: 0 | Following: 1

Posts


  • openLuup log files - LuaUPnP.log and LuaUPnP_startup.log
    Donato

    @akbooer

    here in the community app in the reply box

    openLuup

  • openLuup log files - LuaUPnP.log and LuaUPnP_startup.log
    Donato

    @akbooer

    In my actual user_data file, double quotes are preceded by \.

    When I paste the file as text and submit the reply here, the \ symbol is eliminated.

    Below I paste it as code:

    "StartupCode":"\n-- You can personalise the installation by changing these attributes,\n-- which are persistent and may be removed from the Startup after a reload.\nlocal attr = luup.attr_set\n\n-- Geographical location\nattr (\"City_description\", \"Rome\")\nattr (\"Country_description\", \"Italy\")\nattr (\"Region_description\", \"Lazio\")\nattr (\"latitude\", \"51.48\")\nattr (\"longitude\", \"0.0\")\n\n-- other parameters\nattr (\"TemperatureFormat\", \"C\")\nattr (\"PK_AccessPoint\", \"99000007\")\nattr (\"currency\", \"£\")\nattr (\"date_format\", \"dd/mm/yy\")\nattr (\"model\", \"Not a Vera\")\nattr (\"timeFormat\", \"24hr\")\n\n-- Any other startup processing may be inserted here...\nluup.log \"startup code completed\"\n\n",
    
    

    sorry for my error

    openLuup

  • openLuup log files - LuaUPnP.log and LuaUPnP_startup.log
    Donato

    @akbooer

    Attached is a copy of the startup Lua:

    startlua.png

    and the few lines around the error :

    "Mode":"1",
    "ModeSetting":"1:DC*;2:DC*;3:DC*;4:DC*",
    "PK_AccessPoint":"99000007",
    "Region_description":"Lazio",
    "ShutdownCode":"",
    "StartupCode":"\n-- You can personalise the installation by changing these attributes,\n-- which are persistent and may be removed from the Startup after a reload.\nlocal attr = luup.attr_set\n\n-- Geographical location\nattr ("City_description", "Rome")\nattr ("Country_description", "Italy")\nattr ("Region_description", "Lazio")\nattr ("latitude", "51.48")\nattr ("longitude", "0.0")\n\n-- other parameters\nattr ("TemperatureFormat", "C")\nattr ("PK_AccessPoint", "99000007")\nattr ("currency", "£")\nattr ("date_format", "dd/mm/yy")\nattr ("model", "Not a Vera")\nattr ("timeFormat", "24hr")\n\n-- Any other startup processing may be inserted here...\nluup.log "startup code completed"\n\n",
    "TemperatureFormat":"C",
    "ThousandsSeparator":",",
    "currency":"£",
    "date_format":"dd/mm/yy",

    openLuup

  • openLuup log files - LuaUPnP.log and LuaUPnP_startup.log
    Donato

    @a-lurker

    Line 176 mentioned above is inside the user_data file, and every parameter is separated by ",". Following are some lines around line 176:

    "Region_description":"Lazio",
    "ShutdownCode":"",
    "StartupCode":"\n-- You can personalise the installation by changing these attributes,\n-- which are persistent and may be removed from the Startup after a reload.\nlocal attr = luup.attr_set\n\n-- Geographical location\nattr ("City_description", "Rome")\nattr ("Country_description", "Italy")\nattr ("Region_description", "Lazio")\nattr ("latitude", "51.48")\nattr ("longitude", "0.0")\n\n-- other parameters\nattr ("TemperatureFormat", "C")\nattr ("PK_AccessPoint", "99000007")\nattr ("currency", "£")\nattr ("date_format", "dd/mm/yy")\nattr ("model", "Not a Vera")\nattr ("timeFormat", "24hr")\n\n-- Any other startup processing may be inserted here...\nluup.log "startup code completed"\n\n",
    "TemperatureFormat":"C",
    "ThousandsSeparator":",",

    openLuup

  • openLuup log files - LuaUPnP.log and LuaUPnP_startup.log
    Donato

    @akbooer

    This is line 176 of the user_data JSON file:

    "StartupCode":"\n-- You can personalise the installation by changing these attributes,\n-- which are persistent and may be removed from the Startup after a reload.\nlocal attr = luup.attr_set\n\n-- Geographical location\nattr ("City_description", "Rome")\nattr ("Country_description", "Italy")\nattr ("Region_description", "Lazio")\nattr ("latitude", "51.48")\nattr ("longitude", "0.0")\n\n-- other parameters\nattr ("TemperatureFormat", "C")\nattr ("PK_AccessPoint", "99000007")\nattr ("currency", "£")\nattr ("date_format", "dd/mm/yy")\nattr ("model", "Not a Vera")\nattr ("timeFormat", "24hr")\n\n-- Any other startup processing may be inserted here...\nluup.log "startup code completed"\n\n",

    I modify these parameters through the openLuup console app, and these are the values:

    -- You can personalise the installation by changing these attributes,
    -- which are persistent and may be removed from the Startup after a reload.
    local attr = luup.attr_set

    -- Geographical location
    attr ("City_description", "Rome")
    attr ("Country_description", "Italy")
    attr ("Region_description", "Lazio")
    attr ("latitude", "51.48")
    attr ("longitude", "0.0")

    -- other parameters
    attr ("TemperatureFormat", "C")
    attr ("PK_AccessPoint", "99000007")
    attr ("currency", "£")
    attr ("date_format", "dd/mm/yy")
    attr ("model", "Not a Vera")
    attr ("timeFormat", "24hr")

    -- Any other startup processing may be inserted here...
    luup.log "startup code completed"

    Is there any error ?

    tnks

    openLuup

  • openLuup log files - LuaUPnP.log and LuaUPnP_startup.log
    Donato

    Hi akbooer,

    Sometimes openLuup restores the user_data.json file to the default, and I need to restore the configured one. I notice these messages in LuaUPnP_startup.log:

    2024-07-18 07:46:19.585   :: openLuup STARTUP :: /etc/cmh-ludl
    2024-07-18 07:46:19.586   openLuup.init::        version 2022.11.28  @akbooer
    2024-07-18 07:46:19.595   openLuup.scheduler::   version 2021.03.19  @akbooer
    2024-07-18 07:46:19.723   openLuup.io::          version 2021.03.27  @akbooer
    2024-07-18 07:46:19.723   openLuup.mqtt::        version 2022.12.16  @akbooer
    2024-07-18 07:46:19.727   openLuup.wsapi::       version 2023.02.10  @akbooer
    2024-07-18 07:46:19.727   openLuup.servlet::     version 2021.04.30  @akbooer
    2024-07-18 07:46:19.727   openLuup.client::      version 2019.10.14  @akbooer
    2024-07-18 07:46:19.729   openLuup.server::      version 2022.08.14  @akbooer
    2024-07-18 07:46:19.737   openLuup.scenes::      version 2023.03.03  @akbooer
    2024-07-18 07:46:19.750   openLuup.chdev::       version 2022.11.05  @akbooer
    2024-07-18 07:46:19.750   openLuup.userdata::    version 2021.04.30  @akbooer
    2024-07-18 07:46:19.751   openLuup.requests::    version 2021.02.20  @akbooer
    2024-07-18 07:46:19.751   openLuup.gateway::     version 2021.05.08  @akbooer
    2024-07-18 07:46:19.757   openLuup.smtp::        version 2018.04.12  @akbooer
    2024-07-18 07:46:19.764   openLuup.historian::   version 2022.12.20  @akbooer
    2024-07-18 07:46:19.764   openLuup.luup::        version 2023.01.06  @akbooer
    2024-07-18 07:46:19.767   openLuup.pop3::        version 2018.04.23  @akbooer
    2024-07-18 07:46:19.768   openLuup.compression:: version 2016.06.30  @akbooer
    2024-07-18 07:46:19.768   openLuup.timers::      version 2021.05.23  @akbooer
    2024-07-18 07:46:19.769   openLuup.logs::        version 2018.03.25  @akbooer
    2024-07-18 07:46:19.769   openLuup.json::        version 2021.05.01  @akbooer
    2024-07-18 07:46:19.774   luup.create_device:: [1] D_ZWaveNetwork.xml /  /    ()
    2024-07-18 07:46:19.774   openLuup.chdev:: ERROR: unable to read XML file I_ZWave.xml
    2024-07-18 07:46:19.800   luup.create_device:: [2] D_openLuup.xml / I_openLuup.xml / D_openLuup.json   (openLuup)
    2024-07-18 07:46:19.800   openLuup.init:: loading configuration user_data.json
    2024-07-18 07:46:19.801   openLuup.userdata:: loading user_data json...
    2024-07-18 07:46:19.805   openLuup.userdata:: JSON decode error @[8173 of 8192, line: 176] unterminated string
       ' = luup.attr_set\n\n   <<<HERE>>>   -- Geographical loca'
    2024-07-18 07:46:19.805   openLuup.userdata:: ...user_data loading completed
    2024-07-18 07:46:19.805   openLuup.init:: running _openLuup_STARTUP_
    2024-07-18 07:46:19.805   luup_log:0: startup code completed
    2024-07-18 07:46:19.806   openLuup.init:: init phase completed
    2024-07-18 07:46:19.806   :: openLuup LOG ROTATION :: (runtime 0.0 days)
    

    Is this an error of mine in some configuration file?

    tnks

    openLuup

  • Openluup: Datayours
    Donato

    Hi akbooer,

    Excuse me for the late answer. Thanks for your precious support, as usual.
    I'll test your code ASAP.
    A question for my clarity: does the routine register only one value per minute in the whisper file (the average of the values in that minute)? Are the different values within a minute momentarily stored in a DY cache?

    tnks

    Plugins

  • Openluup: Datayours
    Donato

    Yes, but all the variables with the name "Variable" (local target = "Variable")

    Plugins

  • Openluup: Datayours
    Donato

    In my installation, sensors measure a value at least every 20-30s (in my case it's electric power), and I'd like to register the average value every minute (if possible).
    Can I change the retention schemas of the existing whisper files without losing the current data, or do I have to start from zero?

    Over hourly and daily periods, the aggregation is different for the whisper files created by the L_DataUser routine:

    [Power_Daily_DataWatcher]
    pattern = .kwdaily
    xFilesFactor = 0
    aggregationMethod = sum
    [Power_Hourly_DataWatcher]
    pattern = .kwhourly
    xFilesFactor = 0
    aggregationMethod = sum
    [Power_MaxHourly_DataWatcher]
    pattern = .kwmaxhourly
    xFilesFactor = 0
    aggregationMethod = max

    For the "Variable" whisper files seems correct the average calculation for 5m, 10m, 1h .. etc based on :

    retentions = 1m:1d,5m:90d,10m:180d,1h:2y,1d:10y

    and the value registered every minute.

    Plugins

  • Openluup: Datayours
    Donato

    Hi akbooer,

    Thanks, things are now going well for me too.

    You wrote the following L_DataUser.lua for me, which I'm using:

    local function run (metric, value, time) 
    
      local target = "Variable"
      local names = {"kwdaily", "kwhourly", "kwmaxhourly"}
    
      local metrics = {metric}
      for i, name in ipairs (names) do
        local x,n = metric: gsub (target, name)
        metrics[#metrics+1] = n>0 and x or nil
      end
    
      local i = 0
     
      return function ()
        i = i + 1
        return metrics[i], value, time
      end
    end
    
    return {run = run}
    

    For every value of a "Variablex" variable, it also writes to other whisper files with different aggregation and schemas.

    Plugins

  • Openluup: Datayours
    Donato

    Hi akbooer,
    I hope all is well.

    I have a question about DataYours and the aggregation and schemas parameters.
    I have these configurations:

    1. aggregation :
      [Power_Calcolata_Kwatt]
      pattern = .Variable
      xFilesFactor = 0
      aggregationMethod = average

    2. schemas:
      [Power_Calcolata_Kwatt]
      pattern = .Variable
      retentions = 1m:1d,5m:90d,10m:180d,1h:2y,1d:10y

    In the cache history of the openLuup console, for example, I see these values:

    2024-06-27 09:17:57 46.73
    2024-06-27 09:17:25 16.55

    and in the whisper file (which contains one point per minute) I see:

    1719472620, 46.73 (1719472620 is 09:17 in my time zone)

    It seems that the last value is taken, rather than the average of the two values registered at 09:17.

    Is this correct?

    tnks

    donato

    Plugins

  • Openluup: Datayours
    Donato

    @akbooer
    Hi akbooer, I've simulated a network outage of the remote DY on a test installation, and I've produced two sets of whisper files: one updated (remote DY) and one to be updated (central DY), if possible with a routine similar to whisper-fill.py from the Graphite tool.
    In the files I'll send you by email, you will find:

    1. the updated whisper file;
    2. the whisper file to be updated;
    3. the L_DataUser.lua used for both, which processes and creates different metric names;
    4. the Storage-aggregation.conf and Storage-schema.conf files.

    I remain at your disposal for any clarification.

    tnks

    donato

    Plugins

  • Openluup: Datayours
    Donato

    @akbooer Excuse me, how can I send you the schema?

    Plugins

  • Openluup: Datayours
    Donato

    @akbooer
    A stand-alone command-line utility is OK, possibly with the option to indicate a date interval. There may be more than one file to fill for a remote DY, all with the same openLuup/Whisper ID.
    At the moment I don't have an example of files to fill. I'll simulate a network outage to produce the files.

    In the meantime, can I send you the schema so you can verify whether the remote and centralized openLuup/DY configurations are correct (destinations, UDP receiver ports, line receiver port)?

    tnks

    donato

    Plugins

  • Openluup: Datayours
    Donato

    Hi akbooer,

    I have an installation with a centralized openLuup/DY on Debian 11, where several remote openLuup/DY instances running on RPis are archived and consolidated. I'm also using a user-defined "DataUser.lua" (defined with your support) to process metrics and create different metric names. I have a schema of this configuration, but I can't upload it to the forum.
    I'd like to handle network outages between the remote and centralized systems while the remote DY keeps running and archiving data locally.
    I see the whisper-fill.py Python routine (https://github.com/graphite-project/whisper/blob/master/bin/whisper-fill.py) from the Graphite tool. I know that the DY/whisper format is different from Graphite/whisper (CSV vs. binary packing), but based on your deep knowledge and experience, is it hard to adapt the fill routine to the DY/whisper format?

    tnks

    donato

    Plugins

  • Unexpected stop of openLuup
    Donato

    Thanks, akbooer, for your fix.

    About VPN: my server is on cloud hosting and the web app is accessed by authenticated users, so I suppose the only solution is to set up some firewall rules on the openLuup/DataYours server.

    openLuup

  • Unexpected stop of openLuup
    Donato

    Hi akbooer,

    In order to set firewall rules, can you give me some info on the openLuup log records? Following are a few normal log lines of DataYours reads/writes:

    2022-08-13 12:52:21.921 luup.variable_set:: 4.urn:akbooer-com:serviceId:DataYours1.AppMemoryUsed was: 6568 now: 6871 #hooks:0
    2022-08-13 12:52:26.570 openLuup.io.server:: HTTP:3480 connection from xx.xx.xx.xx tcp{client}: 0x556c474b0348
    2022-08-13 12:52:26.571 openLuup.server:: GET /data_request?id=lr_render&target=Vera-yyyyyyyy.024.urn:upnp-org:serviceId:VContainer1.Variable3&from=-2h&format=json HTTP/1.1 tcp{client}: 0x556c474b0348
    2022-08-13 12:52:26.571 luup_log:4: DataGraph: Whisper query: CPU = 0.405 mS for 121 points
    2022-08-13 12:52:26.572 openLuup.server:: request completed (3359 bytes, 1 chunks, 1 ms) tcp{client}: 0x556c474b0348
    2022-08-13 12:52:26.572 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x556c474b0348
    2022-08-13 12:52:26.574 openLuup.io.server:: HTTP:3480 connection from xx.xx.xx.xx tcp{client}: 0x556c47810548
    2022-08-13 12:52:26.574 openLuup.server:: GET /data_request?id=lr_render&target=Vera-yyyyyyyy.024.urn:upnp-org:serviceId:VContainer1.Variable3&from=2022-08-13T00:00&format=json HTTP/1.1 tcp{client}: 0x556c47810548
    2022-08-13 12:52:26.576 luup_log:4: DataGraph: Whisper query: CPU = 2.085 mS for 773 points
    2022-08-13 12:52:26.580 openLuup.server:: request completed (20228 bytes, 2 chunks, 6 ms) tcp{client}: 0x556c47810548
    2022-08-13 12:52:26.581 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x556c47810548
    2022-08-13 12:52:35.363 openLuup.io.server:: HTTP:3480 connection from xx.xx.xx.xx tcp{client}: 0x556c4756a4b8
    2022-08-13 12:52:35.364 openLuup.server:: GET /data_request?id=lr_render&target=Vera-yyyyyyyy.024.urn:upnp-org:serviceId:VContainer1.Variable3&from=-2h&format=json HTTP/1.1 tcp{client}: 0x556c4756a4b8
    2022-08-13 12:52:35.364 luup_log:4: DataGraph: Whisper query: CPU = 0.418 mS for 121 points
    2022-08-13 12:52:35.366 openLuup.server:: request completed (3359 bytes, 1 chunks, 1 ms) tcp{client}: 0x556c4756a4b8
    2022-08-13 12:52:35.366 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x556c4756a4b8
    2022-08-13 12:52:35.368 openLuup.io.server:: HTTP:3480 connection from xx.xx.xx.xx tcp{client}: 0x556c47294958
    2022-08-13 12:52:35.368 openLuup.server:: GET /data_request?id=lr_render&target=Vera-yyyyyyyy.024.urn:upnp-org:serviceId:VContainer1.kwdaily3&from=2016-07-01&format=json HTTP/1.1 tcp{client}: 0x556c47294958
    2022-08-13 12:52:35.375 luup_log:4: DataGraph: Whisper query: CPU = 6.521 mS for 2235 points
    2022-08-13 12:52:35.392 openLuup.server:: request completed (62445 bytes, 4 chunks, 23 ms) tcp{client}: 0x556c47294958
    2022-08-13 12:52:35.397 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x556c47294958
    2022-08-13 12:52:35.398 openLuup.io.server:: HTTP:3480 connection from xx.xx.xx.xx tcp{client}: 0x556c47158248
    2022-08-13 12:52:35.398 openLuup.server:: GET /data_request?id=lr_render&target=Vera-yyyyyyyy.024.urn:upnp-org:serviceId:VContainer1.Variable3&from=2022-08-13T00:00&format=json HTTP/1.1 tcp{client}: 0x556c47158248
    2022-08-13 12:52:35.401 luup_log:4: DataGraph: Whisper query: CPU = 2.409 mS for 773 points
    2022-08-13 12:52:35.406 openLuup.server:: request completed (20228 bytes, 2 chunks, 7 ms) tcp{client}: 0x556c47158248
    2022-08-13 12:52:35.406 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x556c47158248
    2022-08-13 12:53:26.581 openLuup.io.server:: HTTP:3480 connection from xx.xx.xx.xx tcp{client}: 0x556c4877f9d8
    2022-08-13 12:53:26.581 openLuup.server:: GET /data_request?id=lr_render&target=Vera-yyyyyyyy.024.urn:upnp-org:serviceId:VContainer1.Variable3&from=-2h&format=json HTTP/1.1 tcp{client}: 0x556c4877f9d8
    2022-08-13 12:53:26.582 luup_log:4: DataGraph: Whisper query: CPU = 0.735 mS for 121 points

    The read commands for the whisper files are all of the form "http://server-ip:3480.......".

    Is the following log record from an HTTP read?

    openLuup.io.server:: HTTP:3480 connection from xx.xx.xx.xx tcp{client}

    Can I see a UDP write log record?

    The write commands come from the remote DataYours instances through UDP to "server-ip".

    All the consolidated whisper files I read/write are on "server-ip".

    In this scenario, is it correct that all the regular, normal commands (read/write) must come from "server-ip"?

    tnks

    openLuup

  • Unexpected stop of openLuup
    Donato

    Hi akbooer,

    openLuup randomly hangs, but today I noticed something strange in the log (attached):

    2022-08-13 12:54:35.283 openLuup.server:: GET /data_request?id=lr_render&target=Vera-45108342.024.urn:upnp-org:serviceId:VContainer1.Variable3&from=2022-08-13T00:00&format=json HTTP/1.1 tcp{client}: 0x556c471741f8
    2022-08-13 12:54:35.286 luup_log:4: DataGraph: Whisper query: CPU = 2.068 mS for 775 points
    2022-08-13 12:54:35.289 openLuup.server:: request completed (20282 bytes, 2 chunks, 6 ms) tcp{client}: 0x556c471741f8
    2022-08-13 12:54:35.290 openLuup.io.server:: HTTP:3480 connection closed openLuup.server.receive closed tcp{client}: 0x556c471741f8
    2022-08-13 12:54:57.889 openLuup.io.server:: HTTP:3480 connection from 92.255.85.183 tcp{client}: 0x556c482cf138
    2022-08-13 12:54:57.889 openLuup.server:: /*: mstshash=Administr tcp{client}: 0x556c482cf138
    2022-08-13 12:54:57.889 openLuup.context_switch:: ERROR: [dev #0] ./openLuup/server.lua:238: attempt to concatenate local 'method' (a nil value)
    2022-08-13 12:54:57.889 luup.incoming_callback:: function: 0x556c4753ff20 ERROR: ./openLuup/server.lua:238: attempt to concatenate local 'method' (a nil value)

    At 12:54 openLuup stopped writing and reading the DataYours files.

    Is this a possible attack?

    openLuup

  • Help with Z-Way plugin
    Donato

    Hi akbooer,

    I've defined a Virtual Device (a virtual binary switch) in the Z-Way controller, but I can't see it in openLuup with the Z-Way plugin installed, while I can see a real switch.

    Is this correct? Do Z-Way virtual devices use a different API that is not implemented in openLuup?

    tnks

    donato

    Zway Bridge