Discussion Forum to share and further the development of home control and automation, independent of platforms.
a-lurker (@a-lurker)
Set reaction triggering wrong z-wave device
T
Multi-System Reactor
Can you run MSR on Home Assistant OS ?
cw-kid
Looking at using Home Assistant for the first time, either on a Home Assistant Green (their own hardware) or a cheap second-hand mini PC. It sounds like Home Assistant OS is Linux-based, using Docker for HA etc. Would I also be able to install things like MSR on their OS? On the same box? Thanks.
Multi-System Reactor
RPi Alternative: Orange Pi 4 LTS (3GB RAM/16GB eMMC)
toggledbits
The last of four boards I'm trying in this batch is the Orange Pi 4 LTS. I purchased a 3GB RAM + 16GB eMMC model from Amazon for $83, making it the most costly of the four boards tried, but still well under my US$100 limit. This board is powered by a Rockchip RK3399-T processor, ARM-compatible with dual Cortex-A72 cores and quad Cortex-A53 cores at 1.6 GHz (1.8 GHz for the 4GB model). Compared to the RPi 3B+ (four Cortex-A53) and the RPi 4B (four Cortex-A72), this board is a hybrid that I would expect to land in the middle, performance-wise, between the two RPi models.

It's available in 3GB and 4GB DDR4 RAM configurations, with and without 16GB eMMC storage. It has a MicroSDHC slot, gigabit Ethernet, WiFi and BT, two USB 2.0 type A ports, one USB 3.0 type C port, a Mini PCIe ribbon-cable connector (requires an add-on board for a standard connector), two each RPi-compatible camera and LCD ports, and HDMI type A. It can be powered (5VDC/3A) via USB-C or a DC type C (3.8mm OD/1.1mm ID) jack (center-positive), an odd and perhaps unwelcome departure from the more common type A (5.5mm/2.1mm). A serial port for console/debug can be connected using a (not included) USB-TTL adapter (3.3V) via pin headers, as on the Orange Pi Zero 2. The included dual-band antenna connects to the board via a U.FL connector, so it's easy to substitute another if you prefer. The manufacturer recommends use of a heat sink (included in the box); a metal cooling case is also offered by the manufacturer (a bundle with the metal case and a power supply is sold on Amazon for $90 as of this writing).

The Orange Pi 4 LTS is somewhat longer than the RPi 4B, and although the boards are the same width, the mounting hole placement is different in both length and (oddly) width. Between this and the differences in connector locations, neither board is a drop-in replacement for the other, and their respective cases are not interchangeable. The 26-pin header is a subset of the RPi 4B's 40-pin header, so some HATs for the RPi may work (although the mounting hole differences will make securing them "interesting"), and some HATs surely will not.

Models with eMMC storage have an OS installed and boot immediately with the SSH daemon running and ready for login. Mine was running Debian Bullseye, which would probably be fine for most users. It had clearly been on there a while, because it needed a lot of updates, but it's a current distro, so you're running out of the box with something that will last. A different OS can be installed by downloading an image (once again I chose Ubuntu Jammy) and writing it to a MicroSD card, then booting the system from the SD card. You can either leave the system in that state (running the OS from the SD card), or copy the OS from the SD card to the eMMC. The latter is done by a script; the process is best described in the downloadable PDF User Manual. This took about 10 minutes and went smoothly, and I was able to boot the system without the SD card after the process completed.

I have lingering questions about the value of the eMMC storage. It's definitely faster than MicroSD or USB-based storage (I got 311MB/s average on a 4GB write, compared to MicroSD performance around 15MB/s), but it would take a long-term test of this product to determine whether the on-board eMMC option has the stamina to take the write counts typical of Linux systems, and whether its wear-leveling and error correction are sufficient to assure a long, error-free life. Given the high premium apparently being paid for including eMMC on the board, it should be fast and durable, but only time and experience (perhaps painful) would tell on the latter. A careful configuration with other Flash-friendly filesystems could be used to reduce wear, but this is an advanced configuration/cookbook topic and beyond the scope of this writing. This question is also not unique to eMMC; MicroSD cards are also known to fail with high write cycles, so the use of a "high endurance" product is recommended for any and all systems using MicroSD as primary storage. The board has Mini PCIe capability, and that may be a storage alternative, but read on...

Also bear in mind that the eMMC storage is fixed-size forever; it cannot be expanded, and 16GB can run out pretty quickly these days. Users of MicroSD cards for primary storage can upgrade to bigger cards, but when users of eMMC primary storage outgrow it, the only choice is to add a MicroSD card or other "external" storage to the system, move part of the filesystem to it, and then manage both storage devices and deal with the limitations and risks of both.

As I mentioned with the Orange Pi Zero 2, if you are going to use this board as a home automation controller/gateway or similar role, it should (IMO) have a battery-backed real-time clock (RTC), and Orange Pi offers an add-on module that connects directly to the 26-pin header on the board. An available expansion board provides a standard Mini PCIe interface and SIM card slot (hmm...), but it connects to the main board via a short ribbon cable, and its mounting holes have no complement on the main board, so it seems like it would be a fragile dangly thing that's a nuisance to deal with.

I want to like this board more, and it's very capable, but I'm concerned about value. The limited options for eMMC (16GB or none), the question mark of the eMMC's longevity vs cost, the strange DC power connector choice, the lack of a 40-pin GPIO header on a full-size (plus) board, the inconsistent hole placement, and the fragile Mini PCIe arrangement are all "cons" that devalue this board in my view. The price point is clearly driven by the additional capabilities of the board (camera support, ports, six-core CPU, extra RAM, on-board eMMC storage), but unfortunately, a great many of these features may not be useful for home automation, and are therefore potentially a waste of money. In terms of overall value, the Libre "Le Potato" still seems a better choice to me, with the Orange Pi Zero 2 a (very) close second, but I'll admit I'm focused on a particular application and your needs may be better suited to what this board offers than mine.
Passmark Results: OrangePi 4 LTS

    Cortex-A72 (aarch64) | 6 cores @ 1200 MHz | 2.9 GiB RAM
    Number of Processes: 6 | Test Iterations: 1 | Test Duration: Medium
    --------------------------------------------------------------------------
    CPU Mark: 583
        Integer Math                  12037 Million Operations/s
        Floating Point Math            2542 Million Operations/s
        Prime Numbers                   4.5 Million Primes/s
        Sorting                        3141 Thousand Strings/s
        Encryption                      153 MB/s
        Compression                    4049 KB/s
        CPU Single Threaded             154 Million Operations/s
        Physics                        80.5 Frames/s
        Extended Instructions (NEON)    244 Million Matrices/s
    Memory Mark: 498
        Database Operations             551 Thousand Operations/s
        Memory Read Cached             2524 MB/s
        Memory Read Uncached           2602 MB/s
        Memory Write                   3182 MB/s
        Available RAM                  1947 Megabytes
        Memory Latency                  119 Nanoseconds
        Memory Threaded                6243 MB/s
    ---------------
    eMMC storage write: 311 MB/s average for 4GB; MicroSD (Samsung 32GB class 10) storage write: 15 MB/s.
SBC
RPi Alternative: Orange Pi Zero 2 (1GB)
toggledbits
Topic thumbnail image
SBC
RPi Alternative: Libre Computer AML-S905X-CC "Le Potato" (2GB RAM)
toggledbits
With Raspberry Pi boards continuing to be relatively scarce, I've been trying a few alternatives to see what may be usable and good. I had previously written about the Jetson Nano 2GB, which is great, but a little pricey, so I'm trying to find sub-US$100 boards that will run Reactor. I've got four that I'm trying now, but one in particular goes right to work in the most predictable way and seems worth a mention immediately: the Libre Computer Board AML-S905X-CC 2GB (known as "Le Potato").

The form factor is very similar to that of the Raspberry Pi 3 B+, and it has a comparable CPU (ARM Cortex-A53, quad 64-bit cores at 1.5+GHz -- slightly higher clock speed). It's US$35 on Amazon and LoverPi in the (recommended) 2GB configuration, and easy to get. Startup is like the RPi: download one of the available OS images (Ubuntu, Raspbian, Debian, ARMbian, etc.) from their site, write the image to a MicroSD card, insert it into the slot, power up, and off you go. I tried the Ubuntu 22.04 image first and it comes right up. No problem getting nodejs 18.12.1 installed and running (with Reactor).

No WiFi on board, but I don't see that as a minus for use as a controller/hub (which should be hard-wired, IMO). The 40-pin GPIO connector is compatible with typical RPi HATs (PoE, breakouts, etc.). There is an available eMMC (solid-state storage) module to use instead of MicroSD, which I would recommend for long-term use. It runs US$25 for 32GB (64GB and 128GB available). The module is scarcely larger than the chip it carries, and has the smallest board-to-board connector I've ever seen.

Next up: ESPRESSObin 2GB (spoiler: it's... technical...)
SBC
HA and AI
CatmanV2
I've been having hours of (actually quite fun) interaction with AI (ChatGPT), making up dashboards and sensors for HA. It's OK (well, it's better than I am!) but it makes soooo many mistakes. It gets there in the end though, if you've half a clue (which I do half the time). C
Home Assistant
How to upgrade from an old version of MSR?
cw-kid
Hello, I haven't updated my installation of MSR in a very long time. It's a bare-metal Linux install, currently on version 24366-3de60836. I see the latest version is now latest-26011-c621bbc7. I assume I cannot just jump from a very old version to the latest version? Or can I? Thanks
Multi-System Reactor
This trigger no longer working - complaining about the operator needing changing
cw-kid
Multi-System Reactor
Self test
CatmanV2
Having been messing around with some stuff, I worked out a way to self-trigger some tests that I wanted to do on the HA <> MSR integration. This got me wondering if there's an entity that changes state / is exposed when a configured controller goes offline? I can't see one, but thought it might be hidden or something? Cheers C
Multi-System Reactor
Access control - allowing anonymous user to dashboard
tunnus
Using build 25328 and having the following users.yaml configuration:

    users:
      # This section defines your valid users.
      admin: *******

    groups:
      # This section defines your user groups. Optionally, it defines application
      # and API access restrictions (ACLs) for the group. Users may belong to
      # more than one group. Again, no required or special groups here.
      admin_group:
        users:
          - admin
        applications: true    # special form allows access to ALL applications
      guests:
        users: "*"
        applications:
          - dashboard

    api_acls:
      # This ACL allows users in the "admin" group to access the API
      - url: "/api"
        group: admin_group
        allow: true
        log: true
      # This ACL allows anyone/thing to access the /api/v1/alive API endpoint
      - url: "/api/v1/alive"
        allow: true

    session:
      timeout: 7200    # (seconds)
      rolling: true    # activity extends timeout when true

    # If log_acls is true, the selected ACL for every API access is logged.
    log_acls: true

    # If debug_acls is true, even more information about ACL selection is logged.
    debug_acls: true

My goal is to allow an anonymous user to access the dashboard, but MSR is still asking for a password when trying to access it. Nothing in the logs related to dashboard access. Probably an error in the configuration, but help is needed to find it. I tried putting url: "/dashboard" under api_acls, but that was a long shot and didn't work.
Multi-System Reactor
VEC Virtual Switch Auto Off
S
I use Virtual Entity Controller virtual switches which I turn on via webhooks from other applications. Once a switch triggers and turns on, I can then activate associated rules. I would like each virtual switch to automatically turn off after a configurable time (e.g., 5 seconds, 10 seconds). Is there a better way to achieve this auto-off behavior instead of creating a separate rule for each switch that uses the 'Condition must be sustained for' option to turn it off? With a large number of these switches (and the associated turn-off rules), I'm checking to see if there is a simpler approach. If not, could this be a feature request to add an auto-off timer directly to the virtual switches? Thanks. Reactor (Multi-hub) latest-26011-c621bbc7, VirtualEntityController v25356, Synology Docker
Multi-System Reactor
Upcoming Storage Change -- Got Back-ups?
toggledbits
TL;DR: The format of data in the storage directory will soon change. Make sure you are backing up the contents of that directory in its entirety, and that you preserve your backups for an extended period, particularly the backup you take right before upgrading to the build containing this change (the date of that is still to be determined, but soon). The old data format will remain readable (so you'll be able to read your pre-change backups) for the foreseeable future.

In support of a number of other changes in the works, I have found it necessary to change the storage format for Reactor objects in storage at the physical level. Until now, plain, standard JSON has been used to store the data (everything under the storage directory). This has served well, but has a few limitations, including no real support for native JavaScript objects like Date, Map, Set, and others. It is also unable to store data that contains "loops" (objects that reference themselves in some way). I'm not sure exactly when, but in the not-too-distant future I will publish a build using the new data format. It will automatically convert existing JSON data to the new format. For the moment, it will save data in both the new format and the old JSON format, preferring the former when loading data from storage. I have been running my own home with this new format for several months, and have no issues with data loss or corruption.

A few other things to know:

  • If you are not already backing up your storage directory, you should be. At a minimum, back this directory up every time you make big changes to your Rules, Reactions, etc.
  • Your existing JSON-format backups will continue to be readable for the long term (years). The code that loads data from these files looks for the new file format first (which will have a .dval suffix), and if not found, will happily read (and convert) a same-basenamed .json file (i.e. it looks for ruleid.dval first, and if it doesn't find it, it tries to load ruleid.json). I'll publish detailed instructions for restoring from old backups when the build is posted (it's easy).
  • The new .dval files are not directly human-readable or editable as easily as the old .json files. A new utility will be provided in the tools directory to convert .dval data to .json format, which you can then read or edit if you find that necessary. However, that may not work for all future data, as my intent is to make more native JavaScript objects directly storable, and many of those objects cannot be stored in JSON.
  • You may need to modify your backup tools/scripts to pick up the new files: if you explicitly name .json files (rather than just specifying the entire storage directory) in your backup configuration, you will need to add .dval files to get a complete, accurate backup. I don't think this will be an issue for any of you; I imagine that you're all just backing up the entire contents of storage regardless of format/name, and that is the safest (and IMO most correct) way to go (if that's not what you're doing, consider changing your approach).
  • The current code stores the data in both the .dval form and the .json form to hedge against any real-world problems I don't encounter in my own use. Some future build will drop this redundancy (i.e. save only to .dval form). However, the read code for the .json form will remain in any case.
  • This applies only to persistent storage that Reactor creates and controls under the storage tree. All other JSON data files (e.g. device data for Controllers) are unaffected by this change and will remain in that form. YAML files are also unaffected by this change.

This thread is open for any questions or concerns.
Multi-System Reactor
Oddness in Copy/Move of Reactions
G
Multi-System Reactor
[Solved] function isRuleEnabled() issue
Crille
Multi-System Reactor
[Reactor] Problem with Global Reactions and groups
therealdb
Multi-System Reactor
Possible feature request 2?
CatmanV2
Just another thought. Adding devices from my Home Assistant / Zigbee2MQTT integration works perfectly, but they always add as their IEEE address. Some of these devices have up to 10 entities associated, and the moment they are renamed to something sensible, each of those entities 'ceases to exist' in MSR. I like things tidy, and deleting each defunct entity needs 3 clicks. Any chance of a 'bulk delete' option? No biggy, as I've pretty much finished my Z-wave migration and I don't expect to be adding more than 2 new Zigbee devices. Cheers C
Multi-System Reactor
Reactor (Multi-System/Multi-Hub) Announcements
toggledbits
Build 21228 has been released. Docker images are available from DockerHub as usual, and bare-metal packages here.

  • Home Assistant up to version 2021.8.6 supported; the online version of the manual will now state the current supported versions.
  • Fix an error in OWMWeatherController that could cause it to stop updating.
  • Unify the approach to entity filtering on all hub interface classes (controllers); this works for device entities only; it may be extended to other entities later.
  • Improve error detail in messages for EzloController during the auth phase.
  • Add isRuleSet() and isRuleEnabled() functions to expression extensions.
  • Implement the set action for lock and passage capabilities (makes them more easily scriptable in some cases).
  • Fix a place in the UI where 24-hour time was not being displayed.
Multi-System Reactor
Genuinely impressed with Zigbee and HA / Reactor
CatmanV2
Just for the record, in case anyone is following: I'm really rather impressed. I have installed one of these: https://www.amazon.co.uk/dp/B0B6P22YJC?ref=ppx_yo2ov_dt_b_fed_asin_title&th=1

That's connected (physically) to the VM running on my Synology, with a 2m USB extension. The same host also runs openLuup, Mosquitto, and HA Bridge. Yesterday I installed Zigbee2MQTT. That was a bit of a PITA, but mostly because of ports and permissions. Once up and running, and the correct boxes ticked, it was immediately visible in Home Assistant via the MQTT integration, and thence in Reactor.

I've only got two devices. I bought the cheapest sensor I could find, which is a door sensor. Dead easy to add to Zigbee2MQTT and, again, immediately visible in HA. https://www.amazon.co.uk/dp/B0FPQLWRW1?ref=ppx_yo2ov_dt_b_fed_asin_title

The dongle is on the top floor of the house, and I wanted the sensor on the back door (just about as far apart as it's possible to get short of going into the garage). When I moved the sensor downstairs it dropped out pretty much instantly (which wasn't a huge surprise), so a quick bit of research found that smart plugs will act as routers, so... https://www.amazon.co.uk/dp/B0FDQDPGBB?ref=ppx_yo2ov_dt_b_fed_asin_title

Took me about 30 seconds to connect. Updated the name. Instantly visible in Reactor with the new name pushed over from Zigbee2MQTT. And lo, the door sensor now has a signal of 140 and works, as far as I can tell, perfectly and instantly (unlike my Z-wave one). A few more of those will be purchased and used to replace the Tuya wifi cloud devices and the (continually failing) Z-wave plugs (yeah, they were TKB, so....). Commended to the house. Thanks to everyone that got me on the right lines. C
Zigbee
Copying a global reaction
tunnus
With build 25328, if you copy a global reaction, the new reaction does not appear in the UI unless you do a refresh. I recall this used to work without needing a page refresh? Anyway, only a minor nuisance.
Multi-System Reactor
[HowTo] Using HABridge with Reactor
therealdb
If you're like me and still running HABridge to control your devices locally via Alexa, you might need to tweak your endpoints to call Reactor via HTTP. Here's the best way to do it, IMO.

Insert the Reactor Canonical ID (e.g., zwavejs>71-1) into the MapID field, but make sure it's URL-encoded, like this: zwavejs%3E71-1. Then configure these endpoints as needed:

  • On: http://[ReactorIP]:8111/api/v1/entity/${device.mapId}/perform/power_switch.on
  • Off: http://[ReactorIP]:8111/api/v1/entity/${device.mapId}/perform/power_switch.off
  • Dim (lights): http://[ReactorIP]:8111/api/v1/entity/${device.mapId}/perform/dimming.set?level=${intensity.decimal_percent}
  • Dim (roller shutters): http://[ReactorIP]:8111/api/v1/entity/${device.mapId}/perform/position.set?value=${intensity.decimal_percent}
  • Color: http://[ReactorIP]:8111/api/v1/entity/${device.mapId}/perform/rgb_color.set_rgb?red=${color.r}&green=${color.g}&blue=${color.b}

Just replace [ReactorIP] with your actual IP address. By using these placeholders, you can standardize your endpoints across all devices, making maintenance easier. This setup works with any device mapped under MSR, regardless of the controller (ZWaveJS, Vera, HASS, OpenSprinkler, virtual, MQTT, DynamicEntities, etc.). If you need different calls, just go to the entities, get the action and parameters, and adjust accordingly. Enjoy super fast access to your devices via Alexa!

If you're migrating from Vera, the old endpoints are stored (URL-encoded) in a file called device.db, in JSON format, under your config. You could write a script to map the old endpoints to the new ones, if you prefer to do that automatically. YMMV.
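
For illustration only, here's a minimal Lua sketch of the URL-encoding step (the url_encode helper and the IP address are just examples, not part of HABridge or Reactor):

    -- Percent-encode a Reactor canonical ID for the HABridge MapID field.
    local function url_encode(s)
        -- escape everything except RFC 3986 unreserved characters
        return (s:gsub("[^%w_%-%.~]", function(c)
            return string.format("%%%02X", string.byte(c))
        end))
    end

    local canonical_id = "zwavejs>71-1"
    local map_id = url_encode(canonical_id)               -- "zwavejs%3E71-1"
    local on_url = "http://192.168.1.10:8111/api/v1/entity/" ..
                   map_id .. "/perform/power_switch.on"   -- example IP only
    print(map_id, on_url)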
How-To
About: 260 posts | 36 topics | 0 shares | 0 groups | 1 follower | 0 following

Posts


  • AltUI sans internet connection
    A a-lurker

    Just on AltUI:

    The browser would be downloading any number of resources first time round from various servers. But I would have thought the majority of servers would be using some sort of cache-control header combinations to command the browser's caching. You don't need to be downloading jquery every time you hit a web page and I think it's unlikely that would be happening.

    So I would have thought that the browser (for AltUI) could have cached most of what AltUI needed? What resource is the browser calling up that it can't download with the internet connection down? Maybe AltUI could have functioned but in some sort of reduced capability mode?

    But yes, the openLuup console, as I understand it, has been written to not rely overly on outside resources. Surprising akbooer could get it to work!!

    Software

  • AltUI sans internet connection
    A a-lurker

    I had this problem a long time ago:

    You need to set up the required files locally somewhere: a local directory, NAS, USB stick, web server, etc. You then have the responsibility of keeping them up to date. Some things may still not work, like Google charts.

    amg0 set up a variable in the AltUI plugin labelled "Local CDN ?" that can be used to point to the new local file source. So it's not hard to switch back and forth.

    In normal operation (ie using the internet) this variable is blank. You can read amg0's doco from here at GitHub

    The old forum discussion here.

    Software

  • openLuup email server
    A a-lurker

    "AK: Your best bet would surely be to register a callback handler to listen for messages on a specific address?"

    Yep - that seems the most obvious method - just wanted to make sure I hadn't missed some other turn of events. With your own callback handler, you can clearly sort out any email encodings, etc.

    On the doco - Google doesn't seem to index github.io pages. Seems to me you need to have some sort of URL redirect, looking like say https://smarthome.community/openluup, that would get indexed? i.e. via the web server setup or similar.

    openLuup

  • openLuup email server
    A a-lurker

    Hello AK

    Have been writing about the openLuup email server as I was tinkering with it the other day. One minor problem: it looks like the domain part of the email address eg ...@openLuup.local is case sensitive in openLuup.

    Looking around the net, it suggests that the local part is case sensitive but the domain part is not meant to be. To keep things more likely to work, it's suggested the email address should be treated as totally case insensitive regardless. Refer to RFC 2821 page 13 or search on the word "sensitive". Suffice to say I was using mail@openluup.local rather than mail@openLuup.local, so it didn't work for me.
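
    As a workaround on the sending side, the domain part can simply be normalised before use. A minimal Lua sketch (the normalize_address helper below is mine, not part of openLuup):

    -- Lower-case the domain part of an address (RFC 2821: the domain is
    -- case-insensitive, the local part may not be).
    local function normalize_address(addr)
        local localpart, domain = addr:match("^(.-)@(.+)$")
        if not localpart then return addr end   -- no "@": leave unchanged
        return localpart .. "@" .. domain:lower()
    end

    print(normalize_address("mail@openLuup.local"))   --> mail@openluup.local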

    Next challenge was that the file saved in /etc/cmh-ludl/mail has "Content-Transfer-Encoding: base64" so the body of the email was encoded:

    Received: from ((openLuup.smtp) [ip_address_1]
     by (openLuup.smtp v18.4.12) [ip_address_2];
     Tue, 26 Nov 2024 14:21:22 +1000
    From: "dali@switchboard" <dali@switchboard>
    To: "mail@openLuup.local" <mail@openLuup.local>
    Subject: Warning form R2E.
    MIME-Version: 1.0
    Content-Type: text/plain
    Content-Transfer-Encoding: base64
    
    QXV0byBXYXJuaW5nOiBDb2xkIHN0YXJ0IGV2ZW50
    

    The above base 64 text translates to "Auto Warning: Cold start event".
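
    For reference, the body can be decoded with LuaSocket's mime module (I'm assuming it's available, since openLuup already uses LuaSocket). A minimal sketch:

    local mime = require "mime"          -- part of LuaSocket
    local body = "QXV0byBXYXJuaW5nOiBDb2xkIHN0YXJ0IGV2ZW50"
    local decoded = mime.unb64(body)     -- base64 decode
    print(decoded)                       --> Auto Warning: Cold start event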

    Is it your preference to leave the saved files in the raw mode or would you consider translating the base64 text in the openLuup code base?

    On a side note, does the reception of an email by openLuup generate some sort of trigger that can be watched? In the case above, the email represents the restoration of power after a power outage. I would like to know about that by having the email trigger a "Telegram" notification on my mobile.

    I see that images@openLuup.local can be associated with I_openLuupCamera1.xml, which spawns a movement detector child. Anything similar for the other email addresses?

    openLuup

  • openLuup_install.lua - URL changed?
    A a-lurker

    Now fixed in the Development branch.

    openLuup

  • Vera PushOver notification with image
    A a-lurker

    The code in your link was messed up when they updated that forum some years ago. I've rehashed it to give it a chance of working, but I suspect you may still have trouble getting it to work. Running it in the Lua test window would be your starting point, after reading the Pushover API doco.

    Another alternative is to use Telegram with the Telegram plugin.

    -- Refer to pushover documentation:
    --   https://pushover.net/api
    
    local pushToken    = "YourPushOverTokenHere"
    local pushUser     = "YourPushOverUserCodeHere"
    local pushTitle    = "MessageTitle"
    local pushMessage  = "MessageContent"
    local snapshotFile = "/tmp/camera_snapshot.jpg"
    local pushPriority = "1"
    
    -- Sound could be: pushover, bike, bugle, cash register, classical, cosmic, falling,
    -- gamelan, intermission, magic, mechanical, pianobar, siren, spacealarm, tugboat,
    -- alien, climb, persistent, echo, updown, none
    local pushSound = "gamelan"
    
    -- Link to the BlueIris videostream of that camera
    local pushUrl = "http://xxx.xxx.xxx.xxx/mjpg/ShortCamName&user=XXX&pw=XXX"
    local pushUrlTitle = "Camera Name"
    
    -- This points to one of my BlueIris managed cameras
    local camera = "http://xxx.xxx.xxx.xxx/image/ShortCamName?q=50&s=80&user=XXX&pw=XXX"
    
    -- Get the snapshot from the camera
    local out = assert(io.open(snapshotFile, "wb"))
    local _,data = luup.inet.wget(camera)
    out:write(data)
    assert(out:close())
    
    --Send PushOver request
    local curlCommandTab = {}
    
    table.insert (curlCommandTab, 'curl -s')
    table.insert (curlCommandTab, '-F "token='      ..pushToken    ..'"')
    table.insert (curlCommandTab, '-F "user='       ..pushUser     ..'"')
    table.insert (curlCommandTab, '-F "title='      ..pushTitle    ..'"')
    table.insert (curlCommandTab, '-F "message='    ..pushMessage  ..'"')
    table.insert (curlCommandTab, '-F "attachment=@' ..snapshotFile ..'"')
    table.insert (curlCommandTab, '-F "sound='      ..pushSound    ..'"')
    table.insert (curlCommandTab, '-F "priority='   ..pushPriority ..'"')
    table.insert (curlCommandTab, '-F "url='        ..pushUrl      ..'"')
    table.insert (curlCommandTab, '-F "url_title='  ..pushUrlTitle ..'"')
    
    -- The .json suffix requests that the response be in JSON format
    table.insert (curlCommandTab, 'https://api.pushover.net/1/messages.json')
    
    local curlCommand = table.concat (curlCommandTab, ' ')
    print(curlCommand)
    
    local handle = io.popen(curlCommand)
    local result = handle:read("*a")
    handle:close()
    print (result)
    
    -- Delete temporary snapshot
    os.remove (snapshotFile)
    
    
    Vera

  • 20 amp smart physical switch (to control Infratech heater) - preferable to be outdoor rated, but any
    A a-lurker

    May be better to get a DIN rail high powered contactor and use a Shelly to flip that on & off.

    Hardware

  • openLuup_install.lua - URL changed?
    A a-lurker

    AK. Was doing an openLuup install and the installer errored with:

    openLuup_install   2019.02.15   @akbooer
    getting openLuup version tar file from GitHub branch master...
    un-zipping download files...
    getting dkjson.lua...
    lua5.1: openLuup_install.lua:45: GitHub download failed with code 500
    stack traceback:
            [C]: in function 'assert'
            openLuup_install.lua:45: in main chunk
            [C]: ?
    

    The installer code was executing this URL:

    http://dkolf.de/src/dkjson-lua.fsl/raw/dkjson.lua?name=16cbc26080996d9da827df42cb0844a25518eeb3
    

    Running it manually gives:

    dkolf.de
    
    The script could not be run error-free.
    Please check your error log file for the exact error message. You can find this in the KIS under "Product Management > *YOUR PRODUCT* > *CONFYGUAR* > Logfiles". Further information can be found in our FAQ.
    The script could not be executed correctly.
    Please refer to your error log for details about this error. You find it in your KIS under item "Product Admin > *YOUR PRODUCT* > *CONFIGURE* > Logfiles". Further information can also be found in our FAQ.
    

    I'm thinking the dkjson code URL has been changed. On dkolf.de there is a download link:

    http://dkolf.de/dkjson-lua/dkjson-2.8.lua
    

    and dkjson code also seems to be in GitHub (I presume this is the same code?):

    https://github.com/LuaDist/dkjson/blob/master/dkjson.lua
    

    I don't know what dkolf.de looked like previously, but I do see the dkjson code has been updated as of 2024-06-17. Hope this helps.

    Oh - and by the way the dkjson.lua file seems to have been downloaded OK by the installer - error or no error, so go figure.

    openLuup

  • openLuup charting and forward slashes in variable names
    A a-lurker

    Tried out the above start up code and it has done the job. Now have lots of Shelly based files that can now be plotted using Grafana. They all have the same retentions but so far that's not a problem.

    Great - thanks very much.

    openLuup

  • openLuup charting and forward slashes in variable names
    A a-lurker

    Easily answered: Whisper Database Format

    OK

    You should absolutely not create any extra files in the Historian folder.

    OK - assume it's all meant to be private to openLuup. Best not to mess with its own little world.

    IIRC, the Historian substitutes / in the file names it creates.

    However whisper.create (filename,archives,0) does not. The string

    "whisper/0.10006.shellypro3em.em1/0/act_power.wsp"

    is used as is.

    Ultimately I would like to have openLuup save the data for any variable I nominate, and have it saved no matter what the variable is called. I imagine forward slashes could be replaced with say dashes (underscores are already in use by Shelly).

    The openLuup instance where I have the em1/0/act_power.wsp variable does not have DataYours installed. You have said however that any files in a whisper directory will get updated. Is that right? I thought the whisper directory had to be specified in DataYours?

    So I'm a little confused on how "em1/0/act_power.wsp" or any other variable with slashes can be charted. I can see how having children set up for each device could do this, but ultimately you need to be able to plot any variable of one's choosing.

    For example the Shelly Pro 4pm also measures power (very useful) but the variable is "switch/0/apower" ie a completely different layout.

    Variables don't have to be picked off a list - they could just be manually set up by running a snippet of code such as "whisper.create" as seen above.

    openLuup

  • openLuup charting and forward slashes in variable names
    A a-lurker

    Currently I have some Whisper files used by DataYours that have been working well for ages and do what I want.

    One of the files is called Watts_L1.d.wsp and uses this retention from "storage_schemas_conf" in openLuup file virtualfilesystem.lua:

    [day]
    pattern = \.d$
    retentions = 1m:1d
    

    Inside the actual "Watts_L1.d.wsp" file is a header like so:

             1,      86400,          0,          1
             84,         60,       1440
    
    

    The 1, 86400 is one minute & one day (in minutes) as per the retention listed above. As a side issue I would like to know what the other header values mean ie what's the syntax here?

    New challenge: I now have three Shelly variables named:

    em1/0/act_power
    em1/1/act_power
    em1/2/act_power

    with a device ID of "10006" and a SID of "shellypro3em"

    And I would like to plot them using the Historian, just like I do with Watts_L1.d.wsp in DataYours. So I need a file in the history directory for the data. So I looked at doing this:

    local whisper = require "openLuup.whisper"
    
    -- Syntax:  history/0.deviceNumber.shortServiceId.variableName
    local filename = "history/0.10006.shellypro3em.em1/0/act_power.wsp"
    
    local archives = "1m:1d"
    
    whisper.create (filename,archives,0)
    

    Problem is that the variable names contain forward slashes, which are invalid filename characters. What to do?
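
    One possible workaround (just a sketch, along the lines of replacing the slashes with, say, dashes) would be to sanitise the variable name before building the filename:

    local whisper = require "openLuup.whisper"

    -- Replace the slashes in the Shelly variable name with dashes so the
    -- resulting Whisper filename is a single, valid path component.
    local variable      = "em1/0/act_power"
    local safe_variable = variable:gsub("/", "-")       -- "em1-0-act_power"

    local filename = "history/0.10006.shellypro3em." .. safe_variable .. ".wsp"
    local archives = "1m:1d"

    whisper.create (filename, archives, 0)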

    Also should the retentions now be (to suit the latest openLuup software)?:

    local archives = "1m:1d,10m:7d,1h:30d,3h:1y,1d:10y"
    

    Also "shellypro3em" is not a "shortServiceID" as per those listed in "servertables.lua". So can "shellypro3em" be used instead? ie can both short and long service IDs be used in the above call to whisper.create?

    openLuup

  • openLuup log files - LuaUPnP.log and LuaUPnP_startup.log
    A a-lurker

    Try getting the JSON and checking it here. It may also be a decoding problem with the currency symbol. Try changing the currency to one simple character. The a-circumflex symbol before the pound symbol looks odd: UTF-8 versus ISO-8859-1?
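
    If it helps, the JSON can also be checked locally with dkjson rather than an online checker; a small sketch (the sample string is just a placeholder):

    local json = require "dkjson"
    local raw  = '{ "currency": "£" }'       -- substitute the JSON being tested
    local obj, pos, err = json.decode(raw)
    if not obj then
        print("JSON problem near position " .. tostring(pos) .. ": " .. tostring(err))
    else
        print("JSON parses OK")
    end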

    openLuup

  • openLuup log files - LuaUPnP.log and LuaUPnP_startup.log
    A a-lurker

    The code above before it is reformatted has ", at the very end:

    Any other startup processing may be inserted here...\nluup.log "startup code completed"\n\n",

    openLuup

  • openweather plugin ?
    A a-lurker

    OpenWeatherMap's API changes: Some time back OpenWeatherMap decided to change their API billing practices. The old "One Call API" arrangements (1,000 API calls per day free) have now ended.

    You now have to provide your credit card details, so if you exceed the free 1,000 API calls per day you can be charged. The provided API keys don't work unless you provide these details. Returned error message:

    {
      "cod": 401,
      "message": "Invalid API key. Please see https://openweathermap.org/faq#error401 for more info."
    }
    

    However, if you just want current weather with no forecasts, you can use the "weather" call:

    https://api.openweathermap.org/data/2.5/weather?lat=%s&lon=%s&units=%s&lang=%s&appid=%s

    It's possible the Multi Station Weather plugin could be modified to fall back to this call if the first call fails, as suggested here.

    In my case, I've hacked the MultiStationWeather plugin code to use the "weather" URL at all times, as I don't use forecasts.
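
    For anyone wanting to try it outside the plugin, here's a minimal Lua sketch of the "weather" call (the API key and coordinates are placeholders, and it assumes luup.inet.wget can fetch https URLs on your install):

    local json = require "dkjson"

    -- Current conditions only ("weather" endpoint - no One Call subscription needed)
    local url = string.format(
        "https://api.openweathermap.org/data/2.5/weather?lat=%s&lon=%s&units=%s&lang=%s&appid=%s",
        "51.50", "-0.12", "metric", "en", "YourApiKeyHere")

    local status, data = luup.inet.wget(url)
    if status == 0 and data then
        local weather = json.decode(data)
        if weather and weather.main then
            print("Temperature: " .. tostring(weather.main.temp))
        end
    end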

    Plugins

  • Migrating from Vera Plus to Home Assistant (or other?)
    A a-lurker

    I've got a lot of Zwave devices inside walls, so I'm using a Vera Edge as a Zwave radio bridged to openLuup. Works perfectly once you move everything else to openLuup. Just leave the thermostats on Vera and bridge to openLuup. Makes the transition a lot easier.

    Hoping to see Zwavejs talking to openLuup via MQTT one day. It's certainly possible. In the interim it wouldn't be too hard to set up a couple of Zwave devices using the Virtual Devices plugin but thermostats may be a bit tricky.

    Also got ZigBee2MQTT talking to openLuup (mainly Hue devices) and some Shellies - all works nicely. You can also use Reactor with it.

    You can read the openLuup info here.

    Home Assistant vera home assistant open lua reactor

  • Chat seems broken (still)
    A a-lurker

    Have you seen this post?

    https://smarthome.community/topic/1525/forum-sysops/4?_=1713755285910

    Or is this in addition to the above?

    Comments & Feedback

  • openLuup: Shelly Bridge plugin
    A a-lurker

    Had a look at the latest development code. Got this:

    2024-04-20 18:51:28.519   openLuup.userdata:: [9111] LuaView (GitHub.master)
    2024-04-20 18:51:28.519   openLuup.userdata:: [9281] Virtual HTTP Devices (GitHub.master)
    2024-04-20 18:51:28.519   openLuup.userdata:: [4226] Sonos (GitHub.v2.0)
    2024-04-20 18:51:28.519   openLuup.userdata:: ...user_data loading completed
    2024-04-20 18:51:28.519   openLuup.init:: running _openLuup_STARTUP_
    
    2024-04-20 18:51:28.525   scheduler.context_switch::  ERROR: [dev #0] ./openLuup/luup.lua:1103: attempt to index field '?' (a nil value)
    2024-04-20 18:51:28.525   openLuup.init:: ERROR: ./openLuup/luup.lua:1103: attempt to index field '?' (a nil value)
    
    2024-04-20 18:51:28.525   openLuup.init:: init phase completed
    
    

    Had to get the house back up and running, so we could watch the TVeee.

    Commented out some code:

    local function log (msg, level) 
      local dno = scheduler.current_device()
      -- local mute = devices[dno].attributes.log_level    -- 2024.04.12  add "log_level" device attribute
      local mute = "something"
      if mute ~= "off" then
        logs.send (msg, level, dno) 
      end
    end
    

    Could look further but had to get the house back up and running, so we could watch the TVeee. So best I can do currently!

    Plugins

  • zigbee2mqtt and openLuup
    A a-lurker

    Yes, my rookie mistake. Mixed up the function's variables being returned versus the function itself being returned. Was on the right track, as "configure (dno)" has fixed the issue. Thanks.

    Plugins

  • openLuup: Shelly Bridge plugin
    A a-lurker

    Doco corrected.

    Plugins

  • openLuup: Shelly Bridge plugin
    A a-lurker

    I've got something about pretty printing here. No doubt in my writings, I have made a few errors here and there but that can happen. OK on the "Generic status update over MQTT" - I'll fix that in the paragraph immediately above here.

    Plugins