Has anyone successfully incorporated incoming call(er) information** within an automation workflow?
I'm talking about having Alexa speak out the name and/or phone number of a caller, or activating a routine/scene based on that info.
Always been a dream of mine. Wouldn't know how to achieve it! Do they make an Ethernet/WiFi-connected CallerID box?
**from a LANDLINE, not a MOBILE PHONE (although in the case of my Google Voice #, both lines ring, and if my smartphone is home and turned on, I could see invoking Tasker somehow)
REFERENCES: NCID - Network Caller ID
[Please file under proper category, as you see fit]
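For reference, NCID servers broadcast caller ID events over TCP (port 3333 by default) as lines beginning with "CID:", so a small Lua listener could hand the name/number to whatever automation you like. The sketch below is untested and makes assumptions: the host/port, the exact field layout of the CID line, and the notify() placeholder all need checking against your own ncidd setup.

-- minimal NCID listener sketch (assumptions: LuaSocket available, NCID on 192.168.1.50:3333,
-- CID lines of the form "CID: *DATE*...*NMBR*...*NAME*...*")
local socket = require("socket")

local NCID_HOST, NCID_PORT = "192.168.1.50", 3333

local function notify(name, number)
  -- replace with whatever your system does: a Vera/openLuup action, a TTS call, etc.
  print(("Incoming call from %s (%s)"):format(name or "unknown", number or "unknown"))
end

local c = assert(socket.connect(NCID_HOST, NCID_PORT))
c:settimeout(nil)                  -- block waiting for call events
while true do
  local line, err = c:receive("*l")
  if not line then break end       -- connection closed or error
  if line:match("^CID:") then
    local number = line:match("%*NMBR%*(.-)%*")
    local name   = line:match("%*NAME%*(.-)%*")
    notify(name, number)
  end
end
c:close()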
I’ve recently started to build out my own ‘Homer’ dashboard (https://github.com/bastienwirtz/homer) and I must admit I really like what it’s given me.
Thinking about future use cases, I can see potential for this being an alternative (albeit simple) UI for Vera/openLuup.
Is anyone else using Homer?
Some of you may know that I took a shot at building an alternate geofencing solution for Vera. The core of it was system-agnostic, using the OwnTracks application and AWS lambdas to track devices and keep a central data store, then disseminate that to the Vera via a websocket-based plugin. It worked with other apps as well, including Tasker and GPSLogger, but of the dozen people that were testing it, most used OwnTracks.
A lot was learned in the process, not the least of which is that the success of any such solution is highly dependent on the phone and its settings. Phone manufacturers love to set things up for the longest battery life, of course, but that's usually very anti-geofencing behavior. In the case of at least one brand, it was unusable and the settings could not be modified. It was also cost-prohibitive to maintain on Amazon, as AWS grabs a dime here and a dollar there and before you know it, it added $100/month to my AWS bill, which my wife deducted from my Scotch budget. Unacceptable.
But it's quite reasonable to use OwnTracks to a local endpoint, and I could pretty easily replicate the functionality as a local application, or maybe even as an additional endpoint built into MSR's API (still separate port and process, but in the package).
So the question really is... would you do it, or would you be too concerned about the security risks associated (e.g., dynamic DNS and NAT mapping in the firewall necessary for the phone to contact the service when not on LAN)?
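To make the local-endpoint idea concrete: OwnTracks reports locations as small JSON payloads (type "location", with lat/lon and a tracker ID), so the endpoint mostly has to decode the body and decide home/away. The sketch below is only illustrative: it assumes a dkjson-style JSON library, made-up home coordinates and radius, and leaves the actual HTTP server and the presence update to whatever it gets plugged into.

-- decide home/away from an OwnTracks "location" payload (assumptions noted above)
local json = require("dkjson")

local HOME_LAT, HOME_LON = 40.0000, -75.0000   -- your coordinates (placeholder)
local HOME_RADIUS_M = 150                      -- geofence radius in meters

-- great-circle distance in meters (haversine)
local function distance_m(lat1, lon1, lat2, lon2)
  local R, rad = 6371000, math.pi / 180
  local dlat, dlon = (lat2 - lat1) * rad, (lon2 - lon1) * rad
  local a = math.sin(dlat / 2)^2 +
            math.cos(lat1 * rad) * math.cos(lat2 * rad) * math.sin(dlon / 2)^2
  return 2 * R * math.asin(math.sqrt(a))
end

local function handle_owntracks(body)
  local msg = json.decode(body)
  if not msg or msg._type ~= "location" then return end
  local d = distance_m(HOME_LAT, HOME_LON, msg.lat, msg.lon)
  local present = d <= HOME_RADIUS_M
  -- here you would update a virtual presence device, MSR entity, etc.
  print(("device %s is %s (%.0f m from home)"):format(msg.tid or "?", present and "home" or "away", d))
end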
After having optimized my video processing integration of 8 cameras into my openLuup-based automation, I have been wondering what I could possibly improve on my setup. Instead of pestering @akbooer with petty localized console optimizations, I am looking at these Apple HomePod minis as a potential improvement over my Alexa-based voice command system. openLuup is presently already bridged to both platforms using ha-bridge and homekitbridge.
I am seeing two issues with Alexa:
1. Speed. The cloud processing of the voice commands takes ~1 s (I don't use anything cloud-to-cloud) and I would really prefer it to be local. These devices can also go a bit crazy when they lose DNS connections and sometimes do random things.
2. Privacy, because all the recordings go to the Amazon cloud all the time.
I have been exploring open source solutions for this but I would lose the benefit of optionally being able to go to the cloud for search information.
The downside of the HomePods is that Siri is not nearly as good in terms of helpfulness as Alexa at this point. They also don't have a version with a screen, which I found to be very useful at a couple of locations. On the other hand, the sound quality of the HomePod minis seems to be better than the Echos...
What do you guys think?
This thread from longtime Vera user @dJOS inspired me (Feb 4): Hubitat + HomeKit + HomeBridge + HomePod Mini = WOW
Howdy all, I got a HomePod Mini very recently for my study (the audio quality is great and hand-off is magical) and to replace my iPad as the main HomeKit controller for my house (we use HomeKit mainly for Presence detection which is IMO best in class). We also have a dozen Amazon Echo's of...
Go with me here... I recently picked up an unresponsive Vera Secure from eBay for next to nothing, thinking I could have a go at trying to restore it, if only as a plaything.
Situation - The power LED comes on, but the internet and service LEDs just flash - no connection is made (and even using a direct cable and Wireshark, I can't see an ARP request being made to see if it has a default IP address) - I've also tried various reset button combinations - no luck.
Perhaps this post is a long shot, but seeing so many familiar ex-MiCasaVerde/Vera forum names, I thought I'd at least ask, just in case anyone has any guidance/advice etc. I could use.
Whistleblower: Ubiquiti Breach “Catastrophic”
On Jan. 11, Ubiquiti Inc. [NYSE:UI] — a major vendor of cloud-enabled Internet of Things (IoT) devices such as routers, network video recorders and security cameras — disclosed that a breach involving a third-party cloud provider had exposed customer account credentials. Now a source who...
Reposting an article I got from a reddit thread.
I won't stop advocating against cloud dependence and unnecessary cloud reliance especially in the field of smarthomes.
I divested from a fairly large Ubiquiti UniFi system almost a year ago, and I never connected my controller to their cloud service, but as some may have sensed, that was a direction they were heading. What got me off of it, actually, was the large disparity in product performance and a decreasing trust in the company, with a large amount of dubious advertising and inconsistent products, for some of which the hardware obviously could not meet the marketing promises.
So I have been migrating a bunch of automations to MSR from Hubitat, and now I feel like I want to have an SSD instead of an SD card on my Pi 3. What is an easy way to convert from SD card to SSD on a Pi running MSR? What files do I need to copy to keep my rules?
I want to set something up natively if possible on Vera, so that a virtual motion sensor device is tripped / not tripped by following the Tripped variable state of a real device.
This Lua code works if I run it manually, but I don't know how to have it running all the time, watching for that "Tripped" variable to change:
local istripped = luup.variable_get("urn:micasaverde-com:serviceId:SecuritySensor1", "Tripped", 100) -- real device
luup.variable_set("urn:micasaverde-com:serviceId:SecuritySensor1", "Tripped", istripped, 101) -- virtual motion sensor device
return true
How do you add something like this into Vera's Startup Lua?
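A minimal Startup Lua sketch of the usual approach, using luup.variable_watch so Luup calls a function whenever the real sensor's Tripped variable changes (the device numbers 100 and 101 are taken from the code above; adjust to suit):

-- copy the real sensor's Tripped state to the virtual sensor whenever it changes
local SES_SID = "urn:micasaverde-com:serviceId:SecuritySensor1"
local REAL_SENSOR, VIRTUAL_SENSOR = 100, 101

-- must be a global function so Luup can find it by name
function mirror_tripped(dev, service, variable, old_value, new_value)
  luup.variable_set(SES_SID, "Tripped", new_value, VIRTUAL_SENSOR)
end

luup.variable_watch("mirror_tripped", SES_SID, "Tripped", REAL_SENSOR)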
I have a Reactor set up to extend an outside awning when the OAT reaches 26 °C and it's after 11:00. That part is straightforward enough; however, I wanted to take it one step further: rather than the awning being either fully out or fully retracted, I'd like the ability to increment it either in or out by a certain distance (typically 500 mm) every 120 seconds or so.
I had tried to do this with PLEG based on Sun position but failed miserably and never attempted to do this with distance and time.
I have looked at adding it to the current Reactor, but not being fully fluent in Reactor I've hit a brick wall and don't see if it can be done - can anyone point me in the right direction, or is this something that Reactor can't do?
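One way this kind of stepping is sometimes done is to have the Reactor activity kick off a bit of Lua that nudges the awning and then re-schedules itself with luup.call_delay. The sketch below is only illustrative: it assumes the awning shows up as a Vera dimmer (the Dimming1 service, 0-100 %), and the device number, step size (a stand-in for the 500 mm) and interval are placeholders.

-- step the awning out a little at a time; call step_awning_out() once from the activity's Lua
local DIM_SID  = "urn:upnp-org:serviceId:Dimming1"
local AWNING   = 123        -- awning device number (placeholder)
local STEP_PCT = 10         -- roughly 500 mm, if full travel is ~5 m
local INTERVAL = 120        -- seconds between steps

function step_awning_out()
  local level = tonumber(luup.variable_get(DIM_SID, "LoadLevelStatus", AWNING)) or 0
  if level >= 100 then return end   -- fully extended, stop stepping
  local target = math.min(100, level + STEP_PCT)
  luup.call_action(DIM_SID, "SetLoadLevelTarget", { newLoadlevelTarget = target }, AWNING)
  luup.call_delay("step_awning_out", INTERVAL)   -- schedule the next step
end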
I have a 64GB SD card in my Raspberry Pi.
I read some guides online, and the ones that looked easy to follow use a program called Win32 Disk Imager to make an image of the entire SD card.
I shut down the Pi, and when I insert the SD card into my Windows 10 PC, I can see drive letter D:\ appear; its label is "boot".
Looking at this disk in Windows Disk Management it looks a bit strange, however, as it has a massive unallocated area.
Anyway, I proceeded to back up the disk D:\ using the Win32 Disk Imager program.
It took a long time, but it created an .img file which is 27.4 GB in size.
I then put the SD card back into the Pi and powered it on.
Here is the "df" output from the Pi:
It looks like I am not using the full 64 GB of the SD card.
When I first set up the Pi I was using a much smaller SD card; then I bought a decent SanDisk 64 GB card and followed another user guide I found online to clone my original smaller card to the new card, expand it, etc.
So have I actually just backed up the Pi's SD card and all its contents successfully?
My turn now... Shit, the Ezlo CEO is really on slippery ice.
I saw in the thread that someone has contacted his wife, being concerned about his behavior.
When you read the mail, you can clearly tell that it's someone close to him at the company...
Sad that he is ruining his company and putting a lot of people's jobs at risk.
Some of my favorite projects I have been using on my setup for over 5 years: bwssytems/ha-bridge
Bridges Vera or openLuup to the Amazon Echo through a local Hue emulator. A much faster solution than the Vera native bloatware, as the only cloud service used by this alternative is the voice recognition. It was initiated for Vera control but later expanded to many other platforms like the Logitech Harmony and other controllers. No cloud-to-cloud, so no need for the MiOS servers:
Home automation bridge that emulates a Philips Hue light system and can control other systems such as a Vera, Harmony Hub, Nest, MiLight bulbs or any other system that has an http/https/tcp/udp int...
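For anyone new to ha-bridge: each emulated "bulb" is just configured with the HTTP request the target system understands, so for a Vera/openLuup switch the on/off URLs are typically plain data_request action calls like the ones below (the IP and device number 42 are placeholders):

http://VERA-IP:3480/data_request?id=action&DeviceNum=42&serviceId=urn:upnp-org:serviceId:SwitchPower1&action=SetTarget&newTargetValue=1
http://VERA-IP:3480/data_request?id=action&DeviceNum=42&serviceId=urn:upnp-org:serviceId:SwitchPower1&action=SetTarget&newTargetValue=0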
The equivalent for Apple Siri: Hackworth/VeraHomeKitBridge
An alternative Sonos TTS implementation to all the plugin versions, applicable to any platform since it is an API you can call. It also runs locally, on a macOS desktop in my case, but it can run on anything running Node.js. For the TTS to be fully local, though, you will need a Mac, or bear with the robotic voice of MaryTTS. I have been hesitant to move my TTS to the Amazon Echos (a project by @therealdb), as the Echos lack synchronization and are cloud dependent, which causes a 3-5 s delay I do not have with this local solution, so this remains my choice: jishi/node-sonos-http-api
An HTTP API bridge for Sonos easing automation. Hostable on any node.js capable device, like a raspberry pi or similar. - jishi/node-sonos-http-api
Use my fork if you need to install on Catalina because some fixes are needed and my pull request from months ago has not yet been merged.
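As an illustration of how it can be driven from Luup code: the API exposes simple HTTP endpoints such as /{zone}/say/{text}/{volume}, so a small Lua helper is enough. The host/port 192.168.1.60:5005 and the "Kitchen" zone name below are assumptions; adjust to your install.

-- call node-sonos-http-api's /say endpoint from Vera/openLuup Lua
local API = "http://192.168.1.60:5005"

local function url_encode(s)
  return (s:gsub("[^%w%-_%.~]", function(c) return string.format("%%%02X", string.byte(c)) end))
end

local function sonos_say(zone, text, volume)
  local url = string.format("%s/%s/say/%s/%d", API, url_encode(zone), url_encode(text), volume or 40)
  local status = luup.inet.wget(url, 10)
  return status == 0   -- luup.inet.wget returns 0 on success
end

sonos_say("Kitchen", "Washing machine has finished", 40)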
This morning I signed up for Starlink internet. This is the Musk project that has been aboard many of the SpaceX launches for some time. I watched this morning's launch (60 additional Starlink satellites), and they announced that the Beta was open in some areas. I went to the site, and it was open in my area.
The site (starlink.com) quoted $99/mo with $499 initial equipment and setup. I realize that's pretty pricey compared to Internet access in many areas, but I currently pay a fair amount more (monthly) to my current cable- (TV) based ISP, quality and speed are inconsistent, and every year I have to fight their customer retention staff to keep my pricing from nearly doubling as my "special offer" expires.
Verizon, the dominant cellular carrier where I live, has been slow to roll out home-based Internet on 5G. This is another alternative I've been excited to investigate, but still waiting.
I'm pretty excited. Yes, I'm a Musk fan-boy. Don't judge me. 🙂
A while back the Z-Wave module on my VeraPlus stopped working, and as it couldn't be fixed, I bought an external USB Z-Wave dongle (a UZB1), which I plugged in and set up on ‘port’ /dev/ttyACM0. That made it operational again, and I've been using it ever since.
Recently I picked up a new (well, second-hand) VeraPlus unit and tried to do a full system/Z-Wave network restore from the original one, but it didn't work.
The restore looked like it worked fine, in that it brought everything over, but nothing worked. When I checked the Z-Wave ‘port’ setting on the new unit, it too had been set to use the external USB dongle (/dev/ttyACM0), which it obviously doesn't have; but when I changed it back to the onboard Z-Wave module (/dev/ttyS0), all the Z-Wave devices disappeared.
To fix this, it sounds like I need to do a controller shift to get the details off the UZB1 key and onto the Z-Wave chip on the VeraPlus board?
However, not having done one before, I'm not sure what the process is - any ideas?
I’ve got a few Lua scripts that I’d like to run periodically on my Pi, and I’ve tried to get them to run via Cron, but I can’t seem to get them to work..
Does anyone have anything similar set up that works?
crontab -e
I've tried it a couple of ways based on the forum posts I've read, but neither has worked:
0 0 * * * home/pi/shared/TS_cabin_graph_email_24h.lua
0 0 * * * /usr/bin/lua /home/pi/shared/TS_cabin_graph_email_24h.lua
Installed SiteSensor yesterday and defined an HTTP request towards yr.no for the weather.
By default it only sends requests when the device is armed. When I armed the sensor, both of my sirens started bleeping. Not funny at night with two kids asleep.
Simultaneously with arming the device, the first HTTP request was probably sent, and my request did not have the User-Agent HTTP header, so the result was a 403; this probably caused the SiteSensor device to trip.
I unchecked the box "only send requests when armed" and disarmed the SiteSensor. This works OK, but one slight slip with thick fingers and I will have armed it again.
I see the SiteSensor device is configured as a security sensor (as it implements the SecuritySensor service?). This means the sirens react when the SiteSensor trips. I have a leakage sensor as well, and I know that one triggers the sirens too. I don't have any door sensors right now, but I hope those would not trigger the sirens as well.
Is there a way to tell the sirens to ignore SiteSensor and/or other specific devices? Or should SiteSensor not be configured as a security device?
Is there an easy solution?
-Vera Secure box with built-in siren (the siren is a separate device)
-POPP smoke detector with siren
-Up-to-date Vera firmware; not on openLuup yet, but I don't think that would make a difference here
Yet another attempt to create the one standard to rule them all. "The ring of power." What Is “Project Connected Home Over IP” for Smart Homes?
Project Connected Home over IP is a new industry group announced by Apple, Google, Amazon, and the ZigBee Alliance. The group will create a new unifying standard for smart home devices, and that’s a big deal. Here’s why.
I am very skeptical about this. I don't feel it is needed and I think it will only add to the confusion, but I am probably not a typical consumer. Thoughts? It seems to want to do the same thing that a lot of us have already achieved through open source, but commercialize it. It's not so different from Apple's HomeKit.
Status Board - What’s your HA Information Dashboard?
parkerc:
I’ve always liked the idea of having a screen located somewhere in the house that would allow me to see the status of pretty much everything. (Hardware-wise I’m just thinking of a basic Raspberry Pi, fixed to a VESA mount, screwed to the back of an old monitor.)
I’ve tried a number of tools/apps over the years, one of which was PanicBoard (where the above image comes from) - which seemed to have some potential, but the owners stopped developing/investing in that a while back.
What are people using?
Is there something, perhaps a single tool/app, that this community would collectively support/promote - one that, no matter what HA system you used, you could submit information to and have it displayed?
*** Just to be clear, I’m referring to status/information boards, not a touch-based control board where you can turn things on/off etc. ***
parkerc:
I should add - I’ve personally been using Node-RED’s Dashboard and found it to be pretty good.
rafale77:
I too am using Grafana... not as fancy as AK's, though.
...the nice thing about Grafana is that it can pull data directly from openLuup's Data Historian, which uses an industry-standard API (Graphite).
AK - I have to say I've been a bit lazy about keeping up with openLuup's graphing ability (and reading the manual). I see I can graph virtually anything listed here: Console --> Historian --> Cache. There is also DataYours, but currently I'm doing this (can't even remember how this works):
I'm unsure what's ancient technology or what each one entails, e.g. AFAIK Grafana needs a Grafana server to be set up, etc. I presume that can be done on a RasPi.
What URL shows what you have shown above? (Maybe we need a new thread for openLuup graphing techniques?)
I'm planning on using ImperiHome as long as I can... this way I can have some controls as well, in addition to Grafana graphs.
AFAIK Grafana needs a Grafana server to be set up, etc. Presume that can be done on a RasPi.
I run Grafana on a Windows machine now. It works, but it's a bit fiddly as it is at the beta stage.
Does anyone here use an alternative to ImperiHome? The ImperiHome bridge doesn't transfer all sensors for some reason... motion sensors, light sensors and UV don't come over.
I want a status panel, but I also want to be able to set light schemes, open the garage door, etc. from these panels. (Old tablets and phones in 3D-printed frames.)
I've been looking at Home Remote, but it seems to need to connect to the Vera servers for credentials, and I can't find another way to connect to openLuup.
@perh Take a look at HomeWave if you are on iOS. It works with Vera, both UI5 and UI7, and it also works with openLuup. You can have multiple controllers mapped seamlessly at the same time.
For Veras it works over cellular access; for openLuup you need a VPN to access the system when off-site.
Black Cat:
There is other visualisation software that will do what you want, but it comes with a high price tag...
Possibly the best professional system, IMHO, is Eisbaur Scada (Google is your friend). There are other professional systems available; it all comes down to price or user familiarity.
Only one iOS device in the house, and it's not mine!
I want this to use local communication only; I just use AltUI (via Dataplicity) when I'm not home. This is mainly for some UI pads I plan to have around the house, all Android.
therealdb:
@perh I ended up with my own solution.
I searched a lot, but I couldn't really find anything ready-made. It's obviously not generic, but I have thermostats, A/C, sensors and much more. It's running in Fully Kiosk on Fire tablets, which I control via MQTT/API, so I can display cams after motion events and much more.
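For context, Fully Kiosk exposes a remote-admin REST interface on the tablet (port 2323 by default), so "show a camera after a motion event" can be a couple of HTTP calls from the controller. The sketch below is a rough guess only: the tablet IP, password, camera URL, and even the exact command names (screenOn / loadURL) are assumptions from memory and should be verified against the Fully Kiosk remote admin documentation.

-- rough sketch: wake a Fully Kiosk tablet and point it at a camera snapshot
local TABLET = "http://192.168.1.80:2323"   -- tablet IP (placeholder)
local PASS   = "mysecret"                   -- remote admin password (placeholder)
local CAM    = "http://192.168.1.90/snapshot.jpg"  -- camera URL (placeholder)

local function fully(cmd, extra)
  -- command names are assumptions; check the Fully Kiosk docs
  local url = string.format("%s/?cmd=%s&password=%s%s", TABLET, cmd, PASS, extra or "")
  return luup.inet.wget(url, 5)
end

-- call this from a scene/Reactor activity when motion is detected
function show_camera_on_tablet()
  fully("screenOn")
  fully("loadURL", "&url=" .. CAM)   -- URL should be percent-encoded in real use
end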
toggledbits:
I've also rolled my own. It will be included for optional use with the new multi-system Reactor.
Today I tried to use Fully Kiosk on an old Samsung tablet, but AltUI wouldn't fully load. It would load fine in the regular Chrome app, though.
Black Cat:
I've also rolled my own. It will be included for optional use with the new multi-system Reactor.
Missed this.......tell us more.......
toggledbits:
@black-cat It's a skinnable tile interface that maps properties/capabilities to display controls (widgets). Each widget supports multiple canned layouts, and you can do custom layouts (either as per-device exceptions or globally available). Widgets are tiled and moveable/sizeable. They are mapped to device properties, and you can use expressions to fetch values, map values, etc. For example, the scene widget can be "active" based on any device state or expression, not just the "active" flag on the scene itself--that is, a widget is not limited to sourcing data from one device/thing. So, for example, the thermostat widget can draw current temperature from an in-room multi-sensor, and on-off heating control from a plug-in switch, etc. Basically, instead of having to make a virtual device to collect data into a single object that is then displayed by a canned-appearance widget following rules particular to that device type, the widget just brings all the data together from whatever sources and displays it; actions work the same. If you want your thermostat in the bedroom to display the outdoor temperature in Moscow, you can do it. Easily. If you want a "binary sensor" widget to show tripped/alarm state when the pool pump is running after sunset, no problem. And you can do fun but sensible/expected things like when you activate a scene via a widget, the widget changes to the counter-scene (e.g. when you tap "Kitchen On" the lights come on and the widget then changes to "Kitchen Off"). Colors, fonts, sizes, etc. are all configurable/replaceable (CSS, HTML).
I've used and evolved this dashboard for years in my own home. It actually came into being first in 2017 when I made my first move away from Vera toward HomeAssistant (didn't happen, but that's another story). It was for family use, so right from the start, my idea was that nobody needs to know or care which controller is managing a device.

Its hardware abstraction layer served as the launching point for a multi-system Reactor; I'll call it "MSR" here for brevity's sake. This MSR will work in a similar way: knowing the attributes and capabilities of a device, you can create rules using those, and rules that incorporate this data from multiple sources. That is, if your bedroom temperature is controlled by a space heater on a Tasmota-based relay board using control logic driven by input from a ZWay+openLuup-connected multi-sensor's temperature measurement, no problem.

You configure any number of "controllers"; each instance announces what devices it has in inventory, and what attributes and capabilities they have. A controller can be an interface to Vera/openLuup, or Hass, or Hubitat, or just an HTTP-based element that fetches weather from OpenWeatherMap, or an interface to your EVL3/4-connected alarm panel, etc. It is an interface that simply says "these are the objects I have and this is what they know and do". So any device could be supported by a plugin in your Vera/openLuup/Hass/HE/other HA controller, or it could come from a dedicated controller crafted just for that device. For example, it currently supports Sonos through the Vera/openLuup plugin, but I (or someone) could write a dedicated Sonos controller that talks directly to the Sonos zones on the network and bypasses the plugin, maybe even uses their new API rather than UPnP. Controllers have a strictly defined behavior/contract, with the intention that others can develop controllers as well. This aspect is making MSR grow legs, a bit... it's really turning into a home automation controller all on its own. I foresee an ecosystem of available add-on controllers for every manner of device in future.

This gives you the flexibility to determine what best supports the products you use; for example, if support for a particular Fibaro or Zooz device in Vera/eZLO is lacking/buggy (no--say it's not so!), you can instead include it on your openLuup+ZWay, Hass, or HE controller where the support is better, and MSR can find it there. But when creating your rules and activities, you don't have to know or care where that device lives.

To the maximum extent possible, I am also keeping an architecture and implementation in which system objects (devices, groups, scenes, etc.) are entirely overridable and creatable through configuration. If you have a device type on Vera/openLuup that MSR doesn't natively support, you should be able to just go to configuration and say "this device type, or even this specific device, has these capabilities and these attributes". If a capability doesn't exist, you can create it locally immediately. Up and running in five minutes or less (modulo the first-time learning curve, of course). And for all of this, you should be able to contribute the configuration to the community if you wish (or find configurations/capabilities others have done and apply them). And of course, whatever you create/train is available in both the Reactor part and the Dashboard part.
I've focused mostly on the rules and reactions part of MSR for several weeks, and it lives and breathes now. Although algorithmically it shares ideas with Vera/openLuup Reactor, it is an entirely new code base (and not Lua). Huge strides have been made quickly, but of course, there are a lot of "TBD" comments in the code, and I'm sure no shortage of crashes in boundary conditions from things like unfinished input validation and so on. It needs combing out, some deep code reviews (which I prefer to do on paper), and backporting of the evolved hardware abstraction to the Dashboard. There's plenty to do. But, with the freedom of creating the environment rather than working in someone else's, it's much faster and easier, and I'm really pleased with the acceleration towards something usable over the last month. I'm about ready to cut over my own home's automations to it. There is nothing like the pressure of pleasing my "driving coach" to make sure I get things working well, and quickly.
@toggledbits Wow, sounds really exciting!
Today I use HomeWave on my iPhone and iPad; this sounds like the next step up towards a real dashboard.