my cameras are in the garage… will turn off the base for now…
Probably explains why all the day's events showed up at once and nothing triggered the cameras
Very nice pun, whether or not it was intended. Wakey Wakey, as in the Wakey alarm clock…
It is confusing to me that they didn’t just deauthenticate everyone on the server end, and that would have forced a new login. It seems like that would be straightforward, and the worst impact would be having to log in again.
Forcing devices to reboot might be harder, but forcing reauthentication alone should prevent anyone from reaching the wrong cameras again, so I don't see why a reboot would be necessary.
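For what it's worth, server-side "log everyone out" is usually just bookkeeping. Here's a minimal sketch (hypothetical names, not Eufy's actual system) of the common pattern: each account keeps a "not valid before" marker, and bumping it invalidates every token issued earlier, forcing all of that account's clients to log in again without touching the devices.

```python
class SessionStore:
    """Toy model of server-side session revocation.

    Each account keeps a 'not valid before' marker; bumping it with one
    write invalidates every token issued earlier, so all of the account's
    clients must log in again. A monotonic counter stands in for a clock.
    """

    def __init__(self):
        self._clock = 0
        self._not_before = {}  # account_id -> revocation marker
        self._tokens = {}      # token -> (account_id, issued_at)

    def _tick(self):
        self._clock += 1
        return self._clock

    def issue(self, token, account_id):
        # Record when this session token was issued.
        self._tokens[token] = (account_id, self._tick())

    def revoke_all(self, account_id):
        # One write per account invalidates every outstanding session.
        self._not_before[account_id] = self._tick()

    def is_valid(self, token):
        entry = self._tokens.get(token)
        if entry is None:
            return False
        account_id, issued_at = entry
        return issued_at > self._not_before.get(account_id, 0)
```

So a global forced logout is one `revoke_all` per account; no device reboot required, which is why the "unplug your HomeBase" advice seems odd.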
I can confirm this. I'm seeing someone else's cameras that are in a different time zone/country. This is a major breach! I only recently bought these and will be returning the cameras for a full refund. Utterly disappointed by this incident. This company should stick to making power banks, chargers, and other electrical goods, NOT security-related products.
Still happening? Why?
So every user who doesn’t log out is able to see another random customer’s camera? So the “security” is controlled by random customers and not Eufy?
The solution being up to customer action is not solving, it’s delegating and absolving.
Updated statement added in bold
During a software update performed on our server in the United States on May 17th at 4:50 AM EDT, a bug occurred affecting a limited number of users in the United States, Canada, Mexico, Cuba, New Zealand, Australia, and Argentina. Users in Europe and other regions remain unaffected. Our engineering team identified the issue at 5:30 AM EDT and immediately rolled back the server version and deployed an emergency update. The incident was fixed at 6:30 AM EDT. We have confirmed that a total of 712 users were affected in this case.
Although the issue has been resolved, we recommend that users in the affected countries (US, Canada, Mexico, Argentina, New Zealand, Australia, and Cuba) take the following steps:
Please unplug and then reconnect the eufy Security HomeBase.
Log out of the eufy Security app and log in again.
All of our user video data is stored locally on the users’ devices. As a service provider, eufy provides account management, device management, and remote P2P access for users through AWS servers. All stored data and account information is encrypted.
In order to avoid this happening in the future, we are taking the following steps:
We are upgrading our network architecture and strengthening our two-way authentication mechanism between the servers, devices, and the eufy Security app.
We are upgrading our servers to improve their processing capacity in order to eliminate potential risks.
We are also in the process of obtaining the TÜV and BSI Privacy Information Management System (PIMS) certifications, which will further improve our product security.
We understand that we need to build trust again with our customers. Thank you for trusting us with your security and our team is available 24/7 at firstname.lastname@example.org and Mon-Fri 9AM-5PM (PT) through our online chat on eufylife.com.
That doesn't mean they are doing private per-account encryption, so Eufy can still see everyone's cameras.
Or did I miss something?
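No, that's the right question. "All data is encrypted" can mean encryption at rest with server-held keys (the provider can still decrypt) or end-to-end encryption with keys derived on the client (the provider can't). A toy sketch of the client-side variant, with a deliberately fake XOR "cipher" just to show who holds the key:

```python
import hashlib


def derive_key(password: bytes, salt: bytes) -> bytes:
    # Key derivation happens on the client; the server never sees the
    # password or the derived key, so it cannot decrypt anything.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)


def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream for illustration only -- NOT real cryptography.
    # A real client would use an authenticated cipher such as AES-GCM.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, stream))
```

If the key is derived from something only the user knows, the server stores and relays only ciphertext. If instead the server holds (or can fetch) the keys, "encrypted" does nothing to stop a server-side bug from showing your feed to the wrong account, which is what happened here.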
I work with financials in my job, and we regularly have to patch when Oracle throws out a new set of CVEs. Some clients pay for this as part of their contract; others figure it is not that big of a risk and settle for semi-annual or annual base patching, plus Windows/Linux and whatnot. Want to take a guess at who has been breached and ended up paying more because the work was out of scope? Usually we'll give you a break on the first 6 hours or so, but 2 days of complete restores of every single server, the downtime incurred, and users sitting around twiddling their thumbs will make anyone think twice about cheaping out.

There are no perfect admins, like you said. We are all human; we all make mistakes. I can go over every single line of my plan, have it pass a sandbox, have it go through my senior and then CAB, and still have something go wrong. Hell, I accidentally rebooted the wrong environment the other day. This, on the other hand, was a cost-based decision, as you said: cheaper to run it server-based and just hope for the best. Whether this was an accidental exploit or a malicious one, we will never know. I don't envy whoever has the privilege of writing that RCA, if they are still employed.
Well, many of us here have professional lives; we should separate that from this community.
Impact assessments are where you work out the business impact of an outage, disruption, or breach. Typically it is non-linear: a small issue addressed quickly usually has a tiny impact, but one lasting long enough that your brand becomes widely mentioned is significantly more expensive, i.e. the "I heard Eufy was something to avoid" revenue impact. There are businesses not around now because they didn't weigh up their risks.
Mistakes do happen but you can make them rare. At a cost. They had a long brand-damaging outage a year ago, and then this.
Testing and availability do co-exist, as long as you hold a set of spare servers. Those spare servers can attach either to the current active image (High Availability) or to the next software version (testing). They can attach to current data (a sync mirror) or to a copy (an async mirror). So you can test, and you can survive an outage. Just at a cost.
A BIA (Business Impact Assessment) will calculate the appropriate amount of money to spend to balance the business impact. If it turns out the poor overworked admin needs more help (people, tools, hardware), then that is where the budget comes from.
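The arithmetic behind that budget is simple expected-loss math. A sketch with purely hypothetical numbers (the ALE = ARO × SLE formula is standard; the dollar figures are made up for illustration):

```python
def expected_annual_loss(incidents_per_year: float, cost_per_incident: float) -> float:
    # Classic risk-assessment formula: ALE = ARO * SLE
    # (annualized loss expectancy = annual rate of occurrence
    #  * single loss expectancy).
    return incidents_per_year * cost_per_incident


# Hypothetical numbers, purely for illustration.
ale_without = expected_annual_loss(0.5, 2_000_000)  # one major incident every two years
ale_with = expected_annual_loss(0.05, 2_000_000)    # mitigations cut the odds tenfold
justified_spend = ale_without - ale_with            # upper bound on yearly mitigation budget
```

If the expected loss avoided exceeds what the extra people, tools, and spare hardware cost, the BIA says spend the money; if not, the business is knowingly accepting the risk.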
They didn't state what they did wrong, but the context implies an update was pushed to a live production system without full testing, which implies the testing process was lacking. You mention CAB, but I don't think their team is that large; it's probably one guy.
This is why my company likes to highlight the 99% uptime average across the board. Nothing is ever up all the time, be it for patching and whatnot, but nothing rattles me more than seeing a comm go out for an HA event. That said, that reputation is what allowed them to be bought out, keep the entire infrastructure (minus people who were redundant), and try to maintain a high standard. Word of mouth is both a blessing and a curse. All the loss-leader giveaways spread brand awareness; anyone who knows me knows about them. It works for just that reason. The negative will always trump anything positive, though.
I agree that the negative trumps anything positive only if there is a string of negative events. I try to tell my kids that once you build a bad reputation, it takes many more positive events before people forget about the negative.