Sidenote: this virus actually scares me, and it sounds like it scares most people who work in IT too. By all accounts it's the shittiest thing anybody has ever seen.
If the "1002.exe" sample on Reddit is accurate, the installer is unsigned, so forbidding unsigned binaries should be sufficient. The number of legitimate unsigned Windows binaries is small enough that you should be able to whitelist them by hand.
That being said a very restrictive Software Restriction Policy as linked below would mitigate CryptoLocker as it exists today. It has worked well for me so far.
It actually made my skin crawl reading about it. Never had that reaction to such a story before. Interesting...
Edit: It's the BTC aspect that's worrisome. Ransomware is nothing new (the AIDS Information Trojan did it in 1989), but the (potentially) safe method of payment via cryptocurrency seems to be a new factor that will attract much more innovation in this type of attack.
I just thought the exact opposite. When I read "Ransomware comes of age with ... anonymous payments," I just thought, "Somebody is in for a surprise once they find out how anonymous Bitcoin really is."
Anyway, what really makes me nervous is Microsoft's insistence on executing any data that their programs touch.
This type of virus is nothing new [0]. The only new thing in this case is that it demands Bitcoins instead of an SMS to a premium number or something similar.
Unfortunately lots of stuff runs under there including, but not limited to:
GitHub for Windows and dozens of apps it installs in there
F.lux
Anything installed with ClickOnce
Chrome
GMVault
Xamarin's Android Support
Markdownpad
SkyDrive
Join.me
Assuming that everything in there is a virus is too much, I think.
I would think that .NET portable apps are likely also per-user executables. Not to mention that there is usually at least one scripting environment even on Windows (cscript/jscript/vbscript/PowerShell, for example), and Java, Python, Ruby and/or Node may be installed as well.
Doesn't Google Chrome run under %AppData% in a default (non-MSI) install? (This is how it's able to silently update itself, even when run as a non-administrator.)
> Doesn't Google Chrome run under %AppData% in a default (non-MSI) install?
Yes, and from a security point of view it should be treated as hostile accordingly.
There is no need to actively circumvent Windows security like this. Firefox, among many other examples, is quite capable of automatically updating itself using a proper Windows service mechanism.
It's long past time that Google were called out on this one. Not only is it a potential security risk, it also interferes with backups of %AppData%, which is generally an area of Windows PCs that you do want to save regularly in case of disasters.
Installing into %AppData% is, iirc, Microsoft's intended approach with ClickOnce installers (which Chrome uses). The difference is that ClickOnce installers have a far more restrictive permissions model than old MSIs.
ClickOnce-installed applications are limited to "Internet Zone" permissions. This can make them immensely frustrating to develop with, actually, since many of MS's own development frameworks fail miserably in Internet Zone even when they have no reason to do so (mostly they generate temporary files in places they aren't allowed).
I'm not sure how Google Chrome gets permissions to save files into your documents and whatnot from there - I don't recall Chrome requesting a permissions escalation during install or anything.
Fascinating. Thanks for sharing this information. I had no idea this was actually a sanctioned installation option, but clearly it is if you know what to look for[1]. That's actually rather disturbing, from a security point of view...
It surprises me that there is no central update service that every program can use, and that every program instead has to run its own always-on update poller.
It probably is, but that doesn't make it any better as an idea. There is a good reason why every decent operating system's security model in the past few years has segmented this kind of functionality so only people with elevated privileges can do it.
EDIT: If I want to run/update something (Chrome) in userland, why should an OS security model stop me? My guess is that Microsoft has successfully confused a common business requirement with a security one.
No, it's a required security feature that goes back decades in some operating systems. You need to be able to trust the code that runs on your system, and to do that you want to ensure only admin can install things.
Of course, Windows has now partially solved that with UAC. Unfortunately, you can never really know whether you can trust the software. However, this does stop malware from secretly running without your permission, since it would require a UAC prompt to run. Then we get into uneducated users.
Or you could just not trust the code to begin with. The user should be able to run any program they want; the OS just shouldn't trust the user's programs (and shouldn't autorun programs that the user didn't request).
Yes, but UAC has the same weakness as Linux permissions - it only protects the OS and programs, not the user data. Programs can screw with userland data all they like without the user's permission.
The point is that UAC will (hopefully) prevent installing untrusted code in the first place, thereby preventing those types of attacks. Unfortunately, you have to either trust that the user knows which programs are good, or go down the dark road that leads to things like an app store.
An interactive shell (like bash/python/irb) is untrusted code (i.e. the user can type whatever the hell they like), but I don't and shouldn't need root to run it.
Wait, but 'install' means 'download'? So if Chrome were a single .py file, which I downloaded and ran with Python, that's fine. But because it's a .exe, I need root?
This. I love Chrome, but their target market is using Windows, and asking them to click "Yes" to upgrade Chrome (or leaving this question up to the administrator) is not a barrier worth circumventing.
You don't even need to do that. You should need administrator access to install software initially, but that installation process can set up a system service that handles any subsequent updates automatically. This then runs independent of any current user on the system, and therefore does not depend on their personal privileges, nor does it need to prompt anyone for permissions for every update.
Clearly there is a risk involved with any process that can automatically download code you will subsequently execute. However, with proper access control, at least a compromised application running in user space can't do things like modifying its own executable so the malware has a place to live or, more generally, anything else that the user couldn't do without elevating their privilege level.
This certainly doesn't get us to an ideal security model. As I noted elsewhere in this discussion, a user on most systems today can probably still do things like e-mailing all the sensitive work documents they can access to a hostile party with just their normal privileges. However, it does at least prevent one common kind of attack.
And that's just as good an idea as executable data segments in a binary format (ie. not a very good idea). It's taken MS literally YEARS to get to half-decent default filesystem permissions in Windows 7 and this kind of thing just undermines it totally.
What do you suggest instead? People who work at BigCorps and have shitty outdated IE installs are motivated to install alternative web browsers, even when they don't have administrative rights (and they almost never do.) Google is motivated to enable them to do so.
The real problem, I think, is that Microsoft thinks requiring admin rights to write to "Program Files" is the be-all and end-all of solving the "application-environment integrity problem." That works for enterprise-wide deploys of sysadmin-supported software, but falls down for user-specific installations. On OSX, "application-environment integrity" can be enforced easily enough, since the OS delineates applications by a line called "the app bundle." OSX can (though I'm not sure it does) just disallow apps from writing into other apps' bundles without a "do you really mean it" prompt. But in Windows, The Directory Is The Application Bundle[1], and so Windows doesn't know that this directory is special and should be protected from having other apps in other directories tinkering with it.
My understanding is they changed that and by default it wants admin rights. Then if that fails, it asks if you want to continue without. (At least, this was my experience the last time I had to install Chrome on a machine without admin rights.)
It's helpful to add /A (shows .exe files even if they have hidden/system attributes set) and maybe /B (bare format, just the path/filenames without all the header/footer information).
I tried implementing this solution and it has a lot of difficult side effects. Shortcuts on the taskbar could not run (with the exception of Chrome, oddly enough). If you select "Run" in IE it fails because it saves to temp, and some installers failed as well, again because of the use of temp.
Unless the end user is very savvy or has onsite IT, it seems the better solution is rotating backups.
Alternating days to external hard drives that are then disconnected is the best mitigation.
And having already had one client affected by this, it does scare me. Interestingly enough, he paid and had his files decrypted in about 48 hours.
I was hit by this, or a variant, at my place of business. Hundreds of thousands of files on our shared drive were overwritten, about 2 TB worth of files. Office documents, PDFs, and Adobe documents like PSD and INDD were encrypted. JPEGs were altered but still viewable. All files increased in size by a few hundred bytes.
Pull-only backups were the savior here, although because we didn't notice until the next day, the pulled backups on that system were also overwritten with encrypted/corrupt files. Luckily we had VSS versioning on the pull-only backup location. There was a close call in that the 2 TB or so of "new" data ended up pushing VSS over quota and we almost lost our good versions of the files that way. If not for the VSS versions, we would've had to resort to cold backups which would've been a bit older. As it stood, no file recovered was more than a few hours old.
Auditing on the file share indicates which workstation was infected. Pertaining to that: it surprises me that in 2013, a default install of Windows will not log any useful information about shared folders by default. You must enable object auditing in Group Policy and specifically declare which users or groups are subject to said auditing on a share-by-share basis. In a world without logrotate, I suppose a sensible default is to just let a bunch of shit happen without recording it.
What gets me wound up most of all is the amount of engineering involved for an average home user to protect themselves. I thought a Mac with Time Machine was enough, but a similar virus would easily corrupt those backups if they were available to it over a mapped drive.
It is the goddamn 21st century, and users are still losing work by overwriting documents by accident, or opening a document as an e-mail attachment and not being able to find the actual file they edited. Should people really need an IT guy with ten years of experience to be protected from simple mistakes? Google has made progress on that front with the Chromebook, I suppose.
Something like CrashPlan provides good protection against this sort of thing for home users. It includes versioned, off-site backups -- either on their servers for around $6 a month, or on a "friend's computer" for free. Either way, the backups are saved via CrashPlan, not with direct drive access, so it should be safe against this kind of thing.
> opening a document as an e-mail attachment and not being able to find the actual file they edited
I'm so sick of this. The "open/save" dialog is in sore need of being revamped. There's really no such thing as "open" anyway -- it's really "save to some obscure profile temp directory and then open". Try explaining "you can't open a file that's not first saved to disk" to a user, though.
But sometimes you want to just "open" a file. The fact that your computer may choose to save it is an implementation detail. In fact, most systems don't actually 'save' it in many senses of the word. Instead they write it to the file system in a way that indicates that it may be removed at any time without notifying the user. In fact, on Linux (I can't speak to any other OS), it is common for these temporary files never to actually be saved to the disk. Instead they are loaded into a RAM based file-system (tmpfs), usually found at "/dev/shm"
In a way, that's worse! You'll have someone "open" a file, maybe make some edits to it, save it -- and it won't indicate a problem with that because it's considered as a file on disk somewhere -- and then when they go to send it, they can't find it and it may have been overwritten / deleted.
If the file is opened read-only, then programs should fall back to "save as". Unfortunately, the only standard way to signal read-only is with file permissions. This will work, but many programs would likely report a permissions error prior to the save-as. Also, it will cause problems for programs that transparently modify the file on disk while viewing.
I suppose you could hard code the read only parameter into the command that is used to execute the external program. Or have the external programs check if they are in a tmp-directory.
Presumably it's doing something like encrypting just the file headers or a part of the file, that way it can "lock" more files in a shorter time. JPEGs seem to be quite robust in being partly recoverable even when parts of a deleted file have been over-written - sorry I don't know the details.
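If that's the mechanism, one rough way to spot damaged files is to check whether each file's header still matches its format's well-known magic bytes. A minimal sketch in Python; the signature table and scan logic are illustrative, not taken from any analysis of the malware:

```python
import os

# Standard file signatures for a few common formats.
MAGIC = {
    ".jpg":  b"\xff\xd8\xff",   # JPEG SOI marker
    ".jpeg": b"\xff\xd8\xff",
    ".png":  b"\x89PNG",        # PNG signature
    ".pdf":  b"%PDF",           # PDF header
}

def looks_corrupted(path):
    """True if the file's header no longer matches what its extension promises."""
    magic = MAGIC.get(os.path.splitext(path)[1].lower())
    if magic is None:
        return False  # no known signature for this extension
    with open(path, "rb") as f:
        return not f.read(len(magic)).startswith(magic)
```

Walking a tree with this and flagging mismatches would at least tell you which files were touched, even if it can't recover them.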
For science, I will try to find one that is acceptable to share and post a before/after, or just the results of the comparison.
What I recall from my initial investigation was that the binary was completely different, but opening the image did not indicate any changes. Almost like it was converted from RGB to CMYK or something.
I went back and found that I was mistaken. JPG files that were altered were in fact completely unreadable.
In the confusion, I missed that JPGs with certain naming patterns were encrypted and others were left alone. I took two unrelated facts, 1) that plenty of images were readable, and 2) that plenty of images had binary differences, and put them together to arrive at a faulty conclusion. I am not going to be too hard on myself based on how that day was going for me.
IMG_????.jpg and presumably DSC?????.jpg were encrypted and other patterns were left alone. I presume this is to inflict damage as quickly as possible without getting bogged down encrypting stuff from "Temporary Internet Files" for hours.
I wonder if they'd improve their conversion rate by leaving behind a thumbnail to remind people of how much they liked their pictures that now risk being gone forever.
I really don't think we should try to dumb down UX for the benefit of less experienced users. You run the risk of creating a false understanding of how a computer works which can cause harm down the line, as well as frustrating and confusing more advanced users who do know what happens when you save a word document. Besides, as the user population ages it's becoming less of a problem.
I mentioned cold backups -- those just would've been a little older.
The pull-only archive w/ VSS versions really is massively convenient. It is the first line of defense against the scenario that comes up almost all of the time: "Help, I messed up this important file!"
I think the interesting thing here is the shift from the target - the "best" target used to be compromising the OS, so OS's made moves to protect themselves from programs running as unprivileged users. Now, it's trivial to wipe an OS and restore from a backup. The real value is the things people store on a computer, which are usually going to be accessible via a user account.
One trivial solution would be OS-level automatic versioning of files (a la Dropbox or SparkleShare): the original files would be written to a location that is read-only to the user and only accessible via the OS, so backups could always be restored from it but never destroyed without admin rights.
Of course, with people having great internet and whatnot, an automatic cloud based solution would be much more likely and useful.
I think with Windows 8.1 and onwards, Microsoft is automatically doing this by setting up the "Documents" type folders in SkyDrive - a great thing moving forward.
Backups are, obviously, a much better solution but require extra storage and usually cost money.
So there might be a niche for a freeware product that runs as an admin that automatically versions files - perhaps even as simple as having an admin-owned .git repo for the Documents folder.
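As a sketch of that idea using nothing but git: keep the repository's .git directory outside the working tree, owned by an account the regular user can't write to, and commit on a schedule. All paths here are hypothetical, and the defaults are chosen only so the sketch runs anywhere:

```shell
# Snapshot a Documents folder into a git repo whose .git dir lives
# outside the working tree. In real use REPO would sit somewhere the
# regular user account cannot write (owned by root or a backup user),
# so malware running as the user cannot rewrite the history even if
# it trashes the working files.
DOCS="${DOCS:-$HOME/Documents}"
REPO="${REPO:-$HOME/.docs-snapshots.git}"

mkdir -p "$DOCS"
git --git-dir="$REPO" --work-tree="$DOCS" init -q
git -C "$DOCS" --git-dir="$REPO" --work-tree="$DOCS" add -A
git -C "$DOCS" --git-dir="$REPO" --work-tree="$DOCS" \
    -c user.name=backup -c user.email=backup@localhost \
    commit -q --allow-empty -m "snapshot $(date +%F-%H%M)"
```

Run from an admin-owned cron job, this gives you crude hourly versioning; it obviously doesn't scale like a real versioning filesystem, but it's free.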
The worrying thing about this attack is that targeting user data is trivial on all OSs, because of the way we think about privileges - it could be done to us Linux users through something nasty in our shell rc using GPG or whatever. There is no need to compromise anything.
You make an excellent point, but there is a second and perhaps even more sinister side to it. Encrypting your data and holding it hostage is one thing, but even if you have indestructible backups, there are probably still many sensitive pieces of information that can be acquired by a blackmailer with only user-level privileges: bank details, company trade secrets, personal mail/photos/videos, etc.
Having a back-up of these is important, but probably so is ensuring that they aren't distributed to people they shouldn't be. This requires a very different model of access control and user/application privileges, and unfortunately I don't think any mainstream OS is even close to solving this one yet.
> This requires a very different model of access control and user/application privileges, and unfortunately I don't think any mainstream OS is even close to solving this one yet.
I'm not sure it does require a different model of access control. It just requires people to actually use the access control mechanisms that exist already.
You should not access banking details or any other sensitive information in the same user-level context as you use to generally browse the internet. The privileges needed for each task ("browse the internet" vs. "check bank statements") should be different. I personally have a separate user account on my machine set up specifically for "sensitive" tasks.
Separation of data access via privileges is nothing revolutionary, nor is it something that can't be done on any modern OS. Unfortunately, online services are still behind. For example, I would probably switch to an online banking provider that let me create one account for viewing balances and another for transferring cash. But these services will get there in time.
Your proposal is OK if accessing sensitive information is something you only do occasionally, but it's not very practical to switch users completely if you deal with sensitive information often, which many people do.
On the other hand, if only explicitly authorised applications can create outbound Internet connections at all, and if applications like browsers and e-mail clients need explicit permission to read a general user file (as opposed to, say, accessing their own designated configuration or data files), then you significantly decrease the degree of vulnerability a user has to data leakage attacks (among other types).
Check out Qubes OS if you don't want to trust your kernel to enforce your mandatory access controls (you DO only allow certain applications/users/groups/roles/OSes/hypervisors/etc. to do certain things, DON'T YOU??). Xen is a smaller attack surface, and depending on how much of a pain in the ass you consider having all of your files stolen and deleted to be, there are many options for locking it down quite a lot: XSM-Flask if you are paranoid, HyperSafe for control-flow attacks, plus invariant-violation detection tools for non-control-data attacks over nested hypervisors if you are resolute.
> Your proposal is OK if accessing sensitive information is something you only do occasionally, but it's not very practical to switch users completely if you deal with sensitive information often, which many people do.
$ sudo -u banking gnucash &
$ firefox &
Done. My banking files and my Firefox session are now separated.
And for the 99.7% of users in the real world who drive their computers using a GUI and not a command line? Or those who do use a command line but aren't sufficiently competent with system administration to reliably get sudo-based access control right every time?
What about photos? I could see ransomware being very successful just demanding payment to avoid making a bunch of your personal photos publicly available on the internet. They may not be sensitive per se, but they're still likely not something you want out there publicly. Ditto for email, chat messages, etc., etc.
The problem is seemingly solved by the OS X app sandbox and the Mac App Store review process (the sandbox alone is not enough, because it allows apps to declare 'exceptions' like full disk access, so human reviewers are needed to watch out for those).
The sandbox may occasionally be causing some pain (in fact, would be very painful if I had to support OS X 10.7), but at the same time my app can no longer access any user data that the user hasn't explicitly whitelisted, which is a good thing.
Windows Metro apps also live in a sandbox, but they are sort of a different platform (no access to the file system at all, as far as I know). Over time, I can see them gaining some access to a subset of the file system, perhaps via SkyDrive.
> Backups are, obviously, a much better solution but require extra storage and usually cost money.
And the virus will encrypt anything writable, so the backup needs to be "pull": if the infected machine is the one doing backups and has write access to a non-cold-storage backup location, it may encrypt the backup itself.
Solved this problem at my startup Nuevo Cloud: the filesystem is copy-on-write, including deletions. In the settings you can control how long to keep the copy-on-write log, and you can jump to any second within the log.
So even if this virus encrypted your backup on Nuevo Cloud, you can just pull up the snapshot from a second before the infection, and restore your files.
I do something similar. I keep all my files on an external HD drive. Only thing on my pc are the programs I need.
My impending move to Tails OS is also timely considering this new virus. We just spent two days dealing with this after an exec launched one of these and encrypted a bunch of files on one of our servers. This, after two emails warning about it.
Yup, this is basically what I was thinking - the daemon would run as a system user (e.g. root or something that could access user files) would "commit" the changes on write, pulling from the user's files, creating a read-only copy.
Obviously there are of course issues running stuff like this as root - if the daemon was compromised in any way it's game over.
Yes, the service doesn't have a 'space limit' setting, so it's essentially infinite storage. It is deduplicated, so there are some savings there, and the log only saves your changes. So the space used would be 100% + % changed during the period - % duplicated.
We are working on a 'space limit' setting (it should be finished shortly). If that were enabled and you exceeded it, you would just get a write error when new data is written; it wouldn't delete the log. So if that setting were finished and you got this virus, the virus might get a write error halfway through, but your old versions would still be safe.
The only virus I ever got was the SevenDust 666 virus on Mac OS 8. An infected machine would have a "666" extension that couldn't be deleted (it would instantly replace itself) and then start losing files. So losing files as a target has been around for many years.
The interesting change to me is that now viruses have been effectively monetized.
> the original files would be written to location that is read only to the user and only accessible via the OS
A versioning filesystem looks much cleaner than a different location. Maybe we should start using those again. (Is there any candidate for ext5 already?)
And yes, partitioning data permissions for the same user is a much-needed change. Nobody has a solution for that yet, and there are lots of people trying. Apple, for example, is just giving up on iOS; Google has a subpar solution on Android that does not actually work in practice (the CyanogenMod people did improve it a bit), but it is the closest we have to something viable.
Been using NILFS2 for 3 years now. Works great, performance is decent. It lacks extended attributes and ACLs, but the automatic snapshot part is worth it.
I get annoyed when people are warned not to open some attachment. The real problem here is that in 2013 we're still using the flawed language of "opening attachments" -- as if running a native executable with full permissions is an action that belongs in the same category as viewing an image, reading a text file, or listening to music.
Well, it doesn't. This is a problem that should have been solved at the level of OS permissions/UI long ago. Why does a modern OS include UI functionality allowing a standard user to run an uninstalled executable in a non-sandboxed environment? There's no good reason for it.
In some cases the problem been solved (e.g., restrictions that allow only signed apps to be executed). But I guess none of those cases include Windows, its standard UI, and popular e-mail programs. :-(
We only use that language because it's an order of magnitude easier to explain to novice computer users, and because as you stated, the problem still hasn't been fixed at the OS permissions / UI level.
A modern OS lets us do that because lots of users are the sole user of their PC and do not understand the idea of permissions.
Except in this case (original article) it was an executable inside a zip file.
In the normal case, unzipping a file on Linux will restore the executable bit if it was set on the original file.
This is normally what you want - imagine an app that was distributed (over https) as a zip file where you then had to go and manually add the executable bit to each relevant file.
But a zip file that was opened as an email attachment is largely indistinguishable from one that was opened from an HTTPS download (it need not be that way, but it is), so the OS has no reliable way to allow you to run executables you download in a zip, but not ones you received as an email in a zip file.
There are certainly ways around it, but the executable bit isn't really the solution here.
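For what it's worth, the executable bit really does survive a zip round trip on Unix: the mode bits are stored in each entry's external attributes, which is what unzip restores. A small stdlib-only Python demonstration:

```python
import os
import tempfile
import zipfile

def entry_modes(zip_path):
    """Map each archive entry to the Unix mode bits stored in its external attributes."""
    with zipfile.ZipFile(zip_path) as z:
        # On zips created on Unix, the high 16 bits of external_attr hold st_mode.
        return {i.filename: (i.external_attr >> 16) & 0o7777 for i in z.infolist()}

# Build a zip containing an executable script and inspect the stored mode.
tmpdir = tempfile.mkdtemp()
script = os.path.join(tmpdir, "run.sh")
with open(script, "w") as f:
    f.write("#!/bin/sh\necho hi\n")
os.chmod(script, 0o755)

zpath = os.path.join(tmpdir, "demo.zip")
with zipfile.ZipFile(zpath, "w") as z:
    z.write(script, "run.sh")

modes = entry_modes(zpath)
has_exec_bit = bool(modes["run.sh"] & 0o111)
```

So the bit is carried faithfully; the point above stands that faithfulness is exactly the problem, since the OS can't tell a legitimate download from a malicious attachment this way.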
While that is useful, in Windows-land the result would only be another message box to click-through, asking if you wanted to make the file executable, which no average user would understand, and therefore just click OK. If they even took the time to read it before clicking through.
Better to sandbox any executables received from external sources.
The absence of that executable flag does nothing to protect you from using an existing executable and some data such as an interpreter and source code.
In a corporate environment I'd expect crucial data to be on the network drive and snapshotted every few hours. We run ZFS on our network and all the secretaries have to do their doc/excel work on the drive. Now that everybody has a Gigabit Ethernet connection, reads/writes are extremely quick.
Use ZFS and make read-only snapshots that are only accessible to the sysadmins. You'll solve many problems that way. We do snapshots at 6am, noon, and 6pm, and then keep the 6pm one for 7, 14, and 30 days.
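That schedule is easy to express as a root crontab; a sketch only, with a made-up dataset name, and with the 7/14/30-day pruning left to a separate script:

```shell
# /etc/cron.d/zfs-snapshots -- hypothetical dataset "tank/shares".
# ZFS snapshots are read-only by nature, and only root can destroy them,
# so an infected workstation with SMB write access can't touch them.
# (cron requires % to be escaped as \%)
0 6,12,18 * * *  root  /sbin/zfs snapshot tank/shares@auto-$(date +\%F-\%H\%M)
```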
In almost any corporation you look at, crucial data will be on a Windows server (no ZFS available, sorry), backed up at intervals that are some integer multiple of 24 hours.
Or rather, the above is the best-case scenario that IT dreams of achieving some day. In practice, a huge share of the crucial data sits on people's machines, with no backups, and goes on vacation every year.
Most corporate Windows file servers (since 2003) use shadow copy, which saves previous versions of files every couple of hours. Any decent IT dept will use folder redirection, which redirects the desktop and My Documents to the local file server.
Agreed. At a previous job, I set up a multi-terabyte SMB/NFS file server (Solaris, ZFS) with snapshots taken every 5 minutes. This was incredibly useful. The snapshots (in .zfs directories) were even accessible to end-users so that they could recover from their own mistakes without the help of sysadmins.
With such a setup, the only situation in which sysadmins are required are when end-users accidentally copy sensitive data to the file server, remove it, and need sysadmins to also remove the snapshots to permanently remove the sensitive data.
This is a great solution if you have a good technical staff helping to run a business. The reality is though that this is more likely to affect businesses without technical knowledge, or home users.
A company I work with was hit when an employee opened a phishing email supposedly from another employee at the same company. It hit about 50 GB of data on the shared drive. We had CrashPlan and restored from a few days previous. I then turned on DKIM and enabled quarantining of non-DKIM emails via DMARC.
DomainKeys Identified Mail (DKIM) lets an organization take responsibility for a message that is in transit. Domain-based Message Authentication, Reporting and Conformance (DMARC) is a technical specification created by a group of organizations to help reduce the potential for email-based abuse, such as email spoofing and phishing, by solving some long-standing operational, deployment, and reporting issues related to email authentication protocols.
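For reference, the DMARC quarantine policy mentioned above boils down to publishing a DNS TXT record; the domain and report address here are placeholders:

```
_dmarc.example.com.  IN TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

With `p=quarantine`, receivers that honor DMARC will route mail failing DKIM/SPF alignment into spam rather than the inbox.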
The ten thousand readers of HN who don't know these acronyms can use a search engine to look them up, or someone can ask a question and someone else can answer it and save 9,998 other readers the bother.
1 Google search = 1/35 of a boiled kettle.
So asking the question just saved about 285 boiled kettles of carbon footprint.
And having a flamewar about how people should google things for themselves wasted how many kettles? Anyway, if you don't want to tell people things, then don't tell people things, but going on and on about how the OP should just google things themselves is reaching 4chan levels of elitism. It's a really shitty kind of elitism.
"This infection is typically spread through emails sent to company email addresses that pretend to be customer support related issues from Fedex, UPS, DHS, etc. These emails would contain a zip attachment that when opened would infect the computer. These zip files contain executables that are disguised as PDF files as they have a PDF icon and are typically named something like FORM_101513.exe or FORM_101513.pdf.exe. Since Microsoft does not show extensions by default, they look like normal PDF files and people open them."
I haven't got a Windows box handy to try this on, but I assume there is at the very least an extra warning dialog when opening an exe, even a zipped exe?
Not that such a warning mitigates this at all. The inability to distinguish executables from data files, and (although it doesn't apply in this case) the ability of data files to hide executable payloads by design or by error, is a major and currently uncorrected flaw in the system.
It does (I think), but even if it doesn't, Windows uses the file extension to determine the file type. EXE files, however, are free to set their own icon; in this case, the icon of the EXE was a PDF icon.
The silly bit is that file extensions are hidden by default, so users can only rely on the icon to judge the file type.
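That default can be flipped so extensions are always shown. This registry fragment (the standard Explorer setting, importable as a .reg file) sets HideFileExt to 0 for the current user:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"HideFileExt"=dword:00000000
```

With that applied, a file like FORM_101513.pdf.exe displays its real double extension instead of masquerading as a PDF.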
Ah, I guess it is time to send the annual email to mom, dad, and the in-laws to be very wary of downloading anything or clicking on links in suspicious emails.
I find this is good insurance against the inevitable phone calls I receive as the only computer-literate member of the family: "Hey Cory, all my documents disappeared and I can't get them back. Do I have a virus?"
I'm sorry, but if a firm doesn't compartmentalise access and a single infected workstation can bring down everything, then they deserve what they get.
Had it not been ransomware, it could very well have been a disgruntled employee, to the same effect.
While you're technically right - we are responsible for our security, and we should lock down our networks just like we lock our front doors - this is basically blaming the victim.
That's not true; it hits residential users all the same. As much as we nerds might wish it, you don't deserve to be extorted because you don't understand computers.
What sort of IT infrastructure do they usually have?
My gut reaction was that they wouldn't have a need for a server in the first place, but I guess that depends on how small it is.
A simple file-share though, would be rather vulnerable to this.
Have you fully secured your home and office against arson attacks? No? Don't even know how to do so? Didn't think so. Does that mean you deserve what you get if you end up bankrupt in the event of such an attack?
I've been trying to raise awareness on my social media, since my family, friends and co-workers might not spend time on Hacker News.
If you want, copy my message and share with your family, friends and co-workers:
"Hi folks,
There's a new virus out there that I want to raise awareness of; it's called CryptoLocker. Basically, what this virus does is scan all your storage (hard drives, flash drives, USB sticks, network drives/shares) and then encrypt the files it finds.
The only way to unlock the files again is to pay $300 for the key used for the encryption. The encryption used is RSA with a 2048-bit key, which makes it extremely hard to crack; I'd say impossible within the given time span on today's computers.
You have 72 hours before they trash the key making it impossible for you to get your data back.
This can be extremely devastating if you are running a business and all your files are gone. If you sync your files to the cloud, you're still not safe, it syncs the encrypted files as well. If you are able to restore to previous versions of your files in the cloud - great.
Let your friends, family and co-workers know about this.
Here are some simple ways to avoid getting a virus in general:
1. Don't open e-mails from people you don't know
2. Don't open attachments in e-mails unless you were waiting for the attachment
3. Don't go to websites/click links that you don't fully trust
4. Don't download and execute files that you don't fully trust
It might seem obvious to most of us not to do the above, but to a lot of friends, family and co-workers it might not be.
Imagine waking up and having to pay $300 to get your data back. However, the police tracked down one of the servers that serve the keys and shut it down, which means those keys were not delivered and the data was lost. So even if you do pay the $300, there is no guarantee that you will get the data back.
Raise awareness of this and avoid having your files lost."
5. Consider alternatives to Windows so you won't have to deal with these silly things that have largely only been affecting Windows users for the last decade+.
Sure, I'll ask my 70+ year old relatives that have been using PC with Windows since they first got their computer to download an Ubuntu ISO, burn that and re-install their system.
Joking aside. I'd love for everyone to just jump on a virus free OS, but as soon as that OS is mainstream there will be viruses.
The problem isn't the OS, the problem is that people trust everything that is for instance sent to them via e-mail. Users need to be educated on security, no matter the OS.
You should, actually. Ubuntu has an interface 90% close enough to Windows these days, and back in 2010 I did exactly what you describe: I got my mother to start using Linux instead. It's filled her needs perfectly and my support calls have dwindled to near-nothing.
I've been installing Mint for several retirees the past few years. After having installed it I rarely hear from them again because the system does the same thing every day: start up, let them surf, write a letter, switch off.
Central to the plot in the book Reamde but these guys don't offer a 'pay in WoW gold' choice.
Given the cost of computers these days, at least in business a separate 'browsing' machine and 'business' machine seems to be the best solution. I wonder if you could provide wireless for employees to bring their own laptops which had no 'office' connectivity (but internet connectivity) and machines that were hard wired and MAC filtered to the 'business' network.
Since the Bitcoin blockchain is public, couldn't you follow the money? Make a list of all wallets that accepted these funds initially, and then do graph analysis, either to see where the money went or provide others with a tool to avoid transactions with those wallets?
Yes, but this is somewhat like saying you could mark the banknotes used to pay off a person that's blackmailing you. If you catch someone with a marked note that doesn't prove they are the perpetrator; it just means that they received your money somehow.
Problem is, that doesn't really help you identify the perpetrators. Both mixing services and the fact that a user can generate unlimited wallets (if someone sends money to a wallet, you can't prove they own the second wallet, or whether they transferred the money to someone else) make this very difficult.
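A minimal sketch of the graph-analysis idea, assuming you have already extracted a payment graph from the public blockchain (the addresses here are made up):

```python
from collections import deque

def downstream_addresses(tx_graph, ransom_address):
    """BFS over a {sender: [receivers]} payment graph to find every
    address the ransom funds could have flowed into."""
    tainted, queue = set(), deque([ransom_address])
    while queue:
        addr = queue.popleft()
        for receiver in tx_graph.get(addr, []):
            if receiver not in tainted:
                tainted.add(receiver)
                queue.append(receiver)
    return tainted

# Toy graph: the ransom wallet pays a mixer, which fans out
graph = {
    "ransom_wallet": ["mixer_in"],
    "mixer_in": ["out_a", "out_b"],
    "out_b": ["exchange_deposit"],
}
print(downstream_addresses(graph, "ransom_wallet"))
```

As the comments above note, taint spreads mechanically but proves nothing about who controls a tainted address; after one mixer hop the set is already useless for attribution.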
Yup. But using Bitcoin gives ransomware a way to collect payment with virtually zero risk, since it's not possible (that I know of) to really trace exactly who, in real life, got those bitcoins. Which means ransomware might make a strong comeback: the risk is now basically zero, this program isn't that difficult to write, and there's real money to be made. Even if you only charged 50 USD, this idea would make hundreds, if not thousands, a month. Change the binary every once in a while so its signature doesn't match popular anti-virus databases and you've got free money coming in for... well... forever[1]
1. Educating users to stop running random programs in zip files attached to emails is apparently impossible. Maybe email clients should scan the contents of any zip file they receive and, if they find any kind of executable, put up all kinds of warning dialogs saying "You really don't want to run this. There's no reason to receive a program in a zipped email attachment nowadays. Please consult your IT admin or somebody who knows about computers for a second opinion."
The Bitcoin pseudo-anonymity is a plus, but I feel the real value in this new round of ransomware is that the unlocking actually works. It's possible for the ransomware app to verify payment and unlock itself with no contact or control from the ransomware author, greatly reducing the author's risk. Actually, it's easier for the victim too: rather than wiring funds to some bank account in far-off lands, a quick anonymous digital payment instead. I'm speculating, but it's possible for the app to query blockchain.info for a deposit to a given address, or (less likely) to download the blockchain itself, and then unlock after a certain balance. If there is high confidence that the data will actually get unlocked, that swings the balance of fight-the-app versus pay-the-app towards paying. The author sits back and waits for those wallets to fill up.
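The speculated verification step could be as simple as this sketch. The balance-fetching function is a stand-in for whatever public source the malware might poll (e.g. a blockchain.info-style address-balance endpoint); nothing here is the actual CryptoLocker logic:

```python
def ransom_paid(fetch_balance, address: str, required_satoshis: int) -> bool:
    """Decide whether to release the key, with no command-and-control
    round trip: just check the public balance of the ransom address."""
    try:
        return fetch_balance(address) >= required_satoshis
    except Exception:
        return False  # network failure: keep waiting, ask again later

# With a stubbed balance source standing in for a real HTTP query:
paid = ransom_paid(lambda addr: 30_000_000, "1HypotheticalRansomAddr", 30_000_000)
```

The point of the design, as the parent notes, is that the author never has to talk to the victim at all; the public ledger does the payment confirmation.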
The way this ransomware works still requires a centralized command and control server; without one, it would be possible to trigger the "unlock" codepath in the client without paying the authors.
The authors run a key-storage service which notifies the client (and provides a private key) once payment is received.
In this case the authors are still at a substantial advantage, though: as long as enough unlocks work that "just pay up" is the advice given online, they don't have to care if their C&C server is down half the time or the feds take it down, because the money rolls in even when the decryption isn't working.
That won't work; it could be defeated by a man-in-the-middle attack on the victim's own computer. Just spoof the blockchain data, as if the payment had been sent, on an ad-hoc network, and the program would unlock itself.
Lower risk, but it probably reduces income: how many people can figure out how to make a bitcoin payment? How long does it take to make a bitcoin payment? The harder it is, the more likely the target is to give up and do without.
I think their Bitcoin payment method is actually to facilitate international payments (funny in a dark way) - they also take the popular shady prepaid-debit/cash-wire service GreenDot Moneypak and I'd imagine most US victims paid up that way.
There are a couple of anecdotes on Reddit about Canadians and other non-US residents scrambling to find a physical Bitcoin storefront or Craigslist contact to pay the ransom for them since Moneypak wasn't available in their area.
I would think their C&C server sets up a random Bitcoin wallet, and waits for a deposit, then allows the private key to be retrieved the next time CryptoLocker phones home.
Actually, Bitcoin has many strengths, but anonymity is not one of them. There have been multiple papers on how easy it is to trace, and even laundering-style services like the ones SRO used aren't very good.
> Educating users to stop running random programs in zip files attached to emails, is apparently impossible.
Imagine something just like the malware we're discussing, but instead of a 72 hour timer, it's a 4 hour timer - and at the end, it pops up a "gotcha! just kidding. but if this were real malware, you would have either lost hundreds of dollars, or all your documents. Don't open attachments like me."
The article mentions that there is a $100 variant floating around.
Makes me wonder whether they use the $100 variant in markets where $300 would be too much to pay.
If, as is reported, this virus is pulling in around $5 million per annum, then that is a great basis for setting up a professional organisation to run the virus and extract maximum value from it.
This is one of the scariest forms of attack on computing since viruses became prevalent in the nineties. The fact that these were until recently relatively undetectable adds another eerie dynamic to the situation. It highlights the age-old problem of people not proactively backing up their data offline until it's too late. Go out and buy a couple of cheap 1TB external drives and back your data up now, and keep doing it; there are even tools and drives that handle this automatically for you.
While ransomware isn't anything new, the fact that the authors of such software are using currencies like Bitcoin make it that extra bit harder to track and stop these people from extorting data. I sense a new wave of ransomware is about to hit the scene now that Ars have revealed specifics about potentially making millions a year from such a racket. It's hard informing people about these things without encouraging others to go and try writing their own ransomware and expect Bitcoin as payment.
While I'd like to think I'm sophisticated enough about security to avoid this, it makes me concerned about the vast majority of people (e.g. my parents, my girlfriend) that are clueless about such dangers.
Are there any recommendations of a simple way to at least enable automated backups of local documents to the cloud on a windows box?
Tarsnap is the only sensible backup provider given the recent history of warrantless secret searches in America. SpiderOak is also a contender for file sharing. Both use end-to-end encryption with keys knowable only to the end-user.
It's funny you mention that... I implore 'cperciva to consider a Glacier-level service. It is hard to compete with Backblaze, but capping network bandwidth is probably one way to skin that cat.
I like and have used Tarsnap in the past, but it's not like other providers prevent you from uploading encrypted archives, they just don't encrypt them themselves.
The CrashPlan JARs decompile pretty easily; I had a go a few months ago, and they weren't obfuscated.
Highlights:
The crypto is pretty bad: it's using Blowfish in CBC mode with a static IV of 0c22384e5a57412b (convert each byte to decimal...).
The client-server protocol uses 32-bit nonces and MACs, which is far too short.
License key validation works by decrypting some packed data from the key after converting the alphabet back to hex. The key is Blowfish-CBC-encrypted data, and the only validation done is verifying the padding; about 1 in 256 randomly generated keys will have valid padding, and the length is not checked.
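The 1-in-256 figure follows directly from how PKCS#7-style padding checks work; a quick sketch of the generic check (not CrashPlan's actual code):

```python
import os

def padding_valid(block: bytes) -> bool:
    # PKCS#7: the last byte n must be in 1..blocksize, and the
    # last n bytes must all equal n
    n = block[-1]
    return 1 <= n <= len(block) and block[-n:] == bytes([n]) * n

# Monte Carlo: the fraction of random 8-byte blocks with valid padding
# is dominated by the single last-byte-equals-0x01 case, i.e. ~1/256
trials = 200_000
hits = sum(padding_valid(os.urandom(8)) for _ in range(trials))
print(hits / trials)  # roughly 0.004
```

So if padding validity is the only check, roughly one random key in 256 "validates", which is why it makes for such weak license (or decryption) verification.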
There's a list of some backup solutions relevant to personal backup (as opposed to enterprise-level solutions) here, with costs given for most providers and a brief overview of differentiating features: http://alicious.com/cloud-backup-solutions/.
I quite like Duplicati. Slightly opaque to set up, but I'm using it with SkyDrive for some files, since at the time SkyDrive gave me the most free storage with the lowest level of setup pain [for me].
I think that you could use Box for this pretty effectively. With their $15/month business plan, you get 1TB of storage and can apparently set any directory as a "workspace", which presumably includes the home directory. For most users, that would be more than sufficient to keep everything backed up, and the syncing process is supposed to be the same kind of transparent deal as Dropbox (which would also be a good solution, except that you can't set an arbitrary directory as your Dropbox folder).
Until it encrypts the workspace and that gets synced. Although I suppose you might have a previous revision, as I know Dropbox supports versioning for some (all?) kinds of files.
This is the difference between crime and organised crime. People would not hand over the money to the burly visitors each month if their shop was burnt down anyway.
Evidence that paying the ransom actually results in the files coming back is the most troubling aspect here - these people are looking to establish a longer term criminal enterprise.
I got a similar virus once but it was before bitcoin was popular. It just asked for money via credit card. The virus hid my files, and I needed them for work too.
Fortunately the virus did that by some filesystem driver level hack, because after I booted into Linux I was able to mount the partition and get my files back.
This variant seems to - it needs the command and control servers to get the public key.
Particularly evil malware could probably encrypt the data irreversibly if the command and control servers were unavailable, since as long as the decryption works some portion of the time lots of people will pay, but thankfully this particular example doesn't seem to be there yet.
Even if it encrypts regardless, preventing the perpetrators from profiting will remove their incentive to keep spreading this stuff. Once antivirus catches up to the copies in the wild, the problem would be solved. Of course, whether it's actually possible to shut down enough servers to prevent them from profiting is another question. But it seems to me anything that makes it more difficult is a good thing, even though it does suck for those who lose data.
This wouldn't really prevent them from profiting - an unsuspecting user could still pay the ransom, and then never receive a decryption key, so would be both out of the money and lose their data.
Sure, it wouldn't prevent it completely or immediately. But 1) many users will do a search beforehand to see whether paying actually works (the less often it has worked for others, the less likely they will be to pay), and more importantly, 2) it would prevent them from distributing new versions of the malware, which would stop them profiting once antivirus caught up with the existing versions.
But note that's only due to popularity. Socially engineering your way into a user running an executable means that executable will simply run with user privileges. No trickery or hacking required, no OS holes. And that means the executable has full access to do everything the user could do, which almost certainly includes sending a new encryption key over the network and encrypting every file that user can get hold of.
(One of the little problems with the UNIX-style user permissions is that it is designed to defend the OS, not the user. Sure, that little executable may not be able to corrupt "the system", which may amount to 5 or 10 GBs of easily-replaced code, but it will have its way with the 2TB of the single user's media files.)
The only faint defense Linux/UNIX can claim is the slightly higher probability that you'll be on a checkpointing file system and can roll back, and I say only "slightly" because they still aren't very popular yet compared to conventional file systems.
OS X defaults to only running applications that have been signed with a valid developer ID. It’s not difficult to get such an ID, but Apple can also blacklist them, which would prevent the malware from running once Apple notices it. So I think the Mac has a good defense against this kind of attack.
A malware developer can make 256 valid developer IDs, compute 256 signatures, and switch them automatically and randomly as the malware propagates. Once Apple blacklists one developer ID, another one pops up, and the malware continues to propagate.
I would imagine that Apple can also say "this developer ID is owned by this person, and we just blacklisted another one owned by them", then proceed to blacklist all of the IDs they've generated.
Still, it's not as easy as the person I was replying to made it sound.
How many Macs would you have to compromise before you randomly stumble upon a registered developer, let alone a registered Mac developer (of which there are far fewer than iOS developers)? And how much more secure is a developer's machine likely to be, and how much less is the user of such a machine likely to fall for common email attachment-based infection attempts?
At some point, the feasibility is low enough not to bother. That's what all security ultimately is, since nothing is foolproof.
I think it's no longer accurate to think of this as "MS-focused attack but only because OS X is not as popular". Today, iOS is used by many more people than OS X as their primary computing device and I would say it's pretty safe from this type of attack.
Only because people can't email you apps to run on your phone. Which, last I checked, is why HN thinks iOS is a terrible, freedom restricting walled garden of evil.
iOS is fantastic if you aren't smart enough to use a computer. Most HN users know better than to run arbitrary apps from email, so for them it is a restriction that only prevents them from using their own device as they wish to use it.
People on here are talking about attachments and being smart enough not to fall for sham downloads, but this isn't how most ransomware is spread to its victims. They use exploit packs and 0-days. Visiting a website that's been hijacked with an iframe, or a proxy that injects an iframe or any other data into the returned HTML, could get you infected. There is no foolproof way around this, unfortunately.
You could imagine the Bitcoin community deciding to blacklist any wallets to which funds like this were demanded and disbursed. That seems like a great idea until you then realize that this would be a way of denying anyone access to their own funds, by specifying their wallet as the recipient even though the attacker doesn't control it. There really doesn't seem to be any good countermeasure to this.
Which police? The guys behind this virus may well be somewhere deep in China or Russia; good luck reaching them. It's not terrorism or child pornography, so it won't get serious international attention.
Wow, what a scheme. It's almost the perfect situation for whoever wrote the system. It creates an extortion mechanism with a sense of urgency. Normally, users just carry malware around on their machines for weeks or months. The most frustrating part of this whole thing is that if you don't get the private key back and you're not backing up, you're toast.
This happened to someone I know (really, it wasn't me). Not only did it encrypt the local drives it also hit all of their network drives. As reprehensible as it is to pay the ransom they really had no choice since the encryption happened the prior night before the last backup.
Our company was hit by this yesterday, caused a lot of issues. Thank god we had backups, but they were 2 days old (frustratingly enough, the backup failed the previous day - first time in months...)
For a hacker, having both an original file and the encrypted version of that file, shouldn't it be relatively easy to retrieve the key? Especially if the virus XORs all or part of the file. Otherwise, a hacker may look at the random function that generates the key in the source code of the virus; it may be weak and take its values from the computer and the time of infection.
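For the XOR case the parent is right: a single known plaintext/ciphertext pair yields the key immediately, which is exactly why this malware wraps per-file keys in RSA instead. A sketch with a made-up repeating key:

```python
from itertools import cycle

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR "encryption" (its own inverse)
    return bytes(a ^ b for a, b in cycle(key).__iter__() and zip(data, cycle(key)))

key = b"secret!!"  # hypothetical 8-byte repeating key
original = b"%PDF-1.4 known plaintext header ..."
ciphertext = xor_encrypt(original, key)

# Attacker's view: XOR of plaintext and ciphertext IS the keystream,
# i.e. the repeating key itself
keystream = bytes(a ^ b for a, b in zip(original, ciphertext))
print(keystream[:len(key)])  # b'secret!!'
```

With RSA-2048 wrapping a fresh symmetric key per file, no amount of known plaintext helps; the only thing left to hunt for is a weak random number generator, as the parent suggests.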
How is this any different from a virus that wipes (not just deletes) your data? It takes the same amount of time (actually wiping data would be faster) and the result is the same: No data.
Maybe the psychological part of "Oh God the file is there but I can't use it" or the fact it's ransomware?
I know a customer that got hit by this Tuesday morning. Unsurprisingly, Avast did nothing. I just told her the bad news and clean-installed Windows.
I have tried to find the private key with sample files, using known file byte headers, the public key and brute force on the private key. Sadly, no luck yet.
I imagine that this, combined with worm capabilities (so it can spread itself via the network), would be overkill. Strange that they didn't do it; once you have access to the local network (as soon as the initial victim runs the .exe received by email), it shouldn't be too hard.
Finally viruses are doing what they're supposed to: wrecking your computer instead of staying under the radar as long as possible. If people are motivated to protect themselves from this, they'll also be preventing botnets and doing good for the rest of the internet.
Huh, that is pretty scary. If I added a physical packet snooper on all the traffic sent from my computer, it might be possible to MITM the private key as it is sent to the server. That way I might have a fighting chance against this (if the traffic were unencrypted, that is).
What I find funny is that this piece of software actually tells you more about what it does than software you pay money for and even uninstalls itself, after it is not needed anymore. It's kinda weird how malware is better quality than most other software.
I talked to a small shop owner just the other day that had been hit by this. They said they spent the $300 on a new PC instead - but I'm pretty sure they lost a bunch of irreplaceable data (mailing lists, supplier details etc). Pretty heart breaking.
Nasty stuff. Fortunately for me, this would set off the "why the heck are my fans running so loud right now" alarm that I have in my head (that honestly, I wish I could turn off sometimes ... curse you trustedinstaller.exe!!).
But this one seems to do what it claims to do. It's pretty scary for people who don't have a decent backup system, but these same people live with the risk of losing their data to a drive failure, so...
A lot of "decent backup systems" would be vulnerable to this too. Say you back up all your local stuff to a RAID that you've mapped as a drive, as well as a mapped Google Drive?
It's still all toast.
That level of backup would handle any kind of physical failure - a dead drive, the destruction of your house, the failure of Google... but still, this thing would kill it.
There's only so much you can expect from a person when it comes to keeping their personal documents and family photos.
I mean obviously, if you're running a company you need a real backup solution, but for family files or a one-man-show business? There is no reasonable precaution.
This type of thing only works because the backup user has the same permissions on the backed-up files as the user being backed up (because they're the same user).
It wouldn't work on any system where the backup user is a separate, privileged process that is the only one with write access to the stores of backed up files.
ZFS with a snapshot script is a good way to implement this for a networked drive on Samba, since it's implicit and automatic, and the point at which the malware hits would be really obvious, since your snapshot sizes would suddenly explode. The same story is true of Volume Shadow Copy (but MS idiotically limits the user's ability to set a known and trustworthy shadow copy schedule).
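That "snapshot sizes suddenly explode" signal is easy to automate. A rough sketch (snapshot names and sizes are invented; on ZFS you would feed it the sizes reported by `zfs list -t snapshot`):

```python
def flag_suspicious(snapshot_sizes, factor=10):
    """Return names of snapshots whose delta is `factor`x the median delta,
    i.e. a whole-filesystem rewrite such as mass encryption."""
    sizes = [size for _, size in snapshot_sizes]
    median = sorted(sizes)[len(sizes) // 2]
    return [name for name, size in snapshot_sizes
            if median > 0 and size > factor * median]

# Daily snapshot deltas in MB; Thursday, everything got rewritten
daily = [("tank@mon", 120), ("tank@tue", 95), ("tank@wed", 140),
         ("tank@thu", 48_000)]
print(flag_suspicious(daily))  # ['tank@thu']
```

Wired into an alerting script, this would catch the encryption pass within one snapshot interval, while the pre-infection snapshots still hold clean copies.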
Yes, but if they have their crypto stuff together, it might not gain you much.
Someone here mentioned the encrypted versions of the files are the original size + a little extra. To me that indicates that they use a public key (of which the private component does not reside on your computer, and never has, but which you can buy). The public key is used to encrypt a key for a symmetric algorithm (AES, DES, ...), which encrypts the data, and the RSA-encrypted version of that symmetric key is then prepended as a header of some sorts.
So using a debugger you'd be able to see the public key, which I suppose is infected-useraccount-specific. It's not useful for decryption, you'll need its private counterpart.
You'll also see the symmetric key, a new (random) one of which should be instantiated for each file being encrypted. Should, but might not... if they slipped up, it might be reused (for your user account). In which case you can win: if you can observe it encrypting a new file, you'd be able to decrypt the other files too.
They'd have to be quite stupid to slip up like this, but it happens.
Update: Reading a reverse engineering report¹ it appears that it indeed works as described above. And yes, they didn't slip up; a new symmetric key is generated for each individual file.
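A sketch of that layout, with a toy keystream standing in for the real symmetric cipher and `rsa_encrypt` standing in for the attacker's RSA-2048 public-key operation; this shows only the structure described above (per-file key, RSA-wrapped header, body roughly the original size), not CryptoLocker's actual code:

```python
import hashlib
import os

def toy_keystream(key: bytes, length: int) -> bytes:
    # NOT a real cipher: a hash-counter stream standing in for AES/3DES
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_file(plaintext: bytes, rsa_encrypt) -> bytes:
    file_key = os.urandom(32)        # fresh symmetric key per file
    header = rsa_encrypt(file_key)   # only the private-key holder can unwrap
    body = bytes(p ^ k for p, k in
                 zip(plaintext, toy_keystream(file_key, len(plaintext))))
    return header + body             # ciphertext = header + original size
```

Decryption reverses the process: unwrap the header with the private key, regenerate the keystream, XOR again. Without the private key, the per-file key, and hence the file, is unrecoverable, which is exactly why a debugger shows you only the useless public half.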
Problem is, the prompt doesn't appear until the encryption has ended, the key has been sent to the servers (it's kind of complicated, it apparently tries to find servers on its own, I wonder if it can be fooled) and that key has been locally destroyed.
So, by the time the user is notified that there is malware on their PC, it's too late. People who know to detect viruses while they're running don't run attachments in the first place.
According to the KernelMode thread¹ the keypair is generated on the server. The public key is retrieved from it, but its private counterpart will never be on your machine. No key is sent to the server.
Disabling or limiting your use of JavaScript and Java in the browser will go a long way towards protecting against delivery of this as it is likely delivered by an exploit kit. If you do hit an exploit kit, Microsoft EMET (free) will probably mitigate the exploit/s.