Hacker News
Air Gaps (schneier.com)
343 points by bostik on Oct 11, 2013 | 196 comments



Is there an effective way to "mostly" airgap, if you need Internet connectivity for your work? This is a comment I posted on a similar thread a few weeks ago.

=========================================

Just curious, how would airgapping be practical if you need Internet connectivity for your "real work"? For example, let's say you run a quant trading firm and the algorithms you're concerned about being stolen need connectivity to download live trading info, and then after processing that info they need to communicate buy/sell orders to the outside world. Are there any methods that could be used that would prevent all communication with a secure system (with an airgap level of certainty) besides the strictly defined data you need to do your "real work"?

gaius 19 days ago

Sure, you would just use Radianz, and that is in fact what everyone does. This is a very solved problem! Bloomberg also operates a private network, and there are others too. These systems can operate perfectly well without access to the public Internet. A couple of jobs ago I worked at a financial services firm with 2 networks and 2 PCs on everyone's desk. Rednet for outside connectivity, and an internal network for real work, and never the twain shall meet. NO-ONE needs the Internet for real work, let's be honest, just for goofing off. Time we all started to prioritize security over mere convenience.


wikiburner 19 days ago

Yep, maybe trading wasn't the best example, although they are still effectively at the mercy of the security of their data provider's network - which admittedly is probably quite good. Let's say you're a P.I., journalist, researcher, law enforcement, or intel agency, and need to automate news or people searches for some reason. If you were able to very strictly define the data you're expecting to receive, isn't there any way you could automatically pass this data on to a secure system without opening yourself up to exploits?


I've not played in this space for a looong time but...

There are four things you want to do -

1. Get a herd of cash together. The stuff that follows is not cheap.

2. Set up a hardware data diode (an appliance that only allows data to travel in one direction). [1]

3. Set up an air gap like Whale Comm's appliance used to do (two 1U rack-mount servers, back-to-back, which [dramatization alert] automates plugging a USB stick into one server, copying data onto it, pulling it out, sticking it into the other server, and copying the data onto it - at ~10Mb/s, if memory serves). [2]

4. Any time anything traverses the trust boundary, convert from one format to another, so PDF becomes RTF, DOC becomes TXT, PNG becomes GIF, and so on. The point is that converting attachments into other formats drops malicious payloads, or stops them from exploiting vulnerabilities in the apps that open the original formats. (A toy sketch of the image case follows the footnotes.)

[1] Tenix used to do one, but they cost crazy money (millions). I don't know much about this space anymore, but this might provide some pointers: http://en.wikipedia.org/wiki/Unidirectional_network

[2] Whale Communications was acquired by Microsoft. The product is now called ForeFront Unified Access Gateway, and while still a good application firewall, no longer provides that air gap (http://en.wikipedia.org/wiki/Microsoft_Forefront_Unified_Acc...). I've no idea who else can do this.
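To make point 4 concrete, here's a minimal sketch of the image case in Python, assuming Pillow is available (real content-disarm appliances handle far more formats and edge cases):

    # Re-render an untrusted image so that only pixel data crosses the
    # boundary; metadata, ancillary chunks, and anything targeting the
    # original format's parser get dropped on the floor.
    from PIL import Image

    def rerender(in_path, out_path):
        img = Image.open(in_path)
        img.load()                       # force the full decode here
        clean = Image.new("RGB", img.size)
        clean.paste(img.convert("RGB"))  # copy pixels only
        clean.save(out_path, format="GIF")

    rerender("untrusted.png", "sanitized.gif")

The catch is that the converter is itself parsing hostile input, so it belongs on the low side of the trust boundary, not the high side.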


Nowadays the poor man's one-way data diode is a fiber link with one connector physically obstructed. Fiber puts transmit on one strand and receive on the other, so blocking the protected machine's transmit strand leaves a receive-only link. Then only use one-way protocols (UDP).
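The software side is trivially simple. A toy sketch of the blind sender (addresses, port, and framing are all invented for illustration); since the link can never carry an ACK, you number the datagrams so the receiver can at least detect loss:

    import socket, struct, time

    DEST = ("10.0.0.2", 5005)   # hypothetical box behind the blocked fiber
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_file(path):
        seq = 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(1024)
                if not chunk:
                    break
                # 4-byte sequence number; lost datagrams stay lost, so
                # the receiver can only detect gaps, never fill them
                sock.sendto(struct.pack("!I", seq) + chunk, DEST)
                seq += 1
                time.sleep(0.001)   # pace manually, no congestion feedback

In practice you'd also add forward error correction or just send everything several times.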


My employer (Fox-IT) sells a data diode (https://www.fox-it.com/en/products/datadiode/). List price is "call us for a quote", but the quote won't be anywhere near $1 MM.

Using a "proper" diode instead of hacking something yourself gets you a guaranteed-good solution - how much do you trust your firmware? - plus some software that automates "I want to send X through this machine" for many common and/or high-value instances of X. That said, custom hardware plus custom software plus certifications plus enterprise sales is indeed (a lot) more expensive than snipping the tx wire/fiber.

Automatically copying USB sticks doesn't seem particularly useful to me.


The air gap (copying data out of band, as it were) is useful in that the connection can be physically broken under the control of software on the high side of the system, via a kill switch, an automated schedule, or other rules.


Unless you need to transfer large amounts of data quickly, I'd suggest going back to the Pleistocene of computing: use RS232 in a two-wire config (data & ground).

Saves you about 1 million dollars ;)
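The receive-only end needs almost nothing, either. A sketch using pyserial (assumed installed; device path and settings are illustrative):

    import serial  # pyserial

    # Two-wire hookup: with no TX line connected, software on this box
    # physically cannot talk back, no matter what it's infected with.
    port = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=5)
    with open("incoming.bin", "ab") as out:
        while True:
            data = port.read(4096)   # returns whatever arrived within 5s
            if data:
                out.write(data)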


It seems like a wifi device in monitor mode might work for this. You could write a small utility to assemble streams from sequenced udp broadcast traffic or raw frames, and ensure the radio in the airgap machine never transmits.
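A toy sketch of that utility using scapy (an assumption), with the interface already in monitor mode on an open network - "mon0" and the port are made up. It reassembles datagrams by a leading sequence number and never transmits anything:

    import struct
    from scapy.all import sniff, UDP   # scapy assumed installed

    chunks = {}

    def collect(pkt):
        if UDP in pkt and pkt[UDP].dport == 5005:
            payload = bytes(pkt[UDP].payload)
            if len(payload) > 4:
                seq = struct.unpack("!I", payload[:4])[0]
                chunks[seq] = payload[4:]   # index by sequence number

    sniff(iface="mon0", prn=collect, store=False)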


> ensure the radio in the airgap machine never transmits

How? Do any wifi devices use separate receive/xmit antennas? If not, then you are back to relying on software.


A few years ago, I met someone who worked on imaging devices for satellites for a military contractor, which was about all he would say about the specific work he did. He indicated that the building they did their work in had _no_ internet access. Generally, if they wanted to refer to things on the internet, they had to go to some internet-connected computers in another building, print out whatever they wanted and bring it back to the building they worked in.

Perhaps it's unrealistic to expect security without tradeoffs of convenience.


Sounds like standard operating procedure for Classified programs. Everything is either printed and brought in, or burned to a CD-R and virus scanned and then brought in.


What you want is a firewall that uses deep packet inspection so that only data meeting your specification gets through - e.g., an XML file with the correct schema. Unfortunately, you can't exactly trust off-the-shelf software for this stuff; however, a firewall can be far simpler than a modern OS, so it tends to be much more secure, and you are reasonably safe just updating the inspection code. If you have a sufficient budget or are overly paranoid, you can build your firewall from scratch. What you do at the endpoint matters too: opening websites in IE is going to be far less secure than, say, doing day trading with custom software.

That said, the usual rules of defense in depth still apply, ensure that your machine can only talk to a white-list of IP address, etc etc.
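As a sketch of the inspection step, assuming lxml (a from-scratch build would use a much smaller hand-rolled parser), only traffic that parses and validates against your strict schema gets forwarded:

    from lxml import etree

    # Compile the strict schema once; "orders.xsd" is a placeholder.
    schema = etree.XMLSchema(etree.parse("orders.xsd"))

    def acceptable(raw_bytes):
        try:
            doc = etree.fromstring(raw_bytes)
        except etree.XMLSyntaxError:
            return False           # malformed input never crosses over
        return schema.validate(doc)

Note that the trust has just moved into libxml2's parser, which is why keeping the inspection code simple and well-audited matters.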


I'm a big fan of Qubes-OS. It's an interesting OS distro that's based on the Xen hypervisor and allows you to maintain different VM zones that integrate with the desktop.

You could have a Banking VM that only runs a certain browser, where the firewall only lets traffic to-and-from your bank's site. Your Work VM could be set to only allow traffic through a VPN connection to your work. Your BitCoin VM could be set to not have any network traffic at all. You could even have a Tor VM with a browser.

http://qubes-os.org/trac


But Qubes did issue a security advisory about a bug in VT-d/VT-x hardware that only Intel can fix. For someone like Schneier, who wants to avoid the NSA, maybe Qubes is not enough.


What about having two computers, one connected to the internet and one disconnected? Then develop and install two custom services: one obtains the required information, e.g. the trading data, from the internet, or sends data to the internet. The other service runs on the disconnected computer and communicates with the software actually using and producing the data, probably in the form of a proxy server.

Finally connect both computers using something simple like a null modem cable and make both services communicate over this link using a very simple proprietary protocol. Assuming the disconnected computer is not already compromised before you start using the system and you have not been extremely sloppy when you designed and implemented the two services, it should be quite hard to compromise the disconnected computer.

One way would be to find valid data (passing the protocol checker in the receiving service) to be transferred from the connected to the disconnected computer that triggers a bug in some data-consuming software, leading to code injection and execution, which in turn sends secret data over the null modem link to the (compromised) connected computer. That seems to be quite a complex attack to me, especially if the data traveling over the link is something simple like stock price time series, which enables very simple protocols and thorough validation.

To avoid some classes of bugs, e.g. buffer overflows, in the services linking both computers I would implement them using a managed runtime like .NET. This will of course expose the system to vulnerabilities in the underlying runtime.
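For stock time series the receiving service really can be that dumb. A toy sketch with pyserial assumed (the record layout is invented), where every field is range-checked and anything malformed is rejected outright:

    import serial, struct   # pyserial assumed

    RECORD = struct.Struct("!8sdI")   # symbol, price, unix timestamp
    port = serial.Serial("/dev/ttyS0", 9600, timeout=10)

    def read_record():
        raw = port.read(RECORD.size)
        if len(raw) != RECORD.size:
            raise ValueError("short read")
        symbol, price, ts = RECORD.unpack(raw)
        sym = symbol.rstrip(b"\x00")
        # thorough validation: reject anything even slightly off-spec
        if not sym.isalpha():
            raise ValueError("bad symbol")
        if not (0.0 < price < 1e6) or ts < 1000000000:
            raise ValueError("field out of range")
        return sym.decode("ascii"), price, ts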


Depending on the bandwidth you wanted, the first thing I thought of was displaying the outgoing orders on one computer's monitor - perhaps using something with a higher data density than plain text - aiming a webcam at that monitor, and using some form of image recognition to copy the orders across.


Knuth on his air gap: "I currently use Ubuntu Linux, on a standalone laptop—it has no Internet connection. I occasionally carry flash memory drives between this machine and the Macs that I use for network surfing and graphics; but I trust my family jewels only to Linux." : http://www.informit.com/articles/article.aspx?p=1193856


In a post-Stuxnet world, can we trust flash drives? If I remember correctly, that virus would jump onto flash drives to spread to the next few computers it touched. I think I might prefer an ethernet wire connection without the outgoing wires.


Not at all. The microcontrollers on flash drives can be reflashed to do all manner of mean stuff to any host it's plugged in to.


As long as you wipe and format the flash drive from the secure computer every time you use it, there shouldn't be any risk. I don't think even Stuxnet could have infected a linux machine that didn't mount or autorun the partition.


Flash drives are not dumb. I know that Travis Goodspeed has had some success in reflashing their microcontrollers to speak corrupt USB to attack USB stacks of host machines (which of course run in kernel mode).


A fascinating talk by Travis Goodspeed on the mayhem possible by reprogramming the USB controller in a disk:

http://www.youtube.com/watch?v=D8Im0_KUEf8

Writing a Thumbdrive from Scratch: Prototyping Active Disk Anti-Forensics


I guess the only safe options are CDRs and one-way ethernet connections.


Stuxnet was bespoke malware with multiple 0days--the reason it worked like it did was because of how the target's computers were set up. If Iran had used Linux they would have likely used different vulnerabilities to [try and] accomplish the same goal.


Modern ethernet ports will not link over one pair. It breaks autonegotiation.

I'd sooner use a wifi card in monitor-only mode, patching the driver to disable all transmit ability.


You can't do it with Ethernet over copper but you can with fiber.


I find it somewhat amazing that a few months ago, I'd have read an article like this and thought "Man, talk about paranoid."

But today, after all that I've read and learned recently, it makes perfect sense.


I read a comment just like this a couple of months ago. Someone saying that until this point they thought Richard Stallman was a complete paranoid nutjob but it turns out he was completely correct. I guess that's why he will be seen as a visionary in so many areas.

And speaking of stallman and airgaps: http://stallman.org/stallman-computing.html

> I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I fetch web pages from other sites by sending mail to a program (see git://git.gnu.org/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly. I usually try lynx first, then a graphical browser if the page needs it.
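The real program lives in the womb/hacks repository he mentions; a toy equivalent of the fetch-and-mail half, with placeholder addresses and a local MTA assumed, is only a few lines:

    import smtplib, urllib.request
    from email.message import EmailMessage

    def mail_page(url, to_addr="you@example.org"):
        # fetch on the connected machine...
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        msg = EmailMessage()
        msg["Subject"] = "page: " + url
        msg["From"] = "fetcher@example.org"
        msg["To"] = to_addr
        msg.set_content(html)
        # ...and mail it back, to be read later with lynx or similar
        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)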


While I admire his dedication to free software and security, I find it sad that he who has done so much for the internet and modern computing eschews most of it.


You could have said something similar about Dijkstra, who most of his life didn't even use a computer.


I had a formal methods lecturer who insisted that no computer scientist should be allowed to use a computer until they are 40.


Yes but most of his life was lived before personal computing was really a thing.



I've been setting up networks since 2002 the following way:

Internal network, NOT connected to the internet. An external (small) network is connected to the internet, and has a "terminal server" (Windows Terminal Server if I must, Xrdp if I can let the external servers be Linux).

Firewall between outside world and external network, configured to allow reasonable work on that network. Firewall between external network and internal network only allows internal network initiated connections to the RDP port (3389) on the external network.

Also, an rsync setup that allows some controlled transfer of files between inner and outer networks (preferable to USB drives - the USB ports should be disabled logically and physically, although I didn't always get to do that). This rsync setup goes through a different port, with a cable that is usually not connected (the air in "airgap"). When files need to go in or out, I plug the cable for a few minutes, and unplug when not needed.

From experience, this lets you keep a network reasonably secure, without having to put two PCs on everyone's desk.

Of course, there's risk: there might be a way to root the inside machines through a bug in RDP, after rooting the outside machines. However, it will work well against "standard" attacks and malware that assume internet connectivity. Even if they get in (through a USB drive, as Schneier says was done in the Iranian and US army facilities), they can't just call out to the internet.


RDP makes it easy to access drives, devices, etc. from the client... do you do any additional configuration to disable these features?

http://geekswithblogs.net/DesigningCode/archive/2010/04/19/f...


Yes, drive sharing was disabled server side (though, if the external RDP server is compromised, one could turn this back on). On the client side, we set up the connection not to try to share anything.


It is completely possible that mentioning Windows in the article was meant to be only a smokescreen. I'm sure a person in his position would absolutely not want to publicly declare the exact solution he is using. In reality, it might as well be something else entirely, like Slackware or some USB-bootable distro. Yes, this might be security through obscurity, but considering that he admitted he isn't familiar with the inner workings of Truecrypt etc., it is the safest bet. Not disclosing exactly what you are using doesn't allow an adversary with unlimited resources to adapt and optimize to break your specific scheme.


I have no insight into Schneier's top security setup, but I know for a fact he uses Windows on a regular basis. His portable computer is a standard Sony Vaio running Windows.


That doesn't seem plausible. He could non-specifically say "don't use Windows. Ideally use [some stock linux distro] or investigate other unix operating systems that can be configured for safety." That wouldn't really give away anything.


It's not about giving things away -- it's about delaying your adversary. If the NSA takes Schneier at his word and targets him accordingly, then (assuming this is a diversionary tactic) any 0days or other attacks they attempt to send his way will fail. Now obviously that's not a long-term strategy, but it does provide an extra layer of protection against naïve attacks based on his public statements. To be clear, I'm not sure I buy the smokescreen idea, either, but your reason for disbelieving is flawed.


It's fair that this would make some difference. But it's not as if "he's not on Windows, but we have no clue what he is on" is a recipe for quick success. He doesn't have to say what Linux distro he uses, just give a placeholder for a decent one, and he doesn't even have to use Linux.

In any case, is it worth potentially misleading a lot of people for the sake of such a marginal increase in his own security? He could have an even more secure setup if he didn't talk about instituting an air gap. He's already giving away information.


Windows is a bad choice for an air gapped system. A much better choice would be Slackware, where it's even simple to maintain an air gapped system. Maintaining a Linux with a package manager, e.g. Debian, without internet is much more trouble.

I had scripts to maintain an air gapped Debian 10 years ago, but can no longer recommend them, as Debian now has signed archives, and the script breaks the signature.


You can still use optical media (CD/DVD) or USB keys to install packages with APT, so I don’t see how Slackware would have any advantage over Debian there.

There’s even an apt-offline[0] to create a list of ‘needed’ packages on one system, then download these packages on another one and transport them to the air-gapped system. Of course, you will still have to decide whether to trust these downloaded packages, and unless you trust at least some Debian Developers to do the right thing, this will be hard to do even with GPG signatures on all packages.

[0] http://packages.debian.org/wheezy/apt-offline


Slackware is generally more secure by default, such as explicitly requiring root (not sudo) for package management and at least sudo for utilities that prompt the kernel. This is much in the vein of the *BSDs.

Also its conservative nature, constant security advisories and eschewing of bleeding edge are a bonus.


  $ dpkg-query -W -f '${Status}\n' sudo
  unknown ok not-installed
IOW, it is perfectly possible not to use (or even install) sudo on Debian. I don’t want to argue whether Debian or Slackware have a more ‘conservative nature’ nor whether that’s an advantage, but there are of course also security advisories for Debian (e.g. today for the systemd packages…).

> at least sudo for utilities that prompt the kernel

Basically everything ‘prompts the kernel’ in one way or another, could you expand on how exactly Slackware manages to run when every syscall needs sudo? (Or what you mean by ‘prompt the kernel’.)

I guess at the end of the day, you can configure a Debian installation to be more secure than any given Slackware installation and you can configure a Slackware installation to be more secure than any given Debian installation – this, of course, depends on your skills and experience with any of the two, so you should use with whatever you’re more comfortable :)


The OS on the air gapped system is not that important, since you don't have to deal with regular internet threats, and anybody who wants to attack you will use 0-days anyhow. But I think that the air gapped computer should have a different OS than the computer which writes the USB sticks (and is connected to the internet). Just to force the attacker to burn two 0-days.


OpenBSD on some older hardware is probably even better!


> 1. When you set up your computer, connect it to the Internet as little as possible. It's impossible to completely avoid connecting the computer to the Internet, but try to configure it all at once and as anonymously as possible.

There's no technical reason you can't keep your airgapped computer completely off the internet for its entire life cycle. I'd even go so far as to commit heresy and say that this is just plain bad security advice that Mr. Schneier is giving out here. Instead, you should probably get your install media from a trusted source and use that to install the OS and any initial updates (maybe that's a manufacturer's install CD or a Linux ISO that you burned yourself - avoid anything that isn't write-once). If the OS on your airgapped machine has an unpatched remote vulnerability, you're already putting that system at risk by connecting it to the internet even once.

Don't discard that trusted install media - if you need to create another airgapped machine, you're using the same airgapped data to perform the install. I realize that Bruce was discussing setting up a stand-alone computer, but I thought I'd share my experience: Years ago, around the same time that Blaster was a nuisance, I managed a network of airgapped machines. If any one of them had been hit because I chose to just let it download updates off the internet, the entire network would have been compromised. This would be much worse if you were worried about a targeted attack - every time you connect a fresh computer to the internet with the intent of moving that box over to the secure network, you're giving the attacker another opportunity to gain access.

For transferring data back and forth, I've used CDs in the past, but toyed with the idea of using a dedicated serial cable for transfers instead. Tar up the files, connect the cable, tell the remote machine to listen, shoot them over, then disconnect the cable. The connection has no network stack to worry about independent programs sending data across the channel; if extra data is added, the result on the other end likely won't untar; there's no auto-execution of programs to worry about. The only things I have to worry about being compromised are my copies of tar and cat. Removable media in general has issues - Schneier mentions a few examples in the article of successful compromises using USB sticks.
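The sending side of that is pleasantly short. A sketch with pyserial assumed (device path illustrative); printing a sha256 on both ends gives a cheap eyeball integrity check on top of tar simply failing to untar:

    import hashlib, io, serial, tarfile   # pyserial assumed

    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add("outbound/")              # files headed across the gap

    payload = buf.getvalue()
    print("sha256:", hashlib.sha256(payload).hexdigest())

    with serial.Serial("/dev/ttyUSB0", 115200) as port:
        port.write(payload)   # receiver runs the mirror image of this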


> If the OS on your airgapped machine has an unpatched remote vulnerability, you're already putting that system at risk by connecting it to the internet even once.

Even if it's behind a NAT firewall with no external ports open? And you only connect via SSL (or SSH) to specific known hosts?


There are a whole bunch of things you can do to mitigate the risk, and a whole bunch of other variables regarding the network environment that you're setting it up in. The network behind that NAT might be compromised, and depending on the operating system there may be ports open by default that could be compromised before you can close them, or there could be some other remote vulnerability. I remember about a decade ago having a Windows box that I was wiping/restoring for a family member get infected with Blaster after its first reboot, before all of the system updates had finished downloading.

Something with a good reputation for security, like a clean OpenBSD install with no ports open, is unlikely to get hit on its first round of updates. Even so, if you're going to go through all of the hassle to set up an airgapped system anyways, why bother taking the risk?


For the really paranoid, to the extent that your data can be represented as a text file, you can print it on paper from your internet connected machine and OCR it into your air gapped machine, and vice versa. In this case, you only have to worry about your printer or scanner having a backdoor. If you are very confident in your OCR accuracy, you can encrypt it prior to printing and decrypt it after scanning.

Just remember to burn the paper afterwards.


"Don't worry too much about patching your system; in general, the risk of the executable code is worse than the risk of not having your patches up to date."

Not good advice. If you plan to open anything other than text files on the machine, un-patched software is almost as big a risk as transferring executables. The only difference is that it seems less dangerous to you.


I came here to say the exact same thing. The problem with complicated file formats isn't that they contain "macros"; it's that the code that parses and interprets those files is prone to memory corruption.


I'd do two things differently.

First, instead of using removable media from which data could still be recovered, I'd get a second Ethernet switch. Whenever I wanted to move data from my regular machine to my secure machine, I'd have to move the cable on my regular machine from one switch to the other. Thus it would be physically impossible to be connected to both internal and external networks simultaneously, and I wouldn't be leaving any persistent physical data trail like a USB stick or CD-ROM.

The second thing I'd do is a double air gap. Think of it as an airlock: you can't open the inner door until you're sure no contaminants got through the outer door. The intermediate host would have a single purpose: run malware checks. Thus, only data that had already been checked in a secure environment would even be allowed to touch the real secure machine.


You're assuming that the only way to compromise a computer is through a direct internet connection. This is wrong. Pre-internet viruses spread on diskettes.

The point of the air gap is to assume that any computer that has ever been connected to the internet is infected in an undetectable manner and that this infection is capable of spreading autonomously. Only by physically denying the infection the means to spread can you protect against it. Secondarily, you want to deny the infection the means to communicate back home, but sometimes the point of an infection isn't to steal data - see Stuxnet.


"You're assuming that the only way to compromise a computer is through a direct internet connection."

I'm assuming the exact opposite. I recognize, as does Schneier, that infection can occur without such a connection. Any mechanism that facilitates transfer of data also facilitates infection. That includes USB sticks and CD-Rs, which have the additional problem of leaving artifacts around for others to pick up later. It's the "we can secure USB sticks better than we can secure networks" belief that's magical.

"Only by physically denying the infection the means to spread can you protect against it."

As soon as you physically move a USB stick from one machine to another, you've effectively created a network. A really crappy one with high latency, but that doesn't make it any more secure as you yourself illustrate with the diskette example.


A network connection, even a brief one, has an enormous attack surface. The corresponding surface when using physical media is much smaller.


Bull. A network connection on a physically separate network subject to proper inspection/monitoring has a very small attack surface. The corresponding attack surface for a USB stick is larger, with new exploits being discovered every day. The separate switch is functionally identical to the USB stick. They both allow transfer of data. They can both potentially be attack vectors. They both (in this construction) require manual intervention to complete the data path. The only difference is that it's a lot easier to get a copy of someone's data on a USB stick, after someone conveniently recorded their data transfer on a readily purloined bit of media.

You can't acknowledge the exploits that have occurred via diskettes or USB sticks, and then also say they're fundamentally better than an isolated network. It's illogical. In fact, it's stupid.


Networks are avoided for a reason. In your scenario, establishing a network connection between the intermediate computer and your "air gap" computer is a breach of the air gap if the intermediate computer has been compromised.

The same argument could be made for removable media, but there are good methods for tightly controlling the transfer of data on removable media. Network interfaces have a larger attack surface.


There are methods for tightly controlling the transfer of data on networks too. It's really trivial to set things up so that the only connection allowed in either direction is the one your file-transfer program is using. That's Firewall 101, too restrictive for any normal machine but easy to accomplish.

Also, a compromise of the intermediate computer doesn't represent a breach of the air gap. The same two-gap approach could be used with removable media. It would still provide the same benefits (and vulnerabilities). You seem to be saying that the intermediate computer could get compromised because the air gap is breached, and the air gap is breached because the intermediate computer could be compromised. A bit circular, don't you think? That intermediate computer can provide an extra layer of protection or an extra vector for attack. Which matters more will depend on its configuration and use, but for any semi-sane configuration the protective aspect would quickly dominate.

The intermediate computer in a double-air-gap setup would be just as secure as Schneier's single-air-gapped one, and the third computer even more so.


Notice I said "has a larger attack surface".

We're talking about the method that is used to walk things across the air gap. There is no perfect method. Not even if you were to use printed paper and manually transcribe (the human doing the work can be compromised). The goal is to reduce your attack surface as much as possible. In that regard, physical media has advantages over any network.


> It's really trivial to set things up so that the only connection allowed in either direction is the one your file-transfer program is using.

You sound a bit like a 10-year-old playing James Bond. Network packets have plenty of metadata fields and slack space that can be used for data egress. And then there are steganographic methods like playing tricks with packet fragmentation, retransmission, and jitter.

The only way to do it with any degree of security is to convert to a primitive serial connection like RS-232 and run the data through an NSA-style guard device.


I'm not the one thinking magically. Of course it's possible to do nasty things even on a physically isolated network. I know because I was doing nasty things on physically isolated thicknet twenty-some years ago. ;) But that's not the point. The point is that there are also vulnerabilities in the proposed alternative. Those vulnerabilities have in fact existed even longer (see mseebach's diskette comment) and Schneier mentions a couple that should give pause to anyone from the "USB sticks are better" camp. When all is said and done, "left the USB stick on the counter" vulnerability is probably the largest of all. Recording your data transfer, which is what that amounts to, is doing half of the bad guy's job already. Brilliant. Sure, you avoid one set of problems, but only by walking into a much worse set.

"convert to a primitive serial connection like RS-232 and run the data through an NSA-style guard device."

Actually I thought of suggesting line-of-sight IR. That's a true air gap, optical is harder than electrical to eavesdrop on, and it still avoids the vulnerabilities specific to sneakernet.


Hold the phone. Who is engaged in magical thinking here? Everyone in this conversation appears to be presenting rational arguments. There's some disagreement over whether networks have a larger attack surface than physical media, but neither side has engaged in any magical thinking.


I worked on an "Air Gapped" network. We didn't call it that. As the internet and open source took off it became more and more painful.

To get files over to the network, we'd have to download from the internet and then burn to DVD and bring it over. The thinking was that DVDs, with their write-once capability, would prevent unwanted files from hopping aboard. This didn't help if the file you were transferring was infected, but files were virus checked before burning.

Oddly, files went Windows -> DVD -> HP-UX machines, meaning the virus scan on Windows was somewhat useless.

But having no access to cpan or online research on your main work machine was hard.


This article misses the most important security tip: do not use any proprietary software, especially the ones starting with "W" made by MS.


Are you sure? How much more difficult would it be for an intelligence agency to get an open source hacker to "accidentally" inject a vulnerability disguised as a bug, than to pressure MS to write a backdoor? (or to get MS to hire a mole)


The premise that a large, US-based software company needs pressure from intelligence agencies to write a backdoor in their product isn't historically accurate.

For example, Microsoft developers invented the idea of USB AutoRun -- that an executable on a USB drive is executed automatically when the drive is plugged in -- without any kind of pressure. That feature is responsible for the Buckshot Yankee attack,[1] early Stuxnet,[2] and of course COFEE.[3]

COFEE in particular was certainly not a bug or written by a single mole. On the other hand there is no evidence of open-source projects going out of their way to accommodate intelligence agencies in the same way Microsoft has.

Without this active collaboration from developers, it is far more difficult to backdoor software.

[1] http://www.washingtonpost.com/wp-dyn/content/article/2010/08...

[2] http://www.symantec.com/connect/blogs/stuxnet-lnk-file-vulne...

[3] https://wikileaks.org/wiki/Microsoft_COFEE_(Computer_Online_...


> On the other hand there is no evidence of open-source projects going out of their way to accommodate intelligence agencies in the same way Microsoft has.

http://blogs.iss.net/archive/papers/ShmooCon2011-USB_Autorun...


Yes I'm sure. It would be much more difficult, because that vulnerability could possibly be detected by many people reviewing the code, so it must be more sophisticated and hidden than one buried in a precompiled binary. Also I haven't seen any discovered backdoor/vulnerability in a widespread open source product yet, contrary to the countless examples in products by big names. Not saying that open source is 100% secure, but it's still much safer than proprietary programs.

[EDIT] Thanks for pointing out the Debian SSL example, I wasn't aware of that. But it still doesn't deny the key point I mentioned - that there are more discovered backdoors and vulnerabilities in proprietary software than in open software.


> Also I haven't seen any discovered backdoor/vulnerability on widespread open source product yet.

One could argue that the Debian SSL issue[0] would qualify as such a backdoor/vulnerability, although I don’t want to argue that it was introduced maliciously, merely that it could have been introduced with such intentions.

[0] http://www.debian.org/security/2008/dsa-1571


All those people reviewing the code took two years to discover the Debian SSL bug.

https://www.schneier.com/blog/archives/2008/05/random_number...

https://wiki.debian.org/SSLkeys


Now how long would that have taken WITHOUT source code?

Would we _ever_ have known (without keyfiles on disk to analyze)?


You don't have to trust GNU/Linux - in fact you should distrust it as well - but it is safer and better than Windows according to the record so far, since you have many more independent security solutions and can customize it to your liking, and we have seen Stuxnet, NSA_KEY, etc...

You can modify the kernel so much as to make any existing 0days impractical for your particular installation. You can't do that with the Windows kernel. For Linux, you can remove all the drivers not really needed for your particular computer, and prevent modules from being used.

If you're really paranoid you must go with ArchHurd or HaikuOS - extremely small OSes that not many have heard about, let alone the NSA having devoted time to 0-daying them or inserting backdoors. But definitely do not go back to Windows.


Almost all of that goes out the window for targeted attacks. For general safety, you just need to be a little bit better than the average computer user--the money is in simple reliable attacks that hit broad swaths of the public.

But for targeted attacks you need to have near perfect security.

Edit: You're probably still gaining by leaving the beaten path, but if you're exposed, and someone knows what you're running and has the resources to create custom exploits, that won't help much.


I don't think this matters against the NSA. An adversary either has the resources to exploit some target, or not.

If the NSA has a lot of exploits to choose from for each of Linux, Mac, and Windows then it doesn't matter which one you're using.

Think of this in Bayesian terms. You have some prior beliefs that MS Windows software is less secure than other software. What we've gotten as a result of all these leaks is new likelihoods, so we have to modify our posterior.

I.e. A is more secure than B doesn't matter if both A and B are easily exploitable by your adversary.


Man, Bleachbit sure took no time at all to put up his "testimonial" blurb on their site!

> "Since I started working with Snowden's documents, I have been using [...] BleachBit" -- Bruce Schneier


haha I was thinking the same when I noticed it. Has anyone here used BleachBit? From the description it looks like a CCleaner + Eraser mix.


I'd try it out if I weren't a Mac guy these days. Looks decent, and nicely minimalist.


What about isolation? With heavy use of virtualization one can make the air gapped machine even more secure:

- Only open documents in a virtual machine.

- Only interface with the document transfer media (CD/DVD etc.) through virtual machines. Don't ever mount or use this media on your host.

- Clone a new throw-away virtual machine for opening EACH document and delete it after reading the document.

About his points:

1) This is nonsense. It's possible to set up an OS (for example linux) with zero internet connectivity, just download the ISO on another computer, verify checksums and signatures, burn onto optical media and you're set.

8) Also, use one-time media. Write once on the internet host, fill up and finalize media, read once on the air gap host, destroy media.

Also, I don't think Schneier is recommending to use Windows for this task. He's just assuming that most people out there are using Windows and can use these tips to improve their security. For his own high security setup(s) I'm pretty sure he'd have the common sense not to use Windows.


Surprised that he decided to use Windows.


I don't know for sure, but it's possible that a Windows machine, if it did accidentally leak any metadata about itself, would be less unique than a Linux install. Just a guess though.


Probably true, and while Windows may well be compromised it is unlikely to be compromised in a way that can jump a well-maintained air gap.


If you assume your connected machine is going to get p0wnd, and you rely on the air gap to prevent your secure machine from being penetrated, you could run any OS you like, no matter your opinion of how much the vendor cooperates with the NSA.


Given that he's going to the level of preferring a store-bought USB stick over one found in a parking lot, it shows he's concerned about transferring malware. Not using the OS for which the most malware exists seems like a sensible choice.

After all, if you're going to all this trouble and inconveniencing yourself in the name of security, what's a touch more inconvenience in using an operating system that you're less familiar with?


Who is your threat? Are you worried about a spray-and-pray attacker who just dumps a bunch of malware out there? Or are you worried about being specifically targeted by someone who wants your stuff?

In the first case, a USB key bought at a big box store might be full of malware. In the second case, the big box store is the perfect place to buy something, as long as it's not the store where you always buy stuff, because the APT wants to keep his profile small.


That's a good point. But at the level this game is being played, I'm not sure if there is a difference. Schneier has made himself a high value target, so in the FOXACID hierarchy of exploits, he is worth risking the use of an expensively bought or developed zero-day exploit.


I don't think this is right. You should reduce the entire attack surface. Since Bruce is worried about malware getting in via a removable device and cites examples that attacked Windows he should not use Windows.


If you assume the OS is coming from a compromised vendor, what's to stop it from making wireless network connections on the sly? Or adding a 'phone home' payload to any outgoing data copied to removable media?

You could physically destroy the wireless capability. But not using or destroying the media inputs would leave you with a fancy typewriter.


Indeed. If he picked Linux or a Mac he'd have the advantage of being able to read most MS proprietary formats without the disadvantage of embedded code executing.


Seems to me the only reason to choose Windows for this would be to use Microsoft Office to read the NSA documents and he's stated that he's using OpenOffice, so...


Ironically it is a result of modern encryption that separating things from the internet is so difficult. If every app sends data over an encrypted channel it makes it much harder to audit what exactly it is doing. You can't impose rules if you don't know what the data is or where it will end up.


He forgot to mention keeping the computer in a Faraday cage. If he has Snowden info, it seems likely that intelligence agencies would be monitoring him closely enough to use Van Eck phreaking to spy on his laptop display (or other parts of the computer that leak info through RF, which is all of them).


Schneier explicitly mentioned tempest (http://en.wikipedia.org/wiki/Tempest_(codename)) in the article.



"the first company to market a USB stick with a light that indicates a write operation -- not read or write; I've got one of those -- wins a prize".

Get to work, people!


How about an inline protocol analyzer that knows the USB mass storage device class protocol, and can detect when a write request is being sent?

That would perhaps also make it possible to optionally prevent such requests from ever reaching the USB stick, thus adding write-protection to legacy sticks.

Probably not 100% trivial given the signalling speed and general complexity of USB, but perhaps solvable using an FPGA? There is a software-only USB stack for 8-bit AVRs, so it doesn't seem totally impossible, either.

No, I don't have a startup manufacturing such a device. :)

UPDATE: Ah, I just reinvented the WriteBlocker: http://www.wiebetech.com/products/USB-WriteBlocker.php. Sigh.
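For what it's worth, the detection logic itself is tiny. In USB mass storage's bulk-only transport, every command travels in a 31-byte Command Block Wrapper with the SCSI opcode at a fixed offset, so the analyzer mostly has to watch for write opcodes. A sketch of the check (the hard part is sitting inline on the bus at USB signalling rates, not this):

    WRITE_OPCODES = {0x0A, 0x2A, 0xAA, 0x8A}   # WRITE(6)/(10)/(12)/(16)

    def is_write_cbw(pkt: bytes) -> bool:
        # dCBWSignature is 0x43425355 little-endian, i.e. b"USBC"
        if len(pkt) < 31 or pkt[:4] != b"USBC":
            return False
        return pkt[15] in WRITE_OPCODES        # first byte of the SCSI CDB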


No reason to involve FPGAs (beyond performance concerns). E.g. FTDI makes Arduino-like boards with host+device USB connections, which seem to cost $35 at their store. With this specific device you are limited to full-speed USB, which is a slight inconvenience for storage use. But it's also the first product I happened to stumble upon; there are probably better alternatives out there.

http://www.ftdichip.com/Products/Modules/DevelopmentModules....


There are sticks out there that have hardware write-enable switches (I keep my medical records on one), so that you can at least control when writes occur.


Nothing is stopping a compromised host system from flashing the microcontroller on the USB stick to make it lie to you. They aren't appliances.


What you propose is probably possible for many USB sticks, but I don't think it's possible in general. Flashing a microcontroller often requires access to a serial interface like SPI, I2C, or JTAG. That's typically on different pins than USB (and those pins can be buried in potting or otherwise inaccessible to a compromised host). In addition, some models can have particular pins connected to disable flashing.


Some microcontrollers I've been working with that had USB capability also supported DFU - flashing new firmware via USB.

I have also resurrected a write-broken flash drive by re-flashing its firmware (a tool for which was provided by the microcontroller vendor, which I found out by looking at the VID/PID values and googling them).


Oh definitely it's possible for many microcontrollers. DFU is very handy during development. I was just reacting to this statement:

> Nothing is stopping a compromised host system from flashing the microcontroller on the USB stick to make it lie to you.

Nothing, that is, except possibly a complete absence of such a facility! It depends on the microcontroller and how it has been wired into the USB device.


Why do you write 'at least'? Isn't this strictly superior to Schneier's light idea?


Probably better on balance, but not strictly better.

On the one hand, it gives you greater security when used perfectly. On the other hand, it has a cognitive overhead, and a worse failure mode (you forget that you've left it writeable, and won't have the light to give you feedback).

So even better would be a hardware lock for writes and a light indicating when writes happen.


Including all SD cards.


Most SD card "write prevent" tabs are suggestions only. From Wikipedia:

    The presence of a notch, and the presence and position of
    a tab, have no effect on the SD card's operation. A host
    device that supports write protection should refuse to
    write to an SD card that is designated read-only in this
    way. Some host devices do not support write protection,
    which is an optional feature of the SD specification.
    Drivers and devices that do obey a read-only indication
    may give the user a way to override it.
In most cases the switch is just software detectable. Some prosumer cameras use this switch to indicate there is a firmware update on the card and that the camera should try to apply it when it is powered up.


Interestingly enough, the switches are not electric ones. It's just something that can be physically (maybe even optically) detected by the device.


www.youtube.com/watch?v=LZXaYLVFdcQ


Schneier is not as paranoid or as particular as I thought he would be:

> 1. When you set up your computer, connect it to the Internet as little as possible. It's impossible to completely avoid connecting the computer to the Internet, but try to configure it all at once and as anonymously as possible. I purchased my computer off-the-shelf in a big box store, then went to a friend's network and downloaded everything I needed in a single session. (The ultra-paranoid way to do this is to buy two identical computers, configure one using the above method, upload the results to a cloud-based anti-virus checker, and transfer the results of that to the air gap machine using a one-way process.)

A friend's house is not "anonymous". If you have the need for an air gap, then you probably should assume that your attackers have the ability to suss out your offline and online social networks. In a not-too-distant future, it's not hard to imagine a surveillance operative being able to expand their examination of network traffic to include not only you, but your associates, and then to detect when an online installation routine was run. At that point, the fact that that computer's fingerprint (however it may be calculated) was never seen again from that friend's home might be one flag of several in a comprehensive surveillance profile.

Though I guess if Schneier is talking about a built-from-parts computer, I'm assuming he means a desktop computer, which can't exactly be assembled in a Starbucks two states away to connect to the WiFi. OTOH, I think I would prefer a Linux laptop as my air-gapped computer.


I'd actually be interested in hearing from Tptacek on this topic, but he's absent for once. Weird.


Also you can boot from a livecd each time. You can use a livecd boot on your internet-enabled device. Imagine a fellow rootkiter's frustration when he realises the root filesystem is read-only.

Of course you're still vulnerable to BIOS/firmware malware.


There is another backchannel that can be used if your air-gapped computer ever does get compromised, which I haven't seen anyone discuss yet. If your internet-connected computer and air-gapped computer both have audio speakers and a microphone, then that seems like a perfect covert way for a compromise to set up wireless communications between them. An audio signal can be made to resemble fan noise, or sit outside the human hearing range. I wonder if this has ever been exploited before.
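Generating such a signal is trivial, which is what makes the channel plausible. A proof-of-concept of the transmit side with numpy (assumed); 18 kHz sits near the edge of adult hearing, and a receiver would run an FFT over the other machine's microphone input:

    import numpy as np

    RATE, FREQ, BIT_MS = 44100, 18000, 50

    def encode(bits):
        n = int(RATE * BIT_MS / 1000)              # samples per bit
        t = np.arange(n) / RATE
        tone = 0.1 * np.sin(2 * np.pi * FREQ * t)  # quiet 18 kHz burst
        silence = np.zeros(n)
        return np.concatenate([tone if b else silence for b in bits])

    signal = encode([1, 0, 1, 1, 0])   # play through the soundcard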


Bruce gives a good collection of tips, but in his specific case it probably matters little. If he is a target of surveillance, I would be thoroughly surprised if he did not get "black bag" intrusions (as he puts it). He is a target that is definitely high enough priority ("I have Snowden's documents!") that dedicating assets to investigation is more likely, and at that point an air gap seems more like an inconvenience to the attacker than true protection.


I ordered a new desktop computer last night, and a number of the options from which I chose did NOT seem to have Wi-Fi built in. Rather, they sold low-cost USB devices for Wi-Fi connectivity. I didn't actually check all the motherboard specs to confirm this, but it seems pretty accurate.

So getting an air-gapped computer without Wi-Fi would seem to be the least of the problems.


I've been using a serial cable with data LEDs, carrying gpg ASCII-armored data. It's very easy to visually inspect all data before further processing. The only attack vectors remaining are the gpg ASCII armor parser, signature verification & decryption.

Afaik, this is quite safe.

This computer has not been connected to the internet ever, and it won't be in future.

Don't forget physical site security.


Secure Linux air gap.

Here is a real secure Linux air gap:

1. From a friend's computer, burn two copies of your favorite Linux liveCD.

2. Hold the two identical discs so that you can see the reflection of the document in front of you in one (mirror writing) and the reflection of the reflection in the other. (normal writing.)

3. You now have a secure Linux air gap with which you can read any document.


I'd like to sign my executable files on an air-gapped machine. Problem is, the code-signing tools for OS X and Windows (i.e. codesign and signtool.exe) seem to require access to my private key to generate the signature AND an internet connection to generate the timestamp.

Is there any solution here?


What about using a Linux CD? You can get online and use it for what you need without it downloading or installing software. Every time you re-boot it's guaranteed to be the same OS without any spying malware on it... I guess you'd have to save files on a USB drive though.


> if you're using optical media, those disks will be impossible to erase

I pop the old optical disks I'm tossing away into a microwave oven for 10 seconds at 1000 watts. How recoverable is the data stored on them? (And how carcinogenic is the stench?)


When microwaving a CD, the induction stops when the track is cut. There are usually big chunks of surface (~2cm²) unaffected. You can't read it with your drive but it's reasonable to think that some data can be retrieved from it, with enough work and money.


A belt sander with rough grit should take care of that without damage to device or surroundings. Sand from the top -- that's where the data is, or used to be. For paranoid satisfaction, sand it all the way through.


I'd like to suggest to everyone trying this that: 1. holding such a thin object against a belt sander may be difficult. 2. if you lose your grip, it might fly across the room at dangerous speed.


In the same spirit, I'd like to suggest that anyone trying this use some sort of jig to hold the disc against the belt rather than try to hold it in place against lateral force with your bare fingers.

Source: long-ago experience. Ouch.


It seems like the simplest way would be to burn it.


More than you would like. The amount of surface area left is substantial so something could be recovered. The best way is to melt them with thermite. If it is legal in your area.


Apparently the dye coating used on CD-Rs [0] has a melting point of 200-300C [1]. That's low enough that even a candle would melt it [2].

[0] http://en.wikipedia.org/wiki/CD-R

[1] http://www.tstchem.com/eng/?page_id=243

[2] http://en.wikipedia.org/wiki/Fire#Typical_temperatures_of_fi...

(Links to more scientific evidence appreciated)


wait, thermite is illegal?


I have no idea. Chances of a mix that violently burns at 1300 degrees being under regulation somewhere in the wide world are not that slim.


It's useful for fixing cracks in cast iron and joining rails, so I don't see why it'd be regulated.


Something being regulated does not mean that it can't be used industrially. Many types of explosives are regulated even though they are used industrially every day in quarries and mines.

Still, restricting thermite would be silly and ineffective. As a chem-lab assistant, I made the stuff in high school. Aluminum powder is widely available and rather cheap, as is iron oxide (obviously ;). I also made cupric oxide thermite, but that didn't work as well.


Everything fun is illegal somewhere.


Well, CDs contain BPA for one. If you enjoy endocrine disruption, this could be your easy ticket!


This reminds me a lot of what radio personalities who deal with personal finance recommend: the system you do your banking on should do only that and nothing more.

While such a system is obviously still connected to the net, you reduce your risk by running a discreet set of software.

To be totally safe you would need a room which prevents emissions from escaping it. Back in my service days we had systems isolated like that, simply because you could be monitored through walls.


discrete


"Note: the first company to market a USB stick with a light that indicates a write operation -- not read or write; I've got one of those -- wins a prize."


Isn't this an example of poor OPSEC? If he is in fact using the procedures spelled out in this post, he's giving his potential attackers a clear picture of all of his security measures so that they can focus effort on exploiting a weakness.


He forgot that the computer must be wrapped in tinfoil.


How about conducting all IO by webcam OCR?


Schneier is getting more and more disappointing...

Be cautious of this advice dear readers.

" (The ultra-paranoid way to do this is to buy two identical computers, configure one using the above method, upload the results to a cloud-based anti-virus checker, and transfer the results of that to the air gap machine using a one-way process.)"

No, the ultra paranoid would buy two computers, perform install 1 from friend 1's internet connection, downloading everything and keeping a copy and checksums, then perform install 2 on a friend of a friend's connection, then compare the results of both the downloaded checksums and the installation. (For certain flavors of Linux it should be the same.)
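The comparison step itself is a few lines (a sketch; directory names are placeholders):

    import hashlib, pathlib

    def manifest(root):
        # hash every file downloaded during one install session
        return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
                for p in pathlib.Path(root).iterdir() if p.is_file()}

    a, b = manifest("install1"), manifest("install2")
    for name in sorted(set(a) | set(b)):
        if a.get(name) != b.get(name):
            print("MISMATCH:", name)   # at least one path was tampered with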

There is no point in uploading to a cloud anti-virus checker if the NSA is after you; it's not like they are going to use Slammer or some other known virus against you.

Jesus christ, and he is using Windows!? WTF. He is going against his own advice - to use public/free software as often as possible.

For the step of moving files between air-gapped computers, he suggests using USB sticks. He forgot to say that you must encrypt the entire USB stick as well - you don't write a filesystem to it, only an encrypted blob! "Viruses" can be transferred via the NTFS filesystem he is probably using. Even a Linux fs had a vulnerability: when the kernel tried to mount the fs it would privilege-escalate to root and run code - code that could be hidden in NTFS alternate (hidden) streams.

EDIT: For the NSA agent wishing to leak, a good idea is to look into HaikuOS, MenuetOS etc. and use those instead of GNU/Linux, or ArchHurd. Something very rare, something unexpected. Modify the installation from the default as much as you can. Hm, we should make an Ask HN thread - what are the best ingenious methods for current NSA employees to leak again, now that they have to share a computer with a partner?


> No, the ultra paranoid would buy two computers...

The ultra-ultra paranoid might use their popular and widely read blog - a blog which is almost certainly read by more than one or two people at the NSA - to post an enormous boat-load of misdirection that is nevertheless also helpful advice for people who are actually stuck attempting to secure Windows computers. Advice that happens to highlight what a nigh-impossible task that really is. (TEN rules? Good luck.)

I can't think of any reason for someone in Schneier's position to publicize his actual security arrangements at this time.

Then again, maybe he feels he has a duty, as a security expert, to use and thereby remain familiar with the most popular systems around.


>Then again, maybe he feels he has a duty, as a security expert, to use and thereby remain familiar with the most popular systems around.

I think this is the case for security experts like Schneier and Krebs. Most of the threats they're interested in affect Windows. Most of their readers run Windows. They would be a less useful resource if their first recommendation was always "ditch Windows" even if that's accurate.


Maybe he needs to use Windows, but whether or not his single air-gapped computer runs Windows isn't going to determine whether he's running Windows in general.

I don't see how maintaining a fully off-Internet computer isn't much more extreme than switching OSes. If he is recommending an off-Internet machine, it seems clear he'd want to recommend the hard steps as well as the easy steps.

I mean, he says he's running OpenOffice, so Linux should work great for him.


> I can't think of any reason for someone in Schneier's position to publicize his actual security arrangements at this time.

Why not? He's the last person I'd expect to rely on security through obscurity.


When the attack vector literally relies on information disparities (previously unknown exploits), "obscurity" does provide a fair amount of security. If everyone thinks you're running Linux+OpenOffice and spends time looking for exploits there, but you're really running BeOS+Pe, that gives you a significant upper hand.


One of the few constants in security: "Security through obscurity is no security at all."


I'm surprised how often that's repeated given that it's not true at all. Obscurity is a totally legitimate security technique. Maybe not the strongest one, and hopefully not your only defense, but it's clearly got some value.


In this particular case, Schneier can honestly, with 100% accuracy assume that he is being monitored. This is not paranoia - he has publicly stated that he has some of the Snowden documents. And as we've seen in this saga, the people in power are doing anything and everything they can to get to them. (Miranda case, Guardian UK hard drive destruction, ...)

Under constant and potentially aggressive surveillance, there is not much room for obscurity.

As to his using Windows - well, there may be good reason for that. Schneier has been using Windows for a very long time, and with his level of sophistication, I expect him to be rather good at digging into the system and identifying potentially unwanted behaviour. This should make the NSA less likely to deploy some of their highest-value tools, because it is probable that the tools used would be exposed.

Assuming he is less well versed in maintaining and excavating a Linux installation, it would be more likely for the machine to get silently infected by a zero-day, high-octane exploit.

After all: prevention is desirable, detection is crucial. (How else could you contain the damage once it happens?)


The fundamental misunderstanding that has developed is that there are "systems which are secure", and "systems which are not secure". This is false. Security can be thought of as "how long it will take for Attacker A to compromise this system". All systems can be compromised eventually.

Thus clearly security through obscurity is a valid tactic to increase security. You just have to regularly alter your structure based on how quickly your attackers work, but this is no different from any other form of security. All forms of security have a time limit...


Well I use it as the counterargument to "open source software is full of holes because bad guys can read the source". Because everyone knows that Microsoft doesn't publish their source code and has never been exploited ever.

My point was simply that obfuscation will barely even slow down a determined attacker, ESPECIALLY one with the resources of a nation state (such as the US). It won't even register as a speed bump to anything other than a script kiddie or a worm designed for the majority case.

I'm really kind of shocked that my previous comment was voted -1, when the numbers blatantly agree with me on closed source vendors getting the living snot hacked out of them, even if their source is "obscured" due to not being available.


I literally gave you one example where that helped. Stuxnet and others are further evidence.

Simplified scenario: Your target will open a single file from you. How do you exploit them?


Very good point.

I'm inclined to believe Schneier's real security arrangements are obscured for now.


That's false. If you're talking exclusively about startups it may be true but that's only because maintaining obscurity isn't an option.


That's why no real systems use obscure strings of characters as authentication tokens.

Right?


You are misinterpreting 'obscurity' to as much of a degree as the grandparent post, though in a different way. The 'obscurity' that the adage speaks of relates to the system you're using. ROT13 is security through obscurity: as soon as your adversary knows you're using it, your system is broken. RSA is not security through obscurity: you can advertise that you use it (indeed, that's fundamentally a part of public key crypto), and still be safe.


> still be safe

Less safe than you were before. I think the whole point is that security is not a binary variable but a matter of degree.


"I can't think of any reason for someone in Schneier's position to publicize his actual security arrangements at this time."

Schneier is more and more about promoting Schneier. I can't imagine a reason why someone who is so concerned about security would provide accurate details on exactly what they do security-wise, because the more information you have about someone's practices, the easier it is to defeat those practices.

Not to mention that the mere fact that he is so much a public figure now makes him a much more likely target, which could negate many of the things he is trying to do.


He's not concerned primarily with his own security. His entire career has been about promoting good security practices.


Has he actually said that he isn't primarily concerned with his own security and he is willing to risk his security for the benefit of others?


Come on, he is doing one of the best jobs in the world educating him about security. No, he probably won't risk his security for the benefit of others, but he is a positive force. So please stop holding him to absurd standards.


He is educating himself about security? This may actually be a tautology. :)

By the way, how many of you guys have read his books? Having actually read Applied Cryptography, I felt his contribution to this field was very little.


Well, that was obviously a typo. There is the generation of new knowledge and the dissemination of old knowledge. I am not qualified to comment on the first; however, his contribution to the second is substantial.


How exactly is it substantial? I felt he mostly says things along the lines of, "oh, here is the 3-DES algorithm," with no commentary after it. Questions like why 3-DES is structured that way, or what these fields mean, are not explained at all. To me, that says he doesn't really know how 3-DES works. Another case I remember is the simple password protection scheme in ZIP. He says that the three keys are initialized with these three values (which can be found in the PKZIP app readme). Then he goes on to claim that it is not secure and that cryptographers can break it easily. But how one would start to do that is not explained at all. Why even mention what the three keys are initialized with? It was a disappointing book.

I felt he did not even disseminate old knowledge because he does not know/comprehend old knowledge. He just disseminated old predicates.


> and he is using Windows!? WTF. He is going against his own advice: to use public/free software as often as possible.

It is weird, but I saw it coming. He has always used Windows. Back in the early 2000s it was because that was what his company standardized on, to keep software distribution simple. They did network security solutions (consulting-style, IIRC) and it may have been easiest to match their clients. I take it he hasn't bothered to migrate since then.

I'm curious to know which version of Windows, but I kind of assume it will be 7. XP was too security-poor, Win8 is too new. Security people who use Windows tend to lag major revs by a little (like most software releases).

For the record, the irony here isn't that a security guy is using Windows, it's that he's using Windows for security as he reports on deliberately backdoored commercial software.


What's wrong with using Windows for this? If the machine is air-gapped, you could run as antiquated or insecure a system as you like. Windows seems perfectly practical and it's easy to obtain a DVD install for it.

The critical thing is to ensure you don't ever transfer nasty stuff to the machine. For the true paranoid, this means you are facing potential NSA zero-day vulnerabilities and such, so you could have the most patched version of Linux or Windows and still be at risk.

Now, it still makes sense to start with a good system, but the weakness here, whatever your OS, is the trustworthiness of the data you transfer to it on USB.


An air gap is just one layer of security.

Not knowing if there are code exec backdoors, intentional or otherwise, in your kernel: a whole different ballgame.

Schneier should know better.


Or he knows that the NSA can own both Windows and Linux, which is probably true. At which point, he needs to 1) know his system is not yet owned and 2) know that the encryption system he uses works and isn't backdoored.

So about his only plausible concern is whether TrueCrypt uses the Windows entropy pool switched to, say, Dual_EC_DRBG. That is a concern, but it's one with TrueCrypt and can be checked.

This is assuming that even for the NSA, having Windows identify and modify the TrueCrypt binary is impossible. Realistically, if you are worrying about that, you should worry about Intel chips doing the same thing. At which point, you're screwed no matter what OS you use.


I would add that he is in a sense a high-value target, by virtue of being well known and having an agenda which others may not approve of.

What might not be practical or possible with "anyone" becomes "possible" with a high value target. [1]

I'm remembering the OJ Simpson case back in the early 90s. The police did things to try to pin the crime on him (searching trash dumps) that they simply don't do, or don't have the resources to do, in ordinary cases. (They found nothing, but they went to extraordinary effort to find some evidence against OJ.)

[1] I'm wondering for example where physically the air gapped computer is kept and the physical security and/or alarms around that computer.


Physical security is the least of his problems. You can mount a lot of TEMPEST[0] attacks from across the street.

http://en.wikipedia.org/wiki/Tempest_(codename)


And -you- would know that your open source OS does not have backdoors and zero-day vulnerabilities and such?

Don't be silly.


So what if they can't access those?


Who is "they" and what are "those"?

Is "those" referring to zero-day vulnerabilities?

You realize that a zero-day vulnerability means "they" accessed "those," right?


For one thing, Windows has all these standard "auto-run" systems that enable infection through simply inserting media.
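
For what it's worth, you can at least check whether AutoRun is disabled on a given box. A rough sketch using Python's stdlib winreg (this assumes the Explorer policies key exists, and 0x91 as the usual Windows default when the value is unset):

  import winreg

  # NoDriveTypeAutoRun is a bitmask of drive types with AutoRun
  # disabled; 0xFF turns it off for every drive type.
  key = winreg.OpenKey(
      winreg.HKEY_CURRENT_USER,
      r'Software\Microsoft\Windows\CurrentVersion\Policies\Explorer')
  try:
      value, _ = winreg.QueryValueEx(key, 'NoDriveTypeAutoRun')
  except FileNotFoundError:
      value = 0x91  # typical default when the policy value is absent
  print('AutoRun fully disabled' if value == 0xFF
        else 'AutoRun still enabled for some drive types: 0x%02X' % value)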


>For the NSA agent wishing to leak, a good idea is to look into HaikuOS, MenuetOS, etc. and use those instead of GNU/Linux, or ArchHurd. Something very rare, something unexpected.

Is that really the best idea, though? One of the things Linux has going for it is a couple of decades' worth of public scrutiny and hardening.


The suggestion is that hardening is less valuable than obscurity. As long as your choice is secret and uncommon you'll probably be safer than a well known hardened OS.


The attackers are NSA.

It is stupid to think that obscuring the OS would delay them by any more than a couple of hours at best.


On an air-gapped machine? How would they determine the OS, so long as you're careful about how the files you export are written? The article rules out TEMPEST and physical attacks.


How would they know if it's not on the internet?


And that's why Schneier probably isn't using Windows; it's a diversion. Takes tinfoil hat off.


If Intel has put a backdoor in, then the OS does not matter.


He also claims that it's impossible to avoid connecting to the Internet, which is of course not true; there's no reason he couldn't just boot an OS installer (or an entire OS, like the Tails he mentions) from an optical disc or other media. Which is precisely what he will do, unless he thinks it's a good idea to boot and connect the pre-installed, unpatched, crapware-loaded copy of Windows...


Actually, the "ultra paranoid" would live in a cave without computers.

You didn't really explain what was wrong with using Windows.

Don't be like, "Jesus Christ, and he is using Windows!? WTF." without explaining why using Windows is insecure and how the OSes you'd use instead would be any better.

Don't think that because you're not using Windows you are secure. If a well-funded entity wants what you have, they will get it.

Schneier, IMHO, is paranoid for no reason. I'd lay a wager that the US government cares very little about him or his air-gapped systems (lol). He just wants to be heard.


I take it you use Windows..

Windows is known to be deliberately backdoored. Schneier has publicly stated he's got the Snowden documents, and has been reporting on them for the Guardian. I can't think of a better case for an air-gapped machine. On what planet would he not be under heavy surveillance?

Next you'll be saying Snowden is a fantasist too.


I use all OSes. And it matters _very_ little what OS you use. All that matters is how many resources your adversary is willing to expend to get what it wants.

And can you back up your claim of Windows being deliberately backdoored? My third statement in this post makes this point moot, too.

RE: Schneier having Snowden documents, I don't know if Schneier is cleared or has access to the right compartments to view them. If he does not, people would arrive at his doorstep to confiscate them. And since we have not heard Schneier saying the MIB have come to his door (which, trust me, he would, since he's a publicity hog), I'm led to believe that the US government cares _very_ little about him.


I've been wondering why the government is supposedly so interested in what Schneier is doing.


Because he's working with the Guardian, helping them interpret the Snowden documents.

http://www.theguardian.com/profile/bruceschneier

http://www.technologyreview.com/news/519336/bruce-schneier-n...


I see. I'm going to have to assume that this article is complete misdirection in that case. He probably uses Linux on a 486 from Kazakhstan.


I'm not really sure the government is interested. But Schneier seems to think they are.


Jesus Christ, and he is using Windows!? WTF.

Exactly. He's not using any applications that won't run on Linux (or one of the BSDs, or even OS X for that matter).


No, the ultra paranoid would buy two computers, perform install 1 over friend 1's internet connection, downloading everything and keeping a copy plus checksums, then perform install 2 over a friend-of-a-friend's connection, then compare both the downloaded checksums and the resulting installations. (For certain flavors of Linux they should be identical.)

Because the NSA would have no idea who your friends are, and wouldn't dream of monitoring their internet connections.

He forgot to say that you must encrypt the entire USB stick as well

Rule 10 says to consider encrypting everything you transfer. (How do you "write an encrypted blob" to a USB disk without using a filesystem, on a normal computer where you aren't writing your own fake-filesystem driver, such that you can also read it on an internet-connected computer, e.g. one at your friend's house?)


What makes you think you need a filesystem or partition table for a block device? Please tell me, because as far as I know, you don't need either.
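
To illustrate, a raw block device is just a big byte array as far as the kernel is concerned. A minimal sketch (the device name is a placeholder, and you need write permission on the node):

  # Write bytes straight onto the raw device, then read them back.
  # No partition table, no filesystem - nothing for an OS to auto-parse.
  payload = b'just an opaque encrypted blob'
  with open('/dev/sdX', 'r+b') as dev:
      dev.write(payload)
      dev.flush()
      dev.seek(0)
      assert dev.read(len(payload)) == payload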


I don't think you need a filesystem or partition table to put data on a block device.

I think you need a filesystem if you are in the world of Bruce Schneier's article, using Windows on both sides, writing at the level where "don't connect your air-gap computer to the internet" is worth saying.

Of course you can C-x M-c M-butterfly[1] this scenario, increasing impracticality more than security at every step, as far as you find enjoyable.

[1] http://xkcd.com/378/


> Jesus Christ, and he is using Windows!? WTF. He is going against his own advice: to use public/free software as often as possible.

I can think of a few reasons why that might be reasonable, but I agree that the advice could be better.

1. He IS using Linux/BSD/similar, but changed the article to Windows to be useful to Windows users and/or add some obscurity to his setup.
2. He is more familiar with Windows security practices, so he's more confident in his overall security by using Windows (not that that really helps with 0-days or NSA backdoors, though).
3. He has NSA docs that show Linux is compromised in some way and Windows isn't (seems unlikely TBH).

That said, it does seem very odd to stick with Windows if you feel it necessary to air-gap the machine.


I'm not sure air-gaps are as safe as we think they are.

Yes, it's tinfoil time: the NSA and various other defence agencies have deployed satellites capable of tuning into any CPU built since 1998.

Air gap in a deep, deep hole. Or maybe on the other side of the Sun. These are the only really safe places for us human subjects of the new Tech Overlords to stash data...


It's something worth thinking about in concept. Comments in here would have called this sort of precaution nonsensical only months ago. So perhaps it's worth considering the ideas and precautions that seem ridiculous now.


  [citation needed]


I don't have a desire to provide a citation because I'm exploring, conjecturing... I mean, after all, it's not infeasible that the satellite launches the NSA has been progressively making, over and over, are to support a network of CPU-sipping listening posts. This had been discussed even in the '80s, in certain circles...



