Cybersecurity and the curse of binary thinking (philvenables.com)
153 points by kiyanwang on July 11, 2021 | 99 comments



I’m sure this article will get some hate from parts of the cybersecurity world, but to me, someone who started in cyber in the 80s (when it was called “trusted computing”) and then came back to it decades later, it really resonates.

The binary fallacy is endemic in cybersecurity. At INKY we do active blocking of phishing emails, so people automatically assume that we must take the position, as many of our peer companies do, that “simulated phishing awareness training is worthless”. What we’ve actually found is that phishing awareness training is useful in that it trains users to be rightly suspicious of the identities of email senders. It doesn’t really train users to spot phish, no, but that doesn’t make it worthless!

On the subject of end users I agree with the author as well. What we’ve found is that if you give users useful guidance they truly understand, and only on a minority of emails, they actually follow it: they click on far fewer bad links, pay fewer fake invoices, etc. On the other hand, if you slap a static banner on every incoming email that says “external: be super careful!” and nothing else, users quickly learn to ignore this useless information and ultimately become completely blind to it. (And no, making the banner really fugly doesn’t help any.) In our experience with email security over the last 6 years, escaping the tyranny of binary thinking is absolutely critical to getting users properly engaged.


Trusted computing. Oh man, those were heady days. We really thought we could lick that shit with type enforcement and all that jazz. Spending months analyzing covert channels and trying so hard to build safe systems. Nowadays all you have to do is send one damned email and you own the whole enterprise. So sad, it’s like it was all for naught. But hey, I still have a pristine copy of the rainbow books if you want one :-)


>We really thought we could lick that shit with type enforcement

Why would type enforcement do any good? When do operating systems enforce types?

My money is on capability-based security: Genode, Fuchsia, and GNU Hurd when it comes out. Give the users a safe way to run a program without exposing everything to danger, and you'll save everyone a ton of grief.

The present scenario is analogous to building more and more layers of security out of crates of explosives. Any little reaction anywhere becomes a reaction everywhere, because all the code in our systems is trusted.


No offense intended, but it would be easier for you to Google “Type Enforcement” than for me to try and explain in a quick reply.

Full disclosure: I was on the Sidewinder team at Secure Computing way back when.


I knew what type enforcement meant: not allowing assignment of the wrong type to a variable in compiled code. I had no idea they (the security community) had overloaded the meaning with something completely different.

Thus, you can see why I was mystified.


Trusted Solaris did. It was very complicated to use, so only organisations that really needed it were using it.

The product was discontinued in the late '90s, and its core features such as RBAC were included in standard Solaris in later versions. However, the more advanced stuff, like tagging of connections, etc., was never included and was dropped with the demise of Trusted Solaris. I think it says a lot that I worked at Sun at the time and I actually never used it.


Object-capability security and type enforcement can go together: require programs to use some kind of hardened JVM/CLR-like typed runtime, then use the type system to model capabilities.
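A rough sketch of the idea (Python is purely illustrative here; it can't actually enforce this, since any code can reach for open() with ambient authority, which is exactly why those systems baked the restriction into a typed runtime):

    # Toy object-capability style: a component is handed a narrowly typed
    # capability object instead of ambient access to the filesystem.
    class ReadOnlyLog:
        """Capability granting read access to exactly one file."""
        def __init__(self, path: str):
            self._path = path

        def read(self) -> str:
            with open(self._path) as f:
                return f.read()

    def summarize(log: ReadOnlyLog) -> str:
        # In a real capability runtime this function could use only what the
        # capability exposes; it would have no other way to name the filesystem.
        return log.read()[:200]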

A bunch of research operating systems from the 90s and 00s were based on this kind of design. Microsoft had a largeish engineering team working on one for 9 years (with a notion that it might some day supplant Windows; it was even briefly used in production to run some services, before the project was shut down in 2015). If you're curious about how it worked and why, one of the designers wrote some fascinating posts:

http://joeduffyblog.com/2015/11/03/a-tale-of-three-safeties/ (on how type safety worked)

http://joeduffyblog.com/2015/11/10/objects-as-secure-capabil... (on how the type system was used to model capabilities)


What fraction of the phishing you see is just harvesting credentials? Because every such incident becomes irrelevant if you have unphishable credentials, and yet companies are going to spend a bunch of money on phishing prevention/training and not move to unphishable credentials.


I don’t know the percentage off hand but it’s certainly quite high, and we do see huge numbers of fake O365 login sites in particular (often tailored to the intended victim’s company). The problem, though, is that the less frequent fake invoice or malware drive-by phish does a lot more damage, so frequency isn’t a great gauge of overall implied risk. Many of these other kinds of phish originate from third-party accounts that have themselves been taken over. So it’s critical to deploy MFA to protect yourself, but that doesn’t help with all your third-party contacts who don’t require MFA themselves. There has also been more published, in the last 6-9 months, on attacks that subvert MFA.

You’d also be surprised how many “please buy gift card” kinds of phish we see. And yes people do fall for them if they get through.


The banners can be more intelligent than just ‘this email is external’. E.g. Google Workspace shows a red, ugly banner when an incoming email’s sender name matches the name of someone in your organization, while the default external banner is small and orange.
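A hedged sketch of that kind of heuristic (the names, directory source, and wording are illustrative, not Google's actual implementation):

    # Flag external mail whose display name matches someone in the org directory.
    INTERNAL_NAMES = {"jane doe", "ravi patel"}  # hypothetical directory entries

    def banner(display_name: str, sender_domain: str, org_domain: str = "example.com") -> str:
        if sender_domain == org_domain:
            return ""  # internal mail: no banner
        if display_name.strip().lower() in INTERNAL_NAMES:
            return "WARNING: external sender is using an internal employee's name"
        return "External email"  # the small default banner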


The author of this is ex-Board of Goldman Sachs Infosec, current GCP CISO, and is about the only security thought leader who puts up LI content that is not total garbage (Ron Ross of NIST is the other I can think of).

Yet, in the comments, people are explaining Phil’s view of the job to Phil, because Phil is apparently not an “actual technical security expert.” … what?

People like PV build and lead very competent security programs because they see nuance and focus on biz value. Sec engs, aka “actual technical security experts”, who burn out (or whose companies burn out on them) usually don’t.

Edit: I just double checked his resume; forgot to add board of H1, BISO/CISO of a few other banks, former SWE, and so on. Security culture is its own worst enemy sometimes when evaluating content like this.


Thought Leader & Ron Ross.

Big Nope.

I think one of my actually business oriented colleagues put it best, "I've read through 20,000 pages of NIST security material, and there is no sense of business prioritization, or cogent strategy between all of it."

Truth be told, some of this came out much later. As in, 5-7 years later. In the interim we got a huge amount of abstract security controls that significantly lag the actual defensive industry, plus the paper A&A / NIST 800-37 process.

I read about two sections that I thought were relevant; the rest was fluff.

LI is not normally a place for security thought leadership.


Ron puts out decent views on system security in his smaller formats, and pretty interestingly, if you have his cell phone number, you can call and he’ll chat and give advice.

Given the central points he sits on for his work, that level of access is at a minimum a great way to get some intel into “govt’s” view on this world. This increasingly matters as they step into the blue team fray vs FEYE leading it. More importantly, the security world needs more of that open door policy from senior leaders.


What is "LI" in this context?


LinkedIn. If you ever hear "thought leader" in a sentence you should default to "virtue signaling gas bag", the majority of which congregate on LinkedIn, trying to attain a weird sort of guru status and build a following. Sometimes you get people with a good point of view, like this guy. Most times not.


I was having flashbacks to stuck LILO bootloader prompts.


Mostly agree, except for security ratings. Not only are they a wildly inaccurate measure of any organization's security, they're also potentially actively harmful. Example: ratings vendors do a false assessment of the vulnerable state of your assets, and refuse to correct the information on notice, thereby ruining your reputation with completely false information.

It needs to die a painful death and re-emerge in a different form to be useful.


Completely agree. All of the incentives are wrong for security rating companies.

Rating vendors are incentivized to have a rating for every company regardless of whether or not they have any insight.

Rating vendors cannot act like real attackers, so they must rely on passively collecting some surface level information. This is often no better than you would get by walking through a website with Burp on and checking the list of alerts.

You can pay rating vendors to help you "improve." That is to say, they are incentivized not to fix false assessment data. They can get you to pay for that.


I think on every point this article expressed pros and cons. That's what's needed. If you read it and think "yes, but, this one thing is absolutely unacceptable", you are at risk of missing the point.


Yeah, I really liked the article overall. I see it as saying that these things people think in a binary fashion about should be treated with more nuance. The author addresses why these binary-style mantras exist and how they are not always true.

There are two where I disagree with the author (not saying it's unacceptable; the author may be correct and I may be wrong): the CISSP one, and also the rating vendor one. The author takes a deeper view than the binary thinking, and I want to take it deeper yet to refute the author's view. This arrives at the same conclusion as the binary groupthink in the infosec community, but does so by explaining why the binary idea became the de facto standard.

In case anyone is interested, my beef with CISSP is that the curriculum for CISSP certs promotes a highly bureaucratic approach to security. I feel like it is largely a waste of time and money to get that cert. It rubs me the wrong way because once things like CISSP become required for some employment, it is hard to go back to more "lax" standards. Of course, it's true that it provides a baseline of useful information, like the author says. I just think CISSP is a net negative for society even so.


I think there are two things at work here.

1. It's extremely frustrating how much bad security work is done, and how much of it is justified in the name of "defense in depth". So many useless hours spent on mitigation without understanding the threat.

2. Everyone talks in hyperbole. It's shorter and more to the point, and nuance is usually not important in conversation among experts - we already know the nuance. The difficult bit is that if you're not an expert but talk with them, the implicit nuance is lost.

There's also a lot of resentment. I resent compliance. It is a massive waste of both time and money at my company, and is driven purely by sales. We would be safer without compliance because we could have spent that time elsewhere.

> In reality there can be a lot of value in keeping an attacker guessing and creating uncertainty by using deception or a variety of other techniques that could be described as obscurity.

This isn't security through obscurity. I see this all the time, really. If you're keeping an attacker guessing, that's not obscurity, that's a secret. If you're doing it via deception, that's not obscurity, that's a trap. Security through obscurity is "the implementation is less popular, therefore safer", which is nonsense.

I agree with many of the statements though.


One bit of binary thinking in cybersecurity that surprised me a lot is when DJB came out against the "principle of least privilege" (http://cr.yp.to/qmail/qmailsec-20071101.pdf)

    > I have become convinced that this “principle of least
    > privilege” is fundamentally wrong. Minimizing privilege
    > might reduce the damage done by some security holes but
    > almost never fixes the holes. Minimizing privilege is not
    > the same as minimizing the amount of trusted code, does not
    > have the same benefits as minimizing the amount of trusted
    > code, and does not move us any closer to a secure computer
    > system
This is from almost 15 years ago, and it's possible I am misunderstanding him as I am far from a security expert, but I found it surprising. I wrote a bit about my surprise at the time: https://blog.reverberate.org/2007/11/djb-hating-on-principle...


DJB is famously "extreme" in a lot of his positions, and although his work is incredibly valuable, he is not a good representation of the typical beliefs of the community.


Ask yourself whether you use this principle IRL. Do you give your cleaner the house keys and make yourself scarce so they can work in peace, or do you treat them like an evil maid and watch their every move? Which approach has better long term outcomes for your security?


Counterpoint: do you give a cleaner the key to the house, or do you hand them a box with your whole keyring, your website passwords, the combination to the safe in the bedroom, etc?


Yep. "It isn't the one solution, so it is worthless". Or even, "It isn't the best solution, so it is worthless".

Defense in depth, people!


word bro. I am sick of people telling me that sha1 is "insecure". There are collisions out there, but that does not mean that if you use sha1sum for something your app is insecure, full stop.


Same here, and I've found it's a good way to tell apart decent security researchers/consultants from those that just blindly run their tools and send the results. If I get a report that my app is vulnerable because it uses sha1, and it's obvious that collision resistance doesn't matter at all in the context where sha1 is used, then I know the reporter can be ignored.
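A hedged illustration of that distinction (the function names are made up):

    import hashlib

    # Fine: sha1 as a dedup/cache fingerprint over content you generate yourself;
    # nothing here relies on an attacker being unable to find a collision.
    def cache_key(blob: bytes) -> str:
        return hashlib.sha1(blob).hexdigest()

    # Not fine: sha1 anywhere an attacker benefits from submitting two colliding
    # inputs that get treated as the same document (signatures, certificates,
    # content-addressed trust decisions).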


Sounds to me like his phrasing was poor and it's a bit out of context. I think what he is getting at is that he doesn't like it when people confuse minimisation of damage from security breaches with prevention of security breaches. Obviously you should have both, but don't pretend one is a substitute for the other.

The way I read it, privilege minimisation is almost isomorphic with his actual principle of "eliminating trusted code" ... if the code doesn't have access to a resource then it doesn't need to be trusted with that access. So perhaps he is railing against people who want to deploy code into fundamentally insecure contexts and then claim it's OK because of protection through limits on privileges ... but he's saying it is always better to improve the fundamental architectural security and not put it in that context in the first place.


That is an excellent comment by DJB, and one with which I concur.

There is also an inverse relationship with privilege minimization: many commodity operating systems require users to maintain a certain amount of privilege to use the system effectively. Reduction of privilege then results in a loss of functionality if there is no replacing system. For things like high-level admin privileges (though not true Forest Admin / Enterprise Admin), we only saw native fixes for this recently with the addition of JIT / JEA circa 2017, or through the use of 3rd party administration tools.


Least privilege implies ambient authority and the problems that brings. The capability-security answer that rejects it is respectable enough, isn't it (e.g. seL4, and Fuchsia recently)? I'm not sure that was what the quotation was about, but rejecting least privilege shouldn't be too surprising per se.


I agree with the principle, but the way these arguments have been summarized here has led to near-complete strawmanning. It's like the author started from the blog title and then came up with their own contextless, binary arguments.

Certifications: The typical arguments against security certifications are not that they "don’t represent the full spectrum of skills a professional needs" but instead that many of them teach outdated, useless, or actively negative practices. Then they're used as an advertising tool and organizations with less security expertise are told they must hire based on certifications rather than actual skill.

Compliance: "compliance is counterproductive for security." Most security practitioners don't necessarily like compliance primarily because it's not enjoyable for them. It distracts them from the tasks that they want to be working on. In most cases compliance is orthogonal to security. In some cases it can certainly be counterproductive (e.g. government compliance programs requiring outdated crypto).

Management: The typical refrain "management doesn't spend enough on security / take risks seriously" has been turned into "management doesn’t care about security because they don’t fund every single thing the security team asks for". I mean, it's obvious that the argument wasn't taken seriously by the author just based on how they wrote that.


I have been doing infosec consulting, appsec, penetration testing, threat modeling, risk assessment, etc. for 15 years and that was my take on the article. It is a nice discussion piece but a little one sided. They kept erecting straw men that don’t really reflect the nuanced opinion of most of my peers. On Twitter and social media some luminaries are really prone to hot takes, and it could be easy to assume that is reflective of the industry as a whole (and especially of the author's opinion). Often it is neither.

One other vexing thing in this industry is that it is very deep. You will often see folks with a deep background in, say, reversing come out with really strong opinions on some other topic such as phishing, even though they are little more than observers to that aspect of infosec. Reversing doesn’t qualify you to be a CISO, etc. I just made my own straw man there, but it’s a truism in my opinion.

The core thrust of the article is reasonable though. Often we want an amazing solution or a big win when improving something even a little is a real improvement from a security perspective. A lot of little wins in an organization can really add up to changing its security culture, etc. I would ultimately agree the saying “perfect is the enemy of good” applies in the security world.


> Compliance: "compliance is counterproductive for security." Most security practitioners don't necessarily like compliance primarily because it's not enjoyable for them.

I have a B2B micro-ISV in the cyber security space, largely targeting a compliance niche - you get out what you put in.

I have customers that treat compliance as nothing more than a pointless burden; a series of boxes to be ticked, "check-box compliance" - all they want is to prove to their auditors that they are following the letter of the compliance standard. I imagine security consultants see this kind of thing a lot, and it's easy to see why they might view compliance negatively.

However, I also have customers that look past the letter of their compliance standards, and look towards the intent - these customers get a lot more out of it, and are actually increasing their security posture as their compliance standards intended.


> It's like the author started from the blog title and then came up with their own contextless, binary arguments.

Most of the arguments are actually quite common on Twitter’s Infosec communities. It’s common to read smug tweets dunking on certifications or security through obscurity or management similar to these strawman arguments in the article.

Not coincidentally, Twitter isn’t a great place to get good infosec advice. It’s too focused on calling out less-than-perfect solutions from a safe distance rather than actually examining practical security in the real world. This article makes a good point of showing the difference and would be useful for newcomers who might be confused.


Counterpoint: Security admins are overwhelmed with process and tasks to keep the trains running, and perceive they don't have time to go back and clean up bad configs. If something isn't done right the first go-round, it will never be right.

Compliance is the bludgeon that says "go make this right". Then security admins bitch about not having the time, and we say we don't have enough people in the industry.

Automate the boring stuff. We do have a shortage - a shortage of people who are creative enough and talented enough to script their toil away.


I work in a team of 100+ cyber professionals, and consume the typical infosec content that’s out there. None of the authors that I know, or any of my peers argue in this presumed way. Additionally, as everyone in cyber knows: every answer to any question should start with “it depends”. That’s also how I experience knowledge exchange between peers most of the time.


Great comment which I upvoted for accuracy because it is how the real professionals in the industry talk.

A great example of this is the debate around fail-open and fail-closed in different scenarios.

Depending on the system, the function, the security objectives underlying it, and the way in which success or failure is determined, eventually a decision can be reached about what is optimal for an organization in a particular case.

It is completely consistent to argue for fail-closed for a low-availability-requirement system with a big attack surface that is internet facing, while simultaneously proffering fail-open for a mission-critical industrial control system with strong physical protections that sits in a locked-down, closed-off environment, unpivotable, and for which work stoppage is a serious threat. Basically, something unlike Colonial Pipeline..... :)
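A minimal sketch of that choice (the policy-engine names are illustrative stand-ins):

    class PolicyEngineUnavailable(Exception):
        pass

    def query_policy_engine(user: str, resource: str) -> bool:
        # Stand-in for the real decision point, which may be unreachable.
        raise PolicyEngineUnavailable("decision point unreachable")

    def check_access(user: str, resource: str, fail_open: bool) -> bool:
        try:
            return query_policy_engine(user, resource)
        except PolicyEngineUnavailable:
            # fail_open=False (fail closed) suits the internet-facing system with
            # low availability requirements; fail_open=True suits the locked-down
            # ICS where a work stoppage is the bigger threat.
            return fail_open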


> Reality: most end users make fully rationale [sic] decisions to optimize their time and energy to get their work done.

Not true in my experience. Most end users act in ways that are both less secure and less convenient than they could.

Either because they don't want to make the upfront investment (in setting up a password manager for example) or because they simply don't know any better.

Worse still, surprisingly many more advanced end users believe in security through inconvenience: rotating passwords, timed logouts, etc.


Not using a password manager is completely consistent with “optimizing time and energy”.

If you already know how to use a password manager, yes, it will easily improve productivity.

But for a non-technical end-user who has never used a password manager before, learning to use one is neither straightforward nor is it convenient.

Not only is it learning a skill which is not critical to their core deliverables — which virtually no one has time for — it also has the worst kind of failure mode: if the user needs technical support, it’s likely that they are locked-out of a service and at a standstill until they receive that support.


Learning to use a password manager to improve your productivity is the "optimal" path, so I'd disagree with this. It is optimal if the time spent learning it is saved by using what you've learned, which is almost certainly the case over the span of even a moderate amount of time.

The irrational thing to do would be to refuse to learn about a password manager. Your argument works if you focus narrowly, but when you see a fuller picture, your argument falls apart.


Very little about work is universally optimal. Work is made out of countless compromises so that people can function together. “Optimizing time and energy” is not the same as doing the universally optimal thing.

Optimizing for time and energy has to do with how one uses their waking/working hours.

At some point in my career I worked at a private Catholic college where many of the professors were 60 to 80 year-old nuns.

They were very smart people but their predisposition to learning computer technology was minimal. Something that might take me hours to learn might take them weeks of frustrating, unintuitive trial and error. Frequently, they couldn’t pick up certain new computer skills at all.

I could not imagine teaching the nuns how to use a password manager. It would be a disaster.

This is an extreme example, but somewhere in every skill there’s an inflection point where it becomes impractical to learn that skill if it’s not part of your core competency as a worker.

Many, if not most, users who would benefit from a password manager simply can’t develop that skill if they are expected to finish the normal duties of their work. They are optimally using their limited time and energy because learning a new skill would impact their functionality in their primary responsibilities too much.

If you can’t imagine users from your work-life that would have this problem... I feel like you just don’t know very many users.


I understand your point, I just think you're letting those nuns off the hook too easily! "Can't" is getting thrown around here in ways I don't think is totally accurate.


I assure you, I have anecdotes indicating that I am not. :-)


The optimal path is for companies to have single sign on.

Password managers are a poor substitute. They don't work well and they are not consistent across websites.


100% SSO coverage is impossible.

Let's say you're part of the marketing team and need to sign up to some ad platform or do a one-off order for branded goods. They will most likely not support SSO, and even if they do you won't have the necessary privileges to actually set it up.


Yep, the good ol' "if this feature annoys the hell out of me it should probably also annoy the hell out of the attacker"

(Which is not even remotely true)


I think there's another sort of binary thinking that's even worse: thinking that systems are either "secure" (because no bugs have been identified) or "insecure" (because a serious bug has been found).

In reality, all systems contain bugs, but the presence of a single bug shouldn't be enough, on its own, to render a system insecure: defense in depth should ensure that a system remains secure even in the presence of minor bugs in any one layer.


I don't disagree with these. However, on compliance being necessary but not sufficient, I would say it is not necessary either. Compliance is a kind of virus that infects organizations. It's a tool to get people aligned, certainly, and a set of principles you can use as a foundation for a program, but if the problem you are solving for is how to get people aligned, security is orthogonal to that.

On the skills crisis, it just means that security professionals are both expensive and not worth it. Because if they (we) were creating value, nobody would say they were expensive, or say that they didn't have the skills to solve the problem. It's not unlike insurance, where you make sacrifices at the altar of compliance and hope the authorities are kind if calamity hits.

While I appreciate the creative cognitive tools for finding alternatives to perceived limits and principles, this dissolving of binary thinking is also a trend to destabilize concreteness and logical thinking and convert issues into an unstable managed consensus, which is effectively a political struggle. It is a cognitive style with a whole bunch of tactics wrapped up in it that are designed for managing groups, not for making things, fixing things, and getting people things they want. We would benefit from some of this in security, and in fact I have used it and seen it work. However, the entire approach resembles how a mother might tell her children to share something, which assumes the thing already exists and that what needs resolution is the rights to it as governed by her. That is certainly appropriate in some organizational contexts and situations, but as a single-note cognitive style that includes things like non-violent communication, narrative control, and some other tactics, the method sets off a bunch of alarms. The instinct to subvert and subordinate problems as a means to manage them, instead of solving them concretely, is a powerful tool, but one that we should acknowledge as critically as we do so-called binary thinking.


I had not considered a link between being against binary thinking and political-style management. Thank you for writing it, because I found it very thought provoking.

Another way to look at it is that the author isn't attacking concreteness. Instead, the author is taking generalizations and adding context for why that generalization exists. This helps identify when the generalization applies and when it does not.


I take issue with part of this, as do many other actual technical security professionals.

What is said: certifications like CISSP don’t represent the full spectrum of skills a professional needs therefore certifications are a waste of time.

Reality: certifications represent a foundational body of knowledge for new entrants to the field - to start the "scaffolding" of their knowledge. It helps employers test a candidate’s commitment to the career and so is useful as long they don’t mandate it above all else.

However, what does the ISC2 marketing material say? Let's take a peek!

Earning the CISSP proves you have what it takes to effectively design, implement and manage a best-in-class cybersecurity program. With a CISSP, you validate your expertise and become an (ISC)² member, unlocking a broad array of exclusive resources, educational tools, and peer-to-peer networking opportunities.

Prove your skills, advance your career, help earn the salary you want and gain the support of a community of cybersecurity leaders here to support you throughout your career

From https://www.isc2.org/Certifications/CISSP

So, basically, a nontechnical certification that is shallow and broad has managed to insinuate itself as the way towards being able to run a security program, and as more valuable than any technical security degree, or a CS degree, or a PhD in cryptography. It claims to be for security pros with many years of experience, yet others say it is foundational material. This has then led to a negative cycle of incompetence, bringing nontechnical types into a field that is uniquely demanding of broad technical skills normally gained through years in the trenches (SRE/development/administration/network engineering).

Is there any wonder why we have a security problem?


“But others say it is foundational material”: CISSP requires 5 years of industry experience. Clearly not foundational. I pin it as intermediate. The EU agreed, ranking it equivalent to a master's degree. https://www.isc2.org/News-and-Events/Press-Room/Posts/2020/0...


It's definitely true to say it is framed as foundational in some quarters, however. Consider the common "CISSP required" "entry level" roles that are regularly shared and pilloried.

Calling it equivalent to a master's was also a marketing coup; there was plenty of discussion and pushback on that characterization when it was announced.


Some claims are so wrong that repeating them does not improve the discussion. Like the moon being made of cheese.

The CISSP, with its 5-year experience requirement, is not foundational, full stop. What some people attempt to lie about is irrelevant.


> It claims to be both for security pros with many years experience, but others say it is foundational material

> Is there any wonder why we have a security problem?

You're absolutely right, but it hurt to read.

I know the management/technical divide has existed for a while, and yes, I know management sometimes has a point in stopping technicals focusing on irrelevant details. But... jesus. I'm currently tasked with evaluating "what an attacker could do" having compromised <X> on a <Y> system. No, I'm not allowed to evaluate a specific <X>'s likely vulnerability, or even an exact specification of <Y> - because that's losing the bigger picture.

What do I even say?


Run for the fire exit


The only way to look at security is to ask: “How many US Dollary-doos does it take to break this?” And if something costs more of them US Dollary-doos to break, then it is “more secure”.


It takes zero US dollars to walk onto my lawn and take a flower pot with you. Yet putting the flower pot onto the sidewalk would drastically increase the odds of someone actually taking it.

Opportunity and convenience do matter in terms of security. Maybe more in the physical world, but it's not irrelevant in the digital world either.


The convenience aspect quickly turns into money at scale, either directly or indirectly through time. It takes zero dollars to take a single flowerpot, but if you want to take 10,000 you will need a car and fuel, and the longer it takes to walk onto and away from a lawn, the fewer flowerpots a single person can collect in an hour. The same applies to computer security: entering a single captcha costs zero dollars, but at scale you will have to hire someone to do it at x dollars per 1,000 captchas.


So, you can't account for the cost of scale? I guess that makes sense. I'll never ever ever measure things in dollars again because apparently I can't account for the effects of economies of scale...

Or I could, you know, take the time to inform myself about these things.


Replace the Dollary-doos with your favorite currency or medium of exchange.


I agree with most of this, but want to throw my two cents in.

Re: The cloud

I used to say that snarky thing proudly, and then I worked on AWS services for a year. Yes, many of them are seemingly overcomplicated. No, you should not go to the cloud to save money.

Yes, you should go to the cloud for a host of potential reasons. One example is IAM. In AWS, every single thing that one can do is controlled by the same policy language and user/role/action framework. Even if the policy is (very woefully) expressed in JSON and not one of a dozen more appropriate languages, it is an incredible feat of IAM to have so many disparate tools controlled with the same language.
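For instance, a minimal policy document (the bucket name is made up); the same Version/Statement/Effect/Action/Resource shape governs essentially every AWS service:

    import json

    # Grants read access to one (hypothetical) S3 bucket; an EC2, KMS, or Lambda
    # permission would use the identical structure with different Action strings.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }],
    }
    print(json.dumps(policy, indent=2))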

Re: Security through Obscurity

I put my SSH ports for lab boxes on port 222. My auth logs drop by orders of magnitude. It doesn't mean I weaken my password or SSH cert, but it means my logs are cleaner, and the chance of getting pwned by a 0day is just that much less; those are benefits.
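For anyone who hasn't done it, the non-default port is a two-line change (a sketch of the relevant sshd_config directives; the real protection still comes from the key-only auth mentioned above):

    # /etc/ssh/sshd_config
    Port 222                    # obscurity: move off the default port 22
    PasswordAuthentication no   # the actual security: keys only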

Re: Open source

I love FOSS. I don't think it is the end-all and be-all of security, though. And when it comes to cybersecurity tooling, I would much rather use certain closed-source NGFWs than opnsense/snort at the perimeter of a large organization. Disclosure: I work for one of the NGFW companies. Point is, seeing the code isn't the last bastion of security.

Re: Compliance

Yes, yes. OP is correct. It is needed, but is not the end-all. A competent audit/compliance regime sets up requirements and recommendations specific to your organization based on government regulation and industry best practice. They will work with you on what is a hard requirement and what can be justified as a mitigated measure, balancing the law, the threat risk, threat likelihood, and compensating controls. At best, they are partners with Infosec, pushing management to do what Infosec knows needs more focus, and slows down management that doesn't want to spend money, or just wants to "move fast and break things".

So yeah, this person knows what they're talking about.


The single best IT security approach I have seen was (paraphrased):

- IT Security is not allowed to say "No" to Business.

It was like the "Yes and.." drama game.

IT Security had to accept that business had a need to do a thing. That people didn't just think up dumb shit to do, despite appearances.

And it was Security's job to help to enable that thing rather than simply saying "No". Of course, they might end up saying no to a particular process, but then they had to work with the Business for a better process that both sides were happy with. A rebalancing of the tradeoffs regarding speed and security, which can often sacrifice very little speed in the IT world.

It meant that any project or even simple day to day stuff could be run past IT Security with no fear. It helped of course that IT Security employed sociable helpful people too.


One of the more insightful ways to think about compliance vs security is addressed in a British security dude's video on it:

https://www.youtube.com/watch?v=CBdg0682Qzg


Great list. Lots of stuff that is repeated a lot on HN. Another one is: "biometrics are usernames, not passwords". The reality is they are neither usernames nor passwords. They are something else.


> Reality: certifications represent a foundational body of knowledge for new entrants to the field - to start the "scaffolding" of their knowledge. It helps employers test a candidate’s commitment to the career and so is useful as long they don’t mandate it above all else.

Sure, that's the intended outcome. But I work with equally as many certified idiots as I do uncertified geniuses, so at this point it means effectively nothing to me.


Compliance: all internal traffic must SSL too! Reality: Cert errors, cert expiry, stuff starts breaking

Compliance: Encryption at rest! Me, designing for AWS: wtf?!?


> Compliance: all internal traffic must SSL too!

Let's not forget that not encrypting internal connections is how Google's internal traffic was stolen by the NSA (see the Snowden revelations), which is why they now (reportedly, I'm not there) encrypt everything in transit.


Yea, I use AWS, subnets, Sec Groups, IAM Roles, different case


>Reality: certifications represent a foundational body of knowledge for new entrants to the field - to start the "scaffolding" of their knowledge. It helps employers test a candidate’s commitment to the career and so is useful as long they don’t mandate it above all else.

I'm not sure how I could get a certification, and by that I mean without the employer's money.


Blaming users for clicking links in emails I also agree is stupid. Many company systems require (or at least encourage) users to click on links in emails: Jira, ServiceNow, Workday, Gmail meetings.

So if you don't want people clicking on links in emails, then stop sending emails with bloody links in them for everything.

PS. I always get my annual security training by a link in an email


Phil Venables is a smart dude but he is dead wrong about certifications.


Why?


Certifications don't represent foundational knowledge in security and don't evaluate commitment to the field, nor is commitment to this particular field a reasonable thing for employers to measure; regardless, if anything, certifications represent the opposite.

Many further comments on this thread: https://news.ycombinator.com/item?id=27494450


They provide solid scaffolding, which seems to be his main point. I wouldn’t call that dead wrong.


They do not, but see the link above; this all got argued a couple weeks ago.


Your argument in the other link is that they’re not the only thing that’s required to get in. I totally agree.

Plainly stating that they do not provide scaffolding is nonsensical. Newbies have to learn the boundaries of the field in order to target their prep. Certs basically excel at this (and are IMO otherwise useless aside from HR speedbumps).

Otherwise, you see a very common situation like this (verbatim example): someone wants to do cloudSec. They decide that good prep for that is hosting a DVWA in the cloud and pentesting it as home lab work. Very rarely, short of starting off with a cert in AWS for instance, is it easy to identify that cloudSec prep is actually configuring IAM and hardening some servers, and that there are a ton of jobs there and it's the future of SOC work.

Certs provide scaffolding. If you don’t think this, you’re giving your mentees pretty bad advice and should try to open up your view point here.

I’ve helped lead fairly large 0->Sec Job 1 volunteer/non-profit groups, and have seen X000s of success stories and failures. Failures always share three things; successes always share the inverse:

- don’t understand that people-networking matters

- don’t understand that certs won’t get them a job; they’ll only help, credential-wise, to get past basic HR boundaries. The field of Sec+/CySA is crowded as hell.

- do projects that veer toward very basic red team work, overall fairly outside the territory of what they’ll actually be hired for. If they’d taken a cert, a job description, and a cloud or SIEM localhost home lab, they could instead do very simple, almost out-of-the-box projects they’d work on immediately once hired.


That is not my argument in the other link. My argument in that link is that they are worth nothing.


It seems like you got into computer security a fair bit ago, and have a lot of experience on the appsec side.

With that background paired with “certs are worth nothing” points out two blind spots to me:

- the value-add certs have exists in non-appsec areas, putting aside CISSP and promotion paths. CySA is going to help you be a better SOC analyst, but yeah, it’s not getting you into NCC Group anytime soon. There, a CS degree or knowledge in that direction matters a lot more (or is the only thing that matters). SWE<>Sec exists in a totally different part of the field, in a way.

- early days security folks have a very different view on how to get into the industry. Back then, it was pre-certs, and even really pre-compsec jobs. Things are a bit different now.

You may not realize that where you stand depends on where you sit, but you’re pretty far from new talent pipelines these days it sounds like.

If single sentence platitudes are how you want to engage, then I’m done as well.


It’s kind of amusing to tell somebody whose prior ventures were “how do we do tech hiring better” and “being the interim security team for small companies, and then helping hire their replacements” that they seem pretty far removed from new talent pipelines.


FWIW: the interim security team company is still going strong; I'm just not there anymore (I'm at Fly.io).


Not sure what to tell you, I’m pretty close to the same hiring pipelines and outcomes tracking as well.

There is huge nuance to the cert topic, but it’s silly to hold a binary “they’re worth nothing” view.


The most I’ll concede is that certs are very useful in identifying job opportunities: if a tech job uses certs to assess a candidate/employee’s skills or value, it’s a good indicator to not take that job.

There’s not huge nuance here.


Last week, when I heard about the Kaspersky password manager thing (the single source of entropy was the current time [1]), I asked myself why everybody is allowed to write software.

If you do a job like electrician or plumber, you have to learn and pass tests in pretty much the whole world before you are allowed to work. Since I'm an electrician myself, I know how stupid it can be sometimes to follow all the rules. But then if something happens, you are happy everything was installed in the right way.

[1] https://donjon.ledger.com/kaspersky-password-manager/
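A hedged toy version of why a time-seeded generator is fatal (not Kaspersky's actual code; see the linked write-up for the real analysis):

    import random

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

    def time_seeded_password(timestamp: int, length: int = 12) -> str:
        rng = random.Random(timestamp)  # the timestamp is the only entropy
        return "".join(rng.choice(ALPHABET) for _ in range(length))

    # If an attacker knows roughly when the password was generated, the entire
    # "key space" is just a few thousand candidate seconds:
    candidates = [time_seeded_password(t) for t in range(1_626_000_000, 1_626_003_600)]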


I largely agree, although I would say it's less "stupid" and more "cumbersome". Nobody wants to take the time to do things right because of the effort costs, but this is going to ultimately be more beneficial in the long run. I think my favorite phrase from recent happenings that encapsulates this concept is "Ransomware actors are just technical debt collectors."


These debates are relevant among security professionals.

But for the rest of us, the question is more about balancing security vs usability+productivity+accessibility. Where you draw the balance is based on individual judgement, leading to endless debate and "binary thinking".

> [...] if something isn’t perfect then it must be terrible, or that if you don’t fully agree with something then you must therefore absolutely disagree with it.


Good article. I would summarize as “don’t let the perfect be the enemy of the good.”


Lost me at promoting security through obscurity.


This is an attitude that I really hope leaves the industry eventually, because in truth people use security through obscurity all the time. Using it exclusively or in place of real measures is when it becomes a problem.

Ask any IT professional who's had to patch zero days on internet-exposed systems whether they think changing the default port is useless, and practically all of them will tell you that it at least cuts down on logs, and means that you almost definitely won't get hit that first day, along with everyone else, by script kiddies scanning the internet. Not posting your internal network diagrams, even though your security is 'open design', means that when someone sends the right email and breaches your perimeter, they still have to scan for what to go after. Additionally, this belief is almost exclusively held dogmatically by the private sector - classified government networks don't get hacked nearly as often as even your air-gapped corporate ones. Obscurity is never a replacement for 'true' security measures, and should only be added on after, but for a system you actually want to protect in the long term some amount of it is often very useful.


Just changing my SSH port to 900 has reduced the number of brute-force attempts to a fraction of what it was.


That is reason enough, in my view. If you're going to install fail2ban for instance I feel that you might as well also change the port.


And conversely, I hope the attitude that security through obscurity is effective leaves the industry, because the people who are using it (and you're right it is common) are not as safe as they think.

It's a complacency problem. Changing the port of your SSH server to 900 may, in isolation be a fine thing to do, but when actually done in the real world it tends to be a substitute for keeping your SSH server up-to-date, or more realistically, even remembering you opened up the port to the world in the first place.

The concern isn't and hasn't been the IT professional patching zero-days, it's the IT professional who doesn't know what a zero-day is. Once you've worked with those people, after they've been referred to you by the FBI, you start to understand the harm Security Through Obscurity causes.


Ultimately, like most things in security, this is an education problem. It's all about people knowing more than what some online "secure SSH guide" tells you, including recommendations to "secure" your machine like changing the port or "disabling root login", etc. Most of it under some scrutiny isn't actually that substantial, but a lot of professionals out there are treating it like dogma.

> Changing the port of your SSH server to 900 may, in isolation be a fine thing to do, but when actually done in the real world it tends to be a substitute for keeping your SSH server up-to-date, or more realistically, even remembering you opened up the port to the world in the first place.

It's interesting that you frame it this way, because I was thinking of this as the opposite: that the 'theory' being taught is not changing the port because security through obscurity is bad, and that the 'practical' solution is doing all of the things you mention it shouldn't be a substitute for, and only then adding obfuscation methods.

I think we're saying the same thing, that you can't substitute obfuscation for 'legitimate' security measures, but from different perspectives.


Putting effort into being obscure is often misplaced and adds a recurring cost in cognitive overhead. Furthermore, it creates a false sense of security when people think that something is unlikely to be exploited simply because an attacker doesn't know how to exploit it. So I agree with your sentiment in many ways.

However, I read your comment to imply that the author lost credibility with their take on security through obscurity. This seems like the exact kind of harmful binary thinking the article addresses. The author presents a more nuanced take on a mantra, and for daring to do so, you dismiss them.


As the article says, "It is undeniable that if your only defense is obscurity then you’re asking for trouble."

Obscurity can be useful when overlaid onto stronger security. It's why military tanks are camouflaged and not pink or hi-vis yellow.


The better term is "behind the curtain": maybe institution X should take out an insurance policy for the ransom money demanded after a kidnapping, but any such contract will state that the agreement must be kept confidential or be void.


Do you have all your internal namespaces and hosts on public DNS servers?

Do you post all your internal documentation onto a public repo?

If not you are practicing some security by obscurity.


The point is it doesn't help, even if the company of the person you're replying to is doing it.


What doesn't help?



