Rob Zuber, CTO of CircleCI here. We take security very seriously and are taking a deep look at the issues Kevin raised.
We'll save additional commenting until we have gathered more information. In the meantime, our security policy and steps for reporting issues are here: https://circleci.com/security/. We'd ask the community to please use those methods for reporting potential security issues so we can keep CircleCI as safe as possible for everyone.
OP here. Thanks for responding promptly. To be clear, letting third party JS run in a trusted environment like a dashboard is an industry wide problem. If we assume CircleCI is the only bad actor we're kind of missing the point of the exercise.
CircleCI is a notable example, due to the fact it hosts source code and secrets for so many different companies and loads so many third party scripts in its dashboard context. But they're not unique in this regard. I hope everyone reading this is reconsidering the scripts that have access to their dashboard and pushing for changes at their company.
If CircleCI was immediately vulnerable to injection via the scripts above I would have used the private disclosure route and I encourage others to as well. But I don't know that there is a "vulnerability" here so much as a discussion that we need to have about what and how much third party code we let run in a trusted context. I wrote a little more in the thread below, https://news.ycombinator.com/item?id=15442988.
Thanks Kevin. We really do appreciate the discussion and, as you would imagine, are rethinking some of our approaches in line with the issues you've raised.
The real thanks should go to you, Rob. As you say, it's an industry-wide problem, and the real guts come from the companies that take the issue seriously: the ones who come out, own it, admit they were wrong, and work to improve. This says a lot about you and your company, and as a customer it makes me proud that you're part of our mobile pipeline.
As a customer, Rob, what would be a good avenue to notify you of issues like this in the future? Not every company wants to be front and center on what is essentially the news hub for tech people like HN, so is there some way we can get with you directly to straighten security issues out, along with some assurance as to how you'll handle it along the way?
You can always email our security team at security@circleci.com. Our GPG key is available on our site https://circleci.com/security/ if needed. You can also email me directly at rob@ if you have an immediate question.
In the interest of said discussion, I would appreciate a write-up of the rationale behind the approach chosen, when you are ready. It could help others that are in a similar situation. Perhaps inspire a better practice in the industry.
You mention vetting third party scripts, but did not explain how your app is not vulnerable to those scripts being hacked or modified in the future. This vulnerability seemed to be the main point of Kevin's piece.
IMHO you should have followed the standard practice of telling them first, then posting the story afterward. Blindsiding them wasn't nice, even if (hopefully) this makes them and others think more about what they risk with external JS, rather than just "oh shit circle...". Just my 5c.
P.S. I'm curious why you thought this route was better?
I hear this attitude a lot and it always rubs me the wrong way.
It's a corporation... Why should he care about its feelings? What does he owe CircleCI? Are they paying him? What claim do they have to his time to jump through their hoops / process?
I'm positive in this case that they weren't intentionally messing with people's security, but shouldn't we and their customers be able to judge that ourselves instead of getting it swept under the carpet via private channels?
I do believe it's good practice to "not be a douche", especially at a personal level, but I wouldn't even come close to categorizing this as douche behaviour.
* Note: this comment is not actually directed at CircleCI, just the attitude that we all somehow owe it to companies to tell them about their goofs privately.
This is, imho, douche behavior when it's the accepted norm to approach a company through their (listed and public!) security page. Why? So they can fix things before they get fucked. Yes of course you don't actually have to, but that makes you a douche imho.
Also, it's not swept under the carpet - it usually ends up getting $ for the reporter and a better story, as they'd also know how the company fixed it. If they refuse to fix it, then publish away.
That would have been appropriate if I had found e.g. an endpoint that did not accept CSRF tokens, or a vulnerability in Ring. It's not appropriate when the problem is they made a business decision to expose my source code to Quora.js.
How do you know it was a conscious decision, as opposed to ordinary human oversight? As the old saying goes, never ascribe to malice what can be adequately ascribed to incompetence. (Though I wouldn't go so far as to call CircleCI "incompetent" -- security issues are rampant in this industry, despite everyone's best efforts.)
I'd say it's more of a tossup. I mean they write software for people who write software.
I think we can rule out a malicious decision regardless though. I'd wager if it was flagged by the annoyingly pedantic but super smart developer, then it's still sitting in their Trello board buried under a few hundred features that were considered a higher priority by the product manager. In this case, the decision felt mostly harmless.
Or yeah, probably just as likely that no one noticed.
It's rarely about the company and much more about those affected by a potential issue. This issue's a little different as it's the potential for a vulnerability more than an actual vulnerability.
A typical case for responsible disclosure is something like a serious bug in Apache or Nginx, where they're given time to address the issue before it's published. So when they make an announcement it's:
1. Here is the issue
and
2. Here is the fix
Instead of just spreading and publicizing a vulnerability.
I'm mostly with you when there's potential damage to their customers, especially when those customers are individuals (like a database of social profiles being compromised), which was not the case here. However I think saying it's "rarely" about the company is a bit naive.
There's been some really high profile breaches where the extent of the impact to the customer has taken an unacceptably long time to be made public... which results in increased fallout damage.
A company's natural inclination is going to be to minimize losses (generalizing of course), and that often means dragging their feet, or dealing with it privately and never notifying their customers.
We should recognize these incentives and be a bit more aggressive about holding their feet to the fire, instead of criticizing the researcher who discovers the security flaw. Unless of course his / her behaviour is blatantly malicious.
It anecdotally feels a bit lopsided in favour of the corporations at the moment. Especially when someone like in this post, who clearly didn't endanger any customers, gets criticized...
I can't speak to what the OP was thinking, but given their above statement of this being an industry wide issue, it seems to me posting this privately to a single company would only address the issue within that specific company and very little to nothing would change in other companies due to the nature of competition. While this isn't nice, I guess, it does seem to be the nature of development and business. Just my 5 cents, though, I've been wrong before.
I disagree; report it to circle, if they fix then you can write the story on how it's a massive problem, how they fixed and people can learn. We learnt basically shit from this: nothing about how it's fixed or is avoided.
If I did that, no one today would be opening the Network tab on apps/credit card forms they care about and wondering whether third parties could steal the data.
I'm sure if CircleCI had a prompt to opt-in to enabling the tracking then they might have returned the courtesy. Why is it OK for a company/project to violate the privacy of so many with no warning but bad for users to report the privacy and security violations?
I'm not sure private disclosure is appropriate here.
This isn't the same as a browser vulnerability where some kid could hack a load of people before they patched. One of these supposedly trusted companies could attack your customers... But they could already. They know their scripts get loaded into all sorts of inappropriate environments.
Getting people notified, getting credentials changed. That's the paramount concern. You can rub PR lotion into an in-depth audit later on.
I didn't suggest forcing anything... But just because other people do it doesn't make it better.
Many of their clients may treat this as they would an actual breach. It's somebody they haven't vetted having potentially complete access to their development chain and production secrets. They won't know until they look. They won't look until they're told.
And what's CircleCI paying out for this breach got to do with the price of fish?! Say you hired me and gave me full access to everything in your business. Then one day I turn around and tell you my extended family, my friends, my dog walker and my cleaner have all also had access to that data. No big problem, eh?
This practice is allowed by industry security standards, like PCI-DSS. If it's determined that the third-party acts as a PCI Service Provider, then the compliant party has a duty to determine that the third-party is also compliant.
The client vetted CircleCI, and CircleCI presumably vetted the third parties. It is not fair to say these vendors have not been vetted.
It may not be a best practice, but it's little different than CircleCI (or any other company) contracting with a private data center, which has direct physical access to their equipment. They have presumably vetted the data center provider, or cloud computing vendor.
I think you're the one being more reckless with language here. But please, what does "breach" mean to you?
I count it as "inappropriate and unauthorised access to data". Where "access" is potential, not necessarily actual unless you can absolutely prove there was no access.
These third parties have had access to sensitive data they shouldn't. That's a breach in my book.
Neither you nor Circle CI can even say this hasn't led to current or past third parties (or their rogue developers, or people who have hacked them) gaining source access or customer data from Circle CI users. Why? You simply don't know what was running at any given point.
Auditing and sub-resource integrity would help in the future, but it's too late. Unknown people have had access. Only the Circle CI users will know what ramifications that could have on them.
If your argument is anything more than a redefinition of "breach", please explain why you're being so nonchalant (and why you think I'm being "ridiculous") about this.
Is there any way to limit what access CircleCI has to my GitHub account/orgs? GitHub says you have "Full control of private repositories," but it seems like all you need is read access + a push webhook.
When I need more granular access, this is how I set it up (rough sketch after the list):
1) Create a new GitHub user: e.g. (circleci-builder@example.com)
2) Grant read-only access to specific repositories to the new user.
3) Configure CI to use that user.
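If you'd rather script step 2, the GitHub API can grant the machine user read-only ("pull") access. A rough sketch, assuming Node 18+ for global fetch; the token, org/repo and username are placeholders:

    // Sketch: grant a machine user read-only access to one repo via the GitHub API.
    // GITHUB_TOKEN and the owner/repo/username values are placeholders.
    const token = process.env.GITHUB_TOKEN;

    async function grantReadOnly(owner, repo, username) {
      const url = `https://api.github.com/repos/${owner}/${repo}/collaborators/${username}`;
      const res = await fetch(url, {
        method: 'PUT',
        headers: {
          'Authorization': `token ${token}`,
          'Accept': 'application/vnd.github.v3+json',
          'Content-Type': 'application/json',
        },
        // "pull" = read-only: the CI user can clone the repo but not push to it.
        body: JSON.stringify({ permission: 'pull' }),
      });
      if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
      // Note: this creates an invitation that the machine user still has to accept.
    }

    grantReadOnly('example-org', 'example-repo', 'circleci-builder').catch(console.error);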
This breaks the HN guidelines, which require you to remain civil. We ban users who can't or won't do that, so please read https://news.ycombinator.com/newsguidelines.html and fix this in the future.
What serious business hosts their proprietary, mission critical code on someone else's computers? Isn't that like Coke or KFC putting their original recipe on Google Drive?
Edit: Okay, I get it. People like convenience over security. I'll stick to self-hosting a Drone-CI instance and only deploying binaries.
At GitLab we separated the infrastructure between about.gitlab.com (our marketing site, lots of third party javascript) and gitlab.com (our application, zero third party javascript). We recently deprecated the Piwik instance we ran in house for gitlab.com because it slowed the page load time. Please let me know if there is anything we can do better, these things are complex.
The only 3rd-party code I see on GitHub is Google Analytics. There are a few loads from githubapp.com and githubusercontent.com.
I also think the CI service, which GitHub doesn't provide, is a higher-risk environment for this kind of thing. GitHub settings pages don't hold production deploy keys.
We’re concerned about this as well at GitHub. We don’t link directly to the Google Analytics script, which could be updated at any time. Instead we host our own version of the script that’s locked down with CSP and SRI. We still allow XHRs to the Google Analytics origin to report the data, but the script code itself can’t be changed without an internal security review.
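For anyone wanting to copy the pattern, it looks roughly like this: vendor the analytics script, pin it with an integrity hash, and use CSP so only same-origin scripts execute while beacons to the analytics origin are still allowed. The paths, hash and Express setup below are illustrative, not GitHub's actual configuration:

    // Minimal sketch (Express): serve a vendored copy of the analytics script and
    // lock the page down with CSP. Paths and the hash are placeholders.
    const express = require('express');
    const app = express();

    app.use((req, res, next) => {
      res.setHeader('Content-Security-Policy', [
        "default-src 'self'",
        "script-src 'self'",                                   // only scripts we host run
        "connect-src 'self' https://www.google-analytics.com", // XHR beacons may still go out
        "img-src 'self' https://www.google-analytics.com",     // the GA tracking pixel
      ].join('; '));
      next();
    });

    // The page then references the vendored copy with subresource integrity, e.g.:
    //   <script src="/assets/analytics.js"
    //           integrity="sha384-REPLACE_WITH_REAL_HASH"
    //           crossorigin="anonymous"></script>
    app.use('/assets', express.static('vendored-js'));

    app.listen(3000);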
Github has one of the best application security teams in the industry.
(I have no relationship with Github other than that I am a customer and have watched them steadily hire some of the best people I know in the industry).
Thanks for the kind words. As Neil noted below, we would like to lock things down even further by proxying all data that is sent to Google Analytics. And, as a bonus, it would remove the last destination host for content exfil attacks that we know of. Our strict CSP policy has been a nice win, but the strictness has made it that much more clear that allowing nearly any third party sites, even for innocuous things such as images/xhr, isn't ideal. And we are always on the lookout for more bypasses: https://bounty.github.com/targets/csp.html.
Oh nice, tell me more about proxying to GA. Do you just change some hostname config in the GA universal JS snippet to point to your forwarding proxy? And how does GA learn the end user's IP, since it would normally only see your proxy forwarders?
And this can be ratcheted down further by leveraging something like the measurement protocol. It would eliminate the 3rd party calls/code in the browser while giving GitHub the ability to anonymize the source (e.g. IP address, user agent, etc.). Twitter does this with some of their 3rd party integrations.
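A rough sketch of what that could look like: the page posts a tiny event to a first-party endpoint and the server forwards it to the Measurement Protocol, so no Google JavaScript runs in the browser at all. The property ID, endpoint name and pseudonymisation choice are placeholders, not what Twitter or GitHub actually run:

    // Sketch: first-party /analytics endpoint that forwards page views to
    // Google Analytics via the Measurement Protocol (v1). Assumes Node 18+ and Express.
    const express = require('express');
    const crypto = require('crypto');
    const app = express();
    app.use(express.json());

    app.post('/analytics', async (req, res) => {
      const params = new URLSearchParams({
        v: '1',                             // Measurement Protocol version
        tid: 'UA-XXXXXX-Y',                 // GA property ID (placeholder)
        t: 'pageview',
        dp: String(req.body.path || '/'),   // only the fields you choose get forwarded
        // Pseudonymous client id derived server-side; GA never sees the raw IP/UA.
        cid: crypto.createHash('sha256')
          .update(req.ip + (req.get('user-agent') || ''))
          .digest('hex').slice(0, 32),
      });
      await fetch('https://www.google-analytics.com/collect', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: params.toString(),
      });
      res.sendStatus(204);
    });

    app.listen(3000);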
> we host our own script version that’s locked down with CSP
Excuse me, is there any article about this, or maybe some pointer where one could get a GA script that doesn't need `script-src data:` (or eval or similar insanity) in the CSP?
I've tried to add CSP for a page that has GA (no other external deps) and it seemed to deliver some scripts from a base64-encoded data URI. I haven't researched what exactly it does, but I suppose it was the unpacker inserting code that way instead of using eval. Could be wrong, though, but the only external JS reference was analytics.js, and when testing in Firefox 57 the CSP complained about a script with a data URI.
Ai yai yai, if you care about HN then you need to follow its rules: https://news.ycombinator.com/newsguidelines.html, regardless of how you feel about source code configuration systems. Posting like this gets accounts banned, so please don't!
The person that I responded to said something about GitLab so I told them how I felt about their comment and their product. I would've said the same thing in person.
If this post was about GitLab you might have an argument. But it was about CircleCI and tracking code in general. GitLab is not some random company spamming their crap; the person is just relaying their infrastructure notes for the rest of us.
Nearly everyone on HN has heard of GitLab. They don't need the astroturfing or spamming you are accusing them of.
I'm not sure if you're aware, but GitLab has integrated CI available on gitlab.com, which is a like-for-like alternative to CircleCI - so in that sense this comment is very much on topic, in my opinion.
HN isn't the place I'd expect to be downvoted for a factual statement. I have seen the comment before and it feels like someone trying to plug their service the moment "CI" comes up.
I know Gitlab is HN's darling but to me it's not reasonable to plug your service every time a competing service is mentioned.
You refuse to use an awesome product because the CEO of a startup is actively engaging with a community that may (or may not) use it, accepting feedback and discussing the product with people?
I wouldn't go as far as OP, and I'm not saying the post doesn't add anything useful to the discussion, but in this particular instance it does feel distasteful and opportunistic. I'm sure this discussion around what other CI providers do would have happened with or without this comment. I'd like to have seen some restraint exercised in this case.
You're right, his post does contribute to a discussion on how to avoid this situation (which is what I'd like the thread to be about). I think it's the history of seeing GitLab's marketing team or CEO in any comment section tangentially related to their product that has put a bad taste in my mouth. I usually know before clicking into a comment thread whether or not they will be pushing their platform inside.
In general, I would tend to agree with you. In this case, though, the original article was about CircleCI specifically but mentioned this being an issue across the entire industry. GitLab is a part of that industry and they have lots of users on HN so, in this case, the comment is relevant (IMO).
More generally, however, you're right although I have just come to accept that this is how things are on HN. Choose pretty much any "Show HN" thread or any thread about some cool new app/product/service and in the thread you'll find a few "shameless plug" comments that would be considered spam in any other forum: "Hey, we also make an app that does $foo, sign up for our beta at example.com and check us out!".
(One of the worst "offenders" is Userify. Pick any thread that relates tangentially to user authentication and chances are good you'll find a comment from them that manages to work their name/URL into the conversation somehow.)
OP clarified it as a general topic of discussion in the industry, and not specifically about CircleCI. Other companies in the space respond with what they are doing. I saw it as a completely on-topic post, and assumed the best of his intentions. Both gitlab and github have shared incredibly valuable information on their architectures, and there are many many people that can benefit from an open discussion about this, including CircleCI.
Considering Gitlab has integrated CI, I'd say this is more than "tangentially" related. Developers are going to wonder if their CI tool is protected from this kind of issue.
> To be clear, letting third party JS run in a trusted environment like a dashboard is an industry wide problem. If we assume CircleCI is the only bad actor we're kind of missing the point of the exercise.
I understand that viewpoint, but I feel that, by not respecting the norms of the site, it detracts from the very community that 'sytse wants to win over to his product. The world isn't about GitLab.
Reminds me of those who never forget to mention Mastodon in every thread about Twitter. They make me believe the Mastodon community is full of know-it-alls who pity us peasants who are too stupid to leave Twitter.
Google's policies are a big part of this problem. Ad code and tracking code ought to be in an iframe, where the code can't snoop on the surrounding page context. Google insists that their ad code must not be in an iframe. That gives the lesser players an opening to insist that they, too, shouldn't be constrained.
A few big site operators should push back against Google on this. The New York Times and CBS, for example. They use Google Publisher Tags, but no Google ads, so they don't really need Google on their pages.
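To be concrete about what the iframe buys you: a cross-origin, sandboxed frame can run the vendor's code without being able to touch the embedding page's DOM, cookies or same-origin APIs. A minimal sketch (the widget URL is a placeholder):

    // Sketch: load a third-party widget in a sandboxed, cross-origin iframe.
    const frame = document.createElement('iframe');
    frame.src = 'https://tracker.example.com/widget.html';  // placeholder third-party page
    // "allow-scripts" without "allow-same-origin" gives the frame a unique opaque
    // origin, so its code cannot read the parent page, its cookies, or its storage.
    frame.setAttribute('sandbox', 'allow-scripts');
    frame.style.width = '300px';
    frame.style.height = '250px';
    document.body.appendChild(frame);

That sandbox is exactly what ad and analytics vendors push back against, since it also takes away their view of the page.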
Most ad code does run in iframes and the industry standard is to use "friendly iframes" which offer security but also access to the parent window. And even if you use iframes, something has to create that iframe initially and publisher devs are not going to do that manually, which is why there are top level script tags. All tag managers and adservers today default to using iframes for 3rd party tags.
Q: Is it violating program policy if I place ads on iframe webpages in my software?
A: Yes, it does violate our policies. Firstly, you’re not allowed to place ads in a frame within another page. Exceptions to our policies are permitted only with authorization from Google for the valid use of iframes.[1]
That's different. Yes, their policy is that you cannot have a page of your site that you load in an iframe, then put ads into that child page. This is to prevent fraud and viewability issues like loading 100 invisible child pages in the background to rip off ad networks.
None of that has to do with how ads from their network are actually rendered when you use their tags. Adsense/DFP tags do use iframes for almost everything unless explicitly set to load directly on page, usually for legacy code or rich-media. The relevant help article is here: https://support.google.com/dfp_premium/answer/183282
"...SafeFrame is supported in DFP and enabled by default when using Google Publisher Tags. Reservations serve into SafeFrames by default, but you can disable this setting...and use friendly iframes instead, if needed.... Some creatives, such as expandable ads or creatives that access your page’s DOM elements, might not render correctly in SafeFrames or other cross-domain iframes. We recommend updating these creatives to make them SafeFrame-compatible, in order to retain SafeFrame’s security benefits. If this is absolutely not an option, there are a few things you can do to allow reservation ads of this type to render properly..."
To add to this, there's also a slow transition towards safe frames [1] that allows for better protection than friendly frames by restricting all page-iframe communication to a specific API.
By "friendly iframes", the ad industry means "doesn't get in the way of ad code doing anything it wants to." "Unfriendly iframes" keep ad code in its own sandbox and its own screen real estate.
Either way, iframes are common and the ad industry is not the issue here. Many analytics providers that track all events automatically do need window access, and the normal install for all of them is a tag on the site.
> Send an email any time an API access token is created. Add a setting to allow org-wide disabling of API token creation.
I wish more apps supported this.
Also on my list is a "Don't ever let me disable 2FA" setting. I'm more worried about malicious resets than I am about ever losing my 2FA device and backup codes.
> Add an option to delete old logs. If you have ever dumped env vars to the log file, an attacker can export these.
I'm surprised that secrets make it into the logs at all. I would have expected them to filter anything that's in an env secret from the log output. Pretty sure Travis does this.
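The filtering is conceptually tiny; a sketch of the idea (not Travis's or CircleCI's actual implementation; it assumes the CI service knows which env values it injected):

    // Sketch: mask known secret values before a build log line is stored or rendered.
    function maskSecrets(line, secretValues) {
      let out = line;
      for (const value of secretValues) {
        if (value && value.length >= 4) {            // skip trivially short values
          out = out.split(value).join('[MASKED]');   // literal replacement, no regex pitfalls
        }
      }
      return out;
    }

    // Example: anything a build step echoes gets scrubbed on the way to the log.
    const secrets = ['abc123xyz'];                    // placeholder injected env values
    console.log(maskSecrets('export AWS_SECRET_ACCESS_KEY=abc123xyz', secrets));
    // -> export AWS_SECRET_ACCESS_KEY=[MASKED]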
> Enable subresource integrity, or serve JS from each of these companies from the CircleCI domain.
That's not much of an option for this type of thing as the third parties would then have a PITA time dealing with upgrades (so they wouldn't support it). Using <script src="hxxps://tracker.example.com/path/to/script.js"></script> (rather than a fixed version) allows for live upgrades as they're in control of the resource that is loaded.
And you can't load the script from your domain as at the end of the day it's got to get instructions and upload data to the third party. At best you'd be loading a shim that does the same thing as the script tag.
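To be fair to the suggestion, pinning a vendored copy with SRI is mechanically easy; the cost is exactly the one you describe, losing their live upgrades. Generating the integrity value is a few lines (the file path is a placeholder):

    // Sketch: compute the subresource-integrity value for a vendored script (Node).
    const fs = require('fs');
    const crypto = require('crypto');

    const file = fs.readFileSync('vendored-js/tracker.js');   // placeholder path
    const hash = crypto.createHash('sha384').update(file).digest('base64');

    // Paste the output into the tag, e.g.:
    //   <script src="/vendored-js/tracker.js"
    //           integrity="sha384-<output>"
    //           crossorigin="anonymous"></script>
    console.log(`sha384-${hash}`);

If the vendor silently ships a new version, the hash no longer matches and the browser refuses to run the script, which is the point.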
True. I think it's a bit of a security compromise for convenience. Having backup codes stuck in a folder somewhere at home is only useful if I need to regain access to one of my accounts while I'm at home.
Wouldn't this be possible to circumvent if the user is allowed to switch TOTP devices? Are you saying you'd like a way to irrevocably tie this to a single TOTP secret? And you're not worried that someone could steal your TOTP secret and you'd be 100% powerless to stop them?
Perhaps I could have phrased it better. I meant don't let me disable 2FA without first authenticating using 2FA. So ignore any emails to support, 2FA reset links, secret questions, pleas for help, and anything else that would have you think that I no longer have access to my 2FA device and backup codes.
Changing devices is allowed but to do so I would authenticate using my password + 2FA prior to doing so.
Another option would be to make it an easily abortable multi-step process with a long delay timer, i.e. you disable 2FA and an abortable 3-4 day timer starts ticking, with frequent reminder mails in between. Once the timer is done, you can confirm the deactivation.
Video games have been doing similar things for character deletion for years, yet it is rarely if ever found for account credentials in general.
I feel calling all 8 of these companies "analytics" is a little unfair.
Pusher is a SaaS product for websockets and push notifications. I think it's reasonable that a company like CircleCI might outsource this instead of maintaining the infrastructure for it in-house.
Launch Darkly is a feature flagging service. While I feel the value is much less than Pusher (and I'd be inclined to just build an in-house solution), I think it's still a relatively reasonable tool to use to create a better user experience.
Intercom when used for customer communication is also something that I feel I'd want as a user, although this could perhaps only be loaded when requested by the user.
Why does this matter? Most 3rd party services are "reasonable" in some setting. When you build a highly-sensitive application, the bar for reasonableness changes.
This is a world where internet banking and healthcare, arguably the canonical 'highly-sensitive applications', are using 3rd party js services like this. I think we need to redefine 'reasonable' as the industry clearly isn't acting reasonably (IMHO).
There is no reason to accept that your bank runs javascript from third parties. Complain and use only in an environment where you can block them. While it may seem futile to report these transgressions to big institutions, the fact that your complaints are logged helps those fighting these stupidities from the inside.
The overwhelming majority of online financial services would self-host everything. It's not strictly a PCI-DSS compliance requirement, but it is common sense and industry standard.
So, startups and companies hire for standard, coglike (CTCI) CS skills, yet they outsource basic functionality.
There should be a corollary to NIH Syndrome[1] that takes into account 3rd party APIs. The way I see it, it's not about reinventing the wheel, it's about having your house in order.
Whether we can run the tests ourselves, and whether we want to spend the engineering time to build and/or maintain an in-house CI system are two very different questions.
Whether a developer (or business person) made a business decision to outsource CI has nothing to do with technical competence.
Using SaaS for any business data has a risk, but usually it's worth it. I'm sure you use Slack; Slack could be breached, and I'm sure no one at your company has ever slacked a password to someone else.
I've been spending the last year unravelling a ball of yarn wrapped in another ball of yarn because of "guy before me" syndrome who believed this. Fully.
I think I was reacting to my parent's use of the words 'trivial' and 'keeping your house in order'. It's not that I believe that circle-ci has no value in any circumstance.
You can say that they are 'doing it wrong', but without exception the shops I've been in that use CI tools:
o have one guy, maybe no longer with the group, who set up the CI deployment, and no one else knows how to deal with it
o don't have firm control of their development build and dependencies, and the different CI environment is a constant source of shear
o are running a service with its own deployment chain, which differs in environment from the CI server and should arguably be used for tests
o don't have a decent way of running the tests outside the CI environment at all, which makes debugging CI failures pretty problematic
o because of the single-maintainer issue, new tests often don't get integrated into the CI, which is really counterproductive
o for distributed services and services with runtime dependencies, the CI isn't really providing the whole picture (meaning we should really be using the deployment tools to spin up transient test instances anyway)
o the state in the CI tool often doesn't get packaged up with the repo, so it doesn't transition easily to other development teams
Focusing on 'keeping your house in order': CI is often providing a solution to a small part of the overall test and development problem and creating artificial boundaries.
I'm very sympathetic to avoiding wasting time on home-grown solutions (even if they are as trivial as 'run these 50 tests and collect status'). But if the development effort is sufficiently small, or sufficiently complex, I think external CI is actually costing more than it's worth, or giving people a false sense of comfort about their testing and dependency management. That's all without considering any trust issues.
Services like Circle CI tend to add functionality like deployment, but the same is true.
That said, the same could be said of any piece in the stack: Why use a framework? Why use Github? Why use hosted email? Why use cloud? Why not hand new hires a blank laptop and a USB key with the latest Ubuntu?
I feel like narrowing down what any of those companies could do to what they are doing today is irresponsible. And analytics is one of the could-do's that many third party tools do offer, or add on later to increase monetization or perceived value to customers.
The important point is that there are now 8 attack surfaces instead of 1. Whether they're analytics or pad-string doesn't really matter.
I agree there are now 8 attack surfaces, but I do not think calling all 8 "analytics" does justice to the reasons why CircleCI may have chosen to use some of them.
Thinking about mitigation here... It appears that some of the included scripts have crossorigin="anonymous" script tags. Wouldn't this prevent authenticated access to the CircleCI domain, i.e. prevent the creation of API tokens or access to the API using the logged-in browser context, for any script loaded off the circleci domain?
Also, not that they do so, but would Access-Control-Allow-Origin set to something other than * prevent 3rd party requests to the API for scripts loaded from 3rd party domains?
Also curious if anyone has written a JS library that patches XMLHttpRequest.prototype to audit exfiltration of data in the DOM.
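On the last question, I'm not aware of a ready-made library, but a rough audit shim you can paste into the console looks something like this (the allow-list is a placeholder, and it won't catch sendBeacon, image pixels, or code that grabbed references before it ran):

    // Sketch: wrap XMLHttpRequest and fetch so outgoing requests to hosts outside
    // an allow-list get logged. For auditing your own session, not a security boundary.
    (function auditNetwork() {
      const allowed = [location.host, 'www.google-analytics.com'];  // placeholder allow-list
      const flag = (url) => {
        try {
          const host = new URL(url, location.href).host;
          if (!allowed.includes(host)) {
            console.warn('[audit] request to third party:', host, url);
          }
        } catch (e) { /* ignore malformed URLs */ }
      };

      const origOpen = XMLHttpRequest.prototype.open;
      XMLHttpRequest.prototype.open = function (method, url, ...rest) {
        flag(url);
        return origOpen.call(this, method, url, ...rest);
      };

      const origFetch = window.fetch;
      window.fetch = function (input, init) {
        flag(input instanceof Request ? input.url : String(input));
        return origFetch.call(window, input, init);
      };
    })();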
I use CircleCI because it allows 1 free private repo. Personally, though, I do not like Circle too much. It has a lot of cool features, but the UI loads so slowly because they have so much stuff going on. They use the UI-skeleton technique to make the load time appear faster, but it is still very apparent how slow the load time is.
I have hit numerous bugs with their website as well where stuff doesn't load, builds kick off into infinity, when I transferred a repo everything broke, ack!
Furthermore, I am frustrated with their pricing: they go from free to $50/mo for the next upgrade tier, which seems like a crazy jump to me. I would gladly pay $10/mo or so for another container or 2x parallelization. The issue is that, being a single developer, I rarely would save any time, and $50/mo is just too steep.
Finally, when I tried to build a justification case for my manager to pay the $50/mo, I wanted to use data from their CircleCI Insights which shows how long your builds are queued on average and some other important data points. But you cannot access these insights from a free account. Seems like that info should be available and prominent to help people understand the cost-savings they might receive by upgrading. I emailed support and asked for a one-time data point for that statistic to build the case as I was considering upgrading and they said sorry, nothing we can do for you. Is their goal to make money and have happy customers? If so they aren't doing a great job of it.
Overall a lot of frustrations with the platform and this just adds more fuel to the fire.
CircleCI, with all its limitations, bugs, and misfeatures, is free, hosted, integrates well with github, is extremely simple - and yet has enough built-in functionality to support 80% of your use cases.
We went from Drone -> CircleCI -> Jenkins in the last two years. Jenkins feels opaque and hard to reason about, periodically dies for no _good_ reason, and the UI is atrocious.
Drone itself was fine, as a minimal feature set. I think the docs are offline now, but it's fairly straightforward.
You must be doing something wrong then. I run a Jenkins cluster with hundreds of automated pipeline build/deploy jobs and the master node never dies. (It's also the only node up and running 24/7) You should be offloading your work to build agents.
For me at least (I don't use it, but I do outsource "core" parts of personal code) the reason is time. Time I spend setting up VPSes, updating build software, managing permissions, etc. is time I could spend writing code. Hence why I use gitlab rather than manage a VPS and run a perforce/svn server.
Have you setup a new Jenkins deployment recently? While I personally don't really like CircleCI (or most other CI providers), Jenkins is a clusterf@#$ to setup for anything non-trivial IMHO.
Have you tried the declarative Pipeline[1] files? This plus BlueOcean[2] has made Jenkins much more user friendly, although obviously not as much as CircleCI/TravisCI/etc.
I haven't tried either, no. They do look nice, but they still only solve part of the problem.
I have only been tasked with fixing setups on Jenkins, and a friend of mine has spoken at length about how difficult things are to set up.
Some of that is based on the difficulty in getting a Jenkins configuration into source control. For CircleCI, I know, the CI configuration goes in a configuration file that gets checked in to the repo.
Another big part is making sure the development environment is sane, and replicating it. When we were dealing with Jenkins, Docker didn't exist yet. I assume that build slaves running inside Docker are a thing now? Because that's another thing that CircleCI gives us.
Honestly, though, this is just a thing that's easier to outsource if you can. Running a Jenkins server on your own is valuable if you have business reasons ("Code cannot go onto a third party server!") or if you're doing testing that involves connected devices (build and deploy to this Android device, then run tests...). But apart from that or a similar motivation, why bother maintaining a server when services are available for free or cheap?
Agreed. Declarative pipelines coupled with pipeline shared libraries and BlueOcean have completely changed the Jenkins game. Pair that with the EC2 plugin and you have an auto-scaling build cluster!
Jenkins is fundamentally broken. Upgrade any plugin and you risk everything breaking. And you won't necessarily know until you run every corner case of your build system.
After I changed to the LTS release channel, I have yet to suffer from that. I think it has been a couple of years without a hassle.
I do wish Jenkins to be a little less high-maintenance, like say having a postgres backend and a docker image for the master. I hate having a different backup routine for each thing.
Why not Shippable? Free for 150 builds/month for private projects, brand new UI created with a focus on usability and perf, integrates with GitHub and Bitbucket, and we're adding metrics very shortly.
Loading third party JS is increasingly common for a lot of sites, and I tend to raise it when doing security reviews for exactly this reason: you're trusting the security of those 3rd parties.
There are some defenses that can be put in place. The first one is kind of awkward in many cases which is to host the JS on your own domain. There's still the risk of course that it will go off and get additional code from the 3rd party source to execute, but that can be reviewed for.
To be honest, I'm an external security assessor/pentester and I've not had much pushback from clients on this. That said, I don't always get visibility into whether they implement our recommendations or not :)
To me, it's not really a debatable point that loading JS from a source you don't control implies trust in that source and therefore a risk that if they are compromised it affects your site.
Whether that risk is ok for a business depends on a number of factors like :-
- How trustworthy are the sources they're loading from?
- What reviews have they completed on the security of those sources?
- Do they have contracts in place with those sources that cover the requirement for security?
I noticed that Galaxy, Meteor Development Group's hosting solution for Meteor apps, does something similar. I checked the console and saw multiple analytics scripts, even some that seem to monitor your visual usage/screenshots etc., which would seemingly capture screens containing portions of environment variables / settings. So a lower-level marketing employee reviewing UX patterns on a potentially infected MacBook Air could end up exposing sensitive data. They might not, but I had the same thoughts as this article points out.
I'm trying to understand why "CircleCI browser context has full access to the CircleCI API, which is hosted on the same domain".
Doesn't the API have separate authentication credentials (API keys/tokens) from the web UI (i.e. cookies)? I understand these loaded 3rd party scripts can scrape what's rendered on the UI, but making calls to the API??? I wonder how that's possible in CircleCI's case.
The only thing that 3rd party JS will have access to is everything on the page.
Now, the things on the page are sensitive (the secrets from the env variables are dumped in the UI).
Secrets on the secrets page are not exposed at any time, even when you go to edit them. It's only in the build screen that they're exposed in the output.
Those 3rd party libs DO NOT have access to your source code at any time. The only time they will have access to your code is if they gain SSH access to the machines that are running the build. And that's not trivial.
Even if your public key is exposed, they can't check anything out from GitHub with that.
So yeah, it's 3rd party libs in a "private" context but not more than that IMHO.
From searching for discussions around CircleCI API and cookies it seems like at least some API endpoints might accept session cookies for authentication.
I guess we'll have to wait for a CircleCI statement to address this in detail (or someone testing it).
Wouldn't a similar vulnerability be present in most SaaS platforms? All of them include tons of analytics scripts on their websites. If you are a logged-in user, your session presumably includes a token (encrypted cookie?) to use the SaaS API.
Therefore, in theory, other scripts loaded on the page could grab the token and make authenticated API calls as well? Although I guess this is mitigated by verifying script integrity when loading scripts from a CDN (e.g. the integrity attribute of the script tag)?
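To illustrate what I mean: the script usually doesn't even need to grab a token, because for cookie-authenticated APIs the browser attaches the session cookie to any same-origin request, no matter which script on the page made it. A generic illustration (the path is made up, not any particular vendor's endpoint):

    // Generic illustration: any script running in the page, first- or third-party,
    // can call a same-origin, cookie-authenticated API as the logged-in user.
    fetch('/api/v1/me', { credentials: 'same-origin' })   // '/api/v1/me' is a placeholder
      .then(res => res.json())
      .then(data => {
        // A benign script renders this; a malicious one could POST it somewhere else.
        console.log("acting with the user's session:", data);
      });

SRI on the script tags and a strict connect-src in CSP are the mitigations discussed elsewhere in the thread.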
There are a few approaches, but yes, one thing I hope to do by publishing this is to draw attention to the problem of third party JavaScript running in a privileged environment and not on, e.g., the marketing page.
> Why don't you include a POC? I have one that demonstrates this attack, but I don't want to show script kiddies how. If you can't figure out how to construct it by reading the above description and the network traffic, you don't deserve to know how.
I feel this is a bad way to approach it and a bad attitude in general. Showing people how this can be exploited (after responsible disclosure) is a great opportunity to spread awareness of potentially their own issues in the same space and create points of discussion. If a user feels compelled to do so, they can seek out the info. But to show disdain to the reader for their lack of knowledge comes off as unprofessional in spirit.
I disagree, and we'll just have to leave it at that. Anyone who knows how to construct an AJAX request and look at the Network tab can figure this out.
While I understand the point you made in that last sentence, the very last part of it sounded needlessly loaded even though I can easily figure such things out. Replacing "you don't deserve to know how" by, say, "you're not the target audience" or anything to that effect goes a long way to not detracting from getting your real point across.
BTW this last easily misunderstood sentence sits just above your "I'm available for hire" link, which is not exactly painting you in a bright light for recruitment.
Saying "You don't deserve to know how" is not a friendly and constructive message to send to your audience or potential employers. There's no mis-interpretation here, it's in bad spirit.
As someone who has been a web developer since the 90's, this topic brings me back to ~2005 when Google Analytics launched. I thought it was absolutely insane that people would just stick someone else's JavaScript in their pages (and especially on e-commerce sites!).
Today it's perfectly normal. I don't have exact numbers but at least 50% of all sites pull in third party code. It really is bizarre to me that this became the norm.
1) they know exactly what they are doing; 2) arbitrary 3rd party JS running in privileged contexts (dashboards) is a wider problem I would like to draw attention to; 3) It's not immediately exploitable; 4) It takes five seconds to understand what the problem is; 5) I did not want to wait 90 days to be told "we are not going to fix this;" 6) my history of reporting problems (not security related) to the company in question is that they are not very responsive to fixing them.
I was with you until I saw this on your consulting pitch:
Write. I have written at least ten posts that made the front page of Hacker News / Programming Reddit (nb - Those aren't the best proxies for quality or popularity, but they are well known proxies). I can help kickstart your company's engineering blog, or work with your team on story/content ideas.
Now I'm not sure how much making the front page motivates your writing process. It is an interesting topic but I feel it would be a stronger case to demonstrate the issue across products from multiple vendors, though less incendiary/front-page-y.
PS. Props for not being the one to submit this particular article!
It's only worth reporting "responsibly" if waiting might prevent users from being hurt. The way that concept has been manipulated into implying that the companies are owed an early warning is crap. The author doesn't owe CircleCI anything.
This comment highlights the problem with calling disclosure 'responsible', because the word is used subjectively.
You as a commenter, the company (which usually prefers to quietly fix things), and the wider user base (which usually prefers an immediate heads-up) all have different opinions about what they see as responsible.
Also, it doesn't seem fair to dismiss this disclosure as a 'hit piece' unless it is factually incorrect.
Because it's a rampant problem not limited to them, and not obvious, and nobody seems to care? This is a good example for getting the target market to understand the dangers of just dropping in the latest js tool that marketing wants.
I used to be in ad tech, and it always baffled me that we had first party JS on every single customer page. Conceivably, if we were breached, someone could inject code to read every login token or cc# or anything and send it to their servers.
I think these kinds of issues will become more and more important, both from a security standpoint and from a privacy standpoint as well. GDPR might apply to API access from 3rd party tools (not just analytics btw) - it'll be up to them to prove they don't, and I'm curious to see how it'll be regulated.
I’ve tried to explore the privacy aspects here [1] (like many, we completely separated libraries for the public site and libraries for the apps)
I think we need a framework when thinking about which companies to pick when adding 3rd party features to our products - and probably a system to trust such 3rd party with their data processing.
I’d be interested to know how companies make these decisions today...
Marketing and sales people usually ask to have tons of third party scripts loaded everywhere and devs don't have power to turn them down. They need analytics for their work. It's a bad idea from security standpoint for sure.
Yeah, management and marketing need web analytics. If you don't want 3rd party scripts you'd have to, what, roll your own analytics software? There are some open source analytics tools you could self-host, but they're garbage compared to google/adobe/etc.
Server logs don't have much useful information, especially for SaaS interfaces. You need to have the events from the actual UI tracked, which requires a lot more work and is the reason why these analytics tools are used.
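That said, the minimum first-party version of UI event tracking is pretty small; a sketch that posts events to your own endpoint (the '/events' path and the selector are placeholders), which could then feed a self-hosted Piwik or your own store:

    // Sketch: first-party UI event tracking with no third-party script on the page.
    function track(eventName, props = {}) {
      const payload = JSON.stringify({
        event: eventName,
        props,
        path: location.pathname,
        ts: Date.now(),
      });
      // sendBeacon survives page unloads; fall back to fetch if it refuses the payload.
      if (!navigator.sendBeacon('/events', payload)) {
        fetch('/events', { method: 'POST', body: payload, keepalive: true });
      }
    }

    // Example: instrument a UI action the way a hosted analytics tag would.
    document.querySelector('#start-build')?.addEventListener('click', () => {
      track('build_started', { source: 'dashboard' });
    });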
Also, Github and Bitbucket should allow much more granular ACLs such as single repo, single repo shallow clone only. Ideally they would also support some form of steganographic tagging of repo contents as well, in case of a breach.
At my last company we had a similar issue with third party scripts on credit application forms. Analytics is critical for ecommerce sites but it's way too easy to accidentally log sensitive data across dozens of third party services.
There are analytics packages (like Piwik) that you could self-host for sensitive data to solve this generally. I'd like to think that conversion of credit applications isn't something a marketer is trying to gamify, though that's probably wishful thinking.
>I'd like to think that conversion of credit applications isn't something a marketer is trying to gamify though
If it's in the overall sales conversion funnel it will be optimized. Credit application flow was scrutinized just as much as adding to cart and checking out.
uMatrix is defense in depth. They could solve the problem today, and tomorrow a new CircleCI programmer/marketer/CEO would decide that, in fact, they do care more about analytics than about user's secrecy, and revert this change.
By installing uMatrix and not approving these things, you are (a) reducing the probability that a mistake, misaligned incentives, etc, would harm you, and (b) get a notification that there's something you wish to block.
uMatrix IS part of the real solution. The other part is avoiding providers whose security standards are not up to yours.
Good security generally does not rely on a single measure. The service provider not doing this in the first place is good, but the user (also) having a second line of defense is objectively better, although it may incentivize poor behaviour by SPs.