It's really annoying that we don't have a better solution for this. Even outside of open source, I don't want to spend over $600 up front, before I've sold a single copy of an app, just to stop MS from blocking it. And that's not even mentioning companies like Sectigo being terrible at their job. I've spent over a week going in circles with their support about verification: "your license shows address A", "no, the back shows the current address B, it's in the file I sent", "please send us a valid ID with address B", (repeat).
But unfortunately that's just a rant. I don't know if there even is a better solution. The money barrier (rather than verification) will stop some opportunistic malware, but big players won't care.
Even for our company, we would fork over the $600, but it looks like every EV cert option requires a hardware signing key. Putting a human in the loop for our otherwise fully automated release process is a non-starter.
Worse still, the SafeNet software that my cert vendor recommends (for interacting with the hardware key) doesn't even work in Remote Desktop sessions!
It somehow detects when you're in an RDP session and, if so, simply shows that no hardware tokens are attached. No message or warning whatsoever. My only Windows PC is headless, and I lost several hours trying to debug this.
The entire EV cert process is such an outrage. My cert vendor advertised that the validation process would take 2-3 business days if all docs were in order, DUNS info correct, etc. I spent a lot of time ahead of the order ensuring the docs were indeed in order, and the process still inexplicably took 9 business days.
It's not about virtualisation: RDP sessions are explicitly marked as remote login sessions, and any app can easily check the login source (or just run `net session`).
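For the curious, here's a minimal C sketch of that check using the documented `SM_REMOTESESSION` metric; presumably the SafeNet client does something along these lines before hiding the token:

```c
/* Detect whether this process is running inside an RDP session. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* SM_REMOTESESSION is nonzero when the calling process is
       attached to a remote (RDP) session rather than the console. */
    if (GetSystemMetrics(SM_REMOTESESSION)) {
        printf("remote (RDP) session\n");
    } else {
        printf("local console session\n");
    }
    return 0;
}
```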
If TeamViewer attaches to an already logged-in local session, it should work fine.
Back in 2014 I was working at AltspaceVR (a social virtual reality startup) and we had Mac and Windows versions of the product. I set up a Mac Mini at the office to do the Mac builds, and it also ran a Windows VM under Parallels to do the Windows code signing. (The actual Windows builds ran in the cloud and we sent them down to the Windows VM for signing and then it sent them back up to the cloud.)
We had a Digicert code signing certificate that used a hardware key connected to the Windows VM. Unfortunately it required a password to be manually entered each time the code was signed.
To automate this, I wrote a little AutoHotkey script that watched for the password dialog and entered the password.
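For flavour, here's the same watch-and-type idea as a Win32 C sketch rather than AutoHotkey. The dialog title, control layout, and password are made-up placeholders; the real dialog would need inspecting with something like Spy++:

```c
/* Watch for a password dialog by title, fill in the password, click OK.
   "Token Logon" and the control layout are assumptions for illustration. */
#include <windows.h>

int main(void)
{
    for (;;) {
        HWND dlg = FindWindowA(NULL, "Token Logon");  /* assumed title */
        if (dlg != NULL) {
            /* assume the first Edit control is the password field */
            HWND pw = FindWindowExA(dlg, NULL, "Edit", NULL);
            SendMessageA(pw, WM_SETTEXT, 0, (LPARAM)"hunter2");
            /* click the OK button to dismiss the dialog */
            HWND ok = FindWindowExA(dlg, NULL, "Button", "OK");
            SendMessageA(ok, BM_CLICK, 0, 0);
        }
        Sleep(500);  /* poll twice a second */
    }
}
```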
There wasn't any RDP issue because we didn't use RDP, just a Windows VM that didn't need any user intervention. (It could have been a separate physical machine, but since we had the Mini anyway and it had the capacity, it was convenient to have it do both the Mac builds and the Windows code signing.)
I sometimes think there are few problems that AutoHotkey cannot solve.
Also, Microsoft is working on a code signing service called Azure Code Signing, where Microsoft issues and manages the certificate and keys; you simply upload binaries/app packages to Azure, which does the signing.
That sounds like abuse of a monopoly position to me. They keep the horrendous status quo as bad as possible so their new product looks good by comparison.
Of course, there's a kinda reasonable reason for the hardware token requirement: the widely publicised 2010 worm Stuxnet shipped a driver signed with a stolen copy of Realtek's driver signing certificate. [1]
And stolen certificates make the whole code-signing house of cards fall apart: you can't trust something signed by Realtek if it was not, in fact, signed by Realtek!
Of course, hardware tokens aren't a panacea: Some malware authors simply set up a shell company and get a certificate issued to that company.
One of my clients has strict requirements for an automated build process, and we managed to use an EV code signing cert on a YubiKey w/ PIN, so it's definitely possible with a little legwork.
After having gone through it, I agree with other posts that the main annoyance is the verification process and weeks of delays/back-and-forth. That, and the inconvenience of now having a single point of failure in the build process (unless multiple certs are purchased).
Correct me if I'm wrong, but when a fully preconfigured YubiKey is shipped to you as part of the EV cert fulfillment, there is no way to do this after the fact.
You need the key, but there are ways to get a .pfx out of it; I unfortunately don't remember the details, but they're probably documented by whoever you got the key from. Otherwise, signtool can be used with the key directly, though it's not always trivial to get working.
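For reference, once the token vendor's middleware is installed and the cert shows up in the Windows certificate store, a plain signtool invocation along these lines is the usual starting point (the thumbprint and timestamp URL below are placeholders to fill in for your setup):

```
signtool sign /sha1 <cert-thumbprint> /fd sha256 /tr http://timestamp.digicert.com /td sha256 MyApp.exe
```

`/sha1` selects the certificate by thumbprint, `/fd` sets the file digest algorithm, and `/tr`/`/td` add an RFC 3161 timestamp so the signature outlives the cert. The token middleware will still pop its PIN prompt unless it's configured to cache the PIN.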
It's not easy to spot malware, even if you have the source. For example, Zoom can capture your screen, start applications, capture your mic and camera, and allow remote control of your desktop. Why wouldn't it be blocked as malware, even if you could automatically inspect the source?
Automated malware detection typically looks for behavior during installation rather than just the payload (you can use the payload as a hint, though). If an installer downloads a PNG, injects the last half of it into another process, and that drops an unsigned EXE into 'all users\startup' that can capture your screen, etc., you can probably block that without pissing too many people off. If you block SCREENCAP.EXE, it's a different story.
That's old news; the race happens every day. AV companies upload samples to detonate in a simulated environment; malware authors fought back with `sleep(10000)` to avoid detection, since that's longer than the automation is worth running, so the bad parts don't execute until much later. Then the test environments started faking time speed-ups.
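As a toy illustration of that arms race (not any specific malware's code): if the sandbox fast-forwards `Sleep()` but doesn't keep the tick counter consistent, the sample can notice the skew.

```c
/* Toy time-skew check: compare requested sleep time to observed ticks. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD before = GetTickCount();
    Sleep(10000);  /* stall longer than cheap automation is worth running */
    DWORD elapsed = GetTickCount() - before;

    if (elapsed < 9000) {
        /* Sleep returned too quickly: time was probably accelerated */
        printf("sandbox suspected (slept %lu ms)\n", elapsed);
    } else {
        printf("looks like a real machine (slept %lu ms)\n", elapsed);
    }
    return 0;
}
```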
I think there was a good episode about that in the Risky Business podcast.
We had to manage flagging problems at my company, and even though we now sign our installers with an EV certificate, antivirus vendors have their own reputation databases and still flag our work until a certain number of users have installed it.
And to be fair that is very much _not_ the entire point of the article.
"Essentially you have the option of two different kinds of certificates and three different price levels, depending on your urgency and user sensitivity."
That's too bad. I see why you're angry. Good luck!
FTA btw "Essentially you have the option of two different kinds of certificates and three different price levels, depending on your urgency and user sensitivity."
Maybe you need to change something about your application.
And that’s before Windows Defender falsely identifies your executable as a random threat and moves it to quarantine without asking. Who do you have to bribe to prevent that?
Reminds me of a Coding Horror (?) story where a client kept asking the developer to make their app window stay on top (most visible). The poor developer kept trying to explain that whatever method they used to get on top, anyone else could use to get on top of their app in turn.
Anyways, there is no general heuristic to distinguish a good actor trying to prove their legitimacy from a bad actor trying to fake legitimacy. This is the essence of parasitism.
That's a much better written (and backed up) story than the one I remember, which I recall being in the Coding Horror "fireside tale" style. Maybe Raymond's was the original!
He doesn't quite come out and say what "being on the other side of this airtight hatchway" means - it appears only in the title. But the post makes it clear what it alludes to: if you're already in a position to execute arbitrary code on the target machine in the security context you want, then you don't need to jump through further hoops, because you're already there.
The same thing is happening on the web: I put compiled executables in a zip on my website, and Chrome flags them as "unknown / untrusted", with a few hoops to jump through to download and run them. I understand the advantages of protecting naive users from malware, but this is really going to hurt small / independent software developers.
Without a cert etc., most people shouldn't be installing the application at all. That's the simple advice for most people, unless some other trust is already in place, e.g. it comes from someone you actually trust.
Also: buy the longest-duration code signing cert you can. Reputation is acquired at the cert and build level; updates signed with the same cert are cleared much, much faster. Cert reputation is lost when it expires, meaning you're back to square one to sign updates.
Even just to avoid the pain of acquiring the certificate on a yearly basis.
I've never had a smooth ride getting a code signing cert: there's always some stupid bureaucratic rubbish to deal with, or the need to run some stupid obsolete browser in order to actually get the thing. And I know at least two others who ran into the same crap.
Not sure this works. Just grabbed a build of WhyNotWin11[1] released 20 hours ago, and I get "Microsoft Defender SmartScreen prevented an unrecognised app from starting. Running this app might put your PC at risk."
I don't think that works, as I had an app hosted on GitHub, a signed one actually, and it was still showing the SmartScreen warning at first. It took a few days to go away.
That's my experience as well. I think that after enough people install something, Microsoft starts trusting it, so it's usually fine for a major release of a popular package. Nightly builds of less popular packages? I always get the popups. (Although oddly, not on any of my projects that I release Windows binaries for...)
Off-topic: I was giving a presentation and needed to download something from microsoft.com; it was a WSL update or something. After the download, MS Edge marked it as dangerous and harmful. Needless to say, the audience burst into laughter, and I was able to convert some people to Firefox that day.
All this does is train people that they have to fight with someone that "knows better than them" every time they need to do anything useful.
The one channel they had to communicate with users and warn them about threats is now gone, I hope they're happy.
A few years ago I had to go through the process of getting a certificate for signing a Windows application. I provide some details here: https://henvic.dev/posts/cs-security/
Somewhat related: I would really like some sort of OS-wide flag informing all apps and websites that, yes, I understand what a hyperlink is, it is not necessary to warn me when I click one that it will take me to a website.