Removing PGP from PyPI (2023) (pypi.org)
72 points by harporoeder 3 months ago | 71 comments



This is slightly old news. For those curious, PGP support on the modern PyPI (i.e. the new codebase that began to be used in 2017-18) was always vestigial, and this change merely polished off a component that was, empirically[1], doing very little to improve the security of the packaging ecosystem.

Since then, PyPI has been working to adopt PEP 740[2], which both enforces a more modern cryptographic suite and signature scheme (built on Sigstore, although the design is adaptable) and is bootstrapped on PyPI's support for Trusted Publishing[3], meaning that it doesn't have the fundamental "identity" problem that PyPI-hosted PGP signatures have.

The hard next step from there is putting verification in client hands, which is the #1 thing that makes any signature scheme actually useful.

[1]: https://blog.yossarian.net/2023/05/21/PGP-signatures-on-PyPI...

[2]: https://peps.python.org/pep-0740/

[3]: https://docs.pypi.org/trusted-publishers/


It's good that PyPI signs whatever is uploaded to PyPI using PyPI's key now.

GPG ASC support on PyPI was nearly as useful as uploading signatures to sigstore.

1. Is it yet possible to - with pypa/twine - sign a package uploaded to PyPI, using a key that users somehow know to trust as a release key for that package?

2. Does pip check software publisher keys at package install time? Which keys does pip trust to sign which package?

3. Is there any way to specify which keys to trust for a given package in a requirements.txt file?

4. Is there any way to specify which keys to trust for a version of a given package with different bdist releases, with Pipfile.lock, or pixi or uv?

People probably didn't GPG-sign packages on PyPI because signing with a registered key/DID was neither easy nor required in order to upload.

Anyone can upload a signature for any artifact to sigstore. Sigstore is a centralized cryptographic signature database for any file.

Why should package installers trust that a software artifact publisher key [on sigstore or the GPG keyserver] is a release key?

gpg --recv-key downloads a public key for a given key fingerprint over HKP/HKPS (in the HTTPS case, with the same CA cert bundle as everything else).
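
For illustration, a minimal sketch of that retrieval step done "by hand" over HKP instead of through gpg. The keyserver URL and fingerprint are placeholders; the /pks/lookup query form is the standard HKP one:

    import urllib.parse
    import urllib.request

    # Fetch an ASCII-armored public key by fingerprint over HKP(S).
    # Both values below are placeholders, not a real key or required server.
    KEYSERVER = "https://keyserver.ubuntu.com"
    FINGERPRINT = "0123456789ABCDEF0123456789ABCDEF01234567"

    query = urllib.parse.urlencode(
        {"op": "get", "options": "mr", "search": "0x" + FINGERPRINT}
    )
    with urllib.request.urlopen(f"{KEYSERVER}/pks/lookup?{query}") as resp:
        armored_key = resp.read().decode()

    # First line is "-----BEGIN PGP PUBLIC KEY BLOCK-----" if the key was found;
    # a key that isn't discoverable comes back as an HTTP 404 instead.
    print(armored_key.splitlines()[0])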

GPG keys can be wrapped as W3C DIDs FWIU.

W3C DIDs can optionally be centrally generated (like LetsEncrypt with ACME protocol).

W3C DIDs can optionally be centrally registered.

GPG or not, each software artifact publisher key must be retrieved over a different channel than the packages.

If PyPI acts as the (package,release_signing_key) directory and/or the keyserver, is that any better than hosting .asc signatures next to the downloads?

GPG signatures and wheel signatures were and are still better than just checksums.

Why should we trust that a given key is a release signing key for that package?

Why should we trust that a release signing key used at the end of a [SLSA] CI build hasn't been compromised?

How do clients grant and revoke their trust of a package release signing key with this system?

... With GPG or [GPG] W3C DIDs or whichever key algo and signed directory service.


> It's good that PyPI signs whatever is uploaded to PyPI using PyPI's key now.

You might have misunderstood -- PyPI doesn't sign anything with PEP 740. It accepts attestations during upload, which are equivalent to bare PGP signatures. The big difference between the old PGP signature support and PEP 740 is that PyPI actually verifies the attestations on uploads, meaning that everything that gets stored and re-served by PyPI goes through a "can this actually be verified" sanity check first.

I'll try to answer the others piecewise:

1. Yes. You can use twine's `--attestations` flag to upload any attestations associated with distributions. To actually generate those attestations you'll need to use GitHub Actions or another OIDC provider currently supported as a Trusted Publisher on PyPI; the shortcut for doing that is to enable `attestations: true` while uploading with `gh-action-pypi-publish`. That's the happy path that we expect most users to take.

2. Not yet; the challenge there is primarily technical (`pip` can only vendor pure Python things, and most of PyCA cryptography has native dependencies). We're working on different workarounds for this; once they're ready `pip` will know which identity - not key - to trust based on each project's Trusted Publisher configuration.

3. Not yet, but this is needed to make downstream verification in a TOFU setting tractable. The current plan is to use the PEP 751 lockfile format for this, once it's finished.

4. That would be up to each of those tools to implement. They can follow PEP 740 to do it if they'd like.

I don't really know how to respond to the rest of the comment, sorry -- I find it a little hard to parse the connections you're drawing between PGP, DIDs, etc. The bottom line for PEP 740 is that we've intentionally constrained the initial valid signing identities to ones that can be substantiated via Trusted Publishing, since those can also be turned into verifiable Sigstore identities.


So PyPI acts as keyserver, and basically a CSR signer for sub-CA wildcard package signing certs, and the package+key mapping trusted authority; and Sigstore acts as signature server; and both are centralized?

And something has to call cosign (?) to determine what value to pass to `twine --attestations`?

Blockcerts with DID identities is the W3C way to do Software Supply Chain Security like what SLSA.dev describes FWIU.

And then now it's possible to upload packages and native containers to OCI container repositories, which support artifact signatures with TUF since docker notary; but also not yet JSON-LD/YAML-LD that simply joins with OSV, OpenSSF, and SBOM Linked Data on registered (GitHub, PyPI, conda-forge, deb-src) namespace URIs.

GitHub supports requiring GPG signatures on commits.

Git commits are precedent to what gets merged and later released.

A rough chronology of specs around these problems: {SSH, GPG, TLS w/ CA cert bundles, WebID, OpenID, HOTP/TOTP, Bitcoin, WebID-TLS, OIDC OpenID Connect (TLS, HTTP, JSON, OAuth2.0, JWT), TUF, Uptane, CT Logs, WebAuthn, W3C DID, Blockcerts, SOLID-OIDC, Shamir Backup, transferable passkeys}

For PyPI: PyPI.org, TLS, OIDC OpenID Connect, twine, pip, cosign, and Sigstore.dev.

Sigstore Rekor has centralized Merkle hashes like google/trillian, which centrally hosts Certificate Transparency logs of x509 cert grants and revocations by Certificate Authorities.

W3C DID Decentralized Identifiers [did-core] > 9.8 Verification Method Revocation : https://www.w3.org/TR/did-core/#verification-method-revocati... :

> If a verification method is no longer exclusively accessible to the controller or parties trusted to act on behalf of the controller, it is expected to be revoked immediately to reduce the risk of compromises such as masquerading, theft, and fraud


> So PyPI acts as keyserver, and basically a CSR signer for sub-CA wildcard package signing certs, and the package+key mapping trusted authority; and Sigstore acts as signature server; and both are centralized?

No -- there is no keyserver per se in PEP 740's design, because PEP 740 is built around identity binding instead of long-lived signing keys. PyPI does no signing at all and has no CA or CSR components; it acts only as a store for attestations, which it verifies on upload.


PyPI signs uploaded packages with its signing key per PEP 458: https://peps.python.org/pep-0458/ :

> This [PEP 458 security] model supports verification of PyPI distributions that are signed with keys stored on PyPI

So that's deprecated by PEP 740 now?

PEP 740: https://peps.python.org/pep-0740/ :

> In addition to the current top-level `content` and `gpg_signature` fields, the index SHALL accept `attestations` as an additional multipart form field.

> The new `attestations` field SHALL be a JSON array.

> The `attestations` array SHALL have one or more items, each a JSON object representing an individual attestation.

> Each attestation object MUST be verifiable by the index. If the index fails to verify any attestation in attestations, it MUST reject the upload. The format of attestation objects is defined under Attestation objects and the process for verifying attestations is defined under Attestation verification.
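
Concretely, a rough sketch of what such an upload could look like at the HTTP level (field names beyond `attestations` and `content` follow the legacy upload API; the filenames, token, and omitted metadata fields are placeholders, and this is not the exact request twine constructs):

    import json
    import requests

    # Hypothetical PEP 740-style upload sketch: the attestation objects travel
    # as a JSON array in an "attestations" multipart form field, alongside the
    # distribution file in "content". Required metadata fields (name, version,
    # filetype, digests, ...) are omitted here for brevity.
    with open("example-1.0.0-py3-none-any.whl.publish.attestation") as f:
        attestations = [json.load(f)]

    resp = requests.post(
        "https://upload.pypi.org/legacy/",
        data={
            ":action": "file_upload",
            "protocol_version": "1",
            "attestations": json.dumps(attestations),
        },
        files={"content": open("example-1.0.0-py3-none-any.whl", "rb")},
        auth=("__token__", "pypi-<api-token>"),  # placeholder API token
    )
    resp.raise_for_status()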

What is the worst case resource cost of an attestation validation required of PyPI?

blockchain-certificates/cert-verifier-js; https://westurner.github.io/hnlog/ Ctrl-F verifier-js

`attestations` and/or `gpg_signature`;

From https://news.ycombinator.com/item?id=39204722 :

> An example of GPG signatures on linked data documents: https://gpg.jsld.org/contexts/#GpgSignature2020

W3C DID self-generated keys also work with VC linked data; could a new field like the `attestations` field solve for DID signatures on the JSON metadata; or is that still centralized and not zero trust?


(My mistake: PyPI is not a keyserver, and Sigstore is not a keyserver)


This feels like perfect being the enemy of good enough. There are examples where the system falls over but that doesn't mean that it completely negates the benefits.

It is very easy to get blinkered into thinking that the specific problems they're citing absolutely need to be solved, and there is quite possibly an element of using those problems as an excuse to reduce some maintenance overhead without understanding its benefits.


Its benefits are completely negated in real-world use. See https://blog.yossarian.net/2023/05/21/PGP-signatures-on-PyPI... - the data suggests that nobody is verifying these PGP signatures at all.


I stopped reading after this: "PGP is an insecure [1] and outdated [2] ecosystem that hasn't reflected cryptographic best practices in decades [3]."

The first link [1] suggests avoiding encrypted email due to potential plaintext CC issues and instead recommends Signal or (check this) WhatsApp. However, with encrypted email, I have (or can have) full control over the keys and infrastructure, a level of security that Signal or WhatsApp can't match.

The second link [2] is Moxie's rant, which I don't entirely agree with. Yes, GPG has a learning curve. But instead of teaching people how to use it, we're handed dumbed-down products like Signal (I've been using it since its early days as a simple SMS encryption app, and I can tell you, it's gone downhill), which has a brilliant solution: it forces you to remember (or rather, write down) a huge random hex monstrosity just to decrypt a database backup later. And no, you can't change it.

Despite the ongoing criticisms of GPG, no suitable alternative has been put forward, and the likes of Signal, Tarsnap, and others [1] simply don't cut it. Many other projects that have run for years (with relatively good security track records, like the kernel, Debian, or CPAN) have no problem with GPG. That's my 5c.

[1] https://latacora.micro.blog/2019/07/16/the-pgp-problem.html

[2] https://moxie.org/2015/02/24/gpg-and-me.html

[3] https://blog.cryptographyengineering.com/2014/08/13/whats-ma...


Yeah, I still use PGP a lot, especially because of hardware-backed tokens (on YubiKeys and OpenPGP cards), which I use a lot for file encryption. The good thing is that there are toolchains for all desktop OSes and mobile (Android, with OpenKeychain).

I'm sure there are better options, but they're not as ubiquitous. I use it for file encryption, password management (pass), and SSH login, and everything works on all my stuff, with hardware tokens. Even on my tablet, where Samsung cheaped out by not including NFC, I can use the USB port.

Replacements like FIDO2 and age fall short by not supporting all the use cases (file encryption for FIDO2, hardware tokens for age) or not having a complete toolchain on all platforms.


I use PKCS#11 on my YubiKeys; would I gain something by using the PGP functionality instead?


Easier tooling, at least on Linux. PKCS#11 requires a dynamically linked library (.so) that you pass to the programs using it (like SSH), which is a bit annoying, and because it's binary linking it's not an API you can easily debug. It tends to just crash, especially with version mismatches. The GPG agent API is easier for this: it works over a socket and can even be forwarded over SSH connections.

Usually you end up using OpenSC/OpenCT for that. Also, the tooling to manage the virtual smartcards is usually not as easy. I haven't used PIV for this (which is probably what you use on the YubiKey to get PKCS#11), but it was much harder to get going than simply using gpg-agent/scdaemon/libpcscd and the command "gpg --card-edit" to configure the card itself.

It's still not quite easy, but I found it easier than PKCS#11. I used that before with JavaCards and SmartCard-HSM cards. The good thing about the PIV functionality is that it integrates really well with Windows Active Directory, which PGP of course doesn't. So if you're on Windows, PIV (with PKCS#11) is probably the way to go (or FIDO2, but that's more Windows Hello for Business than AD functionality; it depends whether you're a legacy or modern shop).

The big benefit of YubiKeys over smartcards is that you can use the touch functionality to approve every use of the YubiKey, whereas an unlocked smartcard will approve every action without a physical button press (of course, because it doesn't have such a button).


I second this!


Thank you for the detailed answer.


The article you linked doesn't seem to say anything about "nobody verifying PGP signatures". We would need PyPI to publish their Datadog & Google Analytics data, but I'd say the set of users who actually verify OpenPGP signatures intersects with the set of users faking/scrambling telemetry.


I wrote the blog post in question. The claim that "nobody is verifying PGP signatures (from PyPI)" comes from the fact that around 1/3rd had no discoverable public keys on what remains of the keyserver network.

Of the 2/3rd that did have discoverable keys, ~50% had no valid binding signature at the time of my audit, meaning that obtaining a living public key has worse-than-coin-toss odds for recent (>2020) PGP signatures on PyPI.

Combined, these datapoints (and a lack of public noise about signatures failing to verify) strongly suggest that nobody was attempting to verify PGP signatures from PyPI at any meaningful scale. This was more or less confirmed by the near-zero amount of feedback PyPI got once it disabled PGP uploads.


This all makes sense.

PEP 740 mentions:

> In their previously supported form on PyPI, PGP signatures satisfied considerations (1) and (3) above but not (2) (owing to the need for external keyservers and key distribution) or (4) (due to PGP signatures typically being constructed over just an input file, without any associated signed metadata).

It seems to me that the choice to invest infrastructure in sigstore.dev vs. PGP is somewhat arbitrary. For example, on the PGP side, a PyPI keyserver and tooling to validate uploads could address (2) above, and (4) could be handled similarly to PEP 740, with, say, signatures over provenance objects. Maybe Sigstore is "just way better", but it doesn't seem like such a cut-and-dried technical argument from the things discussed in these comments and the linked material.

It's perfectly responsible to make a choice. It just seems unclear what the scope-of-work difference would be, despite a somewhat implicit suggestion across the discussions and links in the comments that it was great. Maybe that's an unreasonable level of detail to expect? But the framing comes across as "dogging on PGP", which is what I've found disappointing in my casual brush with this particular instance of PGP coming up in the news.


(2) is addressed by Sigstore having its own infrastructure and a full-time rotation staff. PyPI doesn't need to run or operationalize anything, which is a significant relief compared to the prospect of having to operationalize a PGP keyserver with volunteer staffing.

(I'm intentionally glossing over details here, like the fact that PyPI doesn't need to perform any online operations to validate Sigstore's signatures. The bottom line is that everything about it is operationally simpler and more modern than could be shaped out of the primitives PGP offers.)

(4) could be done with PGP, but would go against the long-standing pattern of "sign the file" that most PGP tooling is ossified around. It also doesn't change the fact that PGP's signing defaults aren't great, that there's a huge tail of junk signing keys out there, and that to address those problems PyPI would need to be in the business of parsing PGP packets during package upload. That's just not a good use of anybody's time.


> having its own infrastructure

This seems like a different brand of the keyserver network?

> PyPI doesn't need to run or operationalize anything

So it's not a new operational dependency because it's index metadata? That seems more like an implementation detail (aside from the imagined PGP keyserver dependency) that could be accommodated in either system.

> like the fact that PyPI doesn't need to perform any online operations to validate Sigstore's signatures

I may be missing something subtle (or glaring) but "online operations" would be interactions with some other service or a non-PSF service? Or simply a service not-wholly-pypi? Regardless, the index seems like it must be a "verifier" for design consideration (2) from PEP 740 to hold, which would mean that the index must perform the verification step on the uploaded data--which seems inconsequentially different between an imagined PGP system (other than it would have to access the imagined PyPI keyserver) and sigstore/in-toto.

> ... PyPI would need to be in the business of parsing PGP packets during package upload.

But the sigstore analog is the JSON array of in-toto attestation statement objects.


> This seems like a different brand of the keyserver network?

It serves a vaguely similar purpose, if that's what you mean. That shouldn't be surprising, since this is all PKI-shaped problems under the hood.

To reiterate: the operational constraints here are (1) simplicity and reliability for PyPI, and (2) secure defaults for the signing scheme itself. Running a PGP keyserver would not offer (1), and PGP as an ecosystem does not offer (2). This is even before desired properties, like strong identity binding, which PGP cannot offer in its current form.

> "online operations" would be interactions with some other service or a non-PSF service? Or simply a service not-wholly-pypi?

In the PGP setting, that means PyPI would need to pull from a keyserver. That keyserver would need to be one that PyPI doesn't control in order for the threat model to be coherent.

In the PEP 740 setting, PyPI does not need to pull any material from anywhere besides what the uploader is providing: the uploaded attestations are signed with an attached ephemeral signing certificate, which has a Trusted Publisher as its identity. That signing certificate can then be chained to an already established root of trust, in "normal" X.509 PKI fashion.

You could approximate this design with PGP, but none of the primitives currently exist (or, if they exist, are not operational).

> But the sigstore analog is the JSON array of in-toto attestation statement objects.

Yes. Believe it or not, a big ugly JSON blob is simpler than dealing with PGP's mess of packet versions and formats.


> This is even before desired properties, like strong identity binding, which PGP cannot offer in its current form.

If the strong identity binding is OIDC then I disagree. It's convenient but no more evidence of identity than being able to unlock a private key.

> ... one that PyPI doesn't control in order for the threat model to be coherent.

This doesn't make sense unless the author key material was only ever published on the PyPI keyserver.

> PyPI does not need to pull any material from anywhere besides what the uploader is providing

> in "normal" X.509 PKI fashion.

What about checking for revocation?


> If the strong identity binding is OIDC then I disagree. It's convenient but no more evidence of identity than being able to unlock a private key.

Modulo the security of the OIDC provider, it's a very strong proof of identity. The world already assumes this in practice (via pervasive OAuth and OIDC in other contexts); all PEP 740 does is make the same assumption rather than trying to bolt strong identity onto PGP.

(I think there are good objections to OIDC, including the risk of centralization. But I've yet to see a better widely adopted system for publicly verifying identity claims.)

> This doesn't make sense unless the author key material was only ever published on the PyPI keyserver.

Is there any evidence that anybody is reconciling results from different PGP keyservers? I don't think anybody was doing this even at the peak of the SKS network.

(But even this assumes more sophistication than the average user is putting into verification: 99% of Python distribution installations aren't even doing hash-checking. Expecting that users will begin to do keyring curation isn't reasonable, and - critically - is not empirically supported by the last 2 decades of PGP support on PyPI.)

> What about checking for revocation?

PEP 740 assumes short-lived (~10 minute) signing certificates with ephemeral signing keys. Subscriber-level revocation hasn't scaled well for the Web PKI, so the underlying stack here (Sigstore) prefers limiting the scope of signing materials and enforcing transparency instead.


> it's a very strong proof of identity

My point is that it is not stronger than the corresponding ability to unlock a private PGP key. OAuth/OIDC is convenient and has more friendly tooling, I concede this easily. But to make a claim that it's a strong proof of identity would require the account provider behind the OAuth to follow some sort of "know your customer"-like verification to claim more than "the request came from a system which has access to the account."

> I don't think anybody was doing this even at the peak of the SKS network.

Maybe not? Most of my interaction with PGP was as an attestation of content: "I really produced this text." That was relevant in sub-networks of contacts which had mutual trust via signatures, or fingerprints shared through another medium.

But that doesn't change my perspective on the threat model. If there is an "evil PyPI" and end users need to trust the author, it seems to me the same trust relationship graph needs to be constructed. The fact that sigstore provides some means to do that means it's less work, and that's great and compelling. But it doesn't actually change the number of points to inspect to conclude "trusted": the same number of trust checks still needs to occur, so it's hard to call the result either more or less difficult.

> But even this assumes more sophistication than the average user is putting into verification

Is there some user story for how an end user might "build warm fuzzies" about trusting packages in the post-PEP 740 world? If pip (or some tool) ends up with functionality to effect the verification, the "expecting users to manage keyrings" argument seems to go away, as that tool would also be the logical place to drop that functionality. Or is it that `cosign --verify` is expected to be a lighter lift to document than `gpg --verify`?

I see how the ephemeral keys mean revocation of the signing keys is less of a concern. And even how transparency provides an avenue to discover misuse, so long as it is monitored. It seems like PyPI doing some consistency monitoring on behalf of authors would be required to make a claim of trustworthiness, which strikes me as an expanded operational concern.


Maintaining this capability isn't free, it is of dubious benefit, and there are much better alternatives.

On a cost benefit analysis this is a slam dunk.


What are these "much better alternatives"?


https://www.sigstore.dev/

The emerging standard for verifying artifacts, e.g. in container image signing, npm, Maven, etc.

https://blog.sigstore.dev/npm-public-beta/ https://www.sonatype.com/blog/maven-central-and-sigstore


Emerging standard = not yet the standard


Nobody said it was. The point is that it's better.


And my point is that “it’s better” and “new standard” are not compelling emotional directives to me after hearing them Integer.MAX times.


I performed a similar analysis on RubyGems and found that of the top 10k most-downloaded gems, less than one percent had valid signatures. That plus the general hassle of managing key material means that this was a dead-end for large scale adoption.

I'm still hopeful that sigstore will see wide adoption and bring authorial attestation (code signing) to the masses.


I agree, where is the LetsEncrypt for signing? Something you could download and get running in literally a minute.



I don't think Sigstore is a good example. I just spent half an hour trying to understand it, and I am still left with basic questions like "Does it require me to authenticate with GitHub & friends, or can I use my own OIDC backend?": it seems like you can, but there are cases where you need to use a blessed OIDC provider, but you can override that while self-hosting, and there are config options for the end user to specify any OIDC provider? But the entire trust model also relies on the OIDC backend being trustworthy?

The quickstart guide looks easy enough to follow, but it seems nobody bothered to document what exactly is happening in the background, and why. There's literally a dozen moving pieces and obscure protocols involved. As an end user, Sigstore looks like a Rube Goldberg trust machine to me. It might just as well be a black box.

PGP is easy to understand. LetsEncrypt is easy to understand. I'm not an expert on either, but I am reasonably certain I can explain them properly to the average highschooler. But Sigstore? Not a chance - and in my opinion that alone makes it unsuitable for its intended use.


The important difference is that sigstore enables a "single click" signing procedure with no faffing around with key material. How it works is much less important than the user experience, which is vastly better.


> How it works is much less important than the user experience, which is vastly better.

I disagree. If it requires a Magic Trust Box which can be Trusted because it is made by Google and Google is Trustworthy, it has exactly zero value to the wider community. It doesn't matter how convenient the user experience is when it isn't clear why it provides trust.

Let's say I created an artifact upload platform, where the uploader can mark a "This file is trustworthy" checkbox, which results in the file being given a nice green happy face icon in the index. It is incredibly convenient and provides a trivial user experience! And it's of course completely legit and trustworthy because *vague hand waving gestures*. Would you trust my platform?


Specifically, the CA signing the code certificates (that are valid for 10 minutes) is https://github.com/sigstore/fulcio.


Does it matter much if the key can be verified? I mean, it seems like a pretty big step up security-wise to know that a new version of a package is signed with the same key as previous versions.


> I mean, it seems like a pretty big step up security-wise to know that a new version of a package is signed with the same key as previous versions.

A key part of the rationale for removing PGP uploads from PyPI was that you can't in fact know this, given the current state (and expected future) of key distribution in PGP.

(But also: yes, it's indeed important that the key can be verified i.e. considered authentic for an identity. Without that, you're in "secure phone call with the devil" territory.)


> A key part of the rationale for removing PGP uploads from PyPI was that you can't in fact know this, given the current state (and expected future) of key distribution in PGP.

I find that hard to believe. The public key or at least its fingerprint should be in the signature itself.


The fingerprint is always in the signature (when the signature packets are well-formed). The public key is almost always not. I think it would behoove you to read the post linked in TFA, which explains this in detail[1].

[1]: https://blog.yossarian.net/2023/05/21/PGP-signatures-on-PyPI...
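
One quick way to see this for yourself, assuming gpg is installed (the filename is a placeholder):

    import subprocess

    # Dump the packet structure of a detached PGP signature. The output shows
    # an issuer key ID / fingerprint subpacket, but no public key material --
    # the key still has to be fetched from somewhere before verification.
    result = subprocess.run(
        ["gpg", "--list-packets", "package-1.0.tar.gz.asc"],  # placeholder file
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout)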


In general in software, you have to start with an idea of what you want to accomplish. Then you have to break that down and determine what is feasible within the constraints you are working under. Ideally you then point out the constraints that are limiting your solution. I feel like all of that is missing here. All I can see is just an immense sea of details.


I've authored a proposal to deprecate the expectation of PGP signatures for future CPython releases. We've been providing Sigstore signatures for CPython since 3.11.

https://peps.python.org/pep-0761/


On the other hand, PGP keys were widely successful for CPAN, the Perl 5 repository. It's very simple to use, not as complicated as with PyPI.


I dunno. I mean, sure, it's a worldwide-mirrored, cryptographically secure, curated, hierarchically and categorically organized, simple set of flat files, with multiple separate community projects, to test all packages on all supported Perl versions and platforms, with multiple different frontends, bug tracking, search engines, documentation hubs, security groups, and an incredibly long history of support and maintenance by the community.

But it's, like, old. You can't make something new be like something old. That's not cool. If what we're doing isn't new and cool, what is the point even?


> Of those 1069 unique keys, about 30% of them were not discoverable on major public keyservers, making it difficult or impossible to meaningfully verify those signatures. Of the remaining 71%, nearly half of them were unable to be meaningfully verified at the time of the audit (2023-05-19).

A PGP keyserver provides no identity verification. It is simply a place to store keys. So I don't understand this statement. What is the ultimate goal here? I thought that things like this mostly provided a consistent identity for contributing entities with no requirement to know who the people behind the identities actually were in real life.


You're thinking one step past the failure state here: the problem isn't that keyservers don't provide identity verification, but that the PGP key distribution ecosystem isn't effectively delivering keys anymore.

There are probably multiple reasons for this, but the two biggest ones are likely (1) that nobody knows how to upload keys to keyservers anymore, and (2) that keyservers don't gossip/share keys anymore, following the SKS network's implosion[1].

Or put another way: a necessary precondition of signature verification is key retrieval, whether or not trust in a given key identity (or claimant human identity) is established. One of PGP's historic strengths was that kind of key retrieval, and the data strongly suggests that that's no longer the case.

[1]: https://gist.github.com/rjhansen/67ab921ffb4084c865b3618d695...


The SKS keyserver thing was 5 years ago. It seems to be working. Was uploading a key somewhere a requirement for submitting to PyPI? Why were the keys not available from PyPI?

It just seems to me that there wasn't anything here in the first place. Something something PGP keys. Perhaps they were hoping for someone to come along and make a working system and no one ever did.


Could you clarify: which part seems to be working? The SKS servers certainly aren't, and the keyservers that are currently online don't appear to gossip or share keys with each other. That's why the post's dataset comes from querying the biggest/most popular ones manually.

> Was uploading a key somewhere a requirement for submitting to PyPi?

Where would "somewhere" be? If it was PyPI itself (or a server controlled by PyPI), replacing the key material would be trivial and would largely defeat the purpose of having signatures instead of just hashes.

In the past, "somewhere" could have been a gossiping SKS server. But that would tie PyPI's reliability and availability to that of the SKS network, which was never great even at its prime.

> Why were the keys not available from PyPi?

For the reason mentioned above: if PyPI is trusted to distribute the key material, then an attacker can simply replace the keys used to sign for the package. This makes it no better than having PyPI distribute hashes (which it already does), but a lot more complicated.

To my understanding, the reason PyPI originally accepted PGP keys is because someone asked for it and baseline expectations around security were more laissez-faire at the time: there was no baseline expectation that millions of people might be using `pip` (or `easy_install` at the time), and that all of them deserve the same integrity and authenticity properties as a small core of expert users. Those expectations have shifted over time towards the belief that ordinary users should also have signatures accessible to them, and I'm inclined to believe that's a good thing.


We have sorta drifted into pure speculation...

Just to be clear, I reject your contention that there might be some sort of identity verification benefit associated with distributing the keys across the SKS network. Cryptographic identity is normally represented by a really long number of some sort (called a key fingerprint in the PGP case). A PGP identity contains an email address, but that is a completely unfounded assertion from the creator of the identity. That contention has to be supported in some way. Anyone can create as many PGP identities as they want with any email address in them. They can upload those identities anywhere they want, and they might be replicated. An attacker can't "replace" a PGP identity because they can't create another identity that matches the key fingerprint.


> Just to be clear, I reject your contention that there might be some sort of identity verification benefit associated with distributing the keys across the SKS network.

I don't understand why you think I'm contending this. The claim is not that SKS distributes meaningfully verifiable identities, it's that there is no longer a reliable, scalable way to discover public keys at all in PGP. Whether or not you trust those public keys or the claimant human identities in them is a separate subsequent step; the entire argument is that the failure mode happens before then, and that this makes PGP an unsuitable serious choice for scaling signatures on PyPI.

(PGP fingerprints are a red herring -- it doesn't matter how forgeable or unforgeable a key's fingerprint is if I can't discover the key. No key, no verification, full stop.)


These keys could have related signatures from other keys that some users or maintainers may have reason to trust.

(But for 30% of keys this was not even theoretically possible, while for another 40% of keys it was not practically possible, according to the article.)


Do you mean like how Debian signs the keys of maintainers? My point is that nothing like that was done in this case. It is hard to know what went wrong when there didn't seem to be any policy in the first place.


Should be tagged 2023.


Most people do security badly so let’s not do it at all.

Right.


Unfortunately we live in a world of limited time and resources, and priorities need to be adjusted accordingly.

Honestly, I would put the blame on PGP. It has a … special UX. I tried to use it on 3 separate occasions, and ended up doing something else (probably less secure) because I couldn't manage to make the damn thing work. I might not be a genius, but I am also not completely stupid.


What’s the current best solution for associating a public key to an identity or person?

This is not related to cryptography protocols.

OpenPGP key server verifies email. Keybase was a good idea but seems dead. Maybe identity providers?


> Maybe identity providers?

That's essentially all Sigstore is: it uses an identity provider to bind an identity (like an email) to a short-lived signing key.
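
As a rough illustration of that flow from the client side, a sketch using the sigstore-python CLI (the file, identity, and issuer values are placeholders, and flag spellings may vary between releases):

    import subprocess

    # Signing opens an OIDC flow with an identity provider; the resulting
    # identity token is exchanged for a short-lived signing certificate.
    subprocess.run(["python", "-m", "sigstore", "sign", "example.whl"], check=True)

    # Verification is against the identity, not a long-lived key. Both values
    # below are placeholders for whoever actually performed the signing.
    subprocess.run(
        [
            "python", "-m", "sigstore", "verify", "identity", "example.whl",
            "--cert-identity", "maintainer@example.com",
            "--cert-oidc-issuer", "https://accounts.google.com",
        ],
        check=True,
    )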


There is also Keyoxide[1]. The goal is similar to Keybase (online identity proofs), but Keyoxide uses a decentralized approach where the identity proofs are stored on the keys themselves.

[1] https://keyoxide.org


Discussed at the time:

Removing PGP from PyPI - https://news.ycombinator.com/item?id=36044543 - May 2023 (187 comments)


Wouldn't another very good answer be for PyPI to have a keyserver and require keys be sent to it for them to be used in package publishing?


Wouldn't that make the maintenance burden worse? Now PyPI has to host a keyserver, with its own attack surface. And presumably 99.7% of the keys would be only for PyPI, so folks would have no incentive to secure them. The two modes that work are either no signing, or mandatory signing like many app stores. Obviously the middle way is the worst of both worlds: no security for 99+% of packages, but all the maintenance headache. And mandatory signing raises the possibility that PyPI would be replaced by an alternate repository that's easier to contribute to. The open source world depends to a shocking degree on the volunteer labor of people who have a lot of other things they could be doing with their time, and a "small" speed bump for enhanced security can have knock-on effects that are not small.


Sure, it all hinges on whether the signatures provided any value. And the conclusion seems to be that they didn't.

Without something showing that keyservers present an untenable risk (and Debian, Ubuntu, Launchpad, and others run keyserver infrastructure), it seems like too far a conclusion to reach casually. But of course, it adds attack surface for the simple fact that a public-facing thing was stood up where once it was not. Though that isn't the kind of trivial conclusion I imagine you had in mind.

I don't see why there's a binary choice between "signing is no longer supported" and "signing is mandatory" when before that wasn't the case. If it truly provided no value, or so small a value with so high a maintenance burden that it harmed the project that way, then it makes sense--but that didn't seem to be the place from which the article argued.


From here: https://caremad.io/posts/2013/07/packaging-signing-not-holy-... (which is linked from the article): since PyPI has so many packages and anyone can sign up to add a package, it would be extremely unmanageable.


That's fair, and I appreciate that detail even without having followed the link in the original article. But while not being "the holy grail", why must the perfect be the enemy of the good, if it was providing value?

I certainly allow for the "if it was providing a value" to be a gargantuan escape hatch through which any other perspective may be removed.

But by highlighting the difficulty in verifying signatures and saying it was because the keys were hard to find (or may have been expired, or had other signing errors, per the footnote), a fairly straightforward solution presents itself: add keyserver infrastructure, check it when signed packages are posted, and reject the upload if key verification fails against that keyserver.

All told, it seems like it wasn't providing value, so throwing more resources at the effort was not done. But something about highlighting how "keys being hard to find" helped justify the action doesn't quite pass muster to my mind.


I feel like there is a broader issue being pushed aside here. Verifying a signature means you have a cryptographic guarantee that whoever generated an artifact possessed a private key associated with a public key. That key doesn't necessarily need to be published in a web-facing keystore to be useful. For packages associated with an OS-approved app store or a Linux distro's official repo, the store of trusted keys is baked into the package manager.

What value does that provide? As the installer of something, you almost never personally know the developer. You don't really trust them. At best, you trust the operating system vendor to sufficiently vet contributors to a blessed app store. Whoever published package A is actually a maintainer of Arch Linux. Whoever published app B went through whatever the heck hoops Apple makes you go through. If malware gets through, some sort of process failed that can potentially be remediated.

If you're downloading a package from PyPI or RubyGems or crates.io or whatever, a web repository that does no vetting and allows anyone to publish anything, what assurance is this giving? Great, some package was legitimately published by a person who also published a public key. Who are they exactly? A pseudonym on GitHub with a cartoon avatar? Does that make them trustworthy? If they publish malware, what process can be changed to prevent that from happening again? As far as I can tell, nothing.

If you change the keystore provider to sigstore, what does that give you? Fulcio just requires that you control an e-mail address to issue you a signing key. They're not vetting you in any way or requiring you to disclose a real-world identity that can be pursued if you do something bad. It's a step up in a limited scope of use cases in which packages are published by corporate entities that control an e-mail domain and ideally use their own private artifact registry. It does nothing for public repositories in which anyone is allowed to publish anything.

Fundamentally, if a public repository allows anyone to publish anything, does no vetting and requires no real identity disclosure, what is the basis of trust? If you're going to say something like "well I'm looking for .whl files but only from Microsoft," then the answer is for Microsoft to host its own repository that you can download from, not for Microsoft to publish packages to PyPI.

There are examples of making this sort of thing simpler for the consumer, so they can get everything from a single place. Docker Hub, for instance. You can choose to only ever pull official library images and verify them against sigstore, but that works because Docker is itself a well-funded corporate entity that restricts who can publish official library images by vetting and verifying real identities.


I dunno, not all projects are equally important or popular, so it seems to me that the number of downloads which had keys is the better metric to look at.

But if there are fundamental issues with the key system anyway, the percentages don't matter.


You're absolutely right that the number of downloads is probably a more important metric! But also yes, I think the basic "can't discover valid keys for a large majority of packages" is a sufficient justification, which is why I went with it :-)

The raw data behind the blog post is archived here[1]. It would be pretty easy to reduce it back down to package names, and see which/what percent of those names are in the top 500/1000/5000/etc. of PyPI packages by downloads. My prediction is that there's no particular relationship between "uploads a PGP key" and popularity, but that's speculative.

[1]: https://github.com/woodruffw/pypi-pgp-statistics
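
A rough sketch of that reduction, assuming the dataset can be boiled down to one project name per line and using a public top-downloads dump (both the input file format and the URL below are assumptions, not part of the archived repository):

    import json
    import urllib.request

    # hugovk's top-pypi-packages dump of the most-downloaded projects (assumed URL).
    TOP_URL = "https://hugovk.github.io/top-pypi-packages/top-pypi-packages-30-days.min.json"

    with urllib.request.urlopen(TOP_URL) as resp:
        top_projects = {row["project"] for row in json.load(resp)["rows"]}

    # Assumed local file: one PyPI project name per line, reduced from the dataset.
    with open("pgp-signed-projects.txt") as f:
        signed = {line.strip().lower() for line in f if line.strip()}

    overlap = signed & top_projects
    print(f"{len(overlap)} of the top {len(top_projects)} projects ever uploaded a PGP signature")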


I am curious why we still need PyPI to hold packages: it may be better to install from GitHub.

GitHub provides a much better integrated experience: source code, issues, docs, etc.


I don't think this is that terrible of an idea, actually. Before PyPI disabled searching, I'd say the value of centralization came from that, and possibly from security, but I think any claim of security from a central repo is deluding ourselves these days. There are so many opportunities for supply chain attacks that maybe this isn't actually worse. Requiring pip to refer to a GitHub owner/repo might eliminate some of the squatter problems we have, too.


Not Invented Here!



