thadt 39 minutes ago

Starts reading: "fantastic, this is what we've been needing! But... where is code signing?"

> One problem that WAICT doesn’t solve is that of provenance: where did the code the user is running come from, precisely?

> ...

> The folks at the Freedom of Press Foundation (FPF) have built a solution to this, called WEBCAT. ... Users with the WEBCAT plugin can...

A plugin. Sigh.

Fancy, deep transparency logs that track every asset bundle deployed are good. I like logging - this is very cool. But this is not the first thing we need.

The first thing we need is to be able to host a public signing key somewhere browsers can fetch it, and have the browser automatically verify the signature over the root hash served up in that integrity manifest. Then point a tiny, boring transparency log at _that_. That's the thing I really, really care about for non-equivocation. That's the piece that lets me host my site on Cloudflare Pages (or Vercel, or Fly.io, or Joe's Quick and Dirty Hosting) while still ensuring the software running in my client's browser is the software I signed.

This is the pivotal thing. It needs to live in the browser. We can't leave this to a plugin.
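
Roughly the check I have in mind, as a sketch (the .well-known paths, the manifest fields, and the choice of ECDSA/WebCrypto are all made up here, just to show the shape of it):

```ts
// Sketch only: verify a detached signature over the integrity manifest's
// root hash, using a pubkey fetched from the site. Everything here
// (paths, JSON fields, ECDSA P-256) is a stand-in, not anything specified.
async function verifyManifestRoot(origin: string): Promise<boolean> {
  // Hypothetical well-known location for the site's signing key (SPKI, base64).
  const spkiB64 = await (await fetch(`${origin}/.well-known/code-signing-key`)).text();
  const pubKey = await crypto.subtle.importKey(
    "spki",
    b64ToBytes(spkiB64),
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["verify"],
  );

  // Hypothetical manifest: the root hash the browser already checks assets
  // against, plus a detached signature over it.
  const manifest = await (await fetch(`${origin}/.well-known/integrity-manifest`)).json();
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    pubKey,
    b64ToBytes(manifest.signature),
    b64ToBytes(manifest.rootHash),
  );
}

function b64ToBytes(b64: string): Uint8Array {
  return Uint8Array.from(atob(b64.trim()), (c) => c.charCodeAt(0));
}
```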

  • doomrobo 5 minutes ago

    I'll actually argue the opposite. Transparency is _the_ pivotal thing, and code signing needs to be built on top of it (it definitely should be built into the browser, but I'm just arguing the order of operations rn).

    TL;DR you'll either re-invent transparency or end up with huge security holes.

    Suppose you have code signing and no transparency. Your site has some way of signaling to the browser to check code signatures under a certain pubkey (or OIDC identity if you're using Sigstore). Suppose now that your site is compromised. What is to prevent an attacker from changing the pubkey and re-signing under the new pubkey? Or just removing the pubkey entirely and signaling no code signing at all?

    There are three answers off the top of my head. Lmk if there's one I missed:

    1. Websites enroll into a code signing preload list that the browser periodically pulls. Sites in the list are expected to serve valid signatures with respect to the pubkeys in the preload list.

    Problem: how do sites unenroll? They can ask to be removed from the preload list, but in the meantime their site is unusable. So there needs to be a tombstone value recorded somewhere to show that it's been unenrolled (see the sketch after this list). The place it's recorded needs to be publicly auditable, otherwise an attacker will just make a tombstone value and then remove it.

    So we've reinvented transparency.

    2. User browsers remember which sites have code signing after first access.

    Problem: This TOFU method offers no guarantees to first-time users. Also, it has the same unenrollment problem as above, so you'd still have to reinvent transparency.

    3. Users visually inspect the public key every time they visit the site to make sure it is the one they expect.

    Problem: This is famously a usability issue in e2ee apps like Signal and WhatsApp. Users have a noticeable error rate when comparing just one line of a safety number [1; Table 5]. To make any security claim, you'd have to argue that users would be motivated to do this check, and get it right, for the safety numbers of every security-sensitive site they access, over a long period of time. This just doesn't seem plausible.

    [1] https://arxiv.org/abs/2306.04574
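
    To make (1) a bit more concrete, here's roughly the state the browser would need to track per site. The types are made up, not anything from the WAICT docs:

    ```ts
    // Made-up types for the preload-list idea in (1). The key point: "unenrolled"
    // has to be an explicit, recorded state, and the record has to be publicly
    // auditable, or an attacker can forge it and later erase the evidence.
    type PreloadEntry =
      | { state: "enrolled"; pubkey: string }            // browser must require valid signatures
      | { state: "unenrolled"; tombstonedAt: string };   // site opted out; requirement lifted

    function policyFor(entry: PreloadEntry | undefined): "require-signature" | "no-code-signing" {
      if (entry?.state === "enrolled") return "require-signature";
      return "no-code-signing"; // never enrolled, or tombstoned
    }
    // Without a transparency log behind it, nothing stops a forged "unenrolled"
    // entry from silently turning signature checking off for targeted users.
    ```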

jmull 3 hours ago

It would be helpful if they included a problem statement of some sort.

I don't know what problem this solves.

While I could possibly read all this and deduce what it's for, I probably won't... (the stated premise, "It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful," is not true.)

  • miloignis 3 hours ago

    For me, the key problem being solved here is to have reasonably trustworthy web implementations of end-to-end-encrypted (E2EE) messaging.

    The classic problem with E2EE messaging on the web is that the point of E2EE is that you don't have to trust the server not to read your messages, but if you're using a web client you have to trust the server to serve you JS that won't just send the plain text of your messages to the admin.

    The properties of the web really exacerbate this problem, as you can serve every visitor to your site a different version of the app based on their IP, geolocation, tracking cookies, whatever. (Whereas with a mobile app everyone gets the same version you submitted to the app store).

    With this proposed system, we could actually have really trustworthy E2EE messaging apps on the web, which would be huge.

    (BTW, I do think E2EE web apps still have their place currently, if you trust the server to not be malicious (say, you or a trusted friend runs it), and you're protecting from accidental disclosure)

    • jmull 2 hours ago

      It doesn't seem like there's much difference in the trust model between E2EE web apps and App Store apps. Either way the publisher controls the code and you essentially decide whether to trust the publisher or not.

      Perhaps there's something here that affects that dynamic, but I don't know what it is. It would help this effort to point out what that is.

      • fabrice_d 2 hours ago

        On the web, if your server is compromised, it's game over, even if the publisher is not malicious. In app stores, you have some guarantee that the code that ends up on your device is what the publisher intended to ship (basically signed packages). On the web, it's currently impossible to bootstrap that integrity verification with just SRI.

        This proposal aims to provide the same guarantees for web apps without resorting to signed packages on the web (i.e., not the same mechanism that FirefoxOS or ChromeOS apps used). It's competing with the IWA proposal from Google, which is a good thing.

    • knowitnone3 an hour ago

      everyone gets the same version that sends your secure messages to another server? I'm impressed.

  • CharlesW 2 hours ago

    > I don't know what problem this solves.

    This allows you to validate that "what you sent is what they got", meaning that the code and assets the user's browser executes are exactly what you intended to publish.

    So, this gives web apps and PWAs some of the same guarantees of native app stores, making them more trustworthy for security-sensitive use cases.

AndrewStephens 4 hours ago

As a site owner, the best thing you can do for your users is to serve all your resources from a server you control. Serving JavaScript (or any resource) from a CDN was never a great idea and is pointless these days with browser domain isolation; you might as well just copy any third-party .js in your build process.
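
E.g., a build step along these lines (paths and package names are just illustrative):

```ts
// Minimal "just vendor it" sketch: copy pinned third-party files into your
// own static directory at build time instead of loading them from a CDN.
import { copyFileSync, mkdirSync } from "node:fs";
import { basename, join } from "node:path";

const vendorFiles = [
  "node_modules/some-lib/dist/some-lib.min.js", // hypothetical dependency
];

mkdirSync("public/vendor", { recursive: true });
for (const src of vendorFiles) {
  copyFileSync(src, join("public/vendor", basename(src)));
}
// Now <script src="/vendor/some-lib.min.js"> comes from your own origin.
```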

I wrote a coincidentally related rant post last week that didn't set the front page of HN on fire, so I won't bother linking to it, but the TL;DR is that a whole range of supply chain attacks just go away if you host the files yourself. Each third party you force your users to request from is an attack vector you don't control.

I get what this proposal is trying to achieve, but it seems overly complex. I would hate to have to integrate this into my build process.

  • doomrobo 4 hours ago

    You're right that, when your own server is trustworthy, fully self-hosting removes the need for SRI and integrity manifests. But in the case that your server is compromised, you lose all guarantees.

    Transparency adds a mechanism to detect when your server has been compromised. Basically you just run a monitor on your own device occasionally (or use a third party service if you like), and you get an email notif whenever the site's manifest changes.
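
    A toy version of that loop (the manifest URL is made up, and a real monitor would also verify inclusion/consistency proofs against the log rather than just diffing):

    ```ts
    // Toy monitor: poll the site's (hypothetical) manifest URL and flag changes.
    import { createHash } from "node:crypto";

    const MANIFEST_URL = "https://example.com/.well-known/integrity-manifest"; // made up
    let lastDigest: string | undefined;

    async function checkOnce(): Promise<void> {
      const body = await (await fetch(MANIFEST_URL)).text();
      const digest = createHash("sha256").update(body).digest("hex");
      if (lastDigest !== undefined && digest !== lastDigest) {
        console.warn(`manifest changed: ${lastDigest} -> ${digest}`); // send your notification here
      }
      lastDigest = digest;
    }

    setInterval(checkOnce, 60 * 60 * 1000); // e.g. hourly
    ```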

    I agree it's far more work than just not doing transparency. But the guarantees are real and not something you get from any existing technology afaict.

    • EGreg 3 hours ago

      If they want to make a proposal, they should have httpc://sha-256;... URLs, which are essentially constant, the same as SRI but for the top-level document.

      Then we can really have security on the Web! Audit companies (even anonymous ones, provided they have a good reputation) could vet certain hashes as being secure, and people and organizations could see a little padlock when M of N of them approved a new version.

      As it is, we need an extension for that, because SRI is only for subresource integrity. And it doesn't even work on HTML in iframes, which is a shame!
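
      Roughly what a browser would have to do natively for such a URL (the httpc:// syntax here is invented just to illustrate the idea):

      ```ts
      // Sketch: fetch a document through a made-up content-addressed URL scheme,
      // httpc://sha-256;<base64-hash>/<host>/<path>, and reject it if the body
      // doesn't hash to the value baked into the address.
      async function fetchContentAddressed(httpcUrl: string): Promise<string> {
        const withoutScheme = httpcUrl.slice("httpc://".length);
        const [hashPart, ...pathParts] = withoutScheme.split("/");
        const expected = hashPart.split(";")[1]; // base64 SHA-256 from the URL

        const res = await fetch("https://" + pathParts.join("/"));
        const body = new Uint8Array(await res.arrayBuffer());
        const digestBytes = new Uint8Array(await crypto.subtle.digest("SHA-256", body));
        const actual = btoa(String.fromCharCode(...digestBytes));

        if (actual !== expected) throw new Error("document does not match its address");
        return new TextDecoder().decode(body);
      }
      ```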

      • ameliaquining 3 hours ago

        The linked proposal is basically a user-friendlier version of that, unless you have some other security property in mind that I've failed to properly understand.

everdrive 2 hours ago

I improve the trustworthiness of js by blocking it by default.

some_furry 6 hours ago

This is really cool, and I'm excited to hear that it's making progress.

Binary transparency allows you to reason about the auditability of the JavaScript being delivered to your web browser. This is the first significant step towards a solution to the "JavaScript Cryptography Considered Harmful" blog post.

The remaining missing pieces here are, in my view, code signing and the corresponding notion of public key transparency.

zb3 5 hours ago

Ok (let's pretend I didn't see the word "blockchain" there), but none of this should interfere with browser extensions that need to modify the application code.

  • some_furry 4 hours ago

    EDIT: Disregard this comment. I think there was a technical issue on my computer. Keeping the original comment below.

    -----

    > let's pretend I didn't see the word "blockchain" there

    There's nothing blockchain about this blog post.

    I think this might be a rectangles vs. squares thing. While it's true that all blockchains use chains of hashes (e.g., via Merkle trees), it's not true that every use of an append-only data structure is a cryptocurrency.

    See also: Certificate transparency.

    • JimDabell 4 hours ago

      They specifically suggest using a blockchain for Tor:

      > A paranoid Tor user may not trust existing transparency services or witnesses, and there might not be any other trusted party with the resources to self-host these functionalities. For this use case, it may be reasonable to put the prefix tree on a blockchain somewhere. This makes the usual domain validation impossible (there’s no validator server to speak of), but this is fine for onion services. Since an onion address is just a public key, a signature is sufficient to prove ownership of the domain.

      • some_furry 4 hours ago

        Oh, weird. I didn't see that (and a subsequent Ctrl+F showed 0 results) but now it's showing up for me?