20

HTTP

I'm aware that HTTP sends plain text over the network, which can be sniffed and modified if a man-in-the-middle (MITM) attack is performed.

HTTPS

On the other hand, HTTPS sends encrypted text over the network that can neither be sniffed nor modified.

Other?

I'm wondering if there is an in-between where the traffic can be sniffed but not modified. I was thinking the server could just sign every packet using a certificate issued by a CA.

I'm also aware of manually verifying hashes of downloaded files, but since those hashes are served over a modifiable channel (HTTP), this doesn't really provide any authenticity: the hash could be modified to match the modified file. As @mti2935 suggested, the hash could be sent over HTTPS, but I'm looking for a preexisting protocol to handle all this.
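
For concreteness, here is a minimal sketch of that manual approach with mti2935's suggestion applied; the URLs are hypothetical and the .sha256 file is assumed to start with the hex digest:

```python
# Minimal sketch: file over plain HTTP, expected hash over HTTPS, then compare.
# The URLs are placeholders; the hash file is assumed to start with the hex digest.
import hashlib
import urllib.request

file_bytes = urllib.request.urlopen("http://downloads.example.test/image.iso").read()
hash_line = urllib.request.urlopen("https://www.example.test/image.iso.sha256").read()
expected = hash_line.split()[0].decode()

if hashlib.sha256(file_bytes).hexdigest() == expected:
    print("Integrity OK: the file matches the hash served over HTTPS")
else:
    print("Mismatch: the file (or the HTTP transfer) was modified")
```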

Why

I'm sure this question raises the obvious question of why, so here are a few example scenarios.

  1. A user wants to allow their network security device to scan downloaded files for malware without having to modify their trust store.
  2. I'm a ham radio operator and I'd like to stream movies over ham bands, but I'm not allowed to encrypt. I do care about the video maintaining its integrity, but I don't care about someone else snooping.
  3. Sites that only distribute data and don't need encryption but do need data integrity.
  • 6
    The hash could be served by HTTPS and the file by HTTP, if integrity is a concern but secrecy is not. This might save some processing overhead, as the hash would typically be much smaller than the file. But having said that, HTTPS is very inexpensive to deploy these days.
    – mti2935
    Commented Oct 15, 2020 at 21:52
  • 19
    WHY in the first place? This looks like an XY problem to me, i.e. you ask for a protocol with specific properties but you don't present a real-world use case where such a protocol is even useful. Because of this, it is not even clear whether the use case you have in mind would not be better solved by another, existing protocol anyway. "I want everyone on the network to be able to see what each user is doing." - Also, WHY? And this would require everybody to have access to all traffic of all others in the first place. Commented Oct 16, 2020 at 4:17
  • 2
    Authoritarian governments would also find this useful, and "authenticity" is the compromise. No standards organization would listen to them if they promote plaintext communication but if they have the aim to upgrade all plaintext communications to signed plaintext to strengthen national security then someone would come up with a standard, which they could then use to replace encrypted communications for national security.
    – MCCCS
    Commented Oct 16, 2020 at 18:30
  • 5
    @MikeSchem: Thanks for the clarification. Although many would likely disagree with at least "Sites like Wikipedia, don't need encryption, they only need integrity,". Having everybody see which topics you visit can be quite invasive of your privacy: Wikipedia provides not only technical information but also political information, information about diseases, pharmaceutics, ... - and which Wikipedia pages a person visits says a lot about them, their interests, and their problems. I have similar, though not as strong, objections to the two other examples. Commented Oct 16, 2020 at 20:10
  • 20
    "Sites like Wikipedia, don't need encryption" - What on earth would lead you to that assumption? Even apart from the fact that Wikipedia offers a Signup/Login service, I surely don't want anyone to know which specific Wikipedia pages I visited when. Commented Oct 16, 2020 at 22:06

7 Answers

40

SSL/TLS before 1.3 has some 'with-NULL' cipher suites that provide NO confidentiality, only authentication and integrity; see e.g. RFC 5246 Appendix C and RFC 4492 Section 6, or just the registry. These do the usual handshake, authenticating the server identity using a certificate (and optionally also the client identity) and deriving session/working keys, which are then used to HMAC the subsequent data (in both directions, not only from the server) but not to encrypt it. This prevents modification or replay, but allows anyone on the channel/network to read the traffic.

These cipher suites are very rarely used and always (to the best of my knowledge) disabled by default. (In OpenSSL they are excluded not only from DEFAULT but even from the otherwise complete set ALL -- to get them you must specify an explicit suite, the set eNULL aka NULL, or the set COMPLEMENTOFALL, a name which grates horribly on any mathematician!) I very much doubt you'll ever get any browser to use them, and probably not most apps or even many packaged servers. But if you control the apps at both ends of an HTTPS connection -- or perhaps proxies for the apps -- this does meet your apparent requirement.
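
For anyone who does control both ends, a minimal sketch of requesting such a suite from Python's ssl module might look like the following; the hostname is a placeholder, it assumes an OpenSSL build that still ships the eNULL suites, and most real servers will simply refuse the handshake:

```python
# Sketch: ask for an integrity-only (NULL-encryption) TLS 1.2 suite.
# Many OpenSSL builds exclude these entirely, in which case set_ciphers() raises.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # NULL suites do not exist in TLS 1.3
ctx.set_ciphers("eNULL")                      # certificate auth + HMAC, no encryption

with socket.create_connection(("server.example", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="server.example") as tls:
        print(tls.cipher())  # e.g. ('NULL-SHA256', 'TLSv1.2', 0) if negotiated
        tls.sendall(b"GET / HTTP/1.1\r\nHost: server.example\r\n\r\n")
        print(tls.recv(4096))
```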

TLS 1.3 changes how cipher suites are used and no longer has this functionality. As time goes on, 1.3 will become more widespread, and it is likely that 1.2 and 1.1 will be dropped in the foreseeable future. (1.0 has already been dropped in many places, though not all. SSL 3 is badly broken by POODLE and has been dropped essentially everywhere.)

Belatedly found dupe, from before 1.3: Can TLS provide integrity/authentication without confidentiality

  • 7
    @user253751 since it's a two-way negotiation, in an ideal world, both would disable it, but in practice getting users to stop using legacy clients is much harder than configuring servers.
    – James_pic
    Commented Oct 16, 2020 at 12:24
  • 30
    @user253751 whilst it's true that there's nothing the server can do in the case of the "the client is malicious" threat model, "the client is an idiot" is nonetheless a common enough threat model that basic mitigations on the server (like disabling insecure cipher suites) are worthwhile.
    – James_pic
    Commented Oct 16, 2020 at 12:36
  • 8
    Apparently (and back in 2016), Windows Update used null cipher TLS: twitter.com/SwiftOnSecurity/status/745333018697469953
    – Ben S
    Commented Oct 16, 2020 at 20:57
  • 2
    @user253751 "Surely it should be the client's job to not accept bad security parameters." - In some protocols, the cipher negotiation phase is not well-protected against tampering. Suppose the client sends supported: [NULL, AES-256], and an attacker modifies this in-flight to supported: [NULL]. If the server accepts this, free eavesdropping! This is known as a downgrade attack. SSL/TLS has had its share of vulnerabilities with this, hence the advice to simply refuse insecure ciphers. Why allow them anyway?
    – marcelm
    Commented Oct 18, 2020 at 11:23
  • 1
    Does TLS 1.3 only forbid these cipher suites (as in "no common client program would accept it"), or does it have changes which make it impossible (as in "I cannot build a client which accepts it")?
    – allo
    Commented Oct 18, 2020 at 14:27
12

Yes, Signed Exchanges (SXG)

Such a mechanism does exist, although it is very new and somewhat controversial.

  • SXGs are supported in Chromium browsers (Chrome 73, Edge 79, Opera 64).
  • Mozilla considers SXGs harmful; Firefox does not support them.
  • Safari does not support them. Apple has apparently expressed "skepticism" about the proposal, although I couldn't find anything authoritative.

SXGs are controversial because some view the proposal as an attempt by Google to impose a standard on the community in support of Google's also controversial AMP project. In short, SXGs were designed to allow browsers to display a publisher's URL in the URL bar even though the content was actually hosted by Google.

Editorial: This is a rather unfortunate situation since the proposal does have technical merits. While I do find AMP entirely distasteful, a spec that enables secure caching of HTTP resources at the LAN level is highly interesting. The SXG spec itself is also generic enough to be used in other use cases.


An SXG is a binary format that encapsulates an HTTP request and response (headers and payload) and signs them with a certificate issued to the origin domain. The SXG file is not encrypted and can be distributed in any way, including over plain HTTP or even on a flash drive.

The certificate used to sign an SXG is mostly similar to a standard X.509 cert used for HTTPS, but it must be issued by a trusted CA with a CanSignHttpExchanges extension if it is to be trusted by browsers. (Such certificates are not widely available yet.)
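
As a small illustration of the "distributed in any way" point, here is a sketch of serving a pre-built SXG over plain HTTP with Python's standard library; article.sxg is assumed to have been produced separately with SXG tooling and signed with a CanSignHttpExchanges certificate:

```python
# Sketch: the transport needs no TLS; authenticity travels inside the .sxg itself.
# Serves the current directory, mapping .sxg files to the signed-exchange MIME type.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class SXGHandler(SimpleHTTPRequestHandler):
    extensions_map = {
        **SimpleHTTPRequestHandler.extensions_map,
        ".sxg": "application/signed-exchange;v=b3",
    }

HTTPServer(("", 8080), SXGHandler).serve_forever()
```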

  • 1
    Wow, that's an uncomfortable use case, but thanks - that does answer my question.
    – MikeSchem
    Commented Oct 18, 2020 at 15:17
  • I wonder what the pros and cons would be of a protocol which encapsulates within a URL a hash of a file header which would in turn be expected to contain a hash for the entire file or a list of hashes for segments thereof? One would use https:// to receive a page or script containing the URL, but the page itself could be served by anyone. If the document was updated, the page containing its URL would need to change, but there would be no danger of a cached copy of the referred-to page being mistaken for a fresh one.
    – supercat
    Commented Dec 24, 2020 at 20:54
  • 1
    @supercat you've basically just described BitTorrent or IPFS
    – josh3736
    Commented Dec 24, 2020 at 21:55
  • @josh3736: Except that BitTorrent (not sure about IPFS) uses a completely different protocol and addressing scheme. I was envisioning something that would be processed like HTTP, but with browsers deferring processing of received content until the hash could be validated, and with the URL containing a hash as well as information sufficient to locate the content in a manner not reliant upon the hash.
    – supercat
    Commented Dec 24, 2020 at 22:35
  • @supercat Subresource Integrity? Commented May 6, 2023 at 2:02
3

There could be, but is not

Such a protocol would be plausible. However, it has no substantial advantages over HTTPS, and there has not been a strong business need to drive the adoption of such a protocol, so it has not been implemented or adopted by the makers of mainstream browsers and servers.

It seems to me that the only use case for such a protocol would be in unusual niche conditions, so it seems appropriate that a custom niche protocol be developed and used (likely by modifying open-source servers and browsers) in scenarios where there is a desire for everyone on the network to see what each user is doing while still enforcing the authenticity of HTTP transfers; this is an anti-feature for mainstream consumer use cases.

I'd expect mainstream browsers to intentionally refuse to include support for such a protocol, as the ability to downgrade user connections to a less secure option is considered a security risk, and it is considered preferable to have security (including privacy) be mandatory rather than merely a default option: users should not have an easy way to reduce security, as it would be abused. Features along these lines were proposed and discussed during the development of the SSL/TLS/HTTPS standards, and intentionally left out of the final standards.

  • 1
    The development of a protocol for transmitting data packets securely over a network has little to do with those who create browsers and servers. Besides, this is all encapsulated and processed at different levels, using unrelated software libraries. The communication part and the signing and/or encryption use different code that isn't part of the server or the browser! The browser parses and renders HTML etc.; the server stores various forms of code that get translated into HTML, or serves static files. You're mixing way too many unrelated topics in your answer.
    – Sederqvist
    Commented Oct 16, 2020 at 6:22
3

In addition to the practice of PGP-signed web pages mentioned by Albert Goma, there is an HTTP draft from the httpbis working group for HTTP signing:

Signing HTTP Messages

Note: this draft expired a few days ago (12 Oct), but since it comes from an active working group and is derived from the seven-year-old cavage-http-signatures draft, I expect it to be renewed shortly.

3

I am not aware of any standardized protocol that involves signing the content only, but it seems like a necessity for publishers to protect content with their own private keys rather than with SSL/TLS-certified ones, now that mass surveillance and authoritarianism are becoming trendy and centralized Certificate Authorities may have become attacker accomplices, knowingly or not.

Apparently people have been signing HTML pages with PGP since the early 2000s, and there is currently a Firefox and Chrome extension that verifies them (source code). Of course, in order to verify one of these pages you need to have the correct public key, which must be acquired over a secure channel and configured manually in the extension, which is far from usable at scale.

Nevertheless, as that extension is under a BSD license, anyone could modify it to acquire the public keys from a verifiable source, like the distributed hash table Tor uses for its onion services, and later publish a Request For Comments to have the protocol standardized...
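
As a rough sketch of what such verification boils down to (file names are hypothetical, and the publisher's public key is assumed to already be in the local GnuPG keyring, obtained out of band):

```python
# Sketch: verify a detached PGP signature over a page that travelled in the clear.
# gpg exits non-zero if the signature is bad or the signing key is not known/trusted.
import subprocess

def verify_page(page_path: str, sig_path: str) -> bool:
    result = subprocess.run(["gpg", "--verify", sig_path, page_path],
                            capture_output=True, text=True)
    return result.returncode == 0

if verify_page("index.html", "index.html.asc"):
    print("Signature OK: content is authentic even though it was not encrypted")
else:
    print("Verification failed: treat the content as tampered")
```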


UPDATE: Those who'd like to dig deeper into verifiable sources may want to have a look at the Solutions section of the Wikipedia article on Zooko's triangle.

  • 5
    Ugghhh, Tor's distributed hash table is not a "verifiable source", and it cannot be used for distribution of PGP public keys. The reason Tor's DHT works is that you cannot choose your own .onion domain name, but all it does is move the problem of distributing "trustworthy" public keys to distributing "trustworthy" .onion addresses. And it's not even that: the DHT is used for service discovery, not to distribute trust/public keys/domain names. It doesn't distribute trust, in the same way that traditional PGP key servers don't distribute trust.
    – Lie Ryan
    Commented Oct 16, 2020 at 12:06
1

Yes, peer-to-peer file sharing protocols such as BitTorrent

The BitTorrent protocol is a commonly used protocol that provides data integrity but not encryption. Like most modern peer-to-peer file-sharing systems, and peer-to-peer network systems in general, BitTorrent uses cryptographic hashes to prevent both accidental and malicious modification of content.

One common application where we want data integrity but don't need encryption is broadcasting publicly available files to everyone who wants them.

Transmitting the files unencrypted allows web accelerators, content delivery networks, and other intermediate caches to speed up delivery of commonly accessed files to end users and puts less load on the originating servers, so it lowers costs for everyone.

Many web browsers are already set up so that when I click on a link to, say, the small torrent file for the latest Ubuntu ISO image, the web browser downloads that file from a standard ("secure") HTTPS web server and then uses a BitTorrent client to quickly download the complete Ubuntu image.

Many BitTorrent clients support web seeding, allowing a single client to download the pieces mentioned in a single torrent file and assemble an uncorrupted image from many standard ("unsecure") HTTP web servers, even if some of those servers have been replaced by corrupt and malicious servers.
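
To make the integrity mechanism concrete, here is a small self-contained sketch of the per-piece hash check a BitTorrent-style client performs; the piece size and data are illustrative rather than parsed from a real .torrent file:

```python
# Sketch: the small, trusted .torrent metadata carries a SHA-1 digest per piece;
# every piece fetched from an untrusted peer or plain-HTTP web seed is checked
# against that list before being accepted.
import hashlib

PIECE_LEN = 256 * 1024  # a typical piece size; real torrents vary

def split_pieces(data: bytes):
    return [data[i:i + PIECE_LEN] for i in range(0, len(data), PIECE_LEN)]

# Publisher side: compute the per-piece digests that go into the .torrent file.
original = b"A" * (2 * PIECE_LEN + 1234)
piece_hashes = [hashlib.sha1(p).digest() for p in split_pieces(original)]

# Downloader side: accept a piece only if its hash matches the trusted list.
def verify_piece(index: int, piece: bytes) -> bool:
    return hashlib.sha1(piece).digest() == piece_hashes[index]

good_piece = split_pieces(original)[1]
bad_piece = bytes([good_piece[0] ^ 0xFF]) + good_piece[1:]
print(verify_piece(1, good_piece))  # True
print(verify_piece(1, bad_piece))   # False: discard and re-download this piece
```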

0

There are several variations of a request-signing protocol used by many cloud providers, e.g. [1][2].

This protocol is normally used over HTTPS, but it can easily be decoupled and used on top of HTTP.

The idea is to sign a selected set of headers (including Date, to mitigate replay attacks) and the request body, and also send a key ID (e.g. see keyId in [2]). The server then looks up a public key based on the keyId field and verifies that the signature is valid and matches that key.
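
As a rough, simplified sketch of the general idea (not the exact AWS or Oracle signature format): sign a canonical string built from the method, path, host, date, and a body digest, and send the signature plus a key ID in headers. The key, header layout, and host below are made up for illustration:

```python
# Simplified sketch of header/body request signing (not AWS SigV4 or Oracle's
# exact format). A canonical string over method, path, host, date and a body
# digest is signed; the signature and a key ID travel in headers so the server
# can look up the matching key and re-verify. Key and host are illustrative.
import base64, hashlib, hmac
from email.utils import formatdate

KEY_ID = "example-key-1"                 # hypothetical key identifier
SHARED_KEY = b"example-shared-secret"    # hypothetical pre-shared HMAC key

def sign_request(method: str, path: str, host: str, body: bytes) -> dict:
    date = formatdate(usegmt=True)       # signing the date mitigates replay
    digest = base64.b64encode(hashlib.sha256(body).digest()).decode()
    canonical = f"{method} {path}\nhost: {host}\ndate: {date}\ndigest: SHA-256={digest}"
    sig = base64.b64encode(
        hmac.new(SHARED_KEY, canonical.encode(), hashlib.sha256).digest()
    ).decode()
    return {
        "Host": host,
        "Date": date,
        "Digest": f"SHA-256={digest}",
        "Signature": f'keyId="{KEY_ID}",signature="{sig}"',
    }

print(sign_request("GET", "/reports/1", "api.example.test", b""))
```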

[1] https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html

[2] https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/signingrequests.htm
