Distrusting the web server #538
I have only briefly skimmed the explainer and draft for Web Packages; however, the (browser?) functionality you are looking for could potentially make use of it. There is also this somewhat related discussion: https://discourse.wicg.io/t/alternative-to-webpackages-to-gurantee-integrity-document-hash/3026 |
@Malvoz This looks very good. |
Some time ago, I wrote a browser extension that would generate hashes of all loaded sources, create a hash of hashes, and check it against the hash from previous loads; in case they didn't match, it would check the signature of the final hash against the public key downloaded and stored on the initial visit to the website. This would guarantee that a) the user may decline to continue using a website if the source has changed, and b) changes to the website can be verified as authentic. This worked very well and protected the user even if the website itself had been compromised. I didn't pursue this development when Mozilla started requiring hoop-jumping to get extensions going, but I think it is (and was) a very elegant solution for protecting the integrity of any website. I still don't quite understand why we don't have any standards that accomplish something like that. |
Indeed! TLS should be providing authenticity; what I am concerned about is individual version upgrades of web applications. I don't like that a website administrator can decide to give me some other code without me being explicitly notified about such an update. I believe this could be implemented by monitoring the agent cache: on page load (before any script executes), if the web server overrides a file that was cached, the user is notified and must accept the upgrade to continue with site data; otherwise, site data can be cleared and the newer version executes. I realize JavaScript can load code on the fly. I'm trying to think of how it could best fit in the web ecosystem. First, the web server must be willing to offer such an ability to users, as without the administrator's cooperation such a thing cannot work (dynamically generated content etc.). The web server would send headers to explicitly encourage supporting agents to perform integrity checks.
The agent fetches the integrity.map.
The agent then checks individual files against their hashes as specified in the integrity map. However, the agent cannot practically prevent JavaScript from subverting these restrictions; if a web user sees JavaScript code in an upgrade of the web application that could subvert the restrictions, they should immediately notify the wider community of the issue. If any of the hashes do not match the linked-to resources, or if some subresources are missing SRI checks, the agent should warn the user that the web server is failing integrity checks and that it is not possible to continue because it is dangerous. It should also mention that the web server could be misconfigured. If the ApplicationIntegrityFileHash changes, then the web application has been updated; the agent should ask the user whether they wish to continue with current site data, clear site data and then continue, or not continue at all. There should be no possibility of running the older version of the application; that would be too hard to manage for a web server that has no intention of providing backwards compatibility. The agent should remember these ApplicationIntegrity headers so that they are checked against on every site visit. If the web server stops sending the ApplicationIntegrity headers, the agent should ask the user whether they wish to clear application integrity information and site data and then continue, or not continue at all. Please comment if you want to suggest any changes to the global idea of this proposal. |
@Leo-LB thumbs up. That is exactly what we need. Additionally, it could (and should) include the CSP headers and of course would have to disallow eval. However, I would like to suggest some addenda: should the script code change, users should not have the option to continue with the "old / cached" code, because that may simply break things - especially if the new code is a bug fix and/or makes changes to some data structures. It therefore must be "take it or leave it". Furthermore, we should investigate the option of signing the hashes (or the hashes of hashes) so that authorized changes to the script can be distinguished from much more dangerous "unauthorized" changes. |
@mischmerz It indeed should include the CSP headers; however, I do not know exactly how to achieve that, or whether there are more headers that should be included. I don't think blocking eval is good: eval can be used on script generated by or included in the web application; where it is dangerous is on remote content. And there's not only eval - someone could implement a Turing-complete script interpreter in JavaScript without using eval and achieve the same result. It is an endless race against potentially unsafe JavaScript features, and I do not think it is appropriate to run after them. I think web users can very well audit served code and notify the community of such concerns, nullifying the website's claims of being more secure due to the Application Integrity feature. I assumed there was no option to run the currently cached version when facing an update, and I agree it should be explicitly mentioned. However, should we be adding another level of authenticity when TLS already provides it? Agreed, TLS, requiring private keys to lie on the web server, is not ideal - especially when you have third parties such as Cloudflare providing reverse proxies. I am worried that implementing authenticity checks increases complexity by a lot (key revocation, expiration, warning about/blocking weak cryptography, ...). |
I think that CSP pinning is what we want when including CSP headers in the Application Integrity verification. |
@Leo-LB You are right of course - there are many more ways to modify script, not just eval. I personally would never use it, as I think it's just too dangerous, but other people's mileage may vary. The only reason I was thinking about signed fingerprints is the less intrusive way of handling day-to-day software updates. I often refactor code, fix bugs, or change certain elements - it would scare the living you-know-what out of me knowing that each change triggers some scary warning, leading the help desks to light up with concerned users. Now - the user agent should of course inform the users about the change (I do that anyway because I want them to reload), but if the changes were signed, we could do that in a less scary way - maybe even with a developer note explaining what happened. We could tie the pubkey to the domain's name server (like SPF), giving the domain owners full authority. |
@mischmerz A software update should not be scary; only mismatches or clearings of application integrity information should be. Software updates just need to happen in a more transparent relationship with the user. I am against including a developer note about what happened - that could too easily be abused in phishing scenarios; if it is done, I would prefer it hidden behind a second button, "Show notes", for example. And I don't think a DNS record is the right way to transfer the information: DNS information is relayed by intermediaries, and each of these could impersonate your signing key. I think that TLS with HSTS, HPKP and the Expect-CT header is able to provide a solution to these issues, without the need for another authenticity mechanism. You are trusting the providers that allow you to exist on the web not to cause such an "unauthorized change"; you inevitably need to. |
And to add: you should definitely separate the web servers serving static content from those exposing API or WebSocket endpoints. |
@Leo-LB Now I am confused. :) If any (or all) scripts are changed, the user gets a notification with the option to continue using the web service or to cancel. This by itself is very scary. Imagine something like "The website has changed. Do you wish to continue?" on your bank's website - and that happens every time a bug fix is applied, because the new hashes (or hashes of hashes) are different from the ones the user agent stored on previous visits. If we simply use an integrity map and compare hashes, changes done by malicious actors - or made under duress - would not be discovered by the user, because the actors would most likely update the integrity map as well. What I suggested was a way to somehow distinguish between authorized and unauthorized changes. |
You definitely should not make frequent changes when using such a mechanism; they should be batched together into an update that happens less often. I think it is possible to add different icons or colors to give the user different signals about the gravity of things. You are the one who should be scared if your server is compromised; as I said in a previous message, to reduce the probability of your static content being updated by a malicious actor, you can separate the web servers that handle static content from those that expose APIs. If a malicious actor can impersonate what should be identified as you, I don't think we will ever have a way to distinguish between "unauthorized" and authorized changes; access to the server should be managed with a strict policy - run the web server software and any other network-enabled service with read-only access to files. |
We could have a TOFU (Trust On First Use) signing key transferred in a header and saved long term, with predefined renewal periods. That would happen only on the first request; subsequent requests would ignore the header, and after the renewal period (say, 3 months), the agent could accept the same or a new key by blindly trusting it as well. The advantage is that you can sign updates from a machine separate from the web server (TLS private keys lie on the web server, so signing happens there), and that it reduces a malicious actor's ability to mount a successful attack. |
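A hypothetical sketch of what such a first response could look like, reusing the ApplicationIntegrity header naming from this proposal (all values and the key encoding are illustrative assumptions, not a spec):

```http
HTTP/1.1 200 OK
ApplicationIntegrityPublicSigningKey: ed25519;base64,MCowBQYDK2VwAyEA...
ApplicationIntegrityKeyExpiration: 2019-10-01
ApplicationIntegrityKeyRenewalWindow: 14
```

On the first visit the agent would pin the key; within the 14-day window before the expiration date a new key would be accepted TOFU-style, while an earlier unexpected change would trigger a warning.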
Or we could allow the ApplicationIntegrityFile to be stored on another origin, a malicious actor being less likely to control both. However, I am thinking that this could be abused to take a website offline by simply taking the ApplicationIntegrityFile offline, and it is not very optimized, because two TCP connections would need to be initiated (HTTP/2 tried to solve that bottleneck; it would be sad to add another one). |
YES! TOFU +1. I was about to suggest the very same thing. Though in my version the web devs would be free to change the key at any time (keys do get lost), triggering a warning to the users, and the key's expiration time would be much longer. So the workflow would be: stop the web service -> update scripts -> sign scripts/hashes -> restart the web service. I like that very much. This would address malicious changes even if the web server were fully compromised. It would also allow devs to simply delete the key if they fear being pressured by hostile entities; in this case, they wouldn't be able to modify content even under duress. |
Sounds good to me! The web server would be sending headers to explicitly encourage supporting agents to perform integrity checks.
The agent then fetches the ApplicationIntegrityFile.
The agent then checks individual files against their hashes as specified in the integrity map. However, the agent cannot practically prevent JavaScript from subverting these restrictions; if a web user sees JavaScript code in an upgrade of the web application that could subvert the restrictions, they should immediately notify the wider community of the issue. If any of the verifications fail, or if some subresources are missing SRI checks, the agent should warn the user that the web server is failing integrity checks and that it is not possible to continue because it is dangerous. It should also mention that the web server could be misconfigured.

If the ApplicationIntegrityFile changes, then the web application has been updated; the agent should ask the user whether they wish to continue with site data or not continue at all. The dialog should also offer a button where the user can read about the changes included in the update; these details are contained in the ApplicationIntegrityFile, before the signature and one empty line after the hashes. There should be no possibility of running the older version of the application; that would be too hard to manage for a web server that has no intention of providing backwards compatibility. All verifications should be performed again: signature, hashes and SRI checks.

If the ApplicationIntegrityPublicSigningKey changes ApplicationIntegrityKeyRenewalWindow or more days before the cached ApplicationIntegrityKeyExpiration date, the agent warns the user about continuing; it should display the fingerprint of the new signing key so that the user can check it against third-party sources before continuing. The user should be given the choice of clearing site data and then continuing, or not continuing at all.

If the ApplicationIntegrityPublicSigningKey changes within ApplicationIntegrityKeyRenewalWindow days before the cached ApplicationIntegrityKeyExpiration date, or after the ApplicationIntegrityKeyExpiration, then the new key is trusted blindly in a TOFU (Trust On First Use) model. The agent should remember these ApplicationIntegrity headers so that they are checked against on every site visit. The ApplicationIntegrityKeyExpiration and ApplicationIntegrityKeyRenewalWindow do not get updated in the agent's cache unless the ApplicationIntegrityPublicSigningKey changes. If the web server stops sending the ApplicationIntegrity headers, the agent should ask the user whether they wish to clear application integrity information and site data and then continue, or not continue at all. |
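The key-change rules above can be sketched as a small decision function (a sketch only; the function name, timestamp handling, and cache shape are assumptions for illustration):

```javascript
// Hypothetical sketch of the key-change decision described above.
// Dates are millisecond timestamps; field names mirror the proposed headers.
function classifyKeyChange(cached, presented, now) {
  // cached: { key, expiration, renewalWindowDays } from the agent's store
  if (presented.key === cached.key) return "unchanged";
  const windowMs = cached.renewalWindowDays * 24 * 60 * 60 * 1000;
  const renewalOpens = cached.expiration - windowMs;
  if (now >= renewalOpens) {
    // Within the renewal window, or past expiration: trust on first use.
    return "tofu-accept";
  }
  // Early, unexpected key change: warn and show the new key's fingerprint
  // so the user can check it against third-party sources.
  return "warn-user";
}
```

"tofu-accept" would also be the path taken on the very first visit, when no key is cached yet.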
I think I have finished editing my previous message into a mature version - tell me what you think? Also, what about developing a browser extension, plus the development tools necessary to sign web application files and make the web server send the new headers, to test such a feature in practice? This would further confirm that such a thing works, motivate changes to our proposal so it is ready for production, and increase the legitimacy of the feature for standardization. |
@Leo-LB I think you nailed it. Great work. I suppose all key generation and signing can be done with openssl. The new headers can simply be generated by a server-side script. As for the extension development - let me check my schedule. I am currently a bit busy with a project, but maybe I can find some time or ask around for some development help. I'll let you know. |
@mischmerz With apache2 httpd for example, we can use a |
I previously suggested something along these lines here: w3c/webappsec-subresource-integrity#66 The idea is simply to add subresource integrity support to service workers. Since service workers are JavaScript workers which can proxy all requests to a given origin, they would allow site operators to implement arbitrary verification policy in a service worker. This would establish a TOFU-like relationship where the service worker and integrity data is cached on the first visit to the site. Since this would use the subresource integrity mechanism, the service worker JS file would be referenced by a cryptographic hash, so it couldn't easily be changed. But that worker could implement logic to verify a chainloaded JS file via other, arbitrary means, implemented in JS (e.g. cryptographic signature). Such a worker could implement upgrades, replacing itself with a new service worker and integrity hash, in the same way. This would be a lot simpler and more flexible, and would be a fairly minor change. (It would also create scope for browser extensions which maintain a list of known service workers for added security, akin to HTTPS Everywhere, to close the TOFU loophole.) |
@hlandau I don't know exactly how service workers work. How can this service worker enforce a policy from the first visit? It needs to be loaded from the website first, no? What prevents the website from no longer serving the service worker on a given origin? How can the user be alerted of that? Is the service worker permanently registered to proxy requests to a given origin from the first visit? |
Yes.
My basis for proposing this is that the use case for this kind of "distrust the server" approach is when the server is transmitting JavaScript to the client. It would provide a basis for better securing browser-based crypto, for example. What exactly do you want to distrust if not JavaScript...? |
One may want to audit the newer code without running any JavaScript, or one may want to serve ISO files on a web server and use such a mechanism to automate the integrity verification that some currently do with PGP - and that Tails from The Tor Project even automated with a browser extension ( https://tails.boum.org/contribute/design/verification_extension/ ). For the case of Tails or ISOs, it's not the perfect example, because they are often downloaded only once, and then the ISOs auto-update and perform key verification themselves. I am simply uneasy about requiring JavaScript to provide such a feature. Also, the performance of cryptography implemented in JavaScript is rather bad, especially when your browser has JIT disabled. It would therefore increase page load times by a lot if the service worker had to validate an application that does not itself use cryptography (and thus does not suffer from such latency). |
@hlandau Here's the problem: we don't really have immutable ways to store TOFU data when using service workers. All databases available to script may expire, may be deleted through hard reloads (or clearing the cache), or may even be deleted through script itself. Yes - your approach looks elegant, but I think it's the overall job of the user agent to provide security independent of the code/script itself. |
@mischmerz The service worker wouldn't need to store any data; the cryptographic identities it trusts would be embedded in the script itself. The only thing that would need to be permanently stored is the (origin, service worker JS URL, integrity hash) tuple. |
So - what happens if I instruct script to update the service worker (after I changed it to an evil version with evil keys)? The service worker is not immutable either. |
@mischmerz You can't run scripts on the origin unless they were loaded by the service worker, because the service worker intermediates all HTTP requests. |
Found this: https://github.com/tasn/webext-signed-pages @mischmerz Seems like someone has made that extension already. |
FWIW, I just remembered this related discussion on SRI for |
@Leo-LB
I developed a working plugin and it worked great. It was focused on script code and made it impossible to add, remove or modify any script of a site or service without the developer's consent; otherwise the whole website wouldn't load anymore. I filed (and received) a provisional patent on the subject about 5 years ago. I wasn't interested in pursuing a 'real' patent, I just wanted to be sure that nobody else could either. See attachments.

However - due to the changes in the plugin environments, my plugin didn't work anymore and I didn't have time to redevelop it (and I am not even sure that the current plugin universe allows filtering a script before it is loaded into the DOM).

The extension you have suggested doesn't work with dynamic content, because they are signing even the HTML code. Most larger sites, however, generate dynamic content - e.g. to accommodate different languages or layouts. So that doesn't work either.

It would be comparatively easy to make (script) code-signing possible. As for extensions, developers like me would need a (read-only) filter allowing us access to the script code before it is loaded.

So - for the time being, the ball is no longer in my court.

Thanks for thinking of me :)

Michaela
|
Related: Web Applications should not have internet access by default (#578) |
However, a document hash could also be used to assess the integrity of a Web Bundle itself, as a Web Bundle is a single binary file, contrary to a PWA application. So a Web Bundle hash could be used to assess whether the Web Bundle has been tampered with, by comparing it with the hash present in the manifest bundled with it and also with a hash hosted on a trusted website. |
I would like to discuss the standardization of a mechanism with which user agents can choose (through the user interface) to trust or not trust a specific version of a web page or a set of web resources forming a webapp. Example use cases would be the Tutanota and Protonmail email services: users of these services could trust a version of the served webapp so that they know it is not being modified on the fly, potentially with added backdoors. An update to that webapp would have to be approved again by the user.