May 6, 2015

Can I trust your app?

Probably not. But is it even possible to trust anything these days? What would it take to make computers wholly trustworthy?

The main property of any IT system is that it does something useful. That's why IT systems get built and funded. Security is a non-core feature, along with performance and reliability. It's not completely worthless though. In general software development, what we do is apply security that is free or very cheap. It won't visibly affect the total budget, but it does add to product value. This includes things like encrypted connections and salted password hashes, or just publishing software as a website, utilizing the browser's sandbox to improve the user's security.
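To make the "cheap security" concrete, here is a minimal sketch of salted password hashing in Python. The iteration count and function choices are illustrative assumptions, not a recommendation for any particular system.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt defeats precomputed rainbow tables.
    salt = os.urandom(16)
    # PBKDF2 with many iterations makes brute force expensive;
    # 200,000 is an illustrative figure only.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)
```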

Some software is security-critical though, and such software can use security measures that consume significant portions of development and operations budgets. This includes heavily used software like email, chat, telephony, browsers, cloud storage, and some security-related software. While lots of security features can be added to the software, in the end there is still the issue of fundamental trust in the developer's integrity. Trust turns out to be quite expensive and the hardest part of overall security to get right.

There is this nice concept of an audit. If something is thoroughly audited by multiple competent and independent auditors, you can be reasonably sure it can be trusted. Getting your stuff properly audited costs a lot of money. Alternatively, it takes a lot of popularity to gain the attention of independent security researchers.

Opensource comes to mind. Can I trust opensource? Well, I mostly get things in binary form. Nobody audited the build process. Even if I get the source code, I can never be sure that it was audited. Auditors generally don't sign the code they review. Auditors might have been provided with a special version of the code where the security backdoor was removed. Or, conversely, I might have been provided with a version of the code where a backdoor was added.

I like the concept of certificate transparency. CAs that sign certificates for HTTPS sites were (and still are) a great source of security issues. They can secretly sign certificates they shouldn't. Certificate transparency is a mechanism where CAs have to publish the list of certificates they have signed. Multiple auditors monitor the process and check the consistency of the public certificate logs. In the future, browsers will reject websites whose certificates are not part of the publicly disclosed certificate registry. That way we can be sure we all see the same certificates and dirty CA practices can no longer be hidden.
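A rough sketch of the idea behind such a log, loosely following the RFC 6962 style of hashing: certificates are leaves of a Merkle tree, and the signed root commits the log to its entire history, so sneaking in an extra certificate changes the root that auditors have already observed. The tree construction below is simplified for illustration; real logs use the exact RFC 6962 shape for non-power-of-two sizes.

```python
import hashlib

def leaf_hash(cert_der: bytes) -> bytes:
    # RFC 6962 prefixes leaves with 0x00 to separate them from interior nodes.
    return hashlib.sha256(b"\x00" + cert_der).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(certs: list[bytes]) -> bytes:
    level = [leaf_hash(c) for c in certs]
    while len(level) > 1:
        nxt = [node_hash(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])  # odd node carried up unchanged (simplification)
        level = nxt
    return level[0]

certs = [b"cert-for-example.com", b"cert-for-example.org"]
root_before = merkle_root(certs)
# A CA cannot quietly sign an extra certificate: appending it changes the root
# that monitors and browsers have already seen.
root_after = merkle_root(certs + [b"rogue-cert-for-example.com"])
assert root_before != root_after
```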

A similar concept can be used for opensource. Just have a public place where all versions of the code must be registered (or at least their hashes, to keep things performant). It will then be very hard to sneak a modified version of the code to selected users. Auditors can then check not only the particular version they review, but also the complete version history for any backdoors that might have been inserted and subsequently removed in the past. Unfortunately, this level of source transparency is not supported by any of the common distribution channels, neither JavaScript nor mobile apps.
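What the registry check could look like on the receiving end, as a sketch: hash the source tree you were handed in a deterministic way and compare it against the hashes published in the public log. The registry structure and names here are hypothetical.

```python
import hashlib
from pathlib import Path

def source_tree_hash(root: str) -> str:
    # Hash file paths and contents in a fixed order so the result does not
    # depend on filesystem traversal order.
    h = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            h.update(str(path.relative_to(root)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()

# Hypothetical registry of release hashes published in an append-only public log.
PUBLISHED_RELEASES = {
    "1.4.0": "<sha256 hex of the registered source tree>",
}

def verify_release(root: str, version: str) -> bool:
    # If the tree we received is not in the public log, we may have been
    # handed a specially modified copy.
    return source_tree_hash(root) == PUBLISHED_RELEASES.get(version)
```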

While it is conceivable that operating systems might compile everything from source, this is not the case with today's mobile apps, and it's downright impossible for operating systems themselves, which must be distributed in compiled form in order to bootstrap. We can however extend source transparency into binaries by having a perfectly deterministic build (like Gitian) that always translates given source code into the same binary. Anyone can then verify that the build process wasn't manipulated. Of course, all security-critical aspects of the build process must be opensource and audited. The build environment itself must have the same or a higher level of trust than the compiled app. The build system should ideally have an audited bootstrap chain that makes it independent of other build systems.
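A sketch of what verification on top of a deterministic build could look like: independent builders compile the same audited source, publish the hash they obtained, and the binary is accepted only when enough of them agree. The attestation format and quorum rule below are made up for illustration, not Gitian's actual mechanism.

```python
import hashlib

def file_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical attestations from independent builders who compiled the same
# tagged source with the same deterministic toolchain.
attestations = {
    "builder-a": "<hash published by builder A>",
    "builder-b": "<hash published by builder B>",
    "builder-c": "<hash published by builder C>",
}

def binary_is_trustworthy(path: str, quorum: int = 2) -> bool:
    local = file_sha256(path)
    # Accept the binary only if enough independent builders reproduced
    # exactly the same bytes from the audited source.
    return sum(1 for h in attestations.values() if h == local) >= quorum
```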

Given well-designed and redundantly audited core software, operating system, and build, what else could go wrong? Hardware. Hardware can contain backdoors at the firmware level or the logic level. It can additionally suffer from security weaknesses that allow anyone with physical access to breach software security. Firmware security can be handled like software security, as long as the hardware logic is designed to permit unrestricted reading and writing of the firmware it contains. But logic backdoors and physical security are much harder problems.

Chip logic is actually software in compiled binary form, except that it cannot be easily examined and definitely cannot be easily changed. Chips can be decapped and their logic scanned, although this is an expensive process that requires specialized equipment, especially for new CPUs with sub-100nm features. Care must be taken to detect any hidden logic that might escape the standard decapping and imaging process. Decapping random samples is a way to safeguard against logic manipulation at mass scale, but checking one processor in a batch of one million is of no help when you are specifically targeted with modded chips or when you run large clusters where even one bad CPU out of a thousand can easily compromise your security.

It's a much better idea to have transparent and audited manufacturing. It is much easier to check, it provides higher security assurance, and it does so for every single manufactured chip. Decapping can still be used as a secondary security measure. Once the core chip logic is trustworthy, hardware security is reduced to protecting the hardware and any embedded software from tampering by people with physical access.

Tamper-proof hardware is tricky. Presently hardware suffers from ridiculous security issues like the infamous FireWire hack. Proper component sandboxing and on-bus encryption are essential before we can consider any tamper resistance techniques. Chips must be designed to withstand extreme inputs and power supply manipulation, perhaps with built-in filters and an internal battery. They must be designed to prevent leaking core secrets (i.e. crypto keys) via side channels.

Once passive eavesdropping on hardware is ruled out, we can proceed with prevention of active hardware manipulation. For this to work, chips or whole devices would have to be encapsulated in a sensor mesh capable of detecting physical intrusion and powered by an internal long-lived battery. If intrusion is detected, the chip must promptly delete all private keys in its possession, which renders the chip inoperable. The damaged chip can then be replaced with a new trusted one.

The above would permit us to build essentially secure devices that run a set of core secure applications. With effective certificate transparency, we can then let these devices communicate securely with each other. But that's not how present-day software works. We need to add the cloud to the mix to make the apps really useful.

But how to trust the cloud? The cloud is essentially a huge heap of hardware running some standard system software. We can trust the cloud as far as we can trust the hardware it runs on and the software it uses. The hardware and software security measures above are sufficient to make the cloud trustworthy.

Once we have all the pieces secured, we can build a chain of trust that leads from hardware through the cloud and operating system to the application. We can verify that whatever we see in the browser is a truthful representation of system state, and we can be sure that whatever we enter into the application will be handled according to that application's published security policy. TPM attempted to do just that, but it failed miserably, with no real security in the hardware and numerous weaknesses in the chain of trust that follows from it.
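The measurement part of such a chain can be sketched in a few lines. This mimics the extend-only register idea used in measured boot (TPM PCRs): each layer folds a hash of the next layer into the register before handing over control, so no later stage can erase or reorder earlier measurements. The layer names are just an illustration.

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    # Extend-only register: new value depends on the old value and on the
    # measured component, so history cannot be rewritten afterwards.
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

boot_chain = [b"firmware image", b"bootloader", b"operating system", b"application"]

register = bytes(32)  # starts at all zeros
for component in boot_chain:
    register = extend(register, component)

# A remote party that knows the expected hashes of every layer can recompute
# this value and compare it with a signed quote from the device.
print(register.hex())
```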

Perhaps one day, when I am much older, a trustworthy computing model like this will become common practice. Meanwhile we are stuck with tightening a screw here, a bolt there, overlaying the insecure system with complex and leaky intrusion detection systems, sandboxing the mess to limit damage, and hoping for the best. We are waiting for another Snowden to show us how foolish we are.
