Use Windows 10 Device Guard to Trust Your Software
As we noted in our Windows 10 Security overview, one of the exciting new features of Windows 10 Enterprise is Device Guard, an operating system feature for ensuring that only trusted code runs on your systems. At its best, Device Guard uses a signature-based model for trusting executables and libraries. However, in an ecosystem with as much legacy as Windows has, Device Guard also gives the enterprise the means to handle legacy applications.
Whitelists and Trust Models
Just about any security professional will tell you that the best way to protect a system from malware is not to blacklist bad software, which is essentially what signature-based antivirus software does, because you’re always a day behind the bad guys. Rather, you should maintain a whitelist, or some means of determining what software you can trust, and then only allow trusted software to run on the system. Maintaining a strict whitelist would be an administrative nightmare, so digital-signature-based methods of managing trust have evolved. For example, Kernel Mode Code Integrity (KMCI) was introduced with Windows Vista, requiring that all device drivers be signed with a trusted certificate. Device Guard adds User Mode Code Integrity (UMCI) to the mix, with the ability to set policy on what signatures are required for applications to run and how to handle exceptions.
One of the early uses of signed code is in Java. Code is distributed in JAR files, which are really just zip files with a code signature. The code must be signed by a trusted certificate (one belonging to a specifically trusted publisher or issued by a trusted certificate authority), which both authenticates the publisher of the software and guarantees the integrity of the archive.
Probably the most famous signature-based trust model at this time is Apple’s. An iOS app (and an app in recent releases of OS X) must be signed by a developer certificate issued by Apple in order to run on the device. Note that an app is actually a directory hierarchy containing a code signature that attests to both the source and the integrity of all components of the app. This works for Apple because iOS is a very young operating system, and the application distribution ecosystem is closed. All roads between a software developer and a user’s device go through Cupertino. So building a whitelist for iOS is simple: if the app is signed with a trusted certificate, it runs. If it’s not, it doesn’t.
The Windows ecosystem is not so simple. The OS is mature, and enterprises have lots of legacy applications, both internally developed and from third parties. The ecosystem is open, so it can’t be whitelisted by trusting signing certificates issued by a small number of certificate authorities, nor can it be whitelisted by trusting only signed binaries. There are a lot of unsigned objects contained in legacy applications.
The key is that Device Guard is policy driven. It is up to the enterprise to make some basic policy decisions as part of the deployment process. For example, will your trust model be centered on trusted publishers (specific signing certificates), on the trusted certificate authorities that issue those certificates, or on some other signature-based trust model?
Another great feature of Device Guard is that, on machines with appropriate hardware support, the Code Integrity policy engines for KMCI and UMCI run in a separate virtual container, where they can’t be compromised even by kernel-mode code.
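You can check whether that virtualization-based protection is actually running on a given machine. Here is a minimal sketch, assuming Windows 10 Enterprise with the Device Guard feature present, using the Win32_DeviceGuard WMI class:

```powershell
# Query Device Guard status via WMI (Windows 10 Enterprise).
$dg = Get-CimInstance -ClassName Win32_DeviceGuard `
        -Namespace root\Microsoft\Windows\DeviceGuard

# 2 means virtualization-based security is enabled and running.
$dg.VirtualizationBasedSecurityStatus

# Security services running inside the container, such as
# hypervisor-enforced code integrity; see the class documentation
# for the meaning of each code.
$dg.SecurityServicesRunning
```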
Getting There: Implementing Device Guard
Planning and policy making
There are some excellent articles on deploying Device Guard, and reading them as part of your planning process would be a good idea. Some questions you need to ask as part of your planning and design are:
- How do we categorize our endpoints? For example:
- End User workstations
- Single purpose machines or kiosks
- Developer workstations
- What is the code integrity security posture for each?
- Audit only
- Full enforcement
For any class of machines where users are not empowered (by policy, by technology, or both) to install software, your goal should be full enforcement – no untrusted code is allowed to run on the machine. For machines where users need to install software, such as developer workstations, you might want to take an audit-only posture, although that increases risk, since you will only detect untrusted code after the fact.
You also need to decide on your trust model. Do you want to trust specific software publisher certificates, or do you want to trust the major certificate authorities that issue those certificates? Or do you only want to trust the signatures on specific releases? If you don’t want to tear your hair out every month on Patch Tuesday, I would suggest that you trust at the publisher level rather than the release level.
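To make that choice concrete, here is roughly how those options map onto the -Level parameter of the New-CIPolicy cmdlet used below (a sketch; see the ConfigCI module documentation for the full list of rule levels):

```powershell
# -Level expresses the trust model for the generated rules:
#   PcaCertificate  - trust the issuing certificate authority
#   Publisher       - trust a specific publisher (issuing CA plus the
#                     common name on the leaf certificate)
#   FilePublisher   - trust specific releases (publisher plus file name
#                     and minimum version): the Patch Tuesday treadmill
New-CIPolicy -Level Publisher -ScanPath C:\ -UserPEs -FilePath .\Policy.xml
```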
Build and test your policies
Now you need to create policies for each of your machine classes. You start with some exemplar machines from each class, each of which should have as complete a set of installed software as possible. Using the code integrity tools, you scan the machines to build an XML policy file from your policy decisions and the contents of the machine. Whatever your policy is about signatures, you will want to tell the tools to use file hashes as a fallback because (surprise) there are a lot of unsigned executable and library files in your system. If you have more than one exemplar for each class, there are tools to merge the policy files. Finally, the XML policy file is compiled into a binary file. By default, the policy operates in audit mode.
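In PowerShell terms, the flow looks something like this (a sketch; the file names and the Publisher trust level are our assumptions from the discussion above):

```powershell
# Scan an exemplar machine, trusting at the publisher level and falling
# back to file hashes for unsigned binaries. -UserPEs includes
# user-mode executables, not just drivers.
New-CIPolicy -Level Publisher -Fallback Hash -UserPEs `
    -ScanPath C:\ -FilePath .\Exemplar1.xml

# Merge the policies built from several exemplars of the same class.
Merge-CIPolicy -PolicyPaths .\Exemplar1.xml, .\Exemplar2.xml `
    -OutputFilePath .\WorkstationPolicy.xml

# Compile the XML policy into the binary form the OS consumes.
# A freshly generated policy includes rule option 3, "Enabled:Audit Mode".
ConvertFrom-CIPolicy -XmlFilePath .\WorkstationPolicy.xml `
    -BinaryFilePath .\SIPolicy.p7b
```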
To test the policy, it must be enabled through either group policy or local security policy, as must the use of hardware virtualization. Enabling the code integrity policy locally may require a reboot (or two). Then you let it run and check the audit logs with Event Viewer. This will give you some things to tweak in your policy – hopefully not too many. If you want to add a modest number of rules to your policy, build the modifications and then merge them in.
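For a quick local test, one approach (a sketch, assuming the default local policy location; group policy can point elsewhere) is to copy the compiled policy into place, reboot, and then read the code integrity log from PowerShell instead of clicking through Event Viewer. In our experience event ID 3076 is the audit-mode "would have been blocked" event, but verify the IDs against your build:

```powershell
# Drop the compiled policy where the local machine looks for it by default.
Copy-Item .\SIPolicy.p7b C:\Windows\System32\CodeIntegrity\SIPolicy.p7b
Restart-Computer

# After exercising the machine for a while, list recent audit events.
Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' |
    Where-Object Id -eq 3076 |
    Select-Object TimeCreated, Message -First 20
```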
Deploying and maintaining your policies
Once you have a policy that works, you can distribute it with System Center or group policy and then enable it with group policy. Of course, it doesn’t end there. Applications get added all the time, both third-party and enterprise ones. Rather than having to rescan every time you add or update an application, Microsoft has introduced the concept of catalogs, which are signed collections of file hashes stored in a catalog directory on the system. Part of the packaging discipline for your enterprise applications should include both signing the binaries and creating and signing a catalog to be installed by the MSI file. There is even an audit tool to help you create catalogs just by installing an application.
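That audit tool is PackageInspector, which ships with Windows 10 Enterprise. A sketch of the packaging step (the catalog and certificate names here are our own placeholders):

```powershell
# Start watching the C: volume for files the installer touches.
PackageInspector.exe Start C:

# ... run the application's installer here ...

# Stop the scan, emitting a catalog definition file and the catalog itself.
PackageInspector.exe Stop C: -Name .\LOBApp.cat -cdfpath .\LOBApp.cdf

# Sign the catalog with your enterprise code-signing certificate so that
# machines trusting that publisher will trust the catalog's contents.
signtool.exe sign /n "Contoso Signing Cert" /fd sha256 /v .\LOBApp.cat
```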
Once you have a policy that is not generating audit events for software that should be trusted, you can try turning on enforcement (which you do, paradoxically, by removing the audit rule from your XML policy file). In enforcement mode, any untrusted executable or library will be blocked, which is the goal of whitelisting.
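Concretely, that means deleting rule option 3, "Enabled:Audit Mode", and recompiling (a sketch, reusing the policy file names from above):

```powershell
# Remove the audit-mode rule option to switch the policy to enforcement.
Set-RuleOption -FilePath .\WorkstationPolicy.xml -Option 3 -Delete

# Recompile and redeploy the now-enforcing binary policy.
ConvertFrom-CIPolicy -XmlFilePath .\WorkstationPolicy.xml `
    -BinaryFilePath .\SIPolicy.p7b
```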
Gotchas
There are some interesting things that can bite you in deploying whitelists through Device Guard:
- Even with a signature-based trust model, this will not be maintenance-free. New unsigned applications will need to be cataloged and the catalogs distributed. You need to introduce signing discipline into the development process for your enterprise applications.
- Various kinds of .NET assemblies pull in components that are not signed. For example, the various administrative snap-ins that run in the Microsoft Management Console are assemblies. You should run all the ones that are in use on your systems before your initial policy scan, so that any unsigned components they pull in are on disk to be hashed.
- Installers pull components into the system, many of them temporary, and not all of them signed. Microsoft’s PackageInspector tool will watch the installation and ensure that every file involved in the installation is cataloged. Part of your SDLC needs to include creating catalogs for each new application and distributing them before shipping the installer files.
- While the policy building process can fall back to file hashes for unsigned files, hashes will not be collected for files with invalid signatures. For example, I recently found an application that was signed with a brand-new certificate (itself signed with SHA-256) but used MD5 (long discredited) as the digest algorithm in the signature itself. So the application had a signature that Windows would not validate, but its hash had not been collected either. Such applications must be cataloged.
- Antivirus updates may get picked up as untrusted code. We tested a bunch of popular AV packages and found one that created audit events when referencing updated components. The others updated cleanly, although one (Symantec) had a single DLL with an invalid signature. (See table below.)
Antivirus Package | Clean Updates
---|---
BitDefender |
Kaspersky |
McAfee |
Sophos |
Symantec |
Conclusion
Device Guard is an exciting new technology that will really up the game in preventing malware from executing on enterprise systems. Because it uses a separate virtual container for policy decisions, it’s also highly tamper-resistant. It’s not clear to us that this technology is ready for prime time yet, and we anticipate growing pains for early adopters. However, the time to invest is now, by putting a digital-signature-based discipline into your software development, packaging, and deployment processes and running Device Guard in audit mode. That way, when the kinks are worked out, you are ready for lockdown rather than just starting the process.
About Andy Sherman
Andy Sherman, Eden Technologies’ security practice lead, has a PhD in physics from Rensselaer Polytechnic Institute and started his career in the academic world. He then went to AT&T Bell Laboratories, where he discovered the power – and hazards – of large distributed computer networks. It was also at Bell Labs, during the early days of the Internet, that Andy became interested in the security problems associated with public networks. From Bell Labs Andy moved to the financial services industry. There he worked on a large range of infrastructure design, deployment, and management projects, but is best known for his 15+ years in information and technology security.