A Roundup Of Leopard Security Features
Thomas Ptacek, Matasano 2007-10-29

Our friend Rich Mogull suggested that Leopard is “perhaps the most significant update in the history of Mac OS X - perhaps in the history of Apple”. Good little fanboy that I am, I had Leopard installed this weekend. Let’s evaluate the security advances.

The Good

Leopard gets a few things right.

Sandboxing

What It Is.

The XNU kernel now has a role-based access control system, applied at the system call and Mach call layer. You can write flexible policies about what any given program can and cannot do.

Why You Care.

Mail.app should not be allowed to add accounts to your system. Safari should not be allowed to load kernel extensions. iChat should not be allowed to install a backdoored SSH server. Smart security people assume that every application they run contains some hidden bug that will allow attackers to upload their own code into the running program. With kernel-enforced access control, even if an attacker does that, you still stand a chance.

What Leopard Gets Right.

Do not be deceived (and, like me, embarrassed) by the sandbox(7) documentation. Leopard sandboxes are flexible and interesting. They’re apparently compiled from Scheme programs (sandbox-compilerd embeds TinyScheme) that live in /usr/share/sandbox. You can break sandbox-compilerd open in TextEdit and read the compiled-in Scheme code; they’ve got a lot of the bases covered, including obscure stuff like SYSV IPC, the BSD sysctl interface, and signals.
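
If you want to poke at the sandbox layer from your own programs, the entry point is a sandbox_init() call exported by libSystem. It is not an officially supported API (more on that below), so treat the following as a minimal sketch of applying one of the built-in named profiles to the current process, nothing more:

    /* sandbox_demo.c -- drop the current process into one of Leopard's
     * built-in named sandbox profiles.  sandbox_init() is exported by
     * libSystem but isn't an officially supported Leopard API, so this
     * is strictly an illustration.  Build: cc sandbox_demo.c */
    #include <sandbox.h>
    #include <stdio.h>

    int main(void)
    {
        char *err = NULL;

        /* Named profile: deny all networking from this point on.  Other
         * built-ins include kSBXProfileNoWrite and
         * kSBXProfilePureComputation. */
        if (sandbox_init(kSBXProfileNoNetwork, SANDBOX_NAMED, &err) != 0) {
            fprintf(stderr, "sandbox_init: %s\n", err);
            sandbox_free_error(err);
            return 1;
        }

        printf("sandboxed: any socket this process opens should now fail\n");
        return 0;
    }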

What Leopard Gets Wrong.

Three things.

  1. They didn’t document any of this. You can’t officially use this API to secure your own code. We can’t read their code or specifications to test whether it’s secure.

  2. The existing profiles suck. For instance, the Leopard “Quick Look” feature is billed as a test case for sandboxing, because it automatically opens and parses content in your download folder. But all Quick Look sandboxing does is restrict network access. Who cares? A Quick Look exploit is just going to install a trojan somewhere else, and that trojan won’t be governed by sandboxes.

  3. Almost nothing you care about is sandboxed. For instance: Mail, Safari, and iChat.

My Verdict.

It’s not a pretty win, but I’ll take it. Sandboxes are better in a variety of ways than what off-the-shelf Vista provides. Apple should provide an “Instruments.app”-style interface on this feature, like they did for DTrace, and let a community develop around hardening Darwin.

Input Manager Restrictions

What It Is.

Input managers are bundles that are loaded into all running programs. Originally intended for the mundane tasks of internationalization and accessibility, they’ve mutated into a generic plugin facility for all Cocoa programs. If you’re using things like SIMBL, Saft, SafariStand, Sogudi, or Pith Helmet, you’re abusing input managers. Leopard “breaks” them.

Why You Care.

Input managers are terrifying. They’re arbitrary blobs of code that get injected into almost every Mac application. They are a “UI extension interface” in the same way that Back Orifice 2k is a “remote system administration facility”.

What Leopard Gets Right.

Input managers in Leopard are only loaded from “/Library/InputManagers”, not from the user’s home directory, and are only loaded if they’re owned by “root”.

What this means is that you can still get them to work, but the most likely code injection exploits in Safari can’t, because they can’t write to “/Library” and they can’t make files owned by root.
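
To make the rule concrete, here is a small C sketch that approximates the check described above. It models the policy, not Apple’s loader, which presumably also inspects permission bits:

    /* im_check.c -- approximate Leopard's load criteria for an input
     * manager bundle: it must live under /Library/InputManagers and be
     * owned by root.  This mirrors the policy described above, not
     * Apple's actual implementation. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    static int leopard_would_load(const char *bundle_path)
    {
        struct stat st;
        const char *prefix = "/Library/InputManagers/";

        if (strncmp(bundle_path, prefix, strlen(prefix)) != 0)
            return 0;                   /* wrong directory: rejected */
        if (stat(bundle_path, &st) != 0)
            return 0;                   /* doesn't exist */
        if (st.st_uid != 0)
            return 0;                   /* not owned by root: rejected */
        return 1;
    }

    int main(int argc, char **argv)
    {
        /* The SIMBL path is only a placeholder example. */
        const char *path = argc > 1 ? argv[1]
                                    : "/Library/InputManagers/SIMBL";
        printf("%s: %s\n", path,
               leopard_would_load(path) ? "would load" : "would be rejected");
        return 0;
    }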

My Verdict.

A clean win. Apple straightforwardly closed off a major attack vector with minimal disruption to existing third-party plugins.

The Bad

I’m prepared to be wrong about these.

Guest Account

What It Is.

The Leopard “Guest” account erases itself at logout, providing an ostensible “clean” environment for people to use your machine without cluttering it with garbage, accessing your personal information, or betraying their own personal information.

Why You Care.

Sometimes people want to use your computer. In Tiger, if you let them, they can hijack your machine.

What Leopard Gets Right.

The idea of a secure guest account is useful.

What Leopard Gets Wrong.

Everything but the idea of a secure guest account.

For example:

  • Leopard Guest users can install cron jobs. These are scheduled background tasks, run out of launchd, that will execute even if the Guest user is not logged in. Leopard Guest cron jobs persist after logout.

  • Leopard Guest users can change the wireless network you’re connected to. Even after logout, when you switch to your “real” account, your Guest’s wireless network selection appears to persist.

  • Leopard Guest users can mount remote filesystems. Even after they log out, the mount point in “/Volumes” remains.

The long and the short of it? Leopard Guest users can remain resident on your machine, even after their home directory has been deleted by the Leopard log out process. They can install daemons that listen on network ports to allow themselves back in. Or they can wait in the background for the next “Guest” to log in and steal all their information.
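
If you want to audit a machine for the leftovers described above, a sketch like the following is enough; the crontab spool path is an assumption about the stock Leopard cron configuration, and /Volumes is simply where mounts land:

    /* guest_audit.c -- after a Guest session, look for the two kinds of
     * leftovers described above: a lingering per-user crontab and a
     * lingering mount. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    static void list_dir(const char *label, const char *path)
    {
        DIR *d = opendir(path);
        struct dirent *e;

        if (d == NULL) {
            printf("%s: can't open %s (try running as root)\n", label, path);
            return;
        }
        printf("%s (%s):\n", label, path);
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;
            printf("  %s%s\n", e->d_name,
                   strcmp(e->d_name, "Guest") == 0 ? "    <- leftover?" : "");
        }
        closedir(d);
    }

    int main(void)
    {
        list_dir("Per-user crontabs", "/usr/lib/cron/tabs"); /* assumed path */
        list_dir("Mounted volumes",   "/Volumes");
        return 0;
    }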

My Verdict.

Pretend like this feature doesn’t exist.

Address Space Randomization

What It Is.

A huge portion of all low-level attacks involve bugs that let attackers corrupt program memory. Most of the time, if you can corrupt memory, you can divert the program from its own code and into code of your choosing.

The most common exploit technique for these bugs is called “ret-to-libc”. Without the gory details, the exploit effectively allows a blob of data the attacker writes into your program to be used as a scripting language, invoking (“ret”) all the functionality the OS exposes to programs (“libc”).

Ret-to-libc attacks require the attacker to know where the various facilities are, so they can be scripted. On OS X Tiger, that was easy: every 10.4.9 Mac kept those facilities at the same locations in memory. In Leopard, the OS randomizes the locations, to make them harder to predict. This feature is called “ASLR”.
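
You can see what the randomization looks like from inside a process. The sketch below, which assumes nothing beyond the standard dlfcn interface, prints where a few libSystem symbols landed; run it twice, and on two different Macs, and compare the addresses:

    /* where_is_libc.c -- print where a few libSystem symbols landed in
     * this process.  If the addresses repeat across runs and machines,
     * randomization isn't helping the code that calls them. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        const char *names[] = { "system", "execve", "dlopen" };
        int i;

        for (i = 0; i < (int)(sizeof(names) / sizeof(names[0])); i++) {
            /* RTLD_DEFAULT searches every image already loaded into us. */
            void *addr = dlsym(RTLD_DEFAULT, names[i]);
            printf("%-8s %p\n", names[i], addr);
        }
        return 0;
    }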

Why You Care.

This feature gets top billing as “protection against buffer overflows”; it’s also a first line of defense against heap overflows, uninitialized variable attacks, and integer overflows.

What Apple Gets Right.

Some library offsets are in fact randomized.

What Apple Gets Wrong.

The dynamic linker (dyld) is not randomized. From what I can tell, ten different Leopard Macs booted at ten different times will have the same offset to dyld.

You care because dyld is full of useful functionality: dynamically linking new libraries into memory, or recovering the base addresses of existing libraries.

Can I say right now that you can exploit this to take over a Mac? No. But ASLR is either something you get right, or it’s simply a speed bump for attackers. Some other things to know about Leopard ASLR:

  • Library offsets don’t change between invocations of the same program.

  • Library offsets don’t change between invocations of different programs (Safari and iChat have CoreServices at the same location).

  • I haven’t seen my library offsets change once, although I have observed that they are different from a separate Leopard install.

So, assuming for a second that dyld and the Objective-C runtime (vastly more complicated than the standard C runtime, or even the C++ runtime) don’t monkeywrench Leopard ASLR: if I can run code on your box for any reason, I can probably walk past ASLR features in any of your programs. If any of your programs leak information (a far more common problem than buffer overflows), I can probably collect enough information to beat ASLR in every other program.

My Verdict.

This feature removes a talking point argument about Microsoft Windows Vista’s superior security, but it doesn’t address the underlying point of that argument. Cocoa programs running in Darwin are less secure than Win32 programs running under NTOSKRNL, and aren’t even in the same ballpark as Managed C++ or C# programs.

The Irrelevant

None of these features are going to stop your MacBook from getting compromised.

Filevault Encryption

Filevault in Tiger used 128-bit AES keys. Now it uses 256-bit AES keys. There’s a Fields Medal waiting for the person who breaks 128-bit AES. That person can’t necessarily break Leopard Filevault. Awesome.

The funny part about this is that the Filevault keys are still protected by the weakest key in the system: your user password.

Application-Aware Firewall

The Tiger firewall provided all-or-nothing policies for what kinds of connections would be allowed or disallowed into your computer. The Leopard firewall breaks them into per-application policies. That would be great, but the interface offers only a blanket all-or-nothing policy on inbound connections; you can’t tell it not to let iTunes connect outbound.

If you care about application security policy (and you should), someone like Unsanity is inevitably going to rig up an OS X intrusion prevention system built on Sandboxes, and that system will give you fine-grained control over what iChat and iTunes are allowed to do. I’ve never bothered configuring the OS X firewall, and I’m not going to start in Leopard.

Tagged Downloads

Yes, it’s true, when you download a program with Safari and run it for the first time, you will get a dialog box warning you that you downloaded the program and are running it for the first time. I give the average Leopard user approximately 6 hours before clicking “OK” on this dialog becomes a function of their autonomic nervous system.
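
Under the hood, the tag is an extended attribute named com.apple.quarantine attached to the downloaded file. Here is a quick sketch for inspecting it, using nothing beyond the standard OS X xattr calls:

    /* quarantine_peek.c -- files that Leopard "tags" carry an extended
     * attribute named com.apple.quarantine; print it if present. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        char buf[1024];
        ssize_t n;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <downloaded-file>\n", argv[0]);
            return 1;
        }

        /* Note the extra position/options arguments in OS X getxattr(). */
        n = getxattr(argv[1], "com.apple.quarantine",
                     buf, sizeof(buf) - 1, 0, 0);
        if (n < 0) {
            printf("%s: no quarantine attribute (not a tagged download?)\n",
                   argv[1]);
            return 0;
        }
        buf[n] = '\0';
        printf("%s: %s\n", argv[1], buf);
        return 0;
    }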

Digital Signatures

You can sign executables in Leopard. If you allow a signed executable to run after you download it, you ostensibly won’t be bothered with dialog boxes for future downloads from the same author. Awesome. That will shave 45 milliseconds off the download process for the average Mac user.

This feature gets prominent billing in some Leopard round-ups, but it shouldn’t. I haven’t found a signed program to verify yet. From what I can tell, nothing shipped with Leopard is signed (possibly excepting kernel extensions). I hexfiended NOPs into a variety of programs, thus invalidating any conceivable RSA signature on them. They ran just fine.
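
If you want to repeat the experiment, the codesign(1) tool that ships with Leopard reports whether a signature is present and intact. Here is a thin sketch of a wrapper around it; the Safari path is just a placeholder target:

    /* check_sig.c -- ask codesign(1) whether a binary's signature, if it
     * has one, is still intact. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const char *target = argc > 1 ? argv[1] : "/Applications/Safari.app";
        char cmd[4096];

        /* codesign --verify exits 0 only if a valid, unmodified signature
         * is present. */
        snprintf(cmd, sizeof(cmd), "codesign --verify --verbose \"%s\"",
                 target);
        if (system(cmd) == 0)
            printf("%s: signature verifies\n", target);
        else
            printf("%s: unsigned, or signed and then modified\n", target);
        return 0;
    }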

Even if third-party code is signed, as long as the system binaries it depends on remain unprotected, code signatures won’t do anything to stop trojans on your system.

