Back to the Insecure Future
Richard Forno, 2002-11-13

Web services, such as Microsoft's .NET platform, represent a return to centralized computing. They also pose some serious security issues.

Each month, I present a lecture to senior military officers here in Washington, DC. The lecture, entitled “The Red Pill”, takes an unconventional look at information technology, security, and policy, and imparts to the class the need to take a macro view of these items instead of rushing to blindly embrace the latest and greatest quick fix.

One of the major portions of the lecture is what I call “Back To The Future,” in which I discuss the history of computing, how we are returning to the days of centralized computing, and the security implications of that shift.

In the early days of computing, users accessed centralized mainframes through terminals that were physically wired to the mainframe. User data and applications were stored remotely, and the security (or permissions) governing their access and use was dictated by a third party. The mainframes were brilliant, but the terminals were just dumb machines, unable to do anything except provide a conduit to the centralized server. Information on those servers was generally kept private from other users, but the server administrators – and their vendors – always had the means to access it.

Decades later, thanks to Moore’s Law and the Internet, we’ve got brilliant desktop (or laptop) clients that can talk to brilliant servers – or other clients – anywhere in the world, and our applications and data can be used (or stored) locally or remotely as needed. In fact, our desktop clients can even double as servers in some fashion, through P2P technology or basic file sharing such as FTP. Importantly, the information on our desktops can be kept relatively secure from third parties. Using security methods such as firewalls, anti-virus software, and encryption (heck, some people even turn off their systems at night), information owners have positive control over their information in a way that users in the mainframe days could only dream about.

However, this is changing. We are in danger of reverting to a more centralized model. The current trend is to have applications (and data) that are network-aware and can span multiple systems, networks, and continents. Services like Microsoft’s .NET concept seek to bring centrally distributed software functionalities to everyone and everything, and expand on the Application Service Provider (ASP) concept of the late 1990s. Under the .NET concept, most applications, user data, and user input (such as identification and authentication) will be dependent on server-side software components located on a centralized server farm instead of local desktop computers or file servers, as they are now. This presents some interesting challenges to current security models.

Today’s information environment, which generally consists of the Internet and local networks, is designed to be redundant and resilient. If one portion fails, it’s relatively easy to fix and move on, since users have positive control over their information assets. Equally important, when a security incident takes place, it can be contained by removing the victim system from any networks to which it is connected. Under the architectures proposed in .NET, much of this built-in redundancy would be eliminated, since the centralized servers providing application and information resources for .NET users would operate from fixed positions, with a finite number of network and physical infrastructures supporting them. Not only does this create single points of failure for the architecture, but users will be unable to maintain the positive local control over their information resources that they enjoy today – something that’s critical if or when problems arise.

Centralized Networks and Security

So how will this new dynamic affect the fundamental principles of security, i.e., the confidentiality, integrity, and availability of data in the information environment? Fundamental to this analysis is my prediction that emergent laws and hardware changes will significantly diminish the ability of users to use their computers and information for anything but ‘officially sanctioned’ activities. In other words, the PC of the future will become nothing more than a glorified (and quite expensive) dumb terminal.

Microsoft .NET services (and related products, such as future generations of Windows based on digital rights management protocols) will include intellectual property controls that require user identification and authentication (I&A) against a trusted central server to verify access to electronic goods and services. Central servers will ensure that a user’s current copy of applications such as Microsoft Word is paid for and currently licensed. Furthermore, they will ensure that the user, for example, is authorized to rip and play the latest Avril Lavigne album on their computer, but only in accordance with the record company’s wishes that are pre-programmed into the system for that particular album.

These centralized servers will be located outside a company’s data center or home, and thus present a finite set of servers (and locations) for users to authenticate against, with obvious consequences for security. For instance, in order to cause significant electronic disruption, an adversary wouldn’t need to attack the nearly infinite number of corporate or consumer systems on the Internet, such as with a virus or DoS attack; rather, he or she would only need to attack the finite number of centralized servers providing I&A services, which would affect the availability of information. A similar incident occurred in January 2001, as this IT World article explains. Furthermore, if an adversary were able to compromise a user’s system, they could potentially masquerade as someone else with that person’s permissions and perhaps modify data, an infringement upon the integrity of data. Of course, given that Windows is still a closed-source product, nobody really knows what’s contained in the code, and what ‘other’ data might be reported to a third party during such checks, which would compromise confidentiality.

Clearly, such a set-up places all three tenets of security at risk of compromise.

The Phone-Home Model and Security

The Microsoft-led push towards centralization raises another significant security problem: because these network-based applications must constantly “phone home” to verify license rights or to use remotely stored shared code, they depend on stable, reliable broadband links to the vendor in order to operate at all. If .NET-type services become entrenched in our information economy, an adversary wanting to target one particular entity could simply attack that organization’s broadband pipes and cause significantly greater economic damage than is possible in the current environment.
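The dependency is easy to illustrate with a minimal sketch (the vendor hostname and startup logic here are hypothetical, not part of any actual .NET product): an application that must verify its license against a remote server on every launch cannot distinguish a network outage – or an attack on the broadband pipe – from an invalid license, and simply stops working.

```python
import socket

# Hypothetical vendor licensing endpoint; any real phone-home scheme
# would have an equivalent fixed address the client must reach.
LICENSE_SERVER = ("licensing.vendor.example", 443)

def license_server_reachable(timeout=3.0):
    """Return True if the vendor's license server accepts a connection."""
    try:
        with socket.create_connection(LICENSE_SERVER, timeout=timeout):
            return True
    except OSError:
        # DNS failure, timeout, refused connection: all look the same here.
        return False

def launch_application():
    # Under the phone-home model, the application refuses to run unless
    # the vendor can be reached -- a single point of failure the user
    # does not control.
    if not license_server_reachable():
        raise RuntimeError("Cannot verify license: vendor unreachable.")
    print("License verified; application running.")
```

The point of the sketch is the failure mode, not the protocol: whatever the real verification exchange looks like, severing the link to `LICENSE_SERVER` disables every client that depends on it.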

This centralized network model may also create a jurisdictional conflict between federal law and contract law in the area of information security. In other words, who’s in charge when it comes to granting permission to access information: the owner of the data or the software vendor? Will opening up trusted backdoors become the accepted exception to implementing strong information security and protecting the confidentiality, integrity, and availability of our systems? What if that trusted backdoor is exploited, and a virus or other tool is given free rein to attack systems via this “approved” backdoor?

This has already been brought to light. For example, emerging health care industry regulations on computer security (particularly HIPAA) state that remote third-party access to certain systems is prohibited. Yet, by using certain versions of Windows, users agree to allow Microsoft to obtain trusted root-level access to their systems for both software maintenance (in the case of Windows Update) and intellectual property rights verification (Windows Media Player). These are rights granted to Microsoft by existing federal laws on intellectual property and by software license agreements that indemnify the corporation from liability if things break. Medical professionals are naturally concerned about the consequences this may hold for systems that use specialized software to provide critical patient care, as this September InfoWorld article explains. This concern will only be exacerbated in the .NET schema.

Microsoft Security and the Tooth Fairy

These are just a few security concerns I have with the way the technology industry is taking away the self-determination of the end user. Perhaps the most frightening aspect of this is that we’re being asked to simply trust a for-profit third party with our ability to conduct business and live on a day-to-day basis. In this scenario, users must hope or pray that the vendors will be extremely diligent in their software testing, competent in the design of their centralized server farms, and merciful in their enforcement of digital restrictions management for folks unfamiliar with this new way of doing things.

I’m sure this is a noble goal and has a chance of succeeding. But then again, I also believe in the Tooth Fairy.

Richard Forno is the coauthor of Incident Response (O'Reilly) and The Art of Information Warfare (Universal). He helped to establish the first incident response team for the U.S. House of Representatives, and is the former Chief Security Officer at Network Solutions. Richard is currently writing and consulting in the Washington, DC area.
Copyright 2010, SecurityFocus