Kirk Evans Blog

.NET From a Markup Perspective

Using SyndicationFeed to Access a Secure Feed

Prasad asks how to use the SyndicationFeed type in .NET 3.5 to access a secure feed.  I have to say, it depends on what you mean by "secure feed."

RSS and Atom are typically delivered over HTTP GET with text/xml as the content type.  I have to assume that you are securing that HTTP GET endpoint with basic, digest, or Windows Integrated authentication.  In those cases you simply provide credentials to access the endpoint, and yes, this is completely possible.  Credentials are not something you set directly on the SyndicationFeed type or in its Load method; instead, you pass them in through the XmlReader overload of Load.  Specifically, you set the credentials on an XmlUrlResolver, assign that resolver to an XmlReaderSettings instance, and supply the settings when you create the XmlReader.

using System.Net;
using System.ServiceModel.Syndication;
using System.Xml;

// Attach credentials through an XmlUrlResolver so the reader can authenticate.
XmlReaderSettings settings = new XmlReaderSettings();
XmlUrlResolver resolver = new XmlUrlResolver();
resolver.Credentials = new NetworkCredential("user", "pass@word1");
settings.XmlResolver = resolver;

XmlReader reader = XmlReader.Create("http://blogs.msdn.com/kaevans/rss.aspx", settings);
SyndicationFeed feed = SyndicationFeed.Load(reader);

Straightforward, it ain't... but at least there's a way to do it.

If you are going to access something via Forms authentication, then this becomes a bit trickier.  The reason is that the Forms UI typically presents username and password textboxes via HTML.  If you enter a username and password pair that authenticates, you are handed an HTTP cookie before being redirected to the resource you actually want to grab.  For this case, you would need to do some creative HTML screen scraping, likely using the System.Net.HttpWebRequest type to tightly control the HTTP request/response message pairs.  You would also need to use the System.Net.CookieContainer type to store the HTTP cookie locally so that it can be added to subsequent requests.
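A minimal sketch of that flow, assuming a hypothetical login page that accepts `username` and `password` form fields (the URLs and field names are placeholders, not a real site's API):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class FormsAuthFeedClient
{
    // POST the login form; the auth cookie lands in the CookieContainer.
    public static CookieContainer Login(string loginUrl, string user, string password)
    {
        CookieContainer cookies = new CookieContainer();
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(loginUrl);
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.CookieContainer = cookies;

        byte[] body = Encoding.UTF8.GetBytes(
            "username=" + Uri.EscapeDataString(user) +
            "&password=" + Uri.EscapeDataString(password));
        request.ContentLength = body.Length;
        using (Stream s = request.GetRequestStream())
            s.Write(body, 0, body.Length);

        using (request.GetResponse()) { }  // cookie is now stored in the container
        return cookies;
    }

    // Reuse the same container so the cookie rides along on the feed request.
    public static string FetchFeed(string feedUrl, CookieContainer cookies)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(feedUrl);
        request.CookieContainer = cookies;
        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }
}
```

Once you have the raw XML back, you can hand it to SyndicationFeed.Load via an XmlReader over a StringReader, just as in the earlier example.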

The scarier case is something like Joe Gregorio's approach to securing a private feed using Blowfish and GreaseMonkey. You can do this with .NET as well, but the code is going to be a bit more involved.
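To give a feel for the involved part: the base class library has no Blowfish implementation, so here is a rough sketch of the same idea with AES standing in.  The publisher encrypts each entry's content with a shared symmetric key, and the subscriber decrypts it after loading the feed; how that key gets exchanged is left out entirely.  All names here are mine, not Joe's.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class FeedCrypto
{
    // Encrypt an entry's plaintext; prepend the IV so the reader can decrypt.
    public static string EncryptEntry(string plaintext, byte[] key)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            byte[] data = Encoding.UTF8.GetBytes(plaintext);
            byte[] cipher = aes.CreateEncryptor().TransformFinalBlock(data, 0, data.Length);
            byte[] payload = new byte[aes.IV.Length + cipher.Length];
            Buffer.BlockCopy(aes.IV, 0, payload, 0, aes.IV.Length);
            Buffer.BlockCopy(cipher, 0, payload, aes.IV.Length, cipher.Length);
            return Convert.ToBase64String(payload);
        }
    }

    // Reverse the process: split off the IV, then decrypt the remainder.
    public static string DecryptEntry(string encoded, byte[] key)
    {
        byte[] payload = Convert.FromBase64String(encoded);
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            byte[] iv = new byte[aes.BlockSize / 8];
            Buffer.BlockCopy(payload, 0, iv, 0, iv.Length);
            aes.IV = iv;
            byte[] plain = aes.CreateDecryptor()
                .TransformFinalBlock(payload, iv.Length, payload.Length - iv.Length);
            return Encoding.UTF8.GetString(plain);
        }
    }
}
```

The Base64 string is what you would publish inside the entry's content element; anyone without the key sees only ciphertext even though the feed itself travels in the clear.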

I have seen many cases of the former (network-secured resources), but I have yet to see a case of the latter (encrypted feeds).  Where I see the former is typically SharePoint lists, where the list being syndicated requires authentication.  That's not to say there's no value in the latter case; it is easy to envision once you look at WS-Security and realize that RSS and Atom are little more than XML dialects that constrain the data they contain (the RSS channel and item syntax can conceptually be replaced by soap:Envelope and soap:Body).

One of the reasons POX APIs have been so successful is that they are simple to use, avoiding all the cryptography machinery in favor of simple tokens.  That's not to say that unsecured HTTP POST requests carrying username and password pairs are the right answer in all cases... if they were, there wouldn't be so much work around things like CardSpace and OpenID.  It is just easier... and easier means more adoption.

I love Dare's point about "private feeds"...

How to properly support private and authenticated feeds is a big problem which Niall highlights but fails to go into much detail on why it is hard. The main problem is that the sites providing the feed have to be sure that the application consuming the feed is secure. At the end of the day, can Bank of America trust that RSS Bandit or Bloglines is doing a good job of adequately protecting the feed from spyware or malicious hackers?

More importantly, even if they certify these applications in some way how can they verify that the applications are the ones accessing the feed? Niall mentions white listing user agents but those are trivial to spoof. With Web-based readers, one can whitelist their IP range but there isn't a good way to verify that the desktop application accessing your web server is really who the user agent string says it is.

 [via http://www.25hoursaday.com/weblog/2006/09/18/NiallKennedyOnPrivateAndAuthenticatedFeeds.aspx]

I actually said recently that I was beginning to favor AJAX APIs over SOAP APIs, and I felt dirty saying it.  I have not hidden that I am a huge believer in WS-*, but I always had a datacenter / enterprise vision of services.  I wasn't considering how to provide APIs to my customers because, frankly, most of my customer base isn't considering that either.  When my customers do consider how to provide public APIs, they are much less concerned with monetizing developers' efforts than with monetizing services through advertising.  Go to dev.aol.com and use their Truveo video search... when you find a video, you go back to their site to actually watch it.  Instead of taxing the developer, they tax through advertising.

In my single-lensed, WS-* view of the world, it took me a while to figure out that I was being just as dogmatic about architecture as the people I talk to who have standardized on J2EE for all development.  The same people I used to shake my head at in disbelief... I was guilty of a similar crime.  They aren't right (especially because they are throwing money down a continually and increasingly flushing toilet), but I wasn't right to try to apply the bigger web services hammer to everything.  Sometimes HTTP GET is OK, and I am finally on the bandwagon now that we have tooling that actually supports the model.

Blasphemy, you'll say... WS-* is the right architecture for security.  OK, seriously, try to use WCF to use secure conversation with a token that was issued via WS-Trust.  There's a reason that there are like 2 public articles on how to do it versus a million articles on how to use the basicHttpBinding... the security stuff is still too damned complicated for anyone to seriously adopt.  Hell, I talk about the goodness of SOA and WCF all the time, I give demos on almost a daily basis on WCF and security, and I can't yet write a WS-Trust implementation.  It's too damned hard, and hasn't been widely adopted because it's too damned hard.  CardSpace for self-issued cards is too damned hard for most people to wrap their heads around, and certainly a managed token is out of the grasp of most who actually ship software.  SSL is more than good enough on the web, especially because it's simple.

Dare points out that none of the major Web 2.0 companies are taking a big bet on WS-*.  Maybe that's partly true, but not entirely.  Some of those Web 2.0 companies are using WS-* behind the scenes to access secure resources that are transformed and made publicly available.  Use JSON and REST on the web to gain reach, use WS-* within the firewall to provide services with composable architectures.  If you want people to use your stuff, then make it as easy as possible for them to use it.  If you want to limit adoption, use opaque and barely documented concepts and APIs like WCF does with WS-Security.  It really is as simple as that... for widespread adoption on the web, use JSON and REST... for lots of composable capabilities within the firewall, use WS-*.  Those are 2 very different goals, and Dare doesn't disagree... he just wants to keep that "shit off the web."  I don't think we should keep REST out of the enterprise anymore, but I still think you are doing yourself a disservice by ignoring SOAP toolkits that solve real business problems.  There's room for both... time to take off the blinders.
