14 months ago…


It was 14 months ago that a bunch of us were sitting in a conference room with Dave Campbell, reviewing some things he had been thinking about and some stuff we had been playing with. Our team had been sitting with internal and external customers (predominantly Access, PowerPivot, and Excel users), interviewing them about how they worked with data. We were looking at what kinds of data people were trying to leverage, where it was stored, what formats and representations were most common, and what tools they wanted to use it in. We were also working with the Access and Excel teams on where they wanted to go with “External Data Scenarios” (scenarios where people pull data into Excel or Access). As we worked on this, we kept landing on a recurring theme: we, Microsoft, had done a lot of great work in persisting data and in working with data (Excel, Access, PowerPivot, Crescent…), but there was still a gap when it came to reducing the friction inherent in acquiring and shaping data. We started forming a picture of first-mile data sources (the original data sources) and last-mile tools (the tools where you want to consume the data). We ended up sketching the following on the board:

[Whiteboard sketch: first-mile data sources flowing through self-service data services into last-mile tools]

We called this the infofabric (yikes!): the idea was that we could provide a ton of value by offering services for acquiring, shaping, and managing your data in a self-service way, and then making it available for people to consume.

Dave challenged us to go figure out what it would take to create a solution that “would wring the friction out of data acquisition”. With that challenge in hand, our team went off to define and build such a solution. The last 14 months have been a fun ride. Under the internal name Project Montego, we started building out a runtime and client user experience for pulling data from different sources and formats, mashing it up, and feeding it into tools like Excel or Access, or republishing it as an OData feed.

Along the way we met up with another team working on a project called “Orlando”, which was leveraging work out of MSR to do semantic classification of data sets (both the schema and the instances) in order to help people get better insights into their data. That classification engine would also help with creating recommendations and building connections to other data sources and services. The marriage of the two projects seemed timely and opportune, since we could bring insights, discovery, and transformation together in a single solution. Since then we have been actively working to deliver an integrated solution across our technologies. We have been validating the customer scenarios and experiences we are aiming for with a small set of customers, and we are eager to get more feedback from the community.

If you were not at SQL PASS 2011, you may have missed our little cameo in the day 1 keynote. We are now public and are referred to as “Microsoft Codename Data Explorer”. We are going to start posting previews of features and design thoughts on our team blog so that people can give us feedback before we deliver a broader preview later this year. We also have a sign-up form for getting in the queue for activation codes when the service goes live.

Check us out… Give us some feedback.