PASS Summit Recap from BI Architect Aneal Roney

This post is from Microsoft BI Architect Aneal Roney. Aneal is a Principal Consultant with Catapult Systems, a national Microsoft-focused IT consulting company that provides application development, enterprise solutions, and infrastructure services. In addition to working with clients on Business Intelligence architecture, implementation, and strategy, Aneal serves as the Denver, Colorado BI and Data Management Practice Lead for Catapult Systems and the Catapult Microsoft Technical Liaison for BI Solutions, and helps run the Denver Microsoft BI Users Group. You can follow Aneal on Twitter here (@anealroney).

It’s not every year that I am fortunate enough to attend the annual SQL PASS Summit. A week away is not always in the cards with project deadlines, production support issues, and time away from the family. However, I am happy to say I have managed to attend the Summit shortly before each of the last four major releases of SQL Server (2000/2005/2008 R2/2012). Reconnecting with old friends and colleagues at the PASS 2011 Summit in Seattle, WA got me thinking about past summits and the themes I associate with each of them. How far Microsoft SQL Server has come as a relational database management and BI platform!

SQL 2000

Although back in the late 90’s I was wearing more of a Database Developer hat, I distinctly remember some of the Database Developer and DBA-centric features that got us all excited. How about AWE and 64 GB memory support, indexed views, user-defined functions, and the ability to run multiple instances of SQL Server on the same box? There was also some improvement on the BI side of the house – namely the replacement of OLAP Services (7.0) with a new service called SQL Server Analysis Services.
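
For anyone who never got hands-on with those 2000-era features, here is a minimal T-SQL sketch of a scalar user-defined function and an indexed view. The dbo.Sales table and its columns are hypothetical, used purely for illustration.

```sql
-- Minimal sketch only; dbo.Sales and its columns are hypothetical.

-- A scalar user-defined function (new in SQL Server 2000)
CREATE FUNCTION dbo.ufn_AddTax (@Amount money, @Rate decimal(5, 4))
RETURNS money
AS
BEGIN
    RETURN @Amount * (1 + @Rate);
END
GO

-- An indexed view: SCHEMABINDING plus a unique clustered index materializes the results
-- (assumes Sales.Amount is declared NOT NULL, as indexed views require for SUM)
CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(Amount)  AS TotalAmount,
       COUNT_BIG(*) AS RowCnt
FROM dbo.Sales
GROUP BY ProductID;
GO

CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct ON dbo.vSalesByProduct (ProductID);
```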

SQL 2005

Perspective is somewhat notable here, because by the middle of the decade I was firmly fitted with a new hat – BI Developer. Finally, the ad hoc queries I ran in Query Analyzer and ProClarity to apply complex logic, mine data, and create pretty reports were widely recognized as part of a formal discipline: Business Intelligence. What great timing to change your perspective! We in the SQL community had been waiting 5+ long years for the next version of SQL Server, and it was obvious from our first taste that Microsoft had made major investments on the Business Intelligence side of the house. The BI paradigm on the SQL Server platform shifted – SSAS no longer had a relational database as a backend and was pure XML, we were introduced to a new development tool called BIDS (Business Intelligence Development Studio), DTS made way for the power of SQL Server Integration Services, Reporting Services gained additional self-service BI capabilities via SSRS Report Builder, and a dozen data mining algorithms were now at our fingertips via Analysis Services. The Enterprise BI Platform had arrived!

SQL 2008 R2

In my opinion SQL 2008 shifted back to some great enhancements on the data management side of the street. Mirroring performance improved dramatically, we could now compress backups, policy-based management was introduced, and Change Data Capture (CDC) became a new option for incremental loads to data warehouses and data marts. SQL 2008 R2 threw a couple of big bones to us BI Dawgs by introducing PowerPivot and Master Data Services.
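
If you have not yet kicked the tires on backup compression or CDC, here is a minimal T-SQL sketch of both; the SalesDW database and dbo.Orders table are hypothetical names for illustration.

```sql
-- Minimal sketch only; the SalesDW database and dbo.Orders table are hypothetical.

-- Backup compression (new in SQL Server 2008)
BACKUP DATABASE SalesDW
TO DISK = N'D:\Backups\SalesDW.bak'
WITH COMPRESSION;

-- Change Data Capture: enable at the database level, then per source table
USE SalesDW;
GO
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;
GO

-- Changes then accumulate in cdc.dbo_Orders_CT and can be queried via the
-- generated cdc.fn_cdc_get_all_changes_dbo_Orders function for incremental loads.
```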

SQL 2012

So if the prior releases had a wider lane on one side of the street than the other (Data Management Lane vs. BI Lane), which side of the median would be cheering the loudest for SQL Server 2012? After attending the 2011 PASS Summit, it is apparent to me that Microsoft has unleashed an evenly distributed 8-lane superhighway! SQL 2012 is without a shadow of a doubt the most feature-rich release to date! If you haven’t already begun to do so, below is my partial list of notable features you may want to start getting acquainted with:

Data Management in SQL Server 2012 (To name a few)

High Availability - SQL Server AlwaysOn – geo-clustering, active secondaries, Availability Groups

Performance - Columnstore Index (see the brief T-SQL sketch after this list)

Cloud Computing - SQL Azure enhancements (+SQL Azure Reporting)

Rapid Deployment - SQL Private Cloud (with System Center and Hyper-V)

Business Intelligence in SQL Server 2012 (To name a few)

Interactive Reporting – through Power View (formerly Project Crescent)

Collaborative BI – PowerPivot enhancements include hierarchy and KPI creation

Single Model – BI Semantic Model bridges BI from the desktop to the data center

Holistic Data Integration – Manage with SQL Master Data Services, cleanse with SQL Data Quality Services, and integrate with SQL Integration Services

Big Data - ODBC Driver Support for Hadoop

Data Warehouse Workload Optimization – SQL Server Parallel Data Warehouse and Fast Track

Mobile BI – Cross-platform mobile BI in Power View (iOS, Android, Windows) on the horizon for the 2nd half of 2012.

Reporting in SharePoint – SQL Server Reporting Services as a full SharePoint Service Application
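
To make the columnstore index item above a little more concrete, here is a minimal T-SQL sketch of creating one on a fact table; the dbo.FactSales table and its columns are hypothetical.

```sql
-- Minimal sketch only; dbo.FactSales and its columns are hypothetical.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount, Quantity);

-- Note: in SQL Server 2012 a columnstore index makes the table read-only,
-- so a common pattern is to drop and rebuild it around the warehouse load window.
```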

SQL Server 2012 is by my account a MAJOR release!

While I was trying to keep my excitement in check and get my head around what all the features would mean for my current projects and future business scenarios, I was treated to a glimpse of the future in the Day 1 PASS Summit Keynote via Tim Mallalieu and Nino Bice. If you have not heard of Project “Data Explorer” you can get more information here and view the Keynote demonstration here. As I was watching, I felt like I was given a window into what Business Analysts dream about at night.

“Artificial” Analysis

The demo started with a very typical scenario – integrating data to perform some analysis – but quickly revealed some twists and turns.

The scenario called for analyzing potential store locations for a new frozen yogurt chain. First off, Nino pulled in some data from a SQL Azure database that contained a list of potential store locations with geospatial data attached to each location, along with a normalized score of how those locations were performing. What came next was a bit of a surprise. When the data was integrated, it was semantically matched to additional “suggested data,” allowing the analyst to pick related data sets. The “Data Explorer” experience actually determined the analyst’s intent and recommended additional data sets that might be relevant to the analytic path the analyst had started marching down (think: “Customers who performed this type of analysis also found this data set useful”).

From there we went back to the humdrum. The team integrated an Excel file that contained some candidate store locations alongside some existing store locations. This is where the option for a “mash up” was revealed: it allowed for overlaying (mashing up) data in a data set via a drag-and-drop experience. While we tend to think in joins via T-SQL, business analysts tend to think of “lookups” or “mash ups.”
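
To put that in developer terms, the analyst’s “lookup” of, say, demographics onto a set of candidate locations is ultimately just a join under the covers; the tables and columns in this sketch are hypothetical.

```sql
-- What the analyst calls a "lookup" or "mash up" of demographics onto
-- candidate locations; table and column names here are hypothetical.
SELECT loc.LocationID,
       loc.City,
       demo.MedianIncome,
       demo.PopulationDensity
FROM dbo.CandidateLocations AS loc
LEFT JOIN dbo.Demographics AS demo
    ON demo.ZipCode = loc.ZipCode;
```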

Next, the Azure Data Marketplace made a recommendation based on the current data mash up: a demographic data set in the cloud was suggested for inclusion in the current analysis. The demographic data was then mashed in, and the result was a join across an Azure Data Marketplace OData feed, an Excel file, and SQL Azure. This was all completed in approximately 3 minutes.

The results gave a good indication of which demographic concentrations would point to a potentially profitable frozen yogurt store location, but as any good analyst knows, that was not enough to fully answer the question of which store location would be best.

Nino and Tim explored a hunch that large concentrations of high-school-aged young adults would impact store performance. How did they test that theory? They mashed up another data set – this time by seamlessly making a web services call to the Bing Services phone book to pull in the number of high schools and match them against the geographic locations they already had in their Data Explorer data set. Since the web service call also returned sentiment data (thumbs up/thumbs down per location), that data was brought in as well.

Some quick formatting of the report yielded the result of the analysis – a list of the best potential store locations based on store performance, area demographics, the number of high schools in the area, and the public (web) sentiment of the neighborhood. The exciting part of this demo was not the integration of data from 5 very different data sources, or even the ease of “mashing it up” with zero code or joins. The beauty for the analyst lies in the fact that data was actually recommended to the analyst throughout the process. To me, this is like the synergy of 2 analysts “bouncing hypotheses” off each other – the difference being that one of those analysts was replaced by the powerful and insightful recommendations of Project “Data Explorer”!

The time of the information worker making use of the “Artificial Analyst” in Project Data Explorer is coming, which is one more reason why I certainly won’t miss the next annual SQL PASS Summit!
