In my post introducing tabular projects on the Analysis Services team blog, I gave a high-level overview of what tabular modeling is all about. That post gives a basic overview of how to get started building models for deployment to the VertiPaq engine. But how do the tools work? And why is the modeling experience so different from multidimensional (OLAP/UDM) models? Read on.

When we set out to write the pro tools, we had an interesting challenge. We needed to create a fully functional modeling tool inside the VS shell (for source control integration, build system integration, etc.) in a tight timeframe. Fortunately, we already had a rich modeling client in PowerPivot. So what we did was refactor the PowerPivot codebase, share the designer infrastructure with BIDS, and ship. It was really cool: it took a small (but very talented) team of fewer than 10 people less than 5 months to go from prototype to functioning modeling tool, including time to fix something like 300 bugs. Now, because of the shared infrastructure, when modeling features are added they show up almost automatically in both modeling environments (with the obvious exception of menu/ribbon entry points), so we are able to evolve both tools pretty rapidly.

Because of this shared ancestry, PowerPivot and the tabular designer have a similar architecture. Dave Wickert wrote about the PowerPivot client architecture, which shows how the client interfaces with the VertiPaq engine. Here is a similar block diagram for the tabular designer:

[Block diagram: tabular designer architecture]

Just as in PowerPivot, AMO and ADOMD are used to talk to the VertiPaq engine. The underlying model is the same as in PowerPivot; we still serialize dimensions, measure groups, and so on, and all the formulas are DAX.
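To make that conversation a bit more concrete, here is a minimal sketch of issuing a DAX query over ADOMD.NET, the same API the designer uses when it needs query results back from the engine. The instance name, workspace database name, and table name below are made-up placeholders; only the general pattern matters.

```csharp
using System;
using Microsoft.AnalysisServices.AdomdClient;

class QueryWorkspaceDatabase
{
    static void Main()
    {
        // Hypothetical workspace instance and database names; substitute your own.
        var connectionString =
            "Data Source=localhost\\TABULAR;Catalog=MyModel_Workspace";

        using (var connection = new AdomdConnection(connectionString))
        {
            connection.Open();

            // DAX queries go against the same "Model" cube the designer creates.
            var command = connection.CreateCommand();
            command.CommandText = "EVALUATE TOPN(10, 'Sales')";

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetValue(0));
                }
            }
        }
    }
}
```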

One big difference from PowerPivot is that the engine runs outside of the Visual Studio process, on a live Analysis Services server, either local or remote. This is the workspace database server instance. When you do a File->New Project and then create a tabular project, a new database is created on the server. This is the workspace database. At first, the workspace database is empty. But as soon as you import some data, the “Model” cube is created (same as the one in PowerPivot), and the model is live on the Analysis Services server. When a metadata change is made, an Alter is sent to the server. When you scroll, sort, filter, etc., a query is sent to the server. When you “Refresh” in BIDS (think “process”), the workspace database is processed. Finally, when you Analyze in Excel, we create an ODC file and connect you to the workspace database. If you are really interested in what the client does, start Profiler and spy on it.
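For example, Refresh in BIDS essentially boils down to processing the workspace database. Here is a rough AMO sketch of the equivalent operation; the instance and database names are hypothetical, and the designer does its own bookkeeping on top of this.

```csharp
using System;
using Microsoft.AnalysisServices;

class ProcessWorkspaceDatabase
{
    static void Main()
    {
        // Hypothetical workspace instance; the designer talks to whatever server
        // is configured as the workspace database server.
        var server = new Server();
        server.Connect("localhost\\TABULAR");
        try
        {
            // Workspace databases get generated names; this one is made up.
            Database workspaceDatabase = server.Databases.GetByName("MyModel_Workspace");

            // Roughly what Refresh does: reload data from the sources into VertiPaq.
            workspaceDatabase.Process(ProcessType.ProcessFull);

            Console.WriteLine("Processed {0}", workspaceDatabase.Name);
        }
        finally
        {
            server.Disconnect();
        }
    }
}
```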

I have received a lot of feedback about the requirement for a workspace database server, and questions about why we chose this implementation. One of the first obstacles we hit in our initial prototyping is that there is no 64-bit version of Visual Studio. Working with data from multiple models in a single 32-bit address space is quite limiting; if you have worked with larger data volumes in 32-bit PowerPivot, you have already seen the limitations of an in-proc architecture on a 32-bit system. We considered doing the slave process thing Rico mentioned in his post. However, we were concerned about the security of the workspace database. The nice thing about using the out-of-proc engine is that it already has a robust security model we can rely on, which gives client tools instant access to changes made in Visual Studio while keeping unwanted people out. Another benefit of the out-of-proc model is that it enables the use of a remote workspace database server. So if you want large data volumes visible inside VS while you are modeling, but you don’t have a beefy dev box, you can point to a remote server with more memory and model against that large data set without a problem.

That’s it for the basic infrastructure. In future posts I will talk more about the project structure, how we treat your data, where it gets saved between editing sessions, how we choose what to load when opening files, and more.