Official team blog for SSDT, a tool for on-premises and off-premises database development
Several questions have been raised on the forum regarding use of composite projects. This post discusses composite projects in general and then describes a technique you can use with Schema Compare for synchronizing composite projects with changes that have been made to the database.
Composite Projects 101
SSDT can combine a database project with one or more referenced database projects or dacpacs to describe a single composite database schema. Using a composite project allows a large database to be broken down into more manageable chunks, allows different people or teams to have responsibility for different parts of the overall schema, and enables reuse of database object definitions in multiple databases.
Scenarios might include a central team defining a set of common types, tables and procedures that are released to other teams as a dacpac for inclusion in other databases. Or an extended version of a database could be defined that adds extra test procedures and other objects to enable the referenced database to be tested – the extended test project is deployed while you are testing, and the referenced main database project is deployed otherwise.
First, some composition basics. Schemas are composed by adding Database References to projects or dacpacs you wish to compose into another project. A referenced project/dacpac can describe a fragment of schema that you want to compose or a complete database definition.
To compose projects and dacpacs into a single database the references must be defined as 'Same database' references on the Add Database Reference dialog (below).
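Under the covers, the Add Database Reference dialog records the reference in the .sqlproj file. As a rough illustration (the element layout and paths here are a sketch and may vary between SSDT versions), a 'same database' reference to a project and to a dacpac might look like this:

```xml
<ItemGroup>
  <!-- Project reference composed into the same database. Note there are
       no server/database SQLCMD variable elements, which is what
       distinguishes a 'same database' reference from the other kinds. -->
  <ProjectReference Include="..\CommonUtilities\CommonUtilities.sqlproj">
    <Name>CommonUtilities</Name>
    <Private>True</Private>
    <SuppressMissingDependenciesErrors>False</SuppressMissingDependenciesErrors>
  </ProjectReference>
  <!-- Dacpac reference composed into the same database -->
  <ArtifactReference Include="..\BaseObjects\bin\Debug\BaseObjects.dacpac">
    <HintPath>..\BaseObjects\bin\Debug\BaseObjects.dacpac</HintPath>
    <SuppressMissingDependenciesErrors>False</SuppressMissingDependenciesErrors>
  </ArtifactReference>
</ItemGroup>
```

You rarely need to edit this by hand, but it is useful to know where the dialog's choices end up when diffing or merging project files.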
References must be strictly hierarchical with no circular dependencies. A referenced dacpac can be built from a project or extracted from a database. The hierarchy of references can be arbitrarily deep, although you need to add references in each project to all its lower-level projects or dacpacs, whether directly or indirectly composed.
In the sample CompositeDb solution shown below you can see several levels of composition. First, a BaseObjects project and a peer CommonUtilities project are composed into the MainDatabase project. Then the TestDatabase composes MainDatabase and introduces some additional objects for testing purposes. You can see how TestDatabase references BaseObjects and CommonUtilities as well as the MainDatabase project.
Not visible in the screenshot is that the references to BaseObjects from the TestDatabase and MainDatabase projects are to the dacpac generated from the BaseObjects project. Why reference the dacpac instead of the project? Well, if the referenced schema is large and doesn't change frequently, referencing the dacpac allows you to exclude the project that builds it from the normal build. Doing that means the dacpac is not recreated and resolved with every build or clean, which can save a worthwhile amount of time. And of course, the dacpac could have been provided by another team working from another solution, or could have been extracted from a database, in which case you would likely reference it from some appropriate share.
The project dependencies and build order can also be inspected using the Project Dependencies dialog accessed from any project in the solution. In the screenshot below you can see how the dependency on the BaseObjects dacpac does not get translated into a project dependency. Of course if you take a dependency on a dacpac then you need to ensure it is rebuilt or refreshed from its source whenever the source is changed before rebuilding projects that reference it.
In each of the projects I defined discrete SQL schemas in which the objects are defined and used the convention of grouping objects in schema-specific folders – Base in the BaseObjects project, Utilities in CommonUtilities, and Main in the Main project. Using schemas like this isn't required, but it's going to be useful in a moment when we use Schema Compare, it helps me organize the overall composite project structure, and it would work well if I want to give different groups responsibility for different parts of the database definition.
Once you have set up the database references in this manner you can refer to objects defined in a referenced project or dacpac as if they are defined locally, and get full Intellisense support. So in TestDatabase, for example, TestProcedure1 has full visibility to any object in MainDatabase, including objects supplied by BaseObjects or CommonUtilities.
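To make this concrete, here is a hypothetical sketch of such a procedure. The table and column names are invented for illustration; only the schema names follow the convention described above:

```sql
-- Defined in the TestDatabase project, in the Test schema.
CREATE PROCEDURE [Test].[TestProcedure1]
AS
BEGIN
    -- [Main].[Customer] comes from MainDatabase and [Base].[AuditLog]
    -- from BaseObjects; both resolve at build time (with full
    -- Intellisense) through the 'same database' references.
    SELECT c.CustomerId, c.Name
    FROM [Main].[Customer] AS c
    LEFT JOIN [Base].[AuditLog] AS a
        ON a.CustomerId = c.CustomerId
    WHERE a.CustomerId IS NULL;
END;
```

Because the references are 'same database', no three- or four-part names are needed; the objects are addressed exactly as if they were defined in the local project.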
To deploy a composite project you must set the Include composite objects option on the project you're deploying from.
This is available as an Advanced option in Publish and on the Debug properties tab, and as a general option in Schema Compare. Note that in Schema Compare the option only affects publishing to a target database. Without this option set the source project will be deployed without the referenced projects or dacpacs. If you deploy to an existing database and forget to set this option but have set options to delete objects in the target that are not in the source, you risk deleting important content from the database. Schema Compare will highlight such delete actions very clearly, but it will be less obvious in Publish and will happen silently in Debug/Deploy. Be careful!
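If you deploy from the command line rather than from Visual Studio, the same switch is exposed as a SqlPackage publish property. A sketch of such a command follows; the server and database names are placeholders:

```
SqlPackage.exe /Action:Publish ^
  /SourceFile:"TestDatabase.dacpac" ^
  /TargetServerName:"(local)" ^
  /TargetDatabaseName:"TestDatabase" ^
  /p:IncludeCompositeObjects=True ^
  /p:DropObjectsNotInSource=False
```

Setting DropObjectsNotInSource=False is a belt-and-braces guard against the accidental deletions described above; relax it only once you are confident the composite source really is complete.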
If you are referencing other projects in the same solution you can disable their automatic deployment on Debug/Deploy (F5/Ctrl+F5) with the Build > Configuration Manager dialog. In this dialog you can uncheck Deploy for each project that you don't intend to be deployed as a free-standing database. Below you can see I have configured my solution to independently deploy the MainDatabase and TestDatabase projects but not the lower-level components.
Using Schema Compare with Composite Projects
Like Publish or Debug Deploy, Schema Compare can be used to deploy the composition to the target database. A single resolved model is created from the composition, which is then compared to the database, and the differences are presented in the results grid. While that is straightforward, it's a little trickier when working in reverse, i.e. if you need to update the projects from the database. The difficulty is that if Schema Compare targeted a resolved composite model it would be unclear where (i.e. into which project) new objects should be added. At this time Schema Compare does not resolve composite projects when they are the target, so you have to compare the database to each project in turn. However, when you compare the database to a project that describes just a part of it, Schema Compare will try to add all the other objects, which is not what you want. So how do we fix that?
First, let’s take a look at the problem. The screen shot below shows the Schema Compare results after comparing my TestDatabase with its corresponding TestDatabase project. In this case I have added a new table and changed a test procedure. While the change shows up clearly, the table being added (TestTable2) is lost in the noise. Schema Compare wants to add all the objects defined in my other projects to the TestDatabase. If I were to accept the default actions each of the projects in the composition would end up containing a full definition of the database.
While Schema Compare lets me exclude objects from the update, how do I quickly identify the right objects to exclude and do this repeatedly and reliably? Now we will see the benefit of organizing objects in each component into discrete schemas. If I group the Schema Compare results by schema (below) objects that belong to schemas defined in the other database projects are easily seen. This quickly highlights the candidate actions to apply to the TestDatabase project (in this case the change and add of objects in the dbo schema). And with the objects organized into groups like this it’s a simple matter to exclude all members of a group using the context menu option on the group (also shown below).
With the extraneous objects excluded and their schemas collapsed (below) you can concentrate on the changes that apply to the test project and decide which of these you want to take.
Having set up the Schema Comparison like this you can save the comparison definition as an .scmp file and include it in the target project. The saved scmp file includes the source and target schema designation, the options used, and, importantly, the list of exclusions. You will need to do the same for each project in the composition, and remember to update and save these files periodically as objects are added to projects or dacpacs in the composition. Double-clicking on the scmp file in each project will now open a pre-configured Schema Compare which can be used whenever you need to bring in changes from the database to that project.
I just wrote an article on SSDT Schema Comparison here: dattatreysindol.com/.../ssdt-schema-comparison-in-sql-server-using-sql-server-data-tools
Hmmm... that's a useful tip, but it's a shame that it requires the use of schemas. I'd prefer it if there were the ability to simply include composite projects in the target. Please consider that a feature request.
Is there a way to include composite objects in the .dacpac built by the project? I am unable to deploy my composite project from the .dacpac, as I get missing reference errors.
This all almost works for me, except that all changes from the composite project cannot be included in the update. They are detected, but all greyed out and won't be included in the update script.
Much like reported here
The solution there though won't work.
Not sure if this is a SQL Server version issue or something else; very frustrating though, as it makes composite DBs useless at the moment.
Thanks very much for the interesting article, how does it work when i need pre and post deployment scripts to be applied (for example in the lifecycle of the database I will have several releases which likely will need data migration etc.) To keep these changes consistent over all the environments I would need them to be applied to the base objects. However it looks as if only pre/post deployment scripts for test/main database would be created/executed.
@Niclas, calling this useless is an understatement.
I found it very useful and very easy! You can work around the schema compare issue by deploying to a local DB or a dacpac file and then comparing from there. Sure, it's not perfect, but this tool is great and it helps me a lot on my project! Besides, I'm pretty sure MS will make it better in the future.
Thanks so much MS for creating this amazing tool!
Not only is SSDT very slow and full of errors, it degrades productivity to almost zero when you have a model anything larger than a simple demo. 100+ tables, procedures, triggers etc. are unmanageable. SSDT lacks tools that help you create and check the DDL that you need to enter. You have to type everything yourself, including all the nitty-gritty details repeated on every object over and over, and to make things even worse, you have to put all objects into separate source files, one object per file, grouped into subfolders per object type instead of logically grouped per table/subject or whatever logical grouping. This makes entering and maintaining your model a real nightmare, with SSDT doing nothing to aid, if not doing everything to make it harder for the developer... And where it should be helpful when comparing models amongst each other or against a database, it has a terrible user interface that lacks any multi-selection possibilities or useful grouping, so here again it is a major productivity killer.
We used to work with Power Designer, which has a lot of features that do aid in being more productive by generating a lot -if not all- of the code from a model you create by mostly dragging, dropping and naming your objects in a graphical environment. PD also helps enforce general correctness in your model by automatically performing all sorts of logical checks on it. We decided to go with MS's SSDT because generating the change scripts from the model would be so much faster than it was from PD. How wrong we were: we've lost huge amounts of time (years!) because of all the functionality we now miss in SSDT and the many things we simply cannot do anymore. Going back to PD is however not an option anymore; our model is now locked into source files proprietary to MS's tools only. If you haven't made your mind up on a data modelling toolset yet: steer clear of SSDT!
The developer experience at my company was originally 85% hated SSDT while 15% thought it improved our ability to manage our database schema, track changes to schema objects and simplify our deployment process. We were using the "DB Changes" scripts which incrementally "morphed" the database to the latest definition but it was a painstaking process with a large team of developers.
@Richard, I will agree with you that SSDT on a large model (315+ tables, plus stored procedures, views, triggers) can be very slow. I also disagree that productivity is reduced to almost zero. I still use SQL Server Management Studio to design new entities on a database that was deployed using the SSDT's dacpac model. If I have several items, I will either group them into a single script or separate them out depending on the best way to organize them. Then I simply import one or more of these into the DB project and let the DB Project update existing files, automatically checking them out, merging the changes (like triggers), etc.
What we found to be extremely useful in speeding up performance was to add a couple of low-risk exclusions to our company's anti-virus policy. Maybe these might help others.
1. Add devenv.exe to the Scan on READ exemption list (low-risk).
2. Add files with the following extensions to the exclusion list only when in the %TMP% or %TEMP% folder paths: *.TMP, *.TEMPBIN and *.MODEL.
3. Add an exclusion for *.DBMDL files (not specific to a path since they're associated with the project's path).
In addition, some developers have configured a 3.5GB RAM drive and pointed their %TMP% and %TEMP% paths to it within a .CMD file which launches Visual Studio. This causes the building/parsing of the DB Projects to use the RAM drive. I opted to install a small 64GB SSD that's only used for my temp folder paths.
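A minimal sketch of such a launcher .CMD (the RAM drive letter and the Visual Studio install path are assumptions; adjust both for your machine):

```bat
@echo off
rem Redirect the temp folders to the RAM drive (assumed here to be R:)
rem so that DB project parsing writes its temporary files there.
set TMP=R:\Temp
set TEMP=R:\Temp
if not exist "%TMP%" mkdir "%TMP%"
rem Path below assumes Visual Studio 2012; adjust to your version.
start "" "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe"
```

Because the environment changes are local to this script, Visual Studio instances launched normally keep using the regular temp folders.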
Our SSDT projects (315+ tables, etc.) no longer "hang" near the end of the model parsing for 15, 20, 30 or 45 minutes (depending on the developer). It typically parses and is ready for use within 3-5 minutes, which could be considered lengthy, but it's been consistent regardless of the number of changes to the project and it has never "got stuck" again.