Microsoft Dynamics NAV 2013 R2 shipped with a brand new feature that introduces significant challenges for the entire Microsoft Dynamics NAV channel: multitenancy. In simple words, multitenancy allows partners to handle hosting scenarios, on-premises and in cloud services, in an easier and more flexible way than in the past.
Before Microsoft Dynamics NAV 2013 R2, partners and customers could only use a single-tenant scenario (here also called Legacy mode).
Below is a short explanation of how table synchronization used to work in earlier versions.
A synchronization failure would typically be reported with an error like “The Object Metadata does not exist. Identification and values … “ when running the Microsoft Dynamics NAV Windows client.
The multitenancy feature has also changed how Microsoft Dynamics NAV developers have to deal with object changes, particularly table objects. Multitenancy implies that the table structure definition is stored in the application database and has to be applied to one or more separate data databases, called tenants. From a development perspective, this means that modifications made to a table object in C/SIDE are NOT directly applied to the SQL Server structure; a secondary action is needed to apply these modifications and make them persistent on the SQL Server side. This process is called synchronization. Microsoft Dynamics NAV 2013 R2, then, comes with a time decoupling between table metadata creation (C/SIDE) and data structure changes (SQL Server).
In order to simplify the design, the Microsoft Dynamics NAV development team decided to handle single- and multitenant scenarios in the same way (roughly speaking, a single-tenant / Legacy mode deployment is handled as a multitenant scenario with a single tenant database constantly mounted against an application database).
Below is a short explanation of how this works in practice.
Prompting for synchronization happens when:
- Performing ANY Microsoft Dynamics NAV client action.
For example, if a user opens the Microsoft Dynamics NAV Windows client, Microsoft Dynamics NAV Server starts applying the relevant structure changes to SQL Server, and the Windows client is not shown until all the changes are done on the SQL Server side.
- Running the Sync-NAVTenant Windows PowerShell cmdlet.
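In practice, rather than letting the first client action trigger a potentially long schema change, synchronization can be triggered explicitly before users connect. A minimal sketch using the NAV administration cmdlets; the installation path, instance name DynamicsNAV71, and the tenant ID default are assumptions based on a default NAV 2013 R2 setup:

```powershell
# Load the NAV administration cmdlets (path assumes a default
# NAV 2013 R2 installation).
Import-Module "C:\Program Files\Microsoft Dynamics NAV\71\Service\NavAdminTool.ps1"

# Explicitly synchronize the tenant so schema changes are applied
# to SQL Server before any user opens a client.
Sync-NAVTenant -ServerInstance DynamicsNAV71 -Tenant default -Force
```

This way the synchronization downtime lands in a planned window instead of in the first user's client session.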
Setting the “Prevent data loss from table changes” C/SIDE switch to “No” is intended to be used only as a last resort, in a pure multitenancy scenario or in test or staging environments where partners do not have any business data database mounted against the application database. Any other usage that deviates from this might lead to unpredictable results and even undesired data loss during upgrades or, even worse, in production environments.
Never, for any reason, change this parameter to “No” when developing against a single-tenant / Legacy mode database.
Since no validation is made against SQL Server (“Prevent data loss from table changes” was set to “No”), this might result in:
Same as Scenario 1 for points 1 and 2.
When synchronization is prompted, the changes will be triggered and applied to the SQL Server data structure.
Prompting for synchronization in a pure multitenant deployment happens when:
- Performing ANY Microsoft Dynamics NAV client action
- Running the Sync-NAVTenant Windows PowerShell cmdlet
- Mounting a tenant database
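In a multitenant deployment, the same explicit approach can be scripted across all mounted tenants, so that synchronization runs in a controlled maintenance window rather than on a client action or a mount. A sketch, with the installation path and instance name as assumptions:

```powershell
Import-Module "C:\Program Files\Microsoft Dynamics NAV\71\Service\NavAdminTool.ps1"

# Synchronize every tenant currently mounted against the instance,
# one at a time, during a planned maintenance window.
foreach ($tenant in Get-NAVTenant -ServerInstance DynamicsNAV71) {
    Sync-NAVTenant -ServerInstance DynamicsNAV71 -Tenant $tenant.Id -Force
}
```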
Based on the scenario depicted above, there is a risk of data loss and/or missing-synchronization issues if C/SIDE development (namely, work on table objects) is handled in a way that deviates from the prescribed paradigm.
Data loss issues:
These might arise typically in one of the following scenarios:
Thinking about potential data loss and synchronization issues is a big new challenge in the development environment, so some consideration and the following best practices are advisable. These apply to developing solutions for both single- and multitenant deployments.
Bottom line: it is worth mentioning that if a Microsoft Dynamics NAV client hang or disconnect happens due to a missing synchronization, or if a synchronization transaction was running in the background, the transaction rollback on the SQL Server side will take longer than the same transaction committing, depending on the type of changes, the resources available, etc.
If you do fall into this situation, it is strongly advisable not to stop or restart Microsoft Dynamics NAV Server, and to check through a SQL Server Profiler trace and/or SQL Server Management Studio whether the transaction has rolled back successfully.
Another blog post will follow this one, covering synchronization challenges and best practices when upgrading to Microsoft Dynamics NAV 2013 R2 from previous versions.
These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.
Gerard Conroy - Microsoft Dynamics UK
Abdelrahman Erlebach - Microsoft Dynamics Germany
Duilio Tacconi - Microsoft Dynamics Italy
Jasminka Thunes - Microsoft Dynamics Norway
Microsoft Customer Service and Support (CSS) EMEA
A special thanks to Jorge Alberto Torres & Jesper Falkebo from the Microsoft Dynamics NAV development team
Well, when I read the ridiculous hints under "Development Environment best practice", I can only conclude that NAV 2013 R2 quite obviously is, and remains, a misstep. Importing tables only one at a time? Guys, you've got a screw loose!
Hi NAV Anwender,
maybe the point related to table import has not been clear enough, so I will try to reformulate.
As long as you have a NAV Server that validates changes, you are on the safe side. Remember that every modification you make in C/SIDE is applied to the SQL structure only when you prompt for synchronization. If you have to make changes to several tables (and, e.g., these tables contain millions of records and you have to create indexes), it might be advisable to perform this action in chunks in order to reduce the downtime related to synchronization, and so that, in case of a transaction rollback, you do not have to wait a long time.
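The chunked approach described above could be scripted, for example, with the development environment's command-line object import followed by an explicit synchronization after each chunk. This is only a sketch; the chunk file layout, SQL server name, database name, and instance name are all assumptions:

```powershell
$finsql = "C:\Program Files (x86)\Microsoft Dynamics NAV\71\RoleTailored Client\finsql.exe"
Import-Module "C:\Program Files\Microsoft Dynamics NAV\71\Service\NavAdminTool.ps1"

# Import table objects one .fob chunk at a time and synchronize after
# each, so a failed synchronization rolls back a small change set only.
foreach ($fob in Get-ChildItem "C:\Deploy\Chunks\*.fob" | Sort-Object Name) {
    & $finsql "command=importobjects,file=$($fob.FullName),servername=MySqlServer,database=MyAppDb"
    Sync-NAVTenant -ServerInstance DynamicsNAV71 -Tenant default -Force
}
```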
I deeply perceive your frustration, and I am also aware that for Legacy mode this decoupling in synchronization might look technically useless, but it reduces the complexity and the caveats of having two different synchronization mechanisms for single tenancy and multitenancy.
We are running the original RTM build 35473 in a true multitenant production environment. When I import table changes to production I am forced to set "Prevent data loss" to No because if I don't I get the message "There are multiple tenants mounted on the server. Validating table layout changes to protect against data-loss is only supported with a single tenant mounted on the server."
If we go to Build 36310 will I continue to see this message in a true multitenant environment or will I finally be able to set "Prevent Data Loss" to Yes?
Can I ask why it's taken almost six months for this information to come out? If the relationship between the development environment and the underlying SQL database changes as a part of 2013 R2, would it not have been useful to include that in the 'What's New: Developer and IT Pro Changes for Microsoft Dynamics NAV 2013 R2' document that we rely on for information about architectural changes in the system? Did the development team not anticipate that this might cause issues? Genuinely curious.
By the way, the current link to that document still does not mention NAV -> SQL database changes, nor contain any mention of this new synchronisation paradigm. Should probably update it to mention these issues when you get a chance.
My apologies for not having made it clear in the "What's New" content for Microsoft Dynamics NAV 2013 R2 that you might need to change the way that you think about development. As always, we appreciate your feedback and are including it in our plans for the next release. I'll see if I can squeeze this into our upcoming update to the content in the MSDN Library as well, so thanks for pointing that out!
I hope you will like working with NAV 2013 R2, though, because there are many good changes in the box that can make life easier for many of us in the NAV world, such as an easier upgrade path.
NAV UA team
if a fob import results in data loss in a customer db, then "change the way that you think about development" is not really the right wording here.
Microsoft Visual F# MVP
And one small addition: Please don't call this big bug and total loss of trust a "new table synchronization paradigm".
Hello Eva, Hello Microsoft,
no. Just... no. I don't enjoy working with NAV 2013 R2, nor do I feel well when a customer wants it. And frankly, I don't see any good changes in the box. Who cares about easier upgrade paths? That's been broken since 2008, with the advent of NAV 2009. An easy upgrade path would be: make a *real*, *logical* backup, import it into a new database, do a conversion run with the new application, done. That's been gone for a long time now. Ditto for GUI and reporting. Now we're talking about *general*, *basic* fitness for the intended purpose. And it isn't fit, I can tell you. With this additional knowledge, I couldn't sleep as a NAV partner with live customers on a 2013 R2 product. I simply couldn't. Keeping your data turns out to be a lottery! A design like the one described above is simply asking for trouble. I hope you understand what's happening now. Your most loyal customers are losing their trust in the product and in your ability to SOLVE THE PROBLEM. Period.
with best regards
Hi Eva\Microsoft - Could I get an answer to my question above (looks like we have 2 different Bens posting...)? With Build 36310 in a true multitenant environment will I always be able to leave "Prevent Data Loss" set to Yes?
(Part 2 of answer)
Therefore – and here comes the answer to your original question:
When running with a single tenant, C/SIDE will ask the server to check the single tenant before saving the metadata. Immediately after the metadata (table object) has been saved by C/SIDE following a successful check, the server will do the actual schema sync in SQL.
This, however, is limited to the scenario of having only a single tenant, since that is the only situation where we can guarantee that we can check the database (we do not have any offline databases or 10,000 databases to check before we can say OK).
The “Protect against data loss” switch you see in C/SIDE therefore controls whether a check is required by the server before saving the metadata. If you turn this off, the server will attempt the change asynchronously and, depending on how the tenant is mounted, will either fail the mount or forcefully delete a field if so chosen.
For the reasons above, we cannot support check-before-save if you are running with multiple tenants; you might dynamically add/remove tenants before/after the change, and we would not be able to guarantee a successful check. Furthermore, the time between the check and the mount could be hours (even days) if you did operations on 10,000 tenants.
So the short answer to your question:
We support checking for emptiness in classic single-tenant mode (business as usual), but as soon as you step into true multitenancy, we do not recommend doing changes to a live system, and we do not support checking. Instead we recommend a structured deployment scheme where checks can be made in upgrade code if needed.
CTO / Principal Architect
Microsoft Dynamics NAV
(Part 1 of answer)
Before NAV 2013 R2, there was only one database and C/SIDE could access the database with ease. If you tried to delete a field, C/SIDE would make a query to the SQL server verifying that the field was empty before issuing the command to actually delete the field from the table. Finally C/SIDE would save the updated table object in the object table (2000000001).
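The kind of emptiness check described above can be illustrated with a direct query against the SQL database. This is only an illustration of the idea, not the actual query C/SIDE issued; the server, database, company, table, and field names are placeholders:

```powershell
Import-Module SQLPS  # ships with SQL Server; provides Invoke-Sqlcmd

# Before deleting a field, verify that it holds no data.
$result = Invoke-Sqlcmd -ServerInstance "MySqlServer" -Database "Demo Database NAV (7-1)" -Query @"
SELECT COUNT(*) AS NonEmpty
FROM [CRONUS International Ltd_`$Customer]
WHERE [Telex No_] <> ''
"@
if ($result.NonEmpty -gt 0) {
    Write-Error "Field contains data; deleting it would lose data."
}
```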
When we implemented multi-tenancy, we had to make a choice about the database. Do we continue with only one database and just put both tenant and company name in front of the table name, or do we go with one database per tenant?
The choice was an easy one. We went with one database per tenant. There are several reasons for this choice, but the most important was the implicit separation (protection) of data that you get from separate databases, but also clear separation of locking, the ability to easily back up one or more tenants' data, the possibility to scale the system by adding multiple SQL servers, each serving one or more tenants, etc.
We support thousands of concurrent tenants, and just imagine the number of tables that would be generated in a single database if we had chosen to stay with one: we are talking about millions of tables in a single database.
Now, if we support multiple databases (one per tenant), what should happen if a single tenant's database is “missing” when the server starts? Should we accept the situation and support the other 999 tenants, or fail the entire server start? Of course we need to support the situation where a tenant is missing (offline) for whatever reason. BUT this also means that we need to support a situation where a tenant database is “behind” because somebody made changes to the metadata (deleted a field) while the database was offline, or maybe somebody restored an old version, thereby introducing a “behind” situation.
Both these situations lead to a problem. Who should fix the database if the server discovers that a database is “behind”? C/SIDE might not even be available and only the NST knows about all mounted tenants…
Keeping the sync code in C/SIDE would require C/SIDE to be able to “see” all possible tenant databases at the time one does any change leading to a fall behind situation – clearly that was not possible either.
We solved these issues by allowing a database to be “behind” and/or offline and at the same time put all the information into the tenant database which would allow any NST to sync it up to current version when mounted. We did introduce protective mechanisms to avoid a random database to be mounted on a “foreign” server, but even that can be overridden.
This meant that the sync code moved from C/SIDE into the server itself – allowing the server to do any outstanding sync requests “just in time” e.g. when the tenant database was mounted. Moving the code to the server however, “broke” the classic “delete-a-field-with-check-for-non-empty” scenario that C/SIDE used to offer.
...continued in part 2
I am a dev and a long-time NAV person, and I have to admit I am confused after reading this.
Can this be summarized in a simple way that idiots can understand, since this has relevance to actual customer instances?
If I run single tenant, do I need to do anything different than before? Can I just make changes in live, or import objects in live without an issue (losing data or whatever other mysterious thing) and without ever worrying about understanding what this says?
And if I ever get to multi tenant (assuming either me as partner or my prospects don't give up on this mad and so far illogical ride that MS has taken us) does this mean that simply we should never import changes in live environment? And if not, what is the path to making updates to a multitenant deployment? Bring everyone down/disconnect everyone and then do it?
Thomas, thanks for your response, much more readable than the above. However, while I understand the logic, it is not always possible to maintain identical copies of both live and test. In the scenario below with 10,000 tenants, it would suggest that a thorough test for preventing data loss would mean mounting each tenant in turn. I think the interpretation for everyone is to conduct testing, and provided this is successful, in a multitenant environment you then have to set Prevent Data Loss to No. I cannot see any other way of importing any changes into a live multitenant environment (unless the behaviour of the latest development environment build has changed). Currently we test on our test system with all tenants (luckily not 10,000), but this is not as up to date as production. We do this by making the change or importing the FOB, then running the PowerShell sync, and we can see this then throws the data loss error if we have an issue with one of the tenants.
We are going to write some code to dig into the metadata table on test and 'interpret' that to SQL, and then run a check on live before it gets close. I do understand it can't be done synchronously, but I can't see why some code couldn't be written to do something similar. Out of all the multitenancy changes, table changes seem to be the only 'weak' area, and there are no particular best-practice steps (the sentence saying you should never change Prevent Data Loss seems ill thought out, as I cannot see how to introduce any changes to live short of dismounting all tenants and then mounting each singly!).
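A pre-flight check along these lines could look roughly like the sketch below: query each live tenant database directly for data in the fields the pending change would drop, before ever prompting synchronization. The server, database, table, and field names are all placeholders:

```powershell
Import-Module SQLPS  # provides Invoke-Sqlcmd

# Check every live tenant database for data that the pending table
# change would delete, without touching the NAV Server instance.
foreach ($db in @("TenantDb1", "TenantDb2")) {
    $rows = Invoke-Sqlcmd -ServerInstance "LiveSql" -Database $db -Query @"
SELECT COUNT(*) AS Cnt FROM [CRONUS`$Customer] WHERE [Old Field] <> ''
"@
    if ($rows.Cnt -gt 0) { Write-Warning "$db would lose data in [Old Field]" }
}
```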
Sorry, I can't imagine a 10,000-tenant environment in a world of 102,000 NAV customers (source: www.erpsoftwareblog.com/.../how-many-companies-use-microsoft-dynamics-erp).
Please focus on real scenarios. Synchronization must be a part of development/deployment and must not be triggered from the client by any action. Please review the synchronization solution in AX.
Just one stupid question: how many multitenant NAV installations are "live" worldwide? Seriously.
All this crap is only because of useless "multi-tenancy".
Never had such problems before.