Most mature development shops use various diagrams to give a symbolic representation of high-level code and database structures. Standards such as Business Process Model and Notation (BPMN), Entity Relationship Diagrams (ERDs) and the Unified Modeling Language (UML) are a few I use all the time.
In the Distributed Computing (Cloud Computing) paradigm, these three diagrams (or their equivalents) become essential. In the past, I’ve been able to rely on a single architecture where my code will run. I understand the servers, the networking and the path the code takes between the client and the components within that architecture.
With Distributed Computing (DC), the architecture changes. In fact, the reason I usually use the term “Distributed Computing” instead of “Cloud Computing” (except in the title of this post, as you can see) is that I feel it’s more technically accurate about how we write code. I don’t view DC coding as an “all or nothing” exercise – I view it as just another option for solving a computing problem. A “hybrid” approach, where I mix in the strengths of a cloud provider, is often a great way to leverage the best cost, performance and other advantages of each part of the solution. It can also help keep data secure, provide options for High Availability and Disaster Recovery, and more.
To gain these advantages, we have to think in terms of the components of the application rather than a monolithic stack of components in a single architecture. And that brings us to the title of this post…
For us to correctly identify code components, database objects, security paths and other elements, we have to be able to conceptualize them. And that’s where those diagrams come into play. Starting with some sort of business or organizational need, we can use BPMN or UML use case (actor) diagrams to explain what the program needs to do. That helps separate the security and location requirements. For instance, if the BPMN shows access to Private Information, we can evaluate the need for an on-premises system that is federated to a DC provider. If the business users need global access, we can decide whether to set up a VPN to allow access to an on-premises system or whether a login component can be used on the web.
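That decision process can be sketched in code. The following is a toy illustration (not from any real tool) of the rule described above: a BPMN activity that touches Private Information argues for an on-premises, federated system, while the user population drives the choice between a VPN and a web login component. The function name and fields are hypothetical.

```python
def place_component(touches_private_data: bool, users_are_global: bool) -> str:
    """Suggest a hosting/access approach for one BPMN activity (illustrative only)."""
    if touches_private_data:
        # Sensitive data argues for keeping the system on-premises,
        # federated out to the Distributed Computing provider for identity.
        return "on-premises (federated)"
    if users_are_global:
        # A globally distributed user base favors a web-facing login component.
        return "web login component"
    # Local users and no sensitive data: a VPN to an on-premises system works.
    return "VPN to on-premises"

print(place_component(touches_private_data=True, users_are_global=True))
# -> on-premises (federated)
```

The point isn’t the code itself – it’s that the BPMN diagram hands you the inputs (data sensitivity, user locations) that make a decision like this mechanical rather than ad hoc.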
After determining the flow of the program, move on to the data the system will store. In the case of Windows Azure and SQL Azure, there are several options for storing data. In the past, I’ve often selected a single storage type, such as an RDBMS, and stored all program data there. Now we can store data in multiple formats, in multiple locations and more. The ERD is pivotal because it defines data types, which can help drive decisions about where things go. Another important aspect of the data decision that isn’t covered in an ERD (but perhaps should be) is the estimated size and growth of a datum, since that can also drive the decision on where to put a data component.
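As a hedged sketch of that idea, the helper below uses two ERD-adjacent facts – whether the datum is relational and its estimated size – to pick among the Azure-era storage options. The thresholds and the function itself are illustrative assumptions, not guidance from any official source.

```python
def choose_store(relational: bool, est_size_gb: float) -> str:
    """Suggest a storage target for one data component (illustrative thresholds)."""
    if relational and est_size_gb <= 150:
        # Relational data within a modest size fits a full RDBMS: SQL Azure.
        return "SQL Azure"
    if relational:
        # Large relational data may need to be split across databases.
        return "SQL Azure (sharded/federated)"
    if est_size_gb >= 1:
        # Large non-relational data (files, media) suits blob storage.
        return "Azure Blob storage"
    # Small non-relational entities suit key/value table storage.
    return "Azure Table storage"

print(choose_store(relational=True, est_size_gb=10))
# -> SQL Azure
```

Note how size and growth – the attributes an ERD doesn’t capture – are exactly what tips the choice between these stores, which is the argument for recording them alongside the diagram.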
From there, the UML diagram helps me understand where each computing element can live. Each type of computing has its strengths, and using the UML diagram I can place each code component in the environment best suited to its speed, security and other requirements.
So in the new Distributed Computing world, these graphical documents do much more than just help design the application – they can help define the architecture as well.