December, 2007

  • Patterns for Great Architecture

    Patterns used for navigating between pages in ASP.NET:


    ASP.NET offers several APIs (Server.Transfer, Response.Redirect, and so on) for navigating between pages in a web application. But before choosing one of them, we should understand the pattern we want to follow, because the pattern determines which API is the right fit.

    Broadly, there are two patterns used for navigating between pages depending on the requirement:

    1. Loosely Coupled Architecture:
    In this pattern, each page has its own processing unit. If a page has input controls, its code-behind contains all the logic to build SQL queries from that input and fetch the result from the database. When it comes to displaying the result, another page (the results page) is invoked and the data from the previous page is passed to it. This is called a loosely coupled architecture because each page is given one responsibility and performs it without depending on other parts of the system: one page gathers user input and prepares the result, and another page displays the data. The figure below illustrates this architecture:

    [Figure 1: Loosely coupled navigation architecture]

    When to use this pattern:

    Now that we know what this architecture means, we can appreciate the scenarios where it fits. The possibilities are wide, but the clearest one is where you want a decentralized approach. Whenever the processing logic differs from page to page, we choose this approach: there is no point in delegating the processing logic to the results page, because doing so would force us to create many results pages. To avoid that, we keep the processing logic in the input-gathering page itself.

    How to implement:

    The default model of ASP.NET already provides this facility: each page posts back to itself unless we specifically configure its controls to post to another page. The steps involved are:

    a) Implement the input page with all the data-preparation logic in its code-behind. The controls on this page post back to the page itself.
    b) Invoke another page when the processing is done and pass the processed data to it. This second step can be achieved in several ways depending on the requirements; a few techniques are described below.

    a) On the first page’s code-behind (the one with the input controls), when all the processing is over, add the data to be transferred into the Context collection (Context.Items.Add() can be used for this) and then invoke the target page using Server.Transfer(). On the target page (the results page), retrieve the data from the Context.Items[] collection and typecast it if needed. The Context collection stays in scope on the target page this way. (Remember that we cannot use Response.Redirect() if we want to store our data in the Context collection, since the context dies the moment the request thread leaves the server.)
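
    A minimal sketch of this technique (the page, control, and helper names here are illustrative, not from the post):

    ```csharp
    // Source page code-behind: process the input, stash the result in
    // Context.Items, and transfer to the results page.
    protected void btnSearch_Click(object sender, EventArgs e)
    {
        DataTable results = RunSearch(txtQuery.Text);   // hypothetical helper
        Context.Items.Add("SearchResults", results);
        Server.Transfer("Results.aspx");
    }

    // Results.aspx code-behind: read the data back and typecast it.
    protected void Page_Load(object sender, EventArgs e)
    {
        DataTable results = (DataTable)Context.Items["SearchResults"];
        grdResults.DataSource = results;
        grdResults.DataBind();
    }
    ```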

    b) Another option is the Request property. Because Server.Transfer() preserves the current request, Request.Form on the target page still contains the form data posted by the source page, so we can read the source page's input directly from it.

    c) If you want the data to survive across multiple requests, store it in the Session collection instead of the Context collection. Everything else remains the same (and yes, we can use Response.Redirect() if we are using Session to store our data).

    d) A fourth way is to get a reference to the source page from the target page. Whenever we use Server.Transfer() to navigate to another page, we can access the source page through the Context.Handler property. So all we need to do is create storage on the first page (for example, a public property called Data on the source page) and store the result in it. On the target page, we can then access that property (in fact, everything that is public on the first page, including controls) via Context.Handler. This approach suits cases where we have many discrete data elements to transfer: instead of adding them to the Context collection on the first page, we expose them as properties on the page itself and access them all from the target page. A further benefit is that, unlike the Context/Session approaches, we do not need to typecast each item. More can be found here:
    http://msdn2.microsoft.com/en-us/library/6c3yckfw(vs.71).aspx
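
    A sketch of the Context.Handler approach (the type and member names are illustrative):

    ```csharp
    // Source page exposes its result through a public property.
    public partial class SearchPage : System.Web.UI.Page
    {
        public DataTable Data { get; private set; }   // hypothetical property

        protected void btnSearch_Click(object sender, EventArgs e)
        {
            Data = RunSearch(txtQuery.Text);          // hypothetical helper
            Server.Transfer("Results.aspx");
        }
    }

    // Results.aspx code-behind: a single cast to the source page type gives
    // strongly typed access to everything public on it.
    protected void Page_Load(object sender, EventArgs e)
    {
        SearchPage source = Context.Handler as SearchPage;
        if (source != null)
        {
            grdResults.DataSource = source.Data;
            grdResults.DataBind();
        }
    }
    ```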

    2. Coupled Architecture (Centralized approach):
    In this pattern, we designate a central page for processing the input and displaying the results, and create separate pages for gathering input. The input pages collect data from the end user and post it to the central processing unit (the results page). This is the approach where the source page posts directly to another page (the cross-page posting scenario). The following figure illustrates this:

    [Figure 2: Coupled (centralized) navigation architecture]

    When to use this pattern:

    This approach is used whenever we want to reuse code. If the same logic would otherwise be duplicated across multiple pages, making the code unmanageable, we create a central processing unit (the results page); all other pages simply collect data from end users and post it to the results page.

    How to implement:
     
    We can use the PostBackUrl property of controls to specify the target page. On the source page, we set the target page's URL in the PostBackUrl property of a control (for example, a submit button). This way, when the end user clicks the submit button, all the data of the input page is sent directly to the target page, so no code-behind is needed on the source page at all. A very clean way to implement this pattern.
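
    For example, the source page markup might look like this (the control names and target URL are illustrative):

    ```aspx
    <%-- The button posts the whole form directly to Results.aspx,
         so the source page needs no code-behind at all. --%>
    <asp:TextBox ID="txtQuery" runat="server" />
    <asp:Button ID="btnSubmit" runat="server" Text="Submit"
        PostBackUrl="~/Results.aspx" />
    ```

    On the target page, the posted values can then be read from the Request.Form collection.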

    Yet another model is to post to the source page itself first and then have the source page transfer to the target page without doing any processing of its own. We can do this by calling Response.Redirect() and passing the input data via query strings.
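
    A sketch of that model (the page and parameter names are illustrative):

    ```csharp
    // Source page code-behind: no processing, just forward the raw
    // input to the target page via the query string.
    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        Response.Redirect("Results.aspx?q=" + Server.UrlEncode(txtQuery.Text));
    }

    // Results.aspx code-behind: read the input and do the processing here.
    protected void Page_Load(object sender, EventArgs e)
    {
        string query = Request.QueryString["q"];
        // ... run the shared processing logic on query ...
    }
    ```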

    Rahul's

  • Patterns for Great Architecture

    ADO.NET Entity Framework


    The ADO.NET Entity Framework is Microsoft's step towards its Data Platform vision. In this post, I am going to talk about the architecture of the ADO.NET Entity Framework and the way it will change application development. Though it is in beta right now, it is expected to ship as part of Visual Studio 2008 around mid-2008.

    The Vision:

    If we look closely at the new development initiatives from Microsoft, we can clearly see where the Microsoft development platform is heading. Be it new language features in C# 3.0 like LINQ, new framework components like Windows Communication Foundation and Windows Workflow Foundation, or new features in products like BizTalk and SQL Server, one thing is common to all of them: abstraction. By abstraction I mean hiding the complex details and exposing simple interfaces, with one goal: letting developers concentrate on "what" they need rather than "how" to get it.

    Take LINQ as an example. Simply put, the primary aim of LINQ is to provide a common querying language inside the programming language itself, so that developers do not have to learn a new querying language (SQL, XPath, and so on) for each kind of data source they deal with. With LINQ, you don't need to know SQL or XPath to search a database or an XML data source: you just need to know LINQ, and you can query every data source with it.
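
    For instance, the same query syntax works against a plain in-memory collection (a small illustrative example):

    ```csharp
    // LINQ to Objects: the query shape would be the same against a
    // database (LINQ to SQL) or an XML document (LINQ to XML).
    int[] numbers = { 5, 12, 8, 3, 21 };

    var bigOnes = from n in numbers
                  where n > 7
                  orderby n
                  select n;   // yields 8, 12, 21
    ```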

    Now look at the structure of a typical LINQ query expression. You will find it similar to SQL, with keywords like "select", "from", and "where" along with a fleet of operators to work with. That is the typical flavour of a 4GL: SQL is a fourth-generation language, and in the same sense LINQ is one too. And what is a fourth-generation language, anyway? Any construct that lets me specify "what" I need without asking me "how" to get it.

    Take WCF as another example. You can now create generic services in WCF in no time that work on REST principles (REST is like the backbone of today's web), and with very little effort they can also work over completely proprietary protocols. In WCF we call them "services", a term as generic as the "object" type in any OO programming language. The same service can behave as a web service, use message queues, or use TCP, with just simple changes in configuration; the developer does not have to worry about the intrinsic details. In fact, whenever I personally design an architecture for Microsoft partners, I think of WCF or BizTalk whenever communication comes into the picture. These two technologies are key to enabling communication between software components as well as integration between disparate systems.

    The examples can go on and on, since Microsoft offers many such technologies. But the point I am trying to make is that Microsoft has a vision of introducing abstraction to such an extent that things become extremely simple and we can produce great software quickly and cheaply.

    With the ADO.NET Entity Framework, the vision remains the same: to abstract data access so that programmers spend their time writing "what" rather than "how". Imagine the 4GL concept spanning all layers: application development would be over in the requirement-gathering phase itself, because the moment we specified "what" we need from the application, the application would be ready. The ADO.NET Entity Framework is a small step towards that vision.

    Introduction:

    A typical application has three layers: presentation, business, and data access. The ADO.NET Entity Framework will change the way the data access layer is written. One way of looking at it is as an ORM (Object-Relational Mapping) framework. Any ORM framework aims to bridge the gap between the way data is represented in the database (or, more generally, the data source) and the way it is represented inside the application. It does this by mapping the classes and properties in your application to database tables and their columns, forming an extra layer between the data access layer and the data source. Developers therefore do not have to write code against the database, fetch data in some format, and convert it to fit the objects in the application; instead, they work on objects only, and the ORM framework translates the changes into the appropriate database calls.

    The ADO.NET Entity Framework is an ORM framework in this sense, but ORM frameworks are not a new concept; we have plenty of them (like NHibernate) in the market. What is different about the ADO.NET Entity Framework is that it is much more than an ORM framework. Not only does it bridge the gap between the representation of data in the application and in the database, it also aims to improve performance (where many ORM frameworks hurt it) and, more importantly, it integrates with the programming language itself (LINQ to Entities exists for exactly this reason), making the integration more seamless and easier to use.

    Architecture:

    The picture shown below is taken from one of the "Data Points" columns in MSDN Magazine; it clearly shows the high-level developer architecture of the ADO.NET Entity Framework. Let us dissect each element to appreciate the architecture better.

     

    [Figure: ADO.NET Entity Framework architecture]

    Entity Framework Layers (Logical + Mapping + Conceptual):

    The three Entity Framework layers shown in the diagram above are the core of the whole functionality; together they are called the EDM (Entity Data Model). The Entity Framework engine basically takes its decisions on the basis of the EDM. The first step in using the Entity Framework is to generate (and optionally edit, based on requirements) these application-specific layers. But do not worry: generating the EDM is a matter of a few clicks. You can generate an EDM using a database as a starting point and then modify the XML manually (or possibly with a modelling tool that may be available in a future release of Visual Studio). When you add an ADO.NET EDM to your project, a wizard walks you through the process of creating it.

    The Logical layer contains the entire database schema in XML format. It represents the story on the database side.

    The Conceptual layer is again an XML file; it defines the entities (your custom business objects) and the relationships that your application knows about. So if you say that each Customer can have many associated Orders, that relationship is defined in the conceptual layer even if it does not exist on the database side.

    The Mapping layer is also an XML file; it maps the entities and relationships defined at the conceptual layer onto the actual tables and relationships defined at the logical layer. This is the core of the entire Entity Framework, the piece that bridges the gap.

    Within the application we always interact with the Conceptual layer. To talk to it we can use:

    • A new language called Entity SQL directly (Entity SQL queries are executed through the EntityClient provider, which is very similar to the classic ADO.NET model).
    • LINQ to Entities, the language-integrated querying facility.
    • Object Services, which in turn uses an ObjectQuery object.
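
    For example, a LINQ to Entities query might look like this (the context and entity names here are hypothetical, as generated from some EDM):

    ```csharp
    using (NorthwindEntities context = new NorthwindEntities())
    {
        var londonCustomers = from c in context.Customers
                              where c.City == "London"
                              select c;

        foreach (var customer in londonCustomers)
            Console.WriteLine(customer.CompanyName);
    }
    ```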

    Whenever we commit changes from the application to the database (for example by invoking the SaveChanges() method), the appropriate queries are fired, and relationships are taken care of as well. We don't have to think about relationships at the application level, since all the mapping is done in the EDM layers. So the Entity Framework handles relationships quite neatly.
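
    A sketch of that update flow, again assuming a hypothetical generated context and Customer entity:

    ```csharp
    using (NorthwindEntities context = new NorthwindEntities())
    {
        // Work purely with objects; Object Services tracks the change.
        Customer customer = context.Customers
            .Where(c => c.CustomerID == "ALFKI")
            .First();
        customer.City = "Seattle";

        // The framework translates the change into the appropriate UPDATE.
        context.SaveChanges();
    }
    ```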

    Wrapping Up:

    The best wrap-up I can offer is what John Papa gave in his article:

    "The Entity Framework allows developers to focus on the data through an object model instead of the logical/relational data model. Once the EDM is designed and mapped to a relational store, the objects can be interacted with using a variety of techniques including EntityClient, ObjectServices, and LINQ.

    While the traditional objects such as the DataSet, DataAdapter, DbConnection, and DbCommand are still supported in the upcoming release of ADO.NET available in Visual Studio "Orcas," the Entity Framework brings major additions that open ADO.NET to new and exciting possibilities."

    There are other Microsoft initiatives in the Data Platform vision, like a framework to expose data as a service and tools for developing for occasionally connected architectures. More on those initiatives in future blog entries, once I get my hands as dirty with them as I have with the Entity Framework :).

     

    Rahul's
