Alik Levin's Clarity, Technology, and Solving Problems | PracticeThis.com

May, 2010


    Solution Architecture For The Masses. Step 4: Design Your Presentation Layer – Part II

    This post is a follow-up to Part I. I am following the Web Application frame outlined in the Web Application Archetype. In Part I I covered Authentication, Authorization, Caching, Exception Management, and Logging & Instrumentation. In this post I will cover the rest of the Web Application categories: Navigation, Page Layout (UI), Page Rendering, Presentation Entity, Request Processing, Session Management, and Validation.


    Navigation

    To visualize navigation I am using ASP.NET's built-in TreeView control bound to the web.sitemap file, plus the SiteMapPath [breadcrumb] control. The navigation controls are located in dedicated ASCX files that are placed inside the formatted Master Page – more on that in the Page Layout (UI) section. Follow these steps:

    • Add a site map XML file to the root folder. Right-click the root folder of the web project and add a new item, a Site Map XML file. Site Map is located under the Web category:

    image

    • Edit the XML site map file to reflect the desired site structure, for example:

    <?xml version="1.0" encoding="utf-8" ?>
    <siteMap xmlns="http://schemas.microsoft.com...." >
        <siteMapNode url="default.aspx"
                     title="Home"
                     description="">
            <siteMapNode url="Restricted/UC1.aspx"
                         title="UC1"
                         description="" />
            <siteMapNode url="Restricted/UC2.aspx"
                         title="UC2"
                         description="" />
        </siteMapNode>
    </siteMap>

    • Create two ASCX controls in the Controls folder, Header.ascx and Sidebar.ascx.

    image

    • Double-click the Header.ascx file, switch to design mode, and drag a SiteMapPath control onto it:

    image

    • Double-click the Sidebar.ascx file, switch to design mode, drag a TreeView control onto it, and follow the wizard to configure it to use the XML site map (the resulting markup is sketched after these steps):

    image 

    • Add output caching to both ASCX controls (a combined markup sketch for both controls follows below):

    <%@ OutputCache Duration="3000" VaryByParam="None"%>
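    For reference, once the wizard completes and the caching directive is in place, the two user controls end up with markup similar to this sketch (control IDs are assumptions; the wizard wires the TreeView to a SiteMapDataSource, which reads web.sitemap by default):

    <%-- Sidebar.ascx: tree view bound to web.sitemap through a SiteMapDataSource --%>
    <%@ OutputCache Duration="3000" VaryByParam="None" %>
    <asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" />
    <asp:TreeView ID="TreeView1" runat="server" DataSourceID="SiteMapDataSource1" />

    <%-- Header.ascx: breadcrumb; SiteMapPath uses the default site map provider directly --%>
    <%@ OutputCache Duration="3000" VaryByParam="None" %>
    <asp:SiteMapPath ID="SiteMapPath1" runat="server" />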

    Page Layout (UI)

    Page layout is based on ASP.NET's built-in Master Pages. Think of a Master Page as a “server-side frameset”. The page is assembled on the server from the Master Page and the controls it hosts, rendered as a single HTML output, and sent to the browser for rendering. The advantages are:

    • Maintainability. Visual components implemented as separate ASCX files can be reused and changed separately from the rest of the UI, improving maintainability.
    • Performance. Breaking the UI into separate parts allows relatively static components such as the sidebar, header, and search control to be cached individually. This technique is known as partial Output Caching; for more info see Caching Explained. The fact that the page is assembled and cached on the server, either partially or as a whole, helps avoid heavy manipulation on the client using JavaScript and CSS, which in some cases can lead to severe responsiveness [performance] problems; for more info see Understanding Internet Explorer Rendering Behaviour.

    Follow these steps to implement page layout based on Master Page:

    • Right-click the root folder and add a new item – a Master Page. Master Page is located under the Web category. Call it Base.Master:

    image

    • Implement the HTML layout using an HTML table. Note, though, that from a rendering perspective table-based formatting is slower than using CSS. Keep in mind that because the page is broken into visual components, it can later be reimplemented using CSS without breaking it. From the guide:
      • Use Cascading Style Sheets (CSS) for layout whenever possible.
      • Use table-based layout when you need to support a grid layout, but remember that table-based layout can be slow to render, does not have full cross-browser support, and there may be issues with complex layout.
      • Use a common layout for pages where possible to maximize accessibility and ease of use.
      • Use master pages in ASP.NET applications to provide a common look and feel for all of the pages.
      • Avoid designing and developing large pages that accomplish multiple tasks, particularly where only a few tasks are usually executed with each request.
    • While in the Master Page, from the menu choose Table->Insert Table. Make it two rows and two columns, with the top cell spanning both columns. Notice the ContentPlaceHolder – this is where the actual dynamic content of the pages will be rendered – more on it in the Page Rendering section below:

    <table class="style1">
        <tr>
            <td colspan="2" align="left" valign="top">
            </td>
        </tr>
        <tr>
            <td  align="left" valign="top">
            </td>
            <td  align="left" valign="top">
        <asp:ContentPlaceHolder ID="ContentPlaceHolder1"
                                runat="server">
        </asp:ContentPlaceHolder>
            </td>
        </tr>
    </table>

    • Switch to the Design mode and drag Sidebar.ascx onto the left side of the table and the Header.ascx control onto the upper part of the table. The result should look as follows:

    image

    • The ContentPlaceHolder depicted above will hold the actual pages that “inherit” from the Master Page.
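    To make the relationship concrete, a content page that “inherits” from Base.Master looks roughly like the following sketch (the page name and title are assumptions; the code-behind attributes are omitted for brevity):

    <%@ Page Title="Transactions" Language="C#" MasterPageFile="~/Base.Master" %>

    <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
        <%-- page-specific content renders inside the master page's ContentPlaceHolder --%>
    </asp:Content>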

    Page Rendering

    Page rendering is the process of generating and displaying HTML in the browser. It usually involves displaying dynamic data from some data source such as SQL Server.

    For example, if I am required to present all transactions I have made, I’d follow these steps:

    • Add a new page to the Restricted folder [access to the page should be authorized – see Authorization in Part I]. To do so, right-click the Restricted folder, then Add -> New Item, and choose Web Form using Master Page. Name it Transactions. Click Add. Specify the available Master Page. Click OK.
    • From the toolbox, under the Data category, drag a ListView control onto the placeholder area.
    • Performance degrades with the number of items presented in the grid. Use paging to improve responsiveness for pages that present many rows. More info on paging: The DataPager Control, How To: Page Records Using AJAX, How To: Page Records in .NET Applications. Add a DataPager control. The result should look similar to this:

    image

    • Configure the DataPager to control the ListView:

    <asp:DataPager ID="DataPager1"
                   runat="server"
                   PageSize="3"
                   PagedControlID="ListView1">
    </asp:DataPager>
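    Note that the DataPager renders no pager links until pager fields are added inside it; a minimal sketch (the choice of NextPreviousPagerField and its button settings is an assumption):

    <asp:DataPager ID="DataPager1" runat="server"
                   PageSize="3" PagedControlID="ListView1">
        <Fields>
            <asp:NextPreviousPagerField ShowFirstPageButton="true"
                                        ShowLastPageButton="true" />
        </Fields>
    </asp:DataPager>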

    • The next step is to add the actual data to this view – the presentation entity described in the next section.

    Presentation Entity

    The presentation entity is the data being presented on the page. In my case it is TransactionInfo, which was implemented together with its business services in Step 2. I get the list of transactions generated by the corresponding business services component and then bind it to the ListView control for presentation. I also handle the paging event in the ListView1_PagePropertiesChanging function [with a little help from this post]. The code-behind looks similar to the code that follows this paragraph. Notice the IsPostBack check, which makes sure I do not run this code in the case of a postback [POST]. In that case the presentation is regenerated from the ViewState, so I save CPU cycles and improve performance in terms of response time and resource utilization:

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            BindListView();
        }
    }
    protected void ListView1_PagePropertiesChanging(
                        object sender,
                        PagePropertiesChangingEventArgs e)
    {
        this.DataPager1.SetPageProperties(e.StartRowIndex,
                                          e.MaximumRows,
                                          false);
        BindListView();
    }
    void BindListView()
    {
        List<TransactionInfo> transactions =
              TransactionServices.GetCurrentUserTransactions();
        ListView1.DataSource = transactions;
        ListView1.DataBind();
    }
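    For completeness, the ListView markup that this code-behind binds to would contain a LayoutTemplate and an ItemTemplate along these lines (a sketch; the table layout is an assumption, and the TransactionInfo fields shown are the ones encoded later in the Validation section):

    <asp:ListView ID="ListView1" runat="server"
                  OnPagePropertiesChanging="ListView1_PagePropertiesChanging">
        <LayoutTemplate>
            <table>
                <%-- the control with ID itemPlaceholder is replaced by one ItemTemplate per row --%>
                <tr runat="server" id="itemPlaceholder"></tr>
            </table>
        </LayoutTemplate>
        <ItemTemplate>
            <tr>
                <td><%# ((Entities.TransactionInfo)Container.DataItem).TransactionAccount %></td>
                <td><%# ((Entities.TransactionInfo)Container.DataItem).TransactionAmount %></td>
            </tr>
        </ItemTemplate>
    </asp:ListView>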

    In this specific case all the input was available immediately in the request. In fact, GetCurrentUserTransactions uses the IPrincipal and IIdentity interfaces to access HttpContext.Current.User and locate the actual current user internally, as described in Part I – that is why I do not pass any parameters to it. In some cases I will need to provide parameters submitted from different controls, as in a search scenario: the parameter is submitted by one ASCX control, and the rendering of the result is performed by another control or page. This is covered in the next section, Request Processing.

    Request Processing

    When looking at the guidelines for request processing, the recurring theme is separating the user interface, the processing, and the data. The patterns mentioned are MVC and MVP. Following these patterns achieves loose coupling between components, which increases maintainability. In other words, when one component changes it does not affect the other. One recurring scenario is search. I followed these steps to implement my search:

    • Create a new ASCX user control, SearchInput.ascx, by right-clicking the Controls folder, then Add -> New Item… -> Web User Control.
    • Place a text box and a button on SearchInput.ascx.
    • Double-click the button and add the following code to the event handler. That way, anyone who needs the search criteria can simply check the Page.Items collection without knowing who put it there. It follows the principle of loose coupling [or pub/sub, specifically]:

    protected void btnSearch_Click(object sender, EventArgs e)
    {
        string searchCriteria = txtSearch.Text;
        Page.Items.Add("SearchCriteria", searchCriteria);
    }

    • Add a new page under the Restricted folder, name it AccountSearchResult.aspx, and switch to design mode.
    • Drag onto it the SearchInput.ascx control.
    • Drag onto it a Label control.
    • Drag onto it a GridView control. The result should look similar to this:

    image

    • Add the following code to the page's PreRender event:

    protected void Page_Prerender(object sender, EventArgs e)
    {
        string searchCriteria =
                       Page.Items["SearchCriteria"] as string;
        if (null != searchCriteria)
        {
            Label2.Text = string.Format(
                "Your search for {0} generated the following results: ",
                searchCriteria);
            List<AccountInfo> accounts =
                    AccountServices.FindAccountsBySearchCriteria(
                                        searchCriteria);
            GridView1.DataSource = accounts;
            GridView1.DataBind();
        }
    }

    • The code assumes there is a search criterion in the page's Items collection; if it's there, it executes the search and binds the results to the grid. The rendered result should look like the following:

    image

    In this example the page that shows the results is completely decoupled from the search input control that accepts the input.

    Session Management

    The guide stresses the importance of session management with regard to scalability and performance: “When designing a Web application, an efficient and secure session-management strategy is important for performance and reliability.” Here are the variations and their implications:

    • Using in-proc Session state would require me to configure my load balancer for sticky sessions. Generally it's the most common approach and works fairly well for most scenarios. The biggest downside is that when one of the servers dies, the user's state dies with it. For example, if the user was in the middle of a transaction – it is all lost.
    • Using out-of-proc state would eliminate the risk of losing the state, since in this case the state is usually stored in a database. The downside is that performance is hurt as a result of the extra network hop, serialization, and security checks. I have observed a few cases where the performance hit was significant.

    For my case I will be using in-proc Session state. It is easy to implement, and I do not plan to store large data, which avoids memory pressure, serialization cost, and recycles. I am accepting the risk that an end user loses state due to a server failure, since my scenarios do not assume massive data input.
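    For reference, the choice comes down to a single web.config setting; a minimal sketch assuming the default 20-minute timeout (the out-of-proc alternatives are the StateServer and SQLServer modes of the same element):

    <system.web>
      <!-- In-proc session state: fastest option, but the state is lost when the
           worker process recycles or the load balancer routes the user elsewhere -->
      <sessionState mode="InProc" timeout="20" />
    </system.web>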

    Validation

    Input and data validation are extremely important aspects that affect a system's security and reliability. Making sure only sanitized input gets in prevents unnecessary exceptions that eat up CPU cycles. It also helps prevent injection attacks. Making sure the system produces encoded output prevents Cross-Site Scripting [XSS] attacks, which usually end in identity theft. For input and data validation I will use:

    • ASP.NET built-in validation controls.
    • The encoding security services I implemented in Step 3.
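    Since the Step 3 code is not reproduced here, the following is a hypothetical sketch of what such an encoding helper might look like, assuming it simply wraps the framework's HTML encoder; it is not the actual Step 3 implementation:

    // Hypothetical sketch of the encoding helper referenced below;
    // the real implementation is the security services component from Step 3.
    public static class EncodingServices
    {
        public static string HtmlEncode(string input)
        {
            // HttpUtility.HtmlEncode returns null when the input is null
            return System.Web.HttpUtility.HtmlEncode(input);
        }
    }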

    After reviewing the code I have produced so far, the following changes need to be made:

    • Add HTML encoding when echoing the input in AccountSearchResult.aspx:

    Label2.Text = EncodingServices.HtmlEncode(string.Format(
        "Your search for {0} generated the following results: ",
        searchCriteria));

    • Encode the output in Transactions.aspx:

    <%# EncodingServices.HtmlEncode(((Entities.TransactionInfo)Container.DataItem).TransactionAccount)%> ||

    <%# EncodingServices.HtmlEncode(((Entities.TransactionInfo)Container.DataItem).TransactionAmount.ToString())%> ||

    ...

    • Add a validation control to the search input control, SearchInput.ascx:

    <asp:RegularExpressionValidator
                                ID="RegularExpressionValidator1"
                                runat="server"
                                ControlToValidate="txtSearch"
                                ErrorMessage="Invalid input"
                                ValidationExpression="^[a-zA-Z'.\s]{1,40}$">
    </asp:RegularExpressionValidator>

    • To make sure the outcome of the validation control is taken into account, I need to check Page.IsValid in AccountSearchResult.aspx before rendering the results in the PreRender event:

    if (!Page.IsValid)
    {
        //THE BETTER WAY WOULD BE TO USE
        //VALIDATION SUMMARY CONTROL
        throw new ApplicationException("Invalid input provided.");
    }
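    As the comment hints, a gentler alternative to throwing is to let a ValidationSummary control display the validators' error messages; a minimal markup sketch (placement on the page is an assumption):

    <asp:ValidationSummary ID="ValidationSummary1" runat="server"
                           HeaderText="Please correct the following input:" />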



    Inspecting Solution For Performance

    In this post I'd like to share my approach to managing performance throughout the SDLC (Software Development Life Cycle). Before recently joining the Solution Engineering team I worked as a field consultant with MCS [Microsoft Consulting Services].


    My assignments included delivering performance workshops, reviewing architecture and design for performance, conducting performance code inspections, inspecting solution deployment for performance, and resolving performance incidents in production. In short, I was required to inspect the solution for performance at any phase of the development lifecycle.

    The Challenge

    The challenge I was constantly facing was how to efficiently and effectively communicate my recommendations to a customer. By customer I mean the Business Sponsor, Project Manager, Solution Architect, Developer, Test Engineer, Systems Engineer, and End User. Different roles, different focus, different languages. If I am not efficient, I'd be wasting the customer's time. If I am not effective, my recommendations won't be used.

    Performance Language

    What worked for me is establishing a common performance language across the team. First we’d agree on what performance is, and that is:

    • Response Time. Time that it takes for a server to respond to a request.
    • Throughput. Number of requests that can be served by your application per unit time.
    • Resource utilization. How much server and network resources are consumed.
    • Workload. Total number of users and concurrent active users.

    This simple definition helped when communicating performance goals with decision makers in the inception stages of a project.

    Then we'd agree on what affects performance the most. I could not find a better categorization than the Performance Frame:

    • Caching. Per user, application-wide, data volatility.
    • Communication. Transport mechanism, boundaries, remote interface design, round trips, serialization, bandwidth.
    • Concurrency. Transactions, locks, threading, queuing.
    • Coupling/Cohesion. Loose coupling, high cohesion among components and layers.
    • Data Access. Schema design; Paging; Hierarchies; Indexes; Amount of data; Round trips.
    • Data Structures/Algorithms. Choice of algorithm, Arrays vs. collections vs. DataSets vs. else.
    • Resource Management. Allocating, creating, destroying, pooling.
    • State Management. Per user, application-wide, persistence, location.

    This simple frame helped during the architecture/design phase when working with Solution Architects on creating a blueprint of the solution. It also helped during the coding phase when working with developers on improving their code for performance. During production incidents I'd usually run a series of questions to narrow down to a specific category. Once it was identified, I'd go off and use a specific performance scalpel to dissect the issue at hand.

    Following are two quick case studies of how a well-established performance language helped to effectively and efficiently improve performance – one during the architecture phase, and the other while solving a production performance incident.

    The Case Of Over-Engineered Architecture

    I was responsible for performance as part of the architecture effort for one of our customers. Based on the requirements, the team came up with a design that supported a high level of decoupling. The reason for it was enabling future extensibility and exposure to external systems. The design looked similar to this [the image is from Chapter 9: Layers and Tiers]:

     image

    The conceptual design recommended using WCF as a separate physical layer exposing the functionality to both external systems and to the application’s intrinsic UI. It also assumed DataSet as a DTO [Data Transfer Object].

    It is worth noting that one of the quality attributes was a very aggressive performance requirement in terms of response time. Using the Performance Frame, we reviewed the design for performance and identified that such a design would not be optimal with regard to performance:

    • Communication. Introducing another physical layer would incur latency due to serialization costs and security checks.
    • Concurrency. Extra measures would need to be taken for throttling communications between the layers, which would affect throughput (concurrency). Concurrency issues would introduce request queuing.
    • Data Structures/Algorithms. DataSets are less controllable in terms of what gets serialized and what does not when working with WCF. That would lead to extra dev effort, or to all-or-nothing serialization that would lead to extra latency.

    The next round of design improvements produced a conceptual design that exposed its functionality to its UI without going through a separate physical services layer, similar to this [the image is taken from Chapter 9: Layers and Tiers]:

    image

    We all agreed that this time the design was simpler and should foster better response time for its native UI, which accounts for 80% of the system's workload.

    The Case Of Ever Growing Cache

    A customer complained that the application was periodically throwing out all active users. The business was losing money, as the application was responsible for acquiring new customers.

    We reviewed the event logs for recycles and quickly identified that IIS was recycling because memory limits were being hit. After a quick interview with the team, we assumed the application was implementing caching in a less than appropriate way, or was using data structures in a way that caused the recycles. We took a few memory dumps and analyzed them using WinDbg. The culprit was indeed a DataTable instantiated as a static variable serving for caching purposes. Since the DataTable was instantiated as a static variable, it had no way to purge its values; it could only grow endlessly and cause the recycles. We were able to communicate our findings and recommendations back to the team using the performance language we had established earlier.
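    To illustrate the anti-pattern, here is a simplified sketch (not the customer's actual code) of a static DataTable used as a cache with no eviction, next to the direction we recommended:

    using System.Data;

    // Anti-pattern sketch: a static DataTable used as an application-wide cache.
    // Nothing ever removes rows, so the table grows until memory limits are hit
    // and IIS recycles the worker process.
    public static class ProductCache
    {
        private static readonly DataTable _cache = new DataTable();

        public static void Add(DataRow row)
        {
            _cache.ImportRow(row);   // rows accumulate for the lifetime of the AppDomain
        }
    }

    // Recommended direction: ASP.NET's Cache, which supports expiration and is
    // trimmed under memory pressure, for example:
    // HttpRuntime.Cache.Insert("products", table, null,
    //     DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);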

    Conclusion

    Performance, like security, is a never-ending story – it is work in progress. It is never too early to start improving both. The trick is setting clear goals and framing the approach that works best for you and the rest of the team.

    What works best for you and your team?

     


    Robust, Efficient, & Fast Data Access With LINQ to SQL

    In this post I have quickly captured the steps required to access a database using LINQ to SQL. I am reading the book LINQ in Action – a good read, easy and practical. Love it a lot.


    General ORM Limitations

    In the book the authors specify key limitations of existing ORM [object relational mapping] tools:

    “Some of their [ORM tools] main limitations include the following:

    • A good knowledge of the tools is required before being able to use them efficiently and avoid performance issues.
    • Optimal use still requires knowledge of how to work with a relational database.
    • Mapping tools are not always as efficient as handwritten data-access code.
    • Not all the tools come with support for compile-time validation.”

    I'd summarize the summary as “ORMs usually hurt the developer's and/or the code's performance.”

    Accessing Database with LINQ to SQL

    Summary of steps:

    • Step 1 – Create entity class
    • Step 2 – Write LINQ to SQL Query
    • Step 3 – Test your code

    The following section describes each step in detail.

    Step 1 – Create entity class

    I am using the Pet Shop database. I have created a simple ProductInfo entity [Table] class as follows:

    using System.Data.Linq.Mapping;   // provides the [Table] and [Column] mapping attributes

    [Table(Name = "Product")]
    public class ProductInfo
    {
        [Column (IsPrimaryKey=true, Name="ProductId")]
        public string ID { get; set; }
        [Column]
        public string Name { get; set; }
        [Column (Name="Descn")]
        public string Description { get; set; }
        [Column (Name="CategoryId")]
        public string Category { get; set; }
    }

    Notice the annotations for each property. The annotations actually map the class’ properties to the table’s fields.

    Step 2 – Write LINQ to SQL Query

    Next is creating the DataContext object – effectively the connection to the database, and then building the query:

    DataContext db = new DataContext(@"Data Source=.\sqlexpress;
                         Initial Catalog=MSPetShop4;
                         Integrated Security=True");
    var products =
        from product in db.GetTable<ProductInfo>()
        where product.Category.Equals("FISH")
        select product;

    Step 3 – Test your code

    To test the code I have dumped the values to the console and received the result:

    foreach (ProductInfo product in products)
    {
           Console.WriteLine("NAME {0} DESCRIPTION {1}",
                             product.Name,
                             product.Description);

    }

    image

    I also ran SQL Express Profiler to observe the SQL statement issued against the DB:

    exec sp_executesql N'SELECT [t0].[ProductId] AS [ID], [t0].[Name], [t0].[Descn] AS [Description], [t0].[CategoryId] AS [Category]
    FROM [Product] AS [t0]
    WHERE [t0].[CategoryId] = @p0',N'@p0 nvarchar(4000)',@p0=N'FISH'

    Analysis

    In the book the authors summarize the efficiency of the approach as follows:

    “Let’s sum up what has been done automatically for us by LINQ to SQL:

    • Opening a connection to the database
    • Generating the SQL query
    • Executing the SQL query against the database
    • Creating and filling our objects out of the tabular results”
    • [ALIKL] Closing/Disposing connection to the database

    As a performance guy I must also add that LINQ to SQL closes/disposes the connection automatically. In too many cases developers neglect closing/disposing the connection, which usually leads to connection leaks and, as a result, to unstable or less than optimal performance.
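    If deterministic cleanup is preferred over relying on that behavior, the DataContext from the example above can also be wrapped in a using block; a minimal sketch:

    // Disposing the DataContext guarantees the underlying connection is released
    // even if the query or its enumeration throws.
    using (DataContext db = new DataContext(@"Data Source=.\sqlexpress;
                         Initial Catalog=MSPetShop4;
                         Integrated Security=True"))
    {
        var products =
            from product in db.GetTable<ProductInfo>()
            where product.Category.Equals("FISH")
            select product;

        foreach (ProductInfo product in products)
        {
            Console.WriteLine("NAME {0} DESCRIPTION {1}",
                              product.Name,
                              product.Description);
        }
    }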

    Conclusion

    It seems like LINQ to SQL overcomes the limitations I mentioned at the beginning. For my Solution Architecture For The Masses series I am currently using an old-school database approach. Since the solution I have built utilizes a layered approach and the layers are abstracted from one another, I believe I will be porting the DAL [Data Access Layer] from ADO.NET to LINQ to SQL.

    Read the book LINQ in Action.

