An Example Presented in Both Coding Styles

Perhaps the best way to compare and contrast the imperative (stateful) coding style and the functional coding style is to present examples that are coded in both approaches.

This example uses some of the syntactic constructs that are presented in detail further on in this tutorial.  Don't worry if it contains code that you don't understand yet; it is presented so that you can see the big picture of how the two styles compare.  After you have read through the rest of the tutorial, return to these examples and review them if necessary.

The example consists of two separate transformations.  The problem that we want to solve is to first increase the contrast of an image, and then lighten it.  So we first brighten the brighter pixels and darken the darker pixels.  Then, after increasing the contrast, we increase the value of each pixel by a fixed amount.  (I'm artificially dividing this problem into two phases.  Of course, in a real-world situation, you would solve this in a single transformation, or perhaps using a transform specified with a matrix.)

To further simplify the mechanics of the transform, for the purposes of this example, we'll use a single floating point number to represent each pixel.  And we'll write our code to manipulate pixels in an array, and disregard the mechanics of dealing with image formats.

So, in this first example, our problem is that we have an array of 10 floating point numbers.  We'll define that black is 0, and pure white is 10.0.

  • The first transform – increase the contrast: we'll scale each pixel's distance from the midpoint (5.0) by 1.5.  If the pixel p is above 5, its new value is 5 + 1.5 * (p – 5); if it is below 5, its new value is 5 – 1.5 * (5 – p).  Further, we'll limit the range – a pure white pixel can't get any brighter, and a pure black pixel can't get any darker.
  • The second transform – brighten the image: we'll add 1.2 to every pixel, again capping the value at 10.0.
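
For example, after the first transform a pixel with the value 7.0 becomes 5 + 1.5 * (7.0 – 5.0) = 8.0, and after the second transform it becomes 8.0 + 1.2 = 9.2.  You can check these values against the program output below.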

When coding in a traditional, imperative style, a common approach is to modify the array in place, so that is how the following example is coded.  The example prints the pixel values to the console three times: unmodified, after the first transformation, and after the second transformation.

The following code is attached to this page (Example #1).

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
 
static class Program
{
    // Clamp a pixel value to the valid range of 0.0 (black) to 10.0 (pure white).
    private static double Limit(double pixel)
    {
        if (pixel > 10.0)
            return 10.0;
        if (pixel < 0.0)
            return 0.0;
        return pixel;
    }
 
    private static void Print(IEnumerable<double> pixels)
    {
        foreach (var p in pixels)
            Console.Write(String.Format("{0:F2}", p).PadRight(6));
        Console.WriteLine();
    }
 
    public static void Main(string[] args)
    {
        double[] pixels = new[] { 3.0, 4.0, 6.0, 5.0, 7.0, 7.0, 6.0, 7.0, 8.0, 9.0 };
 
        Print(pixels);
 
        // First transform: increase the contrast, modifying the array in place.
        for (int i = 0; i < pixels.Length; ++i)
        {
            if (pixels[i] > 5.0)
                pixels[i] = Limit((pixels[i] - 5.0) * 1.5 + 5.0);
            else
                pixels[i] = Limit(5.0 - (5.0 - pixels[i]) * 1.5);
        }
 
        Print(pixels);
 
        // Second transform: brighten every pixel by a fixed amount.
        for (int i = 0; i < pixels.Length; ++i)
            pixels[i] = Limit(pixels[i] + 1.2);
 
        Print(pixels);
    }
}

This example produces the following output:

3.00  4.00  6.00  5.00  7.00  7.00  6.00  7.00  8.00  9.00
2.00  3.50  6.50  5.00  8.00  8.00  6.50  8.00  9.50  10.00
3.20  4.70  7.70  6.20  9.20  9.20  7.70  9.20  10.00 10.00 

Here is the same example, presented using queries.  The following code is attached to this page (Example #2):

double[] pixels = new[] { 3.0, 4.0, 6.0, 5.0, 7.0, 7.0, 6.0, 7.0, 8.0, 9.0 };
 
Print(pixels);
 
IEnumerable<double> query1 =
    from p in pixels
    select p > 5.0 ?
        Limit((p - 5.0) * 1.5 + 5.0) :
        Limit(5.0 - (5.0 - p) * 1.5);
 
Print(query1);

IEnumerable<double> query2 =
    from p in query1
    select Limit(p + 1.2); 

Print(query2);

This example produces the same output as the previous one.

However, there are significant differences.  In the second example, we did not modify the original array; instead, we defined a couple of queries for the transformations.  Also, in the second example, we never actually produced a new array containing the modified values.  The queries operate in a lazy fashion: until the code iterates over the results of a query, nothing is computed.
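
One way to make the laziness visible is to put a trace statement inside the transformation.  The following sketch is not part of the attached example; the Console.WriteLine inside the lambda exists purely to show when the work happens:

IEnumerable<double> traced =
    pixels.Select(p =>
    {
        Console.WriteLine("transforming {0}", p);  // runs only during iteration
        return Limit(p + 1.2);
    });
 
Console.WriteLine("query defined; no pixels transformed yet");
Print(traced);  // the "transforming" lines appear only now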

Here is the same example, presented using queries that are written using method syntax (Example #3):

double[] pixels = new[] { 3.0, 4.0, 6.0, 5.0, 7.0, 7.0, 6.0, 7.0, 8.0, 9.0 };
 
Print(pixels);
 
IEnumerable<double> query1 =
    pixels.Select(
        p =>
        {
            if (p > 5.0)
                return Limit((p - 5.0) * 1.5 + 5.0);
            else
                return Limit(5.0 - (5.0 - p) * 1.5);
        }
    );
 
Print(query1);
 
IEnumerable<double> query2 =
    query1.Select(p => Limit(p + 1.2));
 
Print(query2); 

Because the second query operates on the results of the first query, we can tack the second Select onto the end of the previous call to Select (Example #4):

double[] pixels = new[] { 3.0, 4.0, 6.0, 5.0, 7.0, 7.0, 6.0, 7.0, 8.0, 9.0 };
 
Print(pixels);
 
IEnumerable<double> query1 =
    pixels.Select(
        p =>
        {
            if (p > 5.0)
                return Limit((p - 5.0) * 1.5 + 5.0);
            else
                return Limit(5.0 - (5.0 - p) * 1.5);
        }
    ).Select(p => Limit(p + 1.2));
 
Print(query1); 

This ability to simply tack the second Select onto the end of the first one is an example of composability.  Another way to think about composability is malleability: how much can we add to, remove from, inject into, or surround our code with other code without encountering brittleness?  Malleability allows us to shape the results of our query.
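
To illustrate, suppose we later decide that we only care about the pixels that end up pure white.  We can surround the existing query with one more standard query operator without touching what is already there.  (This variation is not part of the attached example; the Where predicate is just an illustration.)

IEnumerable<double> whitePixels =
    pixels.Select(
        p =>
        {
            if (p > 5.0)
                return Limit((p - 5.0) * 1.5 + 5.0);
            else
                return Limit(5.0 - (5.0 - p) * 1.5);
        }
    ).Select(p => Limit(p + 1.2))
     .Where(p => p >= 10.0);  // the added operator; everything above is unchanged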

All three of the approaches above that were implemented using queries have the same semantics and the same performance profile.  The code that the compiler generates for all three is basically the same.
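
This is because the compiler translates query expressions into calls to extension methods before generating code.  As a small illustration (not part of the attached example), the following two declarations compile to the same thing:

IEnumerable<double> usingQuerySyntax =
    from p in pixels
    select Limit(p + 1.2);
 
IEnumerable<double> usingMethodSyntax =
    pixels.Select(p => Limit(p + 1.2));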


Attachment: AnExamplePresentedInBothCodingStyles.cs

Comments
  • In your example:

        if (p > 5.0)
            return Limit((p - 5.0) * 1.5 + 5.0);
        else
            return Limit(5.0 - (5.0 - p) * 1.5);

    "(p-5.0)*1.5+5.0" and "5.0-(5.0-p)*1.5" are perfectly equivalent!

  • 'All three of the above approaches that were implemented using queries have the same semantics, and same performance profile.  The code that the compiler generates for all three is basically the same.'

    I beg to differ.

    The 'algorithmic' code would look similar or the same, but the difference in the underlying metaphor between the first example and the next two is large.  By requiring that copies be made of the initial array, there is an underlying churn being induced in the heap, a churn which requires garbage collection of those self-same constructs.  In this meager example those effects are relatively minor, but as things scale in complexity and size, they can become major.  That makes the difference larger than it might first appear.

    Memory churn scales with the number of compositions and the size of the intermediate result sets.  This is not unlike some of the issues exhibited by SQL queries during execution, which can have some marked (and quite nasty) side effects.

  • @R King:

    I think that maybe you misunderstood which queries I was referring to.  Example #1 is the algorithmic approach, which has a completely different performance profile from examples #2, #3, and #4, which are implemented via queries.  (I've labeled the examples above so that what I'm referring to is clear.)  #2 is implemented with query expressions, which are translated by the compiler into calls to extension methods.  Example #3 is the same query expressed in method syntax.  Example #4 is the same as #3, except it has the last Select tacked onto the end.  #2, #3, and #4 have the same performance profile.

    It is true that the queries induce a larger number of short-lived objects on the heap.  The garbage collector is optimized for handling many short-lived objects.  I have regularly used code similar to the final results of this tutorial on a set of documents that are fairly large: > 200 documents, each approx 50K in size.  The query code executes for all 200 of the documents in about 2 seconds.  The performance is very good.

    Regarding #2, #3, and #4, they don't create intermediate result sets as such, due to lazy evaluation.  So even if the source array were extremely large, the amount of long-term memory used wouldn't increase.

    I absolutely agree that there are certain scenarios where introducing a large number of objects on the heap would result in unacceptable perf.  But in those scenarios, you might choose another technology, such as C or C++.  If you were processing XML and needed good perf on extremely large XML documents, you might use a streaming parser such as SAX or XmlLite.

    One of the ideas behind LINQ is that we have these incredibly powerful computers, and in many circumstances, we can use the power of the computer to make the developer's job easier.  We don't care whether the resulting code runs in 0.02 seconds or in 2 seconds if the developer was able to write the code much faster.

    Does this make sense?

    -Eric

  • Eric,

    It does indeed make sense.  

    Let me give you a little of my background.  I've worked for 25+ years, many of them building very large scalable systems.  For the last 10 years or so I've been involved in hiring engineers, and I've run into a very large number of engineers who don't take these issues into account, even when the performance of websites and such depends on them.  I've worked in C++, Java, and for the last year C#.  At one level or another, all of these systems suffer from heap churn if you don't pay attention to what you are doing.  It's why I pointed out the issue.

    Thanks again for your continued thoughtful responses, and for your contribution to making the .NET environment as easy to use as it is.  I frequent your blog and find it most illuminating. :)

    Regards,

    RK


  • It should be mentioned that deferred execution makes the last example the most efficient.  Printing the result before and after the last Select causes two iterations through the array, while the last example scans the array only once.

    Deferred execution also creates a trap for novices: there is no need to follow the style presented in the third example.  Removing the intermediate printing from the second example creates code that is equivalent to the third.
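
    A quick way to verify the double iteration is to count how many times the contrast lambda runs.  (This sketch is not part of the attached example; the evaluations counter is purely illustrative.)

    int evaluations = 0;
    IEnumerable<double> contrast =
        pixels.Select(p =>
        {
            evaluations++;  // counts each run of the contrast lambda
            return p > 5.0
                ? Limit((p - 5.0) * 1.5 + 5.0)
                : Limit(5.0 - (5.0 - p) * 1.5);
        });
    IEnumerable<double> brightened = contrast.Select(p => Limit(p + 1.2));

    Print(contrast);     // first pass: evaluations is now 10
    Print(brightened);   // pulls from contrast again: evaluations is now 20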

  • Shorter code! :D

    double[] pixels = new[] { 3.0, 4.0, 6.0, 5.0, 7.0, 7.0, 6.0, 7.0, 8.0, 9.0 };

    Print(pixels);

    IEnumerable<double> query1 =
        pixels.Select(p => (p > 5.0)
                ? Limit((p - 5.0) * 1.5 + 5.0)
                : Limit(5.0 - (5.0 - p) * 1.5))
            .Select(p => Limit(p + 1.2));

    Print(query1);
