Diego Vega

Entity Framework news and sporadic epiphanies

  • Diego Vega

    Workaround for performance with Enumerable.Contains and non-Unicode columns against EF in .NET 4.0


    This week we announced the availability of EF6 alpha 2 (read the announcement post in our blog and Scott Guthrie's post for more information), which includes a great performance improvement for Enumerable.Contains. This reminded me of another improvement we made in EF5 (in the EF core bits in .NET 4.5) to the translation of queries using Enumerable.Contains over lists of strings, so that the database server can take advantage of existing indexes on non-Unicode columns.

    I have been asked many times how to make this work faster with .NET 4.0. Fortunately, there is a workaround.

    In .NET 4.0 we added the ability for EF to recognize when a comparison between a constant or a parameter and a non-Unicode column should not try to convert the column to Unicode. For example, assuming that CompanyName is a non-Unicode string column (e.g. varchar(max)), for a query like this:

     var q = context.Customers.Where(c => c.CompanyName == "a");

    We can produce a translation like this:

    SELECT
    [Extent1].[CustomerID] AS [CustomerID],
    [Extent1].[CompanyName] AS [CompanyName]
    FROM [dbo].[Customers] AS [Extent1]
    WHERE [Extent1].[CompanyName] = 'a'

    Rather than this:

    SELECT
    [Extent1].[CustomerID] AS [CustomerID],
    [Extent1].[CompanyName] AS [CompanyName]
    FROM [dbo].[Customers] AS [Extent1]
    WHERE [Extent1].[CompanyName] = N'a'

    For simple comparisons with simple string constants we can detect the pattern and do this automatically. For the cases in which we don't, we provide a method, EntityFunctions.AsNonUnicode, that can be used to explicitly indicate to LINQ to Entities that a constant should be treated as non-Unicode. For instance, you can write the query this way:

    var q = context.Customers.Where(c => c.CompanyName != EntityFunctions.AsNonUnicode("a"));

    But when Enumerable.Contains is involved, the workaround is more complicated, because the source argument of Contains is a collection of strings and you cannot directly tell EF that each element has to be treated as non-Unicode. For example, let’s say that we want to write the following query, but we want EF to treat the strings in the list as non-Unicode strings:

    var values = new[] { "a", "b", "c" };
    var q = context.Customers.Where(c => values.Contains(c.CompanyName));

    The following helper class defines a method that can be used to build a predicate with all the necessary LINQ expressions:

    public static class PredicateBuilder
    {
        private static readonly MethodInfo asNonUnicodeMethodInfo =
            typeof(EntityFunctions).GetMethod("AsNonUnicode");
        private static readonly MethodInfo stringEqualityMethodInfo =
            typeof(string).GetMethod("op_Equality");

        public static Expression<Func<TEntity, bool>> ContainsNonUnicodeString<TEntity>(
            IEnumerable<string> source,
            Expression<Func<TEntity, string>> expression)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (expression == null) throw new ArgumentNullException("expression");

            Expression predicate = null;
            foreach (string value in source)
            {
                // Builds: expression.Body == EntityFunctions.AsNonUnicode(value)
                var fragment = Expression.Equal(
                    expression.Body,
                    Expression.Call(null, asNonUnicodeMethodInfo,
                        Expression.Constant(value, typeof(string))),
                    false,
                    stringEqualityMethodInfo);
                predicate = predicate == null
                    ? fragment
                    : Expression.OrElse(predicate, fragment);
            }
            return Expression.Lambda<Func<TEntity, bool>>(
                predicate, expression.Parameters);
        }
    }


    You can then use the helper like this:

    var values = new[] { "a", "b", "c" };
    var q = context.Customers.Where(
        PredicateBuilder.ContainsNonUnicodeString<Customer>(values, a => a.CompanyName));

    The translation looks like this:

    SELECT
    [Extent1].[CustomerID] AS [CustomerID],
    [Extent1].[CompanyName] AS [CompanyName]
    FROM [dbo].[Customers] AS [Extent1]
    WHERE [Extent1].[CompanyName] IN ('a','b','c')

    As always, treat this sample code carefully. This is something I only tried on my machine and it worked for me. You should test it and verify that it meets your needs.

    What we are doing in this workaround is putting together a LINQ expression tree that is valid and that LINQ to Entities can parse and translate, but that would have been very hard (in fact I believe in this case it would not be possible) to get the compiler to produce directly.

    Hope this helps,


  • Diego Vega

    Tips to avoid deadlocks in Entity Framework applications


    UPDATE: Andres Aguiar has published some additional tips on his blog that you may also find helpful when dealing with deadlocks.

    Recently a customer asked a question about how to avoid deadlocks when using EF. Let me first say very clearly that I don’t actually hear about deadlock issues with EF often at all. But deadlocks are a general problem for database applications using transactions, and to answer this particular customer I collected some information that I then thought would be worth sharing on my blog in case someone else runs into problems:

    I won’t spend time on the basics of deadlocks or transaction isolation levels. There is plenty of information out there. In particular, here is a page in SQL Server’s documentation that provides general guidelines on how to deal with deadlocks in database applications. Two of the main recommendations that apply to EF applications are to examine the isolation level used in transactions and the ordering of operations.

    Transaction isolation level

    Entity Framework never implicitly introduces transactions on queries. It only introduces a local transaction on SaveChanges (unless an ambient System.Transactions.Transaction is detected, in which case the ambient transaction is used).

    The default isolation level of SQL Server is actually READ COMMITTED, and by default READ COMMITTED uses shared locks on reads, which can potentially cause lock contention, although locks are released when each statement completes. It is possible to configure a SQL Server database to avoid locking on reads altogether, even at the READ COMMITTED isolation level, by setting the READ_COMMITTED_SNAPSHOT option to ON. With this option, SQL Server resorts to row versioning and snapshots rather than shared locks to provide the same guarantees as regular READ COMMITTED isolation. There is more information about it in this page in the documentation of SQL Server.

    Given SQL Server’s defaults and EF’s behavior, in most cases each individual EF query executes in its own auto-commit transaction and SaveChanges runs inside a local transaction with READ COMMITTED isolation.

    That said, EF was designed to work very well with System.Transactions. The default isolation level for System.Transactions.Transaction is Serializable. This means that if you use TransactionScope or CommittableTransaction you are by default opting into the most restrictive isolation level, and you can expect a lot of locking!

    Fortunately, this default can be easily overridden. To configure Snapshot isolation, for instance, using TransactionScope you can do something like this:

    using (var scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
    {
        // do something with EF here
        scope.Complete();
    }

    My recommendation would be to encapsulate this constructor in a helper method to simplify its usage.
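    A minimal sketch of such a helper could look like this (the class and method names are my own, not an EF or .NET API):

    ```csharp
    using System.Transactions;

    public static class TransactionHelper
    {
        // Hypothetical helper that wraps the TransactionScope constructor shown
        // above, so callers get Snapshot isolation instead of the Serializable
        // default without repeating the TransactionOptions everywhere.
        public static TransactionScope CreateSnapshotScope()
        {
            return new TransactionScope(
                TransactionScopeOption.Required,
                new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot });
        }
    }
    ```

    With it, the earlier example reduces to `using (var scope = TransactionHelper.CreateSnapshotScope()) { ... scope.Complete(); }`.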

    Ordering of operations

    EF does not expose a way to control the ordering of operations during SaveChanges. EF v1 indeed had specific issues with high isolation levels (e.g. Serializable) which could produce deadlocks during SaveChanges. It is not a very well publicized feature, but in EF 4 we changed the update pipeline to use a more deterministic ordering for uncorrelated CUD operations. This helps ensure that multiple instances of a program will use the same ordering when updating the same set of tables, which in turn helps reduce the possibility of a deadlock.

    Besides SaveChanges, if you need transactions with high isolation while executing queries, you can manually implement a similar approach: make sure your application always accesses the same set of tables in the same order, e.g. alphabetical order.


    The recommendations to avoid deadlocks in EF applications boil down to:

    • Use snapshot transaction isolation level (or snapshot read committed)
    • Use EF 4.0 or greater
    • Try to use the same ordering when querying for the same tables inside a transaction
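    As an illustration of the last point, two code paths that both read Customers and Orders inside a transaction could agree to touch them in alphabetical order (MyEntities and the entity sets here are hypothetical names, in the style of the examples below):

    ```csharp
    // Every code path in the application that queries these two tables inside
    // a transaction does so in the same (alphabetical) order: Customers first,
    // then Orders. Consistent lock-acquisition order across concurrent
    // transactions removes one common cause of deadlocks.
    using (var scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
    using (var context = new MyEntities())
    {
        var customers = context.Customers.Where(c => c.City == "London").ToList();
        var orders = context.Orders.Where(o => !o.Shipped).ToList();
        // work with the results here
        scope.Complete();
    }
    ```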


    Hope this helps,

  • Diego Vega

    Exception from DbContext API: EntityConnection can only be constructed with a closed DbConnection


    UPDATE: After I posted this article we found that the plan we had to enable the pattern with context.Database.Connection.Open() would cause breaking changes in important scenarios: the connection, the EF model, and potentially the database would be initialized any time you somehow access the Database.Connection property. We decided to back out of this plan for EF 5, therefore we will revisit fixing this issue completely on EntityConnection in EF 6.0.

    In several situations we recommend opening the database connection explicitly to override Entity Framework's default behavior, which is to automatically open and close the connection as needed. You may need to do this if for example you are using:

    • SQL Azure, and you want to test that the connection is valid before you use it (although the failures in this situation have been reduced with the release of an update to SqlClient in August).
    • A federated database, and you need to issue the USE FEDERATION statement before you do anything else.
    • TransactionScope with a version of SQL Server older than SQL Server 2008, and you want to keep the transaction from being promoted to a two-phase commit.
    • TransactionScope with a database - like SQL CE - that doesn't support two-phase commit, and hence you want to keep the ambient transaction from being promoted.

    The code with the ObjectContext API usually looks similar to this:

    using (var context = new MyEntities())
    {
        context.Connection.Open();
        var query =
            from e in context.MyEntities
            where e.Name.StartsWith(name)
            select e;
        EnsureConnectionWorks(context.Connection);
        foreach (var entity in query)
        {
            // do some stuff here
        }
    }

    If you try to use similar code with a DbContext in the current version of Entity Framework, i.e. if you try calling context.Database.Connection.Open(), things won’t work as expected. Most likely, you will get an exception with the following message:

    EntityConnection can only be constructed with a closed DbConnection

    The issue occurs because the connection object exposed in the DbContext API (context.Database.Connection) is not an EntityConnection but the actual database connection. We made that design choice on purpose because it allows us to remove the need to learn about the existence of a whole API layer in order to use Entity Framework. Unfortunately, the choice also kills the pattern of opening the connection explicitly.

    If you are not curious about the technical implementation details, you just need to know that the best approach available in the current version of EF to avoid this exception and still control the lifetime of the connection is to drop down to the underlying ObjectContext instance and open the EntityConnection on it:

    ((IObjectContextAdapter)context).ObjectContext.Connection.Open();
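    Put in context, a DbContext version of the earlier ObjectContext example could look like this (MyEntities and EnsureConnectionWorks are the same hypothetical names used above):

    ```csharp
    using (var context = new MyEntities())
    {
        // Open the EntityConnection on the underlying ObjectContext so that
        // EF keeps the connection open instead of opening and closing the
        // store connection around each operation.
        ((IObjectContextAdapter)context).ObjectContext.Connection.Open();
        var query =
            from e in context.MyEntities
            where e.Name.StartsWith(name)
            select e;
        EnsureConnectionWorks(context.Database.Connection);
        foreach (var entity in query)
        {
            // do some stuff here
        }
    }
    ```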

    If everything goes according to plan, EF 5.0 will include changes that will make this unnecessary so that simply invoking context.Database.Connection.Open() will work.

    If you do want to hear what happens under the hood and how things will work in EF 5.0, here are some more details:

    Similar to other classes in the EntityClient namespace, EntityConnection was designed to behave like the DbConnection of a real ADO.NET provider. But the implementation of EntityConnection wraps the underlying database connection object and takes over its state. On the other hand, any time an ObjectContext needs the connection to perform some operation against the database, it asks the EntityConnection for its current state, and if it finds that the connection is closed, it infers that the implicit on-demand open/close behavior is needed.

    When you open the database connection exposed in context.Database.Connection, one of two things may happen:

    1. You may open the database connection before the underlying ObjectContext gets initialized: If this is the case the operation will succeed, but you will get the exception as soon as the underlying ObjectContext instance gets initialized (e.g. as a side effect of executing a query), because initializing the ObjectContext also involves initializing the EntityConnection. As the exception message says, the reason this fails is that the constructor of EntityConnection validates that the database connection passed to it is in the closed state. The main reason the constructor of EntityConnection only takes a closed database connection is to simplify the implementation and mitigate the need to synchronize the state between the two connection objects.
    2. You may instead open the database connection after the underlying ObjectContext gets initialized, in which case you won’t get the exception, but you won't get the desired effects either: EF will still close the connection after using it. The reason that happens is that EF checks the state of the EntityConnection and not the state of the real database connection. EntityConnection maintains its own state independently from the state of the underlying connection, so even if the database connection is in the open state, the EntityConnection will appear closed.

    We considered changing the behavior of EntityConnection in .NET 4.5 so that the constructor would accept an open database connection, and to make its connection state property delegate to the corresponding property of the underlying database connection. This would have meant that an EntityConnection could be now created in the open state. After some analysis we realized that things would get very tricky and that in certain cases the proposed changes could break existing applications. .NET 4.5 is an in-place update for .NET 4.0 so we are not making deliberate changes that may break existing .NET 4.0 apps.

    Instead we figured out a way (I think it was Andrew Peters who suggested it) to make the fix in the DbContext API by making the EntityConnection follow the state of the underlying database connection. DbConnection exposes an event, StateChange, that is perfect for this purpose, so we just subscribe to the event on the database connection and then call Open and Close on the EntityConnection as necessary. This implies that whenever someone accesses the context.Database.Connection property, the underlying ObjectContext and EntityConnection have to be initialized. This is a breaking change too, but one that we are willing to take given the benefits and given that EF 5.0 (i.e. the new version of the EntityFramework.dll) is not an in-place update for EF 4.x.

    We made one exception to this new behavior though: if you access the Connection property in context.Database during model creation (i.e. inside OnModelCreating) we won’t initialize the underlying ObjectContext (how could we, if we still don’t know what the final model is going to look like?).

  • Diego Vega

    Why Entity Framework vNext will be EF5 and nothing else


    This post touches on some history and on how different rationales have driven versioning of the Entity Framework for the last few years. I recommend you continue reading only if you care about learning how things went in detail; otherwise, here is all you need to know:

    • The Entity Framework now versions separately from the .NET Framework as some of its most important APIs ship outside .NET Framework (learn more details here)
    • We started using Semantic Versioning with EF 4.2, which means that in the future any version increment will follow simple rules and will respond exclusively to technical criteria
    • The next planned version of EF after EF 4.3.0 is EF 5.0.0 (or EF 5 for short). This version will take advantage of the improvements we have made in the core libraries of EF in .NET 4.5 to deliver much awaited features such as support for enum and spatial types, TVFs and stored procedures with multiple results, as well as significant performance improvements

    As with everything in this blog, this post represents my personal view and not the position of my employer or of my team (although I am doing my best to just present the facts :)).


    I have very little to say about how we thought about versioning during the development of the first version of EF. Perhaps other people in the team gave it more thought, but I think the majority of us were just focused on getting EF out of the door. As naïve as it might sound, I think when software engineers are working on the first version of a product, they seldom think about versioning.

    The only hint of versioning I remember being evident at the time was the many cool features we had to postpone to “vNext” because they didn’t fit in the schedule.

    4 = 2^2

    When development of what today is called EF4 started, we used to refer to it in various ways, like EF vNext, EF v2, etc. At the time the concern emerged that versioning EF separately from .NET would cause confusion among some customers. After all, the EF runtime was going to be only a library that would ship as part of the .NET Framework, much like the core ASP.NET technologies or WinForms, and there were no reasons in sight to think that would ever change.

    Although EF was a new library compared with ASP.NET and WinForms – which had been part of .NET from day one – at least with the information we had at the time it seemed that aligning the version of EF with the version of .NET would minimize fragmentation of the .NET brand and the confusion among a certain crowd.

    I feel tempted to say something like “in retrospective, we were wrong”. But what really happened is that there were too many important things we wouldn’t learn until later…

    4.1 = 4 + 5 CTPs + 1 Magic Unicorn

    Forward to June 2009, with .NET 4.0 still in development, we released the EF Feature CTP 1, which included the Self-Tracking-Entities template, the POCO template and the first version of Code First. The idea of the Feature CTPs was to have a vehicle to get semi-experimental features of EF out and gather customer feedback that would help us develop them or discard them rapidly. Then the plan was that once finished, those features would be integrated into .NET and Visual Studio in the next available window.

    That way Self-Tracking-Entities made it in time for the final version of Visual Studio 2010, and the POCO template was released out of band in the Visual Studio Gallery. Meanwhile, the most interesting piece of the EF Feature CTP 1, Code First, was a runtime piece, and the schedule to integrate runtime improvements into .NET 4.0 was already very tight. We also knew that the design could be improved substantially, and wanted to have multiple rounds of shipping previews and gathering feedback.

    Code First ended up missing the .NET 4.0 release. The only part that made it was the addition of DDL methods to ObjectContext and the EF provider model (e.g. CreateDatabase, DeleteDatabase, DatabaseExists).

    More than a year and several previews later, we released EF Feature CTP 4, which included for the first time another critical piece of the new EF: the “Productivity Improvements”, also known as the DbContext API. The .NET 4.0 train had departed long ago, but the .NET 4.5 train still hadn’t been announced.

    As the popularity of Code First and DbContext grew rapidly, it became obvious that we could not wait for the next version of .NET to release it.

    We ended up releasing the RTM version of Code First and DbContext under the name Entity Framework 4.1 in April 2011.

    At the time I remember some people asked me why we called it EF 4.1 and not EF 5. Version number 4.1 comes from the fact that it builds on top of version 4, and it is a purely additive and incremental change (i.e. it is a separate DLL that uses the public API of EF 4). There were many new features in 4.1, but we wanted to reinforce that it was just an incremental improvement and that EF 4 was still there.

    x = 4 + 1

    Things have changed a lot since we decided to call the second version EF 4. It was never a popular choice among those people that really care about versioning. Some have even suggested that it was a marketing stunt to make EF look like a more mature product than it was. Although not strictly an engineering decision, this was never the goal. Regardless, by the time we released EF 4.1, EF 4 was completely established.

    When deciding what to call the next major version of EF, we looked at different alternatives. We could have chosen to just wait and hope that things would align magically, e.g. make sure we didn't go over EF 4.4 based on .NET 4.0, and then say that the version of EF that released at the same time as .NET 4.5 was also EF 4.5. But there were other forces at work...

    Since the first Feature CTPs we have released more and more of our new functionality outside of .NET. At this point the way we release EF more closely resembles the model of ASP.NET MVC, which ships outside .NET and hence evolves at a different pace. We have achieved greater agility with the things that we ship out-of-band, and given the sustained customer demand for improvements, it only makes sense to move more pieces to the out-of-band releases. From this perspective, and for the long term, it makes more sense for EF to have versioning that is independent from .NET.

    When we looked for alternatives to rationalize our versioning system, we ran into Semantic Versioning almost immediately (Scott Hanselman had been evangelizing it for some time). Semantic Versioning is a versioning system that consists of a small set of very straightforward, common-sense rules.

    Adopting Semantic Versioning has lots of advantages. For starters, any software piece that has a dependency on, e.g., version 2 of a certain component can assume that it will work with any version greater than or equal to 2 and less than 3, as long as that component uses Semantic Versioning. This simplifies managing dependencies and authoring installers. SemVer is also not something completely new, but a formalization of common practices, therefore everyone can understand it.

    Last August we announced that we were considering Semantic Versioning and asked for feedback. Last October we made it official: we will be using "Entity Framework" to refer to the bits we ship outside the .NET Framework, and “EF core libraries” for the libraries we ship in .NET. We will continue versioning EF outside of .NET, but we will use Semantic Versioning. There is one caveat: we have to start counting versions from where we already are. Since we started versioning EF separately from .NET with EF 4.1, we actually had very few options but to continue doing so. Unfortunately there was no way we could go back in time and change the decisions we made years before. Decrementing version numbers was obviously not an option, and besides, changing the name of the product and resetting to v1 sounded like a really bad idea.

    x = 5.0.0

    Two days ago, in the announcement of the EF 4.3 beta, we mentioned that the next major version of EF, which will contain much awaited features like Enum and Spatial type support, will be called EF 5, and that a beta is just around the corner.

    It seems that this has triggered some questions even among people that usually follow the evolution of EF very closely. Here is why EF 5.0.0 is our only option:

    The Semantic Versioning spec clearly states:

    9. Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API. It MAY include minor and patch level changes. Patch and minor version MUST be reset to 0 when major version is incremented.

    There are in fact some breaking changes coming in EF5. For instance, DataAnnotations that in the .NET 4.0-based versions of EF were defined inside EntityFramework.dll have been moved to System.ComponentModel.DataAnnotations.dll in .NET 4.5.

    My favorite thing about Semantic Versioning is that it reduces the question of versioning to an engineering problem! EF 5.0 is what it has to be.

    Hope this helps make the reasons clearer,

  • Diego Vega

    Stored procedures with output parameters using SqlQuery in the DbContext API


    The DbContext API introduced in Entity Framework 4.1 exposes a few methods that provide pass-through access to execute database queries and commands in native SQL, such as Database.SqlQuery<T>, DbSet<T>.SqlQuery, and also Database.ExecuteSqlCommand.

    These methods are important not only because they allow you to execute your own native SQL queries, but because they are right now the main way you can access stored procedures in DbContext, especially when using Code First.

    Implementation-wise these are just easier-to-use variations of the existing ObjectContext.ExecuteStoreQuery<T> and ObjectContext.ExecuteStoreCommand that we added in EF 4.0. However, there still seems to be some confusion about what these methods can do, and in particular about the query syntax they support.

    I believe the simplest way to think about how these methods work is this:

    1. A DbCommand from the underlying ADO.NET provider is set up with the query text passed to the method.
    2. The DbCommand is executed with the CommandType property set to CommandType.Text.
    3. In addition, if the method can return results (e.g. SqlQuery<T>), objects of the type you passed are materialized based on the values returned by the DbDataReader.

    For a stored procedure that returns the necessary columns to materialize a Person entity, you can use syntax like this:

    var idParam = new SqlParameter {
        ParameterName = "id",
        Value = 1 };
    var person = context.Database.SqlQuery<Person>(
        "GetPerson @id",
        idParam);

    For convenience these methods also allow parameters of regular primitive types to be passed directly. You can use syntax like “{0}” for referring to these parameters in the query string:

    var person = context.Database.SqlQuery<Person>(
        "SELECT * FROM dbo.People WHERE Id = {0}", id);

    However this syntax has limited applicability and any time you need to do something that requires finer control, like invoking a stored procedure with output parameters or with parameters that are not of primitive types, you will have to use the full SQL syntax of the data source.

    I want to share a simple example of using an output parameter so that this can be better illustrated.

    Given a (completely useless :)) stored procedure defined like this in your SQL Server database:

    CREATE PROCEDURE [dbo].[GetPersonAndVoteCount]
    (
      @id int,
      @voteCount int OUTPUT
    )
    AS
    BEGIN
      SELECT @voteCount = COUNT(*)
      FROM dbo.Votes
      WHERE PersonId = @id;
      SELECT *
      FROM dbo.People
      WHERE Id = @id;
    END

    You can write code like this to invoke it:

    var idParam = new SqlParameter {
        ParameterName = "id",
        Value = 1 };
    var votesParam = new SqlParameter {
        ParameterName = "voteCount",
        Value = 0,
        Direction = ParameterDirection.Output };
    var results = context.Database.SqlQuery<Person>(
        "GetPersonAndVoteCount @id, @voteCount out",
        idParam,
        votesParam);
    var person = results.Single();
    var votes = (int)votesParam.Value;

    There are a few things to notice in this code:

    1. The primary syntax that the SqlQuery and ExecuteSqlCommand methods support is the native SQL syntax supported by the underlying ADO.NET provider. Note: someone mentioned in the comments that SQL Server 2005 won't accept this exact syntax without the keyword EXEC before the stored procedure name.
    2. The DbCommand is executed with CommandType.Text (as opposed to CommandType.StoredProcedure), which means there is no automatic binding for stored procedure parameters; however, you can still invoke stored procedures using regular SQL syntax.
    3. You have to use the correct syntax for passing an output parameter to the stored procedure, i.e. you need to add the “out” keyword after the parameter name in the query string.
    4. This only works when using actual DbParameters (in this case SqlParameters, because we are using SQL Server), and not with the primitive parameters that SqlQuery and ExecuteSqlCommand also support.
    5. You will need to read the whole results before you can access the values of output parameters (in this case we achieve this with the Single method), but this is just how stored procedures work and is not specific to this EF feature.
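    If a stored procedure returns no result set at all, the same parameter-passing pattern should work with ExecuteSqlCommand. A sketch (the CountAllVotes procedure is a hypothetical name, not one defined above):

    ```csharp
    var countParam = new SqlParameter {
        ParameterName = "voteCount",
        Value = 0,
        Direction = ParameterDirection.Output };
    // ExecuteSqlCommand runs the command to completion, so there is no
    // reader to drain: the output parameter is populated by the time
    // the call returns.
    context.Database.ExecuteSqlCommand(
        "CountAllVotes @voteCount out", countParam);
    var votes = (int)countParam.Value;
    ```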

    Once you have learned that you can use provider specific parameters and the native SQL syntax of the underlying data source, you should be able to get most of the same flexibility you can get using ADO.NET but with the convenience of re-using the same database connection EF maintains and the ability to materialize objects directly from query results.

    Hope this helps,

  • Diego Vega

    Self-Tracking Entities: ApplyChanges and duplicate entities


    Some customers using the Self-Tracking Entities template we included in Visual Studio 2010 have run into scenarios in which they call ApplyChanges passing a graph of entities that they put together in the client tier of their app, and then they get an exception with the following message:

    AcceptChanges cannot continue because the object’s key values conflict with another object in the ObjectStateManager.

    This seems to be the most common unexpected issue our customers run into when using Self-Tracking Entities. I have responded to it multiple times over email and have been meaning to blog about it for some time. Somehow I finally managed to do it today :)

    We believe that most people hitting this exception are either calling ApplyChanges on the same context with multiple unrelated graphs, or are merging graphs obtained in multiple requests, so that they end up with duplicate entities in the graph they pass to ApplyChanges.

    By duplicate entities, I mean more than one instance of the same entity: in other words, two or more objects with the same entity key values.
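    To make “duplicate” concrete, here is a small sketch that finds key values represented by more than one distinct object instance (the Make type and its Id key are hypothetical names, borrowed from the example further below):

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    public class Make
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class GraphChecks
    {
        // Returns the entity key values that appear on more than one
        // distinct object instance; these are the duplicates that would
        // make ApplyChanges throw the key-conflict exception.
        public static IEnumerable<int> FindDuplicateKeys(IEnumerable<Make> makes)
        {
            return makes.Distinct()          // distinct object instances
                .GroupBy(m => m.Id)          // grouped by entity key
                .Where(g => g.Count() > 1)   // keys with multiple instances
                .Select(g => g.Key);
        }
    }
    ```

    Two different instances with the same Id count as duplicates; the same instance appearing twice does not.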

    The current version of Self-Tracking Entities was specifically designed not to handle duplicates. In fact, when we were designing this, our understanding was that a situation like this was most likely the result of a programming error, and therefore it was more helpful to our customers to throw an exception.

    The problem with the exception is that it can be hard to avoid introducing duplicate entities. As an example, let’s say that you have a service that exposes three service operations for a car catalog:

    • GetModelsWithMakes: returns a list of car Models with their respective associated Makes
    • GetMakes: returns the full list of car Makes
    • UpdateModels: takes a list of car Models and uses ApplyChanges and SaveChanges to save changes in the database

    And the typical operation of the application goes like this:

    1. Your client application invokes GetModelsWithMakes and uses it to populate a grid in the UI.
    2. Then, the app invokes GetMakes and uses the results to populate items in a drop down field in the grid.
    3. When a Make “A” is selected for a car Model, there is some piece of code that assigns the instance of Make “A” to the Model.Make navigation property.
    4. When changes are saved, the UpdateModels operation is called on the server with the graph resulting from the steps above.

    This is going to be a problem if there was another Model in the list that was already associated with the same Make “A”: since you brought some Makes with the graph of Models and some Makes from a separate call, you now have two completely different instances of “A” in the graph. The call to ApplyChanges will fail on the server with the exception describing a key conflict.
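    To make the duplication concrete, here is a minimal sketch of the situation (the variable names are hypothetical, but the types match the catalog example):

    ```csharp
    // Make "A" arrived twice: once inside the graph returned by
    // GetModelsWithMakes, and once in the list returned by GetMakes.
    var makeFromModelsGraph = models.First(m => m.Make.Id == 1).Make; // instance #1
    var makeFromGetMakes = makes.Single(m => m.Id == 1);              // instance #2

    // Assigning the second instance to another Model links both
    // objects into a single graph...
    anotherModel.Make = makeFromGetMakes;

    // ...which now contains two distinct Make objects with the same key,
    // so ApplyChanges on the server will throw the key-conflict exception.
    ```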

    There are changes we have considered making in the future to the code in ApplyChanges in order to avoid the exception, but in the general case there might be inconsistencies between the states of the two Make “A” instances, and they can be associated with different Models, making it very difficult for ApplyChanges to decide how to proceed.

    In general, the best way to handle duplicates in the object graph seems to be to avoid introducing them in the first place!

    Here are a few patterns that you can use to avoid them:

    1. Only use Foreign Key values to manipulate associations:

    You can use foreign key properties to set associations between objects without really connecting the two graphs. Every time you would do something like this:

    model.Make = make;

    … replace it with this:

    model.MakeId = make.Id;

    This is the simplest solution I can think of and should work well unless you have many-to-many associations or other “independent associations” in your graph, which don’t expose foreign key properties in the entities.

    2. Use a “graph container” object and have a single “Get” service operation for each “Update” operation:

    If we combine the operations used to obtain car Models and Makes into a single service operation, we can use Entity Framework to perform “identity resolution” on the entities obtained, so that we get a single instance for each make and model from the beginning.

    This is a simplified version of “GetCarsCatalog” that brings together the data of both Models and Makes.

    // type shared between client and server

    public class CarsCatalog
    {
        public Model[] Models { get; set; }
        public Make[] Makes { get; set; }
    }

    // server side code

    public CarsCatalog GetCarsCatalog()
    {
        using (var db = new AutoEntities())
        {
            return new CarsCatalog
            {
                Models = db.Models.ToArray(),
                Makes = db.Makes.ToArray()
            };
        }
    }

    // client side code

    var catalog = service.GetCarsCatalog();
    var model = catalog.Models.First();
    var make = catalog.Makes.First();  
    model.Make = make;

    This approach should work well even if you have associations without FKs. If you have many-to-many associations, it will be necessary to use the Include method in some queries, so that the data about the association itself is loaded from the database.

    3. Perform identity resolution on the client:

    If the simple solutions above don’t work for you, you can still make sure you don’t add duplicate objects in your graph while on the client. The basic idea for this approach is that every time you are going to assign a Make to a Model, you pass the Make through a process that will help you find whether there is already another instance that represents the same Make in the graph of the Model, so that you can avoid the duplication.

    This is a more complicated way of doing it than the two solutions above, but using Jeff’s graph iterator template, it doesn’t really take a lot of extra code:

    public static class Extensions
    {
        // returns an instance from the graph with the same key, or the original entity
        public static TEntity MergeWith<TEntity, TGraph>(this TEntity entity, TGraph graph,
            Func<TEntity, TEntity, bool> keyComparer)
            where TEntity : class, IObjectWithChangeTracker
            where TGraph : class, IObjectWithChangeTracker
        {
            return AutoEntitiesIterator.Create(graph).OfType<TEntity>()
                .SingleOrDefault(e => keyComparer(entity, e)) ?? entity;
        }
    }

    // usage
    model.Make = make.MergeWith(model, (j1, j2) => j1.Id == j2.Id);

    Notice that the last argument of the MergeWith method is a delegate that is used to compare key values on instances of the TEntity type. When using EF, you can normally take for granted that EF will know what properties are the keys and that identity resolution will just happen automatically, but since on the client-side you only have a graph of Self-Tracking Entities, you need to provide this additional information.


    To summarize: some customers using Self-Tracking Entities are running into exceptions in cases in which duplicate entities are introduced, typically when entities retrieved in multiple service operations are merged into a single graph. ApplyChanges wasn’t designed to handle duplicates, and the best practice is to avoid introducing them in the first place. I have shown a few patterns that can help with that.

    Personally, I believe the best compromise between simplicity and flexibility is provided by a combination of the first and second patterns. For instance, you can use only foreign key properties to associate entities with reference/read-only data (e.g. associating an OrderLine with a Product in an application used to process Orders), and use graph containers to transfer data that can be modified in the same transaction, i.e. entities that belong in the same aggregate (e.g. an Order and its associated OrderLines).

    Hope this helps,

  • Diego Vega

    Wrapped System.IO.FileNotFoundException with Entity Framework POCO and Self-Tracking Entities T4 Templates


    Visual Studio 2010 and .NET 4.0 were released on Monday! The Self-Tracking Entities template is included in the box, and the POCO Template we released some time ago in Visual Studio Gallery is compatible with the RTM version.

    A few days ago we found a small issue in some of our code generation templates. It is really not a major problem with their functionality but rather a case of an unhelpful exception message being shown in Visual Studio when the source EDMX file is not found. So, I am blogging about it with the hope that people getting this exception will put the information in their favorite search engine and will find here what they need to know.

    If you open any of our code generation templates you will see near the top a line that contains the name of the source EDMX file, i.e. something like this:

    string inputFile = @"MyModel.edmx";

    This line provides the source metadata that is used to generate the code for both the entity types and the derived ObjectContext. The string value is a relative path from the location of the TT file to the EDMX file, so if you change the location of either, you will normally have to open the TT file and edit the line to compensate for the change, for instance:

    string inputFile = @"..\Model\MyModel.edmx";

    If you make a typo, or if for some other reason the template cannot find the EDMX file, you will generally see a System.IO.FileNotFoundException in the Error List pane in Visual Studio:

    Running transformation: System.IO.FileNotFoundException: 
    Unable to locate file. File name:
    'c:\Project\WrongModelName.edmx' ...

    Now, the exception message above is the version thrown by the “ADO.NET EntityObject Generator” (the default code generation template used by EF), and it is actually quite helpful, because it provides the file name that caused the error.

    On the other hand, if you are using the “ADO.NET POCO Entity Generator” or the “ADO.NET Self-Tracking Entity Generator”, the exception is wrapped in a reflection exception, and therefore you won’t directly get the incorrect file name:

    Running transformation: System.Reflection.TargetInvocationException: 
    Exception has been thrown by the target of an invocation. --->
    System.IO.FileNotFoundException: Unable to locate file ...

    Something very similar happens when you add the template to your project incorrectly. Our code generation templates have been designed to be added through the “Add Code Generation Item…” option in the Entity Data Model Designer:


    When you do it this way, we automatically write the name of your EDMX file inside the TT file. But if you add the template to the project in some other way, for instance, using the standard “Add New Item” option in the Solution Explorer, the name of the EDMX file will not be written in the TT file, and instead a string replacement token will remain:

    string inputFile = @"$edmxInputFile$";

    When this happens, again, the exception message you get from the EntityObject generator is quite helpful:

    Running transformation: Please overwrite the replacement token '$edmxInputFile$' 
    with the actual name of the .edmx file you would like to generate from.

    But unfortunately, for the POCO and the Self-Tracking Entities templates you will just get a wrapped System.IO.FileNotFoundException as in the examples above.

    In any case, the solution is always the same: open the TT file, and replace the token manually with the name or relative path to the EDMX file. Alternatively, remove the template files from the project and add them again using “Add Code Generation Item…”.

    I hope you will find this information helpful.


  • Diego Vega

    EF Design Post: POCO template generation options


    It has been some time since the last time my team posted anything to the Entity Framework Design blog. The truth is we have been extremely busy finishing the work on Entity Framework 4.

    It feels great to be almost done! And now one of my favorite phases of the product cycle begins, again... How to best address the needs of our customers with the people we have, in the time we have, with the highest quality we can achieve? How do we test this design? How do we implement this hard piece? How do we make the API make sense as a whole?

    As we start putting together more design ideas, we will need to validate them more often with our customers. Along those lines, today we published an article that describes some ideas we have on where we could go with the T4 templates we use for code generation, and in particular with the POCO template.

    My favorite idea by far is the “generate once” experience, because most of the time I find myself using code-gen just to create a starting point, and when possible, I prefer to extend my own POCO classes (e.g. adding data annotation attributes, new methods and properties) in the same code file, without having to resort to partial classes.

    And you, what would you like to see us do with templates in the next release?

  • Diego Vega

    What would you like to see in Entity Framework vNext?


    With Visual Studio 2010 and .NET 4.0 very close to RTM, many of us in the team are spending more and more time brainstorming about the features and experiences that we would like to include in the next release of EF. I don’t think I need to tell you how exciting that is :)

    During the development of the first two versions, one of the main sources of customer feedback has been the bugs and suggestions in Microsoft Connect. Up until now, whenever you filed a bug in Microsoft Connect for Entity Framework, it would typically take a couple of days for it to be routed to our own area in our internal TFS database. But today I heard the good news that we are getting our own page in Microsoft Connect!

    This will not only make our feedback channel more agile, but over time it will also make it possible for you to find all the feedback related to Entity Framework in a single place, and more easily vote for the features and capabilities that you care the most about.

    Looking forward to hearing from you!


  • Diego Vega

    Entity Framework and Data Services Teams are Hiring


    Just a quick note on this: Our team is hiring!

    If you think you have the skills and the will to improve how developers around the world deal with data in their applications, then this is a great opportunity to be in the forefront of the industry and also to become part of a nice group of geeks :)

    Click here and here to read the descriptions of the positions available, both in the role of Software Design Engineer in Test.

  • Diego Vega

    Entity Framework Feature CTP 2


    We have just released a new version of the Feature CTP that works on top of Visual Studio 2010 Beta 2. I have been focusing on Self-Tracking Entities a lot lately, and so it feels great to have this out for people to try it and give us feedback on it.

    The new version of Code-Only is great too.

    You can see the announcement here.


  • Diego Vega

    Standard generated entity classes in EF4


    A customer recently asked if there is still any advantage in using the entities that Entity Framework 4 generates by default instead of POCO classes.

    Another way to look at this is: why are non-POCO classes that inherit from System.Data.Objects.DataClasses.EntityObject and use all sorts of attributes to specify the mapping of properties and relationships still the default in EF4?

    This perspective makes the question more interesting to me, especially given that a great portion of the investment we made in EF4 went into adding support for Persistence Ignorance, that using POCO is my personal preference, and that I am aware of all the advantages this has for the evolvability and testability of my code.

    So, let’s take a closer look and see what we find.

    Moving from previous versions

    If you simply diff the entity code generated by the first version against what we generate nowadays, the first thing you will notice is that it hasn’t changed much.

    The fact that there aren’t many changes is actually a nice feature for people moving from the previous version to the new version. If you started your project with Visual Studio 2008 SP1 and now you decide to move it to Visual Studio 2010 (i.e. the current beta), it is a good thing that you don’t have to touch your code to get your application running again.

    It is worth mentioning that many of the improvements in the new version of EF (e.g. lazy loading) were designed to work with all kinds of entities, so they didn’t really require changes to the code we generate.

    Even if you later decide to regenerate your model to take advantage of new features (e.g. singularization and foreign key support), you might need to do some renaming, and some things may be simplified, but most things your code does will remain the same.

    New code generation engine

    As soon as you look under the hood, though, you will notice that we actually changed the whole code generation story to be based on T4 templates. This opens up lots of possibilities, from our customers customizing the code to suit their needs, to us releasing new templates for entity types optimized for particular scenarios. This last idea is exemplified by the work we have been doing on the POCO Template and the Self-Tracking Entities Template included in Feature CTP 1.

    At this point, we don't have plans to include templates for generating entities of other kinds in Visual Studio 2010, so the default, EntityObject-based template is the only one that is included “in the box”.

    Update: The Self-Tracking Entities Template will also be in the box in the RTM of Visual Studio 2010. Current thinking about the POCO Template is that it’s going to be available as an add-in through the Visual Studio Extension Manager.

    Change tracking and relationship alignment

    It is also important that default entities enjoy the highest level of functionality in Entity Framework. To begin with, they participate in notification-based change tracking, which is the most efficient. Also, navigation properties on default entities are backed by the same data structures Entity Framework uses to maintain information about relationships, meaning that any change you make is reflected immediately on the navigation properties on both sides.

    By comparison, plain POCO objects do not notify Entity Framework of changes made to them, and relationships are usually represented by plain object references and collections that are not synchronized automatically. To work well with POCO, Entity Framework needs to compare snapshots of property values and reconcile changes in linked navigation properties at certain points in a transaction. To that end, we introduced a new DetectChanges method that lets user code control explicitly when that change detection happens, and we also added an implicit call to it in SaveChanges.
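    With a pure POCO entity, the sequence looks roughly like this (the context and entity names are hypothetical; this is a sketch of the pattern, not exact product code):

    ```csharp
    using (var context = new AutoEntities())
    {
        var model = context.Models.First();
        model.Name = "Roadster"; // plain property setter: EF receives no notification

        // EF finds the change by comparing current values against the snapshot
        // taken when the entity was attached. SaveChanges does this implicitly,
        // but user code can also trigger it explicitly:
        context.DetectChanges();

        var entry = context.ObjectStateManager.GetObjectStateEntry(model);
        // entry.State should now be EntityState.Modified

        context.SaveChanges();
    }
    ```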

    As an alternative, we also introduced POCO Proxies, which inject most of the change tracking and relationship management capabilities of default entities into POCO types by means of inheritance. These proxies are created only if you make all the properties in the POCO class virtual and, when you need a new instance, you invoke the new ObjectContext.CreateObject&lt;T&gt; method.
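    Here is a sketch of what it takes to get a proxy (again with hypothetical model types):

    ```csharp
    // Making all properties virtual makes the type eligible for proxy creation
    public class Make
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual ICollection<Model> Models { get; set; }
    }

    // new Make() would produce a plain POCO instance;
    // CreateObject<T> returns a change-tracking proxy instead
    using (var context = new AutoEntities())
    {
        var make = context.CreateObject<Make>();
        make.Name = "Contoso";
        context.Makes.AddObject(make);
        context.SaveChanges(); // no snapshot comparison needed for proxied entities
    }
    ```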

    Again, why is non-POCO still the default?

    To summarize:

    a. Default code-gen classes provide the easiest path for people moving from the previous version

    b. When creating a model from scratch or from the database, you don’t even need to write the code for the entities themselves

    c. You never need to worry about invoking DetectChanges or about making sure your code always uses POCO Proxies

    d. Finally, if you really care about writing entities yourself, we make it very easy to opt out of code generation and start writing your own POCO classes.

    I hope this information is useful. So, now what kind of entity classes are you going to use?

  • Diego Vega

    WPF databinding with the POCO Template


    Update: with the new version of the POCO template available in Visual Studio Gallery (see here for more information), there is no need for this workaround. We decided to change the collection type for navigation properties to be based on ObservableCollection<T>, so that an IListSource implementation shouldn’t be required anymore.

    The new support for POCO in Entity Framework seeks to enable better ways of coding application domain logic without polluting domain classes with persistence concerns. Persistence Ignorance can in fact improve the maintainability, testability and evolvability of an application by making it possible to write domain classes that contain domain logic and nothing more.

    Persistence isn’t however the only common infrastructure service that might affect how you write your classes. Others, such as databinding and serialization, sometimes require more than POCO types to work really well.

    Disclaimer: I am not trying to enter a debate on whether you should use domain entities directly in databinding and serialization, I am aware of the recommended patterns :)

    For instance, to get databinding to fully work with a domain model, each object and collection type has to implement a number of interfaces for things like change notification, type description, capability negotiation, etc. That said, a decent level of databinding support can be achieved with simple POCO types and collection types that implement IList, such as List&lt;T&gt;.

    In the POCO Template that is included in the Entity Framework Feature CTP 1, we use T4 code generation to produce POCO entity classes and a “typed” ObjectContext class based on an Entity Data Model. The POCO Template emits a special collection type named FixupCollection that has the capability to synchronize changes on both sides of a relationship (typically a one-to-many relationship is represented as a collection on the one side, and as a reference in each object of the many side). But as a colleague of mine found today, since FixupCollection implements ICollection&lt;T&gt; but not IList&lt;T&gt;, WPF databinding will not work with it in read-write scenarios.

    If you try to bind to a collection emitted by the POCO Template (e.g. in a master-detail form) and then try to edit it, you will run into this exception message:

    'EditItem' is not allowed for this view.

    The exception indicates that WPF considers that the collection is read-only.

    Here is a way to overcome this:

    • Extend FixupCollection in a partial class to implement IListSource.
    • The implementation of IListSource.GetList has to return a binding list. For instance, I implemented a custom ObservableCollection that has the necessary hooks to update the underlying FixupCollection whenever elements are added or removed.
    • Currently, this doesn’t work the other way around: when entities are added to or removed from the underlying collection, the ObservableCollection is not updated.

    Here is the code:

    using System.Collections.ObjectModel;
    using System.ComponentModel;

    // TODO: update the namespace to match the one used by the generated code
    namespace Model
    {
        public partial class FixupCollection<TFrom, TTo> : IListSource
        {
            bool IListSource.ContainsListCollection
            {
                get { return false; }
            }

            System.Collections.IList IListSource.GetList()
            {
                return new FixupCollectionBindingList<TFrom, TTo>(this);
            }
        }

        public class FixupCollectionBindingList<TFrom, TTo> : ObservableCollection<TTo>
        {
            private readonly FixupCollection<TFrom, TTo> _source = null;

            public FixupCollectionBindingList(
                FixupCollection<TFrom, TTo> source)
                : base(source)
            {
                this._source = source;
            }

            // keep the underlying FixupCollection in sync with edits made
            // through the binding list
            protected override void InsertItem(int index, TTo item)
            {
                base.InsertItem(index, item);
                _source.Add(item);
            }

            protected override void RemoveItem(int index)
            {
                var item = this[index];
                base.RemoveItem(index);
                _source.Remove(item);
            }
        }
    }

    Hope this helps,


  • Diego Vega

    Beth Massi on Entity Framework + WPF


    I haven’t met Beth in person but I noticed her awesome blog posts and videos focused on using Entity Framework with WPF. Very useful stuff!

  • Diego Vega

    Rowan on Entity Framework Events and Alex’s Tips


    I don’t post much on my blog lately (too busy working on Entity Framework for .NET 4!), but this post from my teammate Rowan struck me as something that would help lots of customers, so I wanted to link to it. It explains basically everything you need to know about events available in the Object Services API of Entity Framework.

    While I am here, the tips series in my other teammate Alex James’ blog probably don’t need much publicity from me, but they are an awesome resource for customers.

    Update 8.11.2010: Broken link to Rowan’s blog.

  • Diego Vega

    Third post about POCO, first post about Code Only


    It is always busy here with all the improvements we are doing in Entity Framework to make your code work better with it. That is why I haven’t been posting to my blog much in the last months. Today however, there are two important posts from people that sit very close to me, so I am going to link to them.

    Faisal posted the third part in a series on the POCO experience with EF4. His post delves into the details of how snapshot change tracking compares with notification based change tracking and on some of the API considerations for it.

    Alex, who sits in my office (although he likes to think I sit in his :)) made the first post about the Code Only experience we are working on. I like to think of Code Only as “POCO on steroids”, because it not only gives you the right level of decoupling between your domain classes and the persistence framework, but it also puts mapping artifacts out of the way. I am especially fond of the way you can customize mapping using LINQ queries, although that feature is not going to be included in the first preview.

    Please go read the posts, play with the bits (you will need to wait a few weeks to play with code-only) and tell us what you think!

  • Diego Vega

    Server queries and identity resolution


    I answered a Connect issue today that deals with a very common expectation for users of systems like Entity Framework and LINQ to SQL. The issue was something like this:

    When I run a query, I expect entities that I have added to the context and that are still not saved but match the predicate of the query to show up in the results.

    The reality is that Entity Framework queries are always server queries: all queries, whether LINQ or Entity SQL based, are translated to the database server’s native query language and then evaluated exclusively on the server.

    Note: LINQ to SQL actually relaxes this principle in two ways:

    1. Identity-based queries are resolved against the local identity map. For instance, the following query will not hit the data store:

    var c = context.Customers
        .Where(c => c.CustomerID == "ALFKI");

    2. The outermost projection of the query is evaluated on the client. For instance, the following query will create a server query that projects CustomerID and will invoke a client-side WriteLineAndReturn method as code iterates through results:

    var q = context.Customers
        .Select(c => WriteLineAndReturn(c.CustomerID));
    But this does not affect the behavior explained in this post.

    In sum, Entity Framework does not include a client-side or hybrid query processor.

    MergeOption and Identity resolution

    Chances are that you have seen unsaved modifications in entities included in the results of queries. This is because, for tracked queries (i.e. when the query’s MergeOption is set to a value other than NoTracking), Entity Framework performs “identity resolution”.

    The process can be explained simply like this:

    1. The identity of each incoming entity is determined by building the corresponding EntityKey.
    2. The ObjectStateManager is searched for an already-present entity that has a matching EntityKey.
    3. If an entity with the same identity is already being tracked, the data coming from the server and the data already in the state manager are merged according to the MergeOption of the query.
    4. In the default case, MergeOption is AppendOnly, which means that the data of the entity in the state manager is left intact and is returned as part of the query results.
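    For instance, re-running a query with a different MergeOption changes what happens in step 3 (the Customer query here is hypothetical):

    ```csharp
    // AppendOnly (the default) preserves the values already in the state manager.
    // OverwriteChanges refreshes tracked entities with the server values instead.
    var query = (ObjectQuery<Customer>)context.Customers
        .Where(c => c.LastName.StartsWith("Z"));
    query.MergeOption = MergeOption.OverwriteChanges;
    var customers = query.ToList(); // tracked instances now reflect store values
    ```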

    However, membership of an entity in the results of a given query is decided exclusively based on the state existing on the server. In this example, for instance, what will the query return?

    var customer1 = Customer.CreateCustomer(1, "Tiger");
    var customer2 = Customer.CreateCustomer(2, "Zombie");
    // assumed setup: both entities are already tracked by the context
    context.Customers.Attach(customer1);
    context.Customers.Attach(customer2);
    customer1.LastName = "Zebra";               // modified, not yet saved
    context.DeleteObject(customer2);            // deleted, not yet saved
    var customer3 = Customer.CreateCustomer(100, "Zorro");
    context.AddObject("Customers", customer3);  // added, not yet saved
    var customerQuery = context.Customers
        .Where(c => c.LastName.StartsWith("Z"));
    foreach (var customer in customerQuery)
    {
        // which customers will show up here?
    }

    The answer is:

    1. The modified entity customer1 won’t show up in the query, because its LastName is still Tiger in the database.
    2. The deleted entity customer2 will be returned by the query, even though it is already marked as deleted, because it still exists in the database.
    3. The new entity customer3 won’t make it, because it exists only in the local ObjectStateManager and not in the database.

    This behavior is by design and you need to be aware of it when writing your application.

    Put another way, if the units of work in your application follow a pattern in which they query first, then make modifications to entities, and finally save them, discrepancies between query results and the contents of the ObjectStateManager cannot be observed.

    But as soon as queries are interleaved with modifications, there is a chance that the server won’t contain an entity that exists only in the state manager and that would match the predicate of the query. Such entities won’t be returned as part of the query results.

    Notice that the chances of this happening have to do with how long-lived the unit of work in your application is (i.e. how long it takes from the initial query to the call to SaveChanges).

    Hope this helps,

  • Diego Vega

    EntityDataSource and Bind: What are those square brackets?


    This post is about a small issue that I have seen in the forums and that often arises when EntityDataSource is used in combination with bound controls that use templates, like FormView or a GridView with template-based columns.

    You can usually create your page correctly simply by using the drag & drop features and context menus provided by the design surface. In the end, however, some Bind expressions will get enclosed in square brackets that produce compile-time or run-time errors, and your code may look like this:

        <asp:DropDownList ID="CategoryDropDownList" runat="server"
            SelectedValue='<%# Bind("[Category.CategoryID]") %>'>

    This annoyance may keep you blocked until you realize how simple the workaround is: just remove the square brackets!
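    That is, the markup from the example above becomes:

    ```aspx
        <asp:DropDownList ID="CategoryDropDownList" runat="server"
            SelectedValue='<%# Bind("Category.CategoryID") %>'>
    ```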

    People who watched my video here may have noticed that I had to remove the square brackets manually around minute 8:10. I wish I had blogged about it then, but at the time I recorded the video it wasn’t clear that the issue would still be there in the RTM version.

    Here is a short explanation:

    In order to adapt entity objects that do not have foreign key properties to work better with ASP.NET databinding, we decided to wrap each entity returned with a databinding wrapper object that, among other things, flattens the members of EntityKeys in related entities (for more details on how and why the EntityDataSource returns wrapped entities you can go here).

    The names of flattened reference keys are usually composed of the name of the corresponding navigation property plus the name of the key property in the related end. For instance, in a Product, the key of the associated Category will be exposed as “Category.CategoryID”. We decided to use dots to delimit the parts of the name because that plays well and is consistent with the use of dots in most programming languages, and also in Eval expressions, like the one used in the following code:

        <asp:Label ID="CategoryLabel" runat="server" 
            Text='<%# Eval("Category.CategoryName") %>'>

    The problem is that, for the particular case of Bind (often used in template-based databound controls to perform two-way databinding), the design-time component of ASP.NET will catch property names that contain dots and will try to escape them by surrounding them with square brackets (leading to a Bind expression of the form “[Category.CategoryID]”).

    Unfortunately, the rest of the ASP.NET components that need to evaluate the binding expression are not capable of parsing the square-bracket escaping notation (e.g. for DataBinder.Eval() the brackets actually indicate that a lookup on an indexed property is necessary, and there is no indexed property in this case).

    Given that the EntityDataSource in fact exposes property descriptors that contain dots in their names, the escaping is unnecessary. So, next time you find this issue, just remove the square brackets.

    The good news is that the issue will be fixed in the next release.

    Hope this helps!

  • Diego Vega

    Exposing EDM and database server functions to LINQ


    Alex published today a description Colin and I wrote on a new feature the team has been working on for LINQ to Entities.

    Beyond all the technicalities, it is a very simple, attribute-based way of exposing any arbitrary server-side function to LINQ. It goes beyond what LINQ to SQL does with SqlMethods, and it leverages our metadata system so that you don’t have to specify the full mapping of parameters in the attribute.

    The post itself may be a little boring ;), but the scenarios it enables are quite impressive.

    Read more here.

    Update: I remembered today that Kati Dimitrova and Sheetal Gupta also contributed to the document.

  • Diego Vega

    Quick tips for Entity Framework Databinding


    One of our customers asked this question yesterday on the Entity Framework forums. There were a few details missing and so I am not completely sure I got the question right. But I think it is about an issue I have heard quite a bit, and so I think it may be useful to share my answer here for others.

    Given this simple line of code (i.e. in a WinForms application):

    grid.DataSource = someQuery;

    Several things will happen under the hood:

    1. Databinding finds that ObjectQuery<T> implements the IListSource interface, and calls IListSource.GetList() on it to obtain a binding list (an internal object that implements IBindingList and some other well-known interfaces that databound controls know how to communicate with)
    2. GetList() executes the query and copies the contents of the resulting ObjectResult object to the binding list
    3. The new binding list is finally passed to the databound control
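    Roughly, the steps above amount to the following (a sketch only; the binding list type itself is internal, so your code only sees it as an IList):

          // what 'grid.DataSource = someQuery' does under the hood
          IListSource listSource = someQuery;      // ObjectQuery<T> implements IListSource
          var bindingList = listSource.GetList();  // executes the query and copies the results
          grid.DataSource = bindingList;           // the control talks to the binding list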

    When a binding list is created this way from a tracked query (i.e. MergeOption in the query is set to anything but NoTracking), there are several other interesting details in the behavior:

    1. Changes made to entities in the binding list will get noticed by the state manager. Therefore changes will be saved to the database when SaveChanges is invoked on the tracking ObjectContext
    2. Additions of new entities to the context will not result in the new entities being added automatically to a binding list (this is a common expectation). Whether an entity belongs to a binding list is decided at the time the binding list is created. If we wanted to do something different (i.e. have the binding list get new objects added to the context automatically), we would hit some considerable obstacles:
      • The binding list does not remember which filter condition was used in the query.
      • Even if it did, our queries are server queries: whether expressed in ESQL or LINQ, they are always translated to the native SQL of the database and evaluated in terms of the current values on the server.
      • Even for LINQ queries, we cannot assume that the same query will have equivalent behavior while querying in-memory objects when compared to the same query translated to the native SQL of the database.
    3. As a consequence of #2: You can have multiple binding lists based on the same EntitySet with overlapping or disjoint sets of entities. You can use different queries (or even from the same query, but with different parameters) to get a different set of entities in each binding list.
    4. Deletion of entities from the context will result in the deleted entity being removed from all binding lists that contain it. The principle behind this is that the binding list is a window into the "current state of affairs" in the state manager. Even if we don't know whether a new entity belongs in a binding list, we do know when it no longer belongs, because it has been deleted.
    5. A binding list has its own Add() method you can use. If you get a reference to one of our binding lists, you can “manually” add new entities to it, and the entity you add will also be added to the context automatically, the same as if you had invoked AddObject() on the context.

    All these facts, especially #5, lead me to a couple of tips:

    1. If you are interested in getting a reference to the binding list directly (i.e. to use its Add method), you can do something like this:
      var bindingList = ((IListSource)query).GetList() as IBindingList;
      There is a more convenient ToBindingList() extension method included in the EFExtensions you may want to take a look at.
    2. If you are not interested in getting the reference to the binding list yourself, and you are binding directly to a query as in the sample at the top of this post, you should know that WinForms will call IListSource.GetList() twice, causing the query to be executed on the server twice! The recommendation, then, is to bind to the results rather than the query, using code similar to this:
      grid.DataSource = someQuery.Execute(MergeOption.AppendOnly);
      In this case, since the resulting ObjectResult acts as a forward-only, enumerate-once cursor, its implementation of IListSource.GetList() is different from the one in the query: the call will consume the results (i.e. it will iterate over the entire result set) and will cache the binding list in the ObjectResult instance. All subsequent calls to IListSource.GetList() will return the same binding list.
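    Putting the two tips together in one sketch (they are alternatives, depending on whether you need the list reference; someQuery and newCustomer are placeholder names):

          // tip #1: grab the binding list so you can use its Add() method
          var bindingList = ((IListSource)someQuery).GetList() as IBindingList;
          bindingList.Add(newCustomer); // also adds the entity to the context

          // tip #2: bind to the executed results to avoid a second round trip
          grid.DataSource = someQuery.Execute(MergeOption.AppendOnly);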

    We have a few ideas around things we may improve in future versions. Some of them you may be able to guess from this explanation. But I will save that for another post...

  • Diego Vega

    Jarek publishes his excellent Entity Framework POCO Adapter


    I have been back from vacation for some time but I haven't had time to post anything (in fact, I was on vacation the day Entity Framework went RTM in .NET 3.5 SP1!).

    Finally, something happened that I cannot wait to talk about.

    Jarek and I had been discussing for a long time how persistence-ignorant classes could be supported by building a layer on top of the EF v1 API.

    He finally figured out a good way to do it. The result is this sample that wraps Entity Framework APIs and uses simple code-generation to create "adapters" and "proxies" for your POCO classes:

    Entity Framework POCO Adapter

    You can read Jarek's blog post explaining it all.

    We hope people will find it interesting and will send useful feedback that can help us improve the core product.

  • Diego Vega

    Sample Entity Framework Provider for Oracle now Available


    This new sample builds on top of System.Data.OracleClient and showcases some techniques a provider writer targeting databases different from SQL Server can use.

    The code is not meant for production; it is just a sample directed at provider writers. It also has a few limitations, related both to the SP1 Beta bits and to types not supported in OracleClient.

    For more details, read Jarek's post.

    You can download the source code from our home page in Code Gallery.

  • Diego Vega

    Entity Framework Sample Provider Updated for SP1 Beta


    Just to get the news out: The updated version of the Entity Framework Sample Provider that is compatible with .NET 3.5 SP1 Beta is now available in our Code Gallery page. From the description:

    The Sample Provider wraps System.Data.SqlClient and demonstrates the new functionality an ADO.NET provider needs to implement in order to support the ADO.NET Entity Framework:

      • Provider Manifest
      • EDM Mapping for Schema Information
      • SQL Generation

    Update: for more details on provider API changes since the Beta3 release, you can read Jarek's post here.

  • Diego Vega

    EntityDataSource's flattening of complex type properties


    A few days ago I explained the rules of wrapping in this blog post. But why do we wrap after all?

    Julie asked for some details today in the forums. I think the answer is worthy of a blog post.

    In ASP.NET there are different ways of specifying which property a databound control binds to: Eval(), Bind(), BoundField.DataField, ListControl.DataTextField, etc. In general, they behave differently.

    The flattening we did on EntityDataSource is an attempt to make the properties that are exposed by EDM entities available for 2-way databinding in most of those cases.

    For instance, for a customer that has a complex property of type Address, we provide a property descriptor for customer.Address, and also for customer.Address.Street, customer.Address.Number, etc.

    At runtime, in the case of a control binding to Eval(“Address.Street”) on a customer, Eval will use the property descriptor corresponding to Address and drill down into it to extract the value of the Street property.

    A grid column of a BoundField-derived type with DataField = “Address.Street” will work differently: it will just look for a property descriptor in the data item with a name like “Address.Street”. In fact, EntityDataSource is the first DataSource control that I know of that provides such a thing.
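    For example, a GridView column declared like this (hypothetical markup) depends on the data item exposing a property descriptor literally named “Address.Street”:

          <asp:BoundField DataField="Address.Street" HeaderText="Street" />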

    Bind(“Address.Street”) will work in a similar fashion to Eval() when reading the properties into the data bound control, but will act a little bit more like BoundField when sending back changes to the DataSource.

    There are a few cases in which the behavior is none of the above, and hence you end up with a control that cannot get access to a complex type’s properties. You can expect us to work closely with the ASP.NET team to make the experience smoother in future versions. But for the time being, what you can do is create an explicit projection of the properties. For instance, in Entity SQL:

        SELECT c.ContactId, c.Address.Street AS Street
        FROM   Northwind.Customers AS c
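    The equivalent projection in LINQ to Entities would look something like this (a sketch, assuming a Customers entity set whose entities have an Address complex property):

          var q = from c in context.Customers
                  select new { c.ContactId, c.Address.Street };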

    A couple of things worth mentioning:

    • Remember that flattening of complex properties only happens under certain conditions (see wrapping).
    • We worked very closely with the ASP.NET Dynamic Data team in this release to enable their technology to work with the EDM through the EntityDataSource. I think it is very much worth trying.

    Hope this helps.

  • Diego Vega

    Entity Framework Extensions Project Update


    Just a couple of links:

    Colin posted a refresh today that is compatible with .NET 3.5 SP1 Beta and includes some optimizations for the materializer using dynamic methods. Here is his post about it.
