EF7 - What Does “Code First Only” Really Mean?

A while back we blogged about our plans to make EF7 a lightweight and extensible version of EF that enables new platforms and new data stores. We also talked about our EF7 plans in the Entity Framework session at TechEd North America.

Prior to EF7, there were two ways to store models: in the XML-based EDMX file format or in code. Starting with EF7, we will be retiring the EDMX format in favor of a single code-based format for models. A number of folks have raised concerns about this move, and most of them stem from a misunderstanding of what a statement like “EF7 will only support Code First” really means.

Code First is a bad name

Prior to EF4.1 we supported the Database First and Model First workflows. Both of these use the EF Designer to provide a boxes-and-lines representation of a model that is stored in an XML-based .edmx file. Database First reverse engineers a model from an existing database, and Model First generates a database from a model created in the EF Designer.

In EF4.1 we introduced Code First. Understandably, based on the name, most folks think of Code First as defining a model in code and having a database generated from that model. In fact, Code First can be used to target an existing database or to generate a new one. There is tooling to reverse engineer a Code First model from an existing database. This tooling originally shipped in the EF Power Tools and then, in EF6.1, was integrated into the same wizard used to create EDMX models.

Another way to sum this up: rather than being a third alternative to the Database First and Model First workflows, Code First is really an alternative to the EDMX file format. Conceptually, Code First supports both the Database First and Model First workflows.

Confusing… we know. We got the name wrong. Calling it something like “code-based modeling” would have been much clearer.

Is code-based modeling better?

Obviously there is overhead in maintaining two different model formats. But aside from removing this overhead, there are a number of other reasons we chose to move forward with only code-based modeling in EF7.

  • Source control merging, conflicts, and code reviews are hard when your whole model is stored in an XML file. We’ve had lots of feedback from developers that simple changes to the model can result in complicated diffs in the XML file. On the other hand, developers are used to reviewing and merging source code.
  • Developers know how to write and debug code. While a designer is arguably easier for simple tasks, many projects end up with requirements beyond what you can do in the designer. When it comes time to drop down and edit things, XML is hard and code is more natural for most developers.
  • The ability to customize the model based on the environment is a common requirement we hear from customers. This includes scenarios such as multi-tenant databases where you need to specify a schema or table prefix that is known only when the app starts. You may also need slight tweaks to your model when running against a different database provider. Manipulating an XML-based model is hard; using conditional logic in the code that defines your model is easy.
  • Code-based modeling is less repetitive because your CLR classes also make up your model and there are conventions that take care of common configuration. For example, consider a Blog entity with a BlogId primary key. In EDMX-based modeling you would have a BlogId property in your CLR class, a BlogId property (plus column and mapping) specified in XML, and some additional XML content to identify BlogId as the key. In code-based modeling, having a BlogId property on your CLR class is all that is needed.
  • Providing useful errors is also much easier in code. We’ve all seen the “Error 3002: Problem in mapping fragments starting at line 46:…” errors. The error reporting on EDMX could definitely be improved, but throwing an exception from the line of code-based configuration that caused an issue is always going to be easier.
    We should note that in EF6.x you would sometimes get these unhelpful errors from the Code First pipeline because it was built over the infrastructure designed for EDMX; in EF7 this is not the case.
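To make the convention and conditional-configuration points concrete, here is a minimal sketch using the EF6.x API (the tenant schema parameter and class names are illustrative, not from a real project):

```csharp
using System.Data.Entity; // EF6.x; EF7 moves to the Microsoft.Data.Entity namespaces

public class Blog
{
    // By convention, BlogId is recognized as the primary key —
    // no XML mapping or explicit configuration is needed.
    public int BlogId { get; set; }
    public string Name { get; set; }
}

public class BloggingContext : DbContext
{
    // Hypothetical tenant-specific schema, resolved when the app starts.
    private readonly string _tenantSchema;

    public BloggingContext(string tenantSchema)
    {
        _tenantSchema = tenantSchema;
    }

    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Conditional, per-environment configuration is plain C# —
        // something an XML-based model can't easily express.
        modelBuilder.Entity<Blog>().ToTable("Blogs", _tenantSchema);
    }
}
```

Everything beyond the table schema here comes from conventions; the same model expressed in EDMX would repeat the property, column, and key information in XML.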

There is also an important feature that could have been implemented for EDMX, but was only ever available for code-based models.

  • Migrations allows you to create a database from your code-based model and evolve it as your model changes over time. For EDMX models you could generate a SQL script to create a database to match your current model, but there was no way to generate a change script to apply changes to an existing database.
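As a sketch of what this looks like in EF6.x: a migration scaffolded by `Add-Migration` is itself just code, which can be applied directly or turned into a change script (the migration name and column are representative examples, not from a real project):

```csharp
using System.Data.Entity.Migrations;

// Scaffolded by: Add-Migration AddBlogUrl
// Applied with:  Update-Database   (or Update-Database -Script to get SQL)
public partial class AddBlogUrl : DbMigration
{
    public override void Up()
    {
        // Applies the change: the model gained a Url property,
        // so the database gains a matching column.
        AddColumn("dbo.Blogs", "Url", c => c.String());
    }

    public override void Down()
    {
        // Reverts the change if the migration is rolled back.
        DropColumn("dbo.Blogs", "Url");
    }
}
```

Because each migration carries both `Up` and `Down`, the tooling can generate an incremental change script between any two model versions, which is exactly what the EDMX pipeline lacked.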

 

So, what will be in EF7?

In EF7 all models will be represented in code. There will be tooling to reverse engineer a model from an existing database (similar to what’s available in EF6.x). You can also start by defining the model in code and use migrations to create a database for you (and evolve it as your model changes over time).

We should also note that we’ve made some improvements to migrations in EF7 to resolve the issues folks encountered trying to use migrations in a team environment.

 

What about…

We’ve covered the reasons we think code-based modeling is the right choice going forward, but this does raise some legitimate questions.

What about visualizing the model?

The EF Designer was all about visualizing a model and in EF6.x we also had the ability to generate a read-only visualization of a code-based model (using the EF Power Tools). We’re still considering what is the best approach to take in EF7. There is definitely value in being able to visualize a model, especially when you have a lot of classes involved.

With the advent of Roslyn, we could also look at having a read/write designer over the top of a code-based model. Obviously this would be significantly more work and it’s not something we’ll be doing right away (or possibly ever), but it is an idea we’ve been kicking around.

What about the “Update model from database” scenario?

“Update model from database” is a process that allows you to incrementally pull additional database objects (or changes to existing database objects) into your EDMX model. Unfortunately, the implementation of this feature wasn’t great, and you would often end up losing customizations you had made to the model, or having to manually fix up some of the changes the wizard tried to apply (often dropping down to hand-editing the XML).

For Code First you can re-run the reverse engineer process and have it regenerate your model. This works fine in basic scenarios, but you have to be careful how you customize the model; otherwise, your changes will be reverted when the code is regenerated. There are some customizations that are difficult to apply without editing the scaffolded code.

Our first step in EF7 is to provide a similar reverse engineer process to what’s available in EF6.x – and that is most likely what will be available for the initial release. We do also have some ideas around pulling in incremental updates to the model without overwriting any customization to previously generated code. These range from only supporting simple additive scenarios, to using Roslyn to modify existing code in place. We’re still thinking through these ideas and don’t have definite plans as yet.

What about my existing models?

We’re not trying to hide the fact that EF7 is a big change from EF6.x. We’re keeping the concepts and many of the top level APIs from past versions, but under the covers there are some big changes. For this reason, we don’t expect folks to move existing applications to EF7 in a hurry. We are going to be continuing development on EF6.x for some time.

We have another blog post coming shortly that explores how EF7 is part v7 and part v1 and the implications this has for existing applications.

 

Is everyone going to like this change?

We’re not kidding ourselves – it’s not possible to please everyone, and we know that some folks are going to prefer the EF Designer and EDMX approach over code-based modeling.

At the same time, we have to balance the time and resources we have and deliver what we think is the best set of features and capabilities to help developers write successful applications. This wasn’t a decision we took lightly, but we think it’s the best thing to do for the long-term success of Entity Framework and its customers. The ultimate goals are to provide a faster, easier-to-use stack and to reduce the cost of adding support for highly requested features as we move forward.

Comments

  • Anonymous
    October 21, 2014
    The only customization I tend to do to generated elements (using DB first right now) is either to rename some properties, and to add extra ones via partial classes. I assume that additions in a partial class will still be doable, but how does the new "import from database" tool handle cases where I've renamed something on the model end? It seems like a pretty reasonable change otherwise, the model certainly isn't that easy to work with once it starts getting big.

  • Anonymous
    October 21, 2014
    The comment has been removed

  • Anonymous
    October 21, 2014
    Ultimately I do prefer code modeling with migrations over the edmx with tt code gen. Migrations are very useful and I prefer them over using a separate database project for managing changes and deployments. My biggest concern is using migrations in a team environment, so I'm glad you mention addressing some of those challenges in v7.

  • Anonymous
    October 21, 2014
    The comment has been removed

  • Anonymous
    October 21, 2014
    @James Hancock - Some of the design principles for EF7 are definitely things that were present in LINQ to SQL, and things that people liked in that stack - arguably you could generalize these into design principles that should apply to all good software. Building a core that avoids the SQL generation and model creation delays in past versions is definitely a focus for EF7 and one of the core driving forces for reinventing the core rather than adapting what we had (along with the cost involved in adding new features). The features you mentioned are all things that we are thinking about as we design the core framework. They won't necessarily be there in the initial RTM, but we know they are important and we are laying the groundwork to enable much better support for them. We know that means not every app will be able to use the initial RTM - whole blog post on that topic coming up next :).

  • Anonymous
    October 21, 2014
    @Chris - The change to better support team environments was actually pretty simple when we came up with it. Rather than storing a model snapshot in each migration, we moved to a single file with the snapshot, so that when you merge another user's changes you also merge the model snapshot (which is then diffed against to generate the next migration). We're also moving to a code-based format for the model snapshot, so merging is done within source code rather than XML or some other format :).

  • Anonymous
    October 21, 2014
    I embraced Code First from the beginning. From a developer's standpoint it's very clear and the relation to the database structure is evident. I never understood all the fuss about tooling, reverse engineering and edmx files. Code First rocks – please continue focusing on this simplicity and remove the overhead from the current API. Wish you good luck!

  • Anonymous
    October 21, 2014
    The comment has been removed

  • Anonymous
    October 21, 2014
    Thijs - The thing is in my environment, the DBA creates the tables. So a "DB First" approach saves everyone time because I point at the already existing tables and say "create stuff out of that." Done. In EF 5 this takes pretty much no time to get up and running. With the early "code first", I couldn't do that, thus using it at all was a massive inflation in duplicate work. It sounds like that won't be much of a problem going forward, which will make it practical to use. I'm not married to .edmx files (there are clear benefits to moving away from that given in this article), but I'm not going to get corporate to change the policy of who designs and creates the database structure in Oracle. So if there's a good tool to create the "code first" code for me? That's pretty awesome. The tooling matters to developers in my situation.

  • Anonymous
    October 21, 2014
    @Tridus - Starting in EF6.1 we updated the same tooling you use to create an EDMX model from a database to also generate a Code First model. You just select which model type you want on the first step and then you get the same screens for selecting tables etc. Prior to that you had to know about the EF Power Tools, and install them to get the functionality (and it wasn't as rich as the EDMX wizard - no ability to select a subset of tables etc.). More info here if you are interested - msdn.microsoft.com/.../jj200620.

  • Anonymous
    October 21, 2014
    Is there any documentation on how to migrate from EF6 to EF7?

  • Anonymous
    October 21, 2014
    I did a code generation project before. The way I separated my generated and custom code was by using partial classes and methods.

  • Anonymous
    October 21, 2014
    I rarely use workflows other than Code First, so I'd like to see this happen. Of course, Microsoft needs to take care of the people who used Database First or Model First.

  • Anonymous
    October 21, 2014
    I've always used Model First (in Linq-to-SQL and EF) as it's much easier and faster to get going on a project by creating a visual model of how you want to store your information. My models tend to have 15-20 tables and I can't imagine having to type all of these in code, let alone maintain them. I've never understood the appeal of code first and I feel like dropping support for model first would be a huge mistake.

  • Anonymous
    October 21, 2014
    The comment has been removed

  • Anonymous
    October 21, 2014
    And again not a word from Microsoft about 3rd party tooling which solves their problems. If you want a healthy eco-system around EF, you really should stop seeing your own ivory tower as 'the world'. DB first / Model first with EF and code first isn't a problem anymore, 3rd party tools, like LLBLGen Pro, can do that today, with EF 6.x, and EF 7 won't be a problem at all: simply create the model (either DB first or model first, import your existing EDMX if you want), generate code-first mappings+classes, or edmx + classes, whatever you prefer. I had to chuckle when I read 'At the same time, we have to balance the time and resources we have and deliver what we think is the best set of features and capabilities to help developers write successful applications.' All your competitors have much smaller teams, to the point where you're up against teams of just a couple of people or even a single person, and they manage to provide frameworks, even with designers and additional tooling, with equal or even more features than EF. You can play many cards, but 'resources' isn't one of them.

  • Anonymous
    October 21, 2014
    I think it's quite common to have the database contain the primary definition of the schema and have the code/EDMX be a derived artifact. In more demanding database scenarios it is not possible, or practical, to define all parts of the schema in code. Simple things like fillfactors or descending index columns cause trouble. Indexed views are not at all realistic to define using EF. I see most bigger apps use the database as the primary system. Often there is a DBA involved. Migrations are never executed in production. This development model must work well.

  • Anonymous
    October 21, 2014
    Well, to be clear, I had only used the Model First approach when I was teaching someone else EF. I have some experience with Code First and for me it was much cleaner. When I started using it, I used the reverse engineering tools too because we already had a database.

  • Anonymous
    October 21, 2014
    @William Bosacker - You didn't explain why EF is a security risk. Traditionally inline SQL was considered dangerous because we were building an SQL string and you were at risk of SQL injection via a small mistake from a developer. But with EF we're not building the SQL string, it's generated for us from LINQ expressions so the framework protects us from that risk. So is there some other security risk you have in mind?

  • Anonymous
    October 21, 2014
    Thanks for the great work on EF. My personal preference has been database first with EDMX designer for tweaks, but I understand the decision and a lightweight framework that runs in more places is certainly appealing so think it's the right decision.

  • Anonymous
    October 22, 2014
    I am eagerly waiting for the EF7 stack to mature and become useful.  There may be some naysayers around but having a framework that I can use across all of Microsoft's operating systems and hardware scenarios is the "Holy Grail" for me.  EF7 will enable use of the same models on Windows Phone as on Windows desktop and that will save a ton of development effort.  I'm reading everything you put out on EF7.  I just wish I could get a weekly update or "fix" about EF7 from Microsoft.  Keep up the great work! Also, I'm wondering if Julie Lerman will be updating her books for EF7?  Anyone with any insight??

  • Anonymous
    October 22, 2014
    Our team eliminated the majority of the merge conflict headache by adding a save-hook in VS. When an EDMX file is saved, we sort the contents (both elements and attributes) in a way that makes changes extremely unlikely to cause conflicts. This works well for us. That said, the EDMX designer was never as easy to use as the Linq2Sql DBML designer, and it performs poorly if you map hundreds of tables and stored procedures.

  • Anonymous
    October 22, 2014
    @Juan Rovirosa – This scenario is definitely possible with code-based modelling; rather than writing a script to alter the EDMX once generated, you would customize the T4 template that we use to generate the model. In EF6.x we allow you to drop a customized copy of the T4 templates in your project and the reverse engineer process will pick these up and use them instead of the defaults – we’ll do the same (or similar) for EF7. If you are applying the renames and enum conversion in the script, then that will work fine too. If you are doing it by manually editing the generated model then you probably want to hold off on EF7 until we have some sort of additive incremental updates. Having multiple smaller models that hold a subset of the total tables is absolutely supported in code-based modelling too.

  • Anonymous
    October 22, 2014
    @kburtram - There isn't any upgrade documentation yet. EF7 is still pretty early in the release cycle and we are iterating on design fairly rapidly (i.e. stuff changes a lot). We definitely wouldn't recommend trying to post an existing app to EF7 yet. The current state of EF7 is really just for trying out the basic patterns/experience and there is a lot of stuff that doesn't work yet. We will absolutely have guidance around upgrading before we get to RTM.

  • Anonymous
    October 22, 2014
    @Byron Adams – Agreed, partial classes is what we recommend in general. There are some customizations (such as renaming a generated property) which can’t be done in partial classes though.

  • Anonymous
    October 22, 2014
    @Tzu-Yie – Glad to hear it’s working for you. And yes, we definitely want to support folks who currently have EDMX models. That will include ongoing development on the EF6.x code base for longer than we typically would invest in a past major release. We’ll also provide a lot of guidance around moving over, and possibly even some tooling.

  • Anonymous
    October 22, 2014
    @Guy Godin – Regarding this comment “My models tend to have 15-20 tables and I can't imagine having to type all of these in code”, one of the key things we wanted to clear up with this post is that you don’t have to type out a code-based model by hand, there is tooling that will reverse engineer it for you (in fact, in EF6.x it’s the exact same wizard you use to create an EDMX model). Hopefully that clears up some of the confusion. We understand that the visualization is useful, we’re still listening to feedback on this one to see how many folks want it and how important it is to them. We have some ideas around a simple visualization – perhaps not something we would have for the initial RTM, but if folks want it then it’s something we’ll look at.

  • Anonymous
    October 22, 2014
    Sounds good so far. I'm definitely looking forward to trying it out. I'm hoping that performance will be improved on large models. I'm also hoping that the reverse engineer from database tool is sped up. Also, I'm hoping that the generated code uses attributes first rather than using the fluent API for everything. It will be nice to not have the weird XML mapping errors in Code First. Any idea when an alpha might be available?

  • Anonymous
    October 22, 2014
    @William Bosacker (cc @Michael) – First off just an acknowledgement that not everyone is going to love any one framework. Many developers like using an O/RM, others prefer something that feels more like a database. My personal opinion is that the downfall of many frameworks is when they try to please everyone rather than trying to be excellent at one particular thing. For EF7, we are trying to focus on being a great O/RM (or O/”X”M I guess since we’re not just tied to relational). If you have another approach that works well for you, then that’s great – and that’s why we have different data access tools available in .NET. Regarding security, EF always uses parameterized SQL so you are protected from SQL injection. I think the biggest reason we usually hear for folks using sprocs is the ability for DBAs to control exactly what SQL gets executed in the database and to help with permissions and auditing. These are valid and EF is capable of mapping to sprocs (though it does require a little more work than mapping directly to tables). On the flip side, I do think this post raises some valid questions about the downsides of hiding schema from developers: www.sullivansoftdev.com/.../sprocs-really. I realize I’m not going to change your mind, nor am I trying to. If you have a data access technology that makes you successful on .NET then that’s ultimately what we want. I just wanted to acknowledge your viewpoint and give you some insight into where we are coming from.

  • Anonymous
    October 22, 2014
    I recently cloned a coded config model by writing it to EDMX (XML), using an XDocument to modify the XML and the created a new model in memory based on that.  It is a very handy scenario.  If you removed the XML that is fine, but I need to be able iterate over a model and generate a new one adding, modifying or removing things.

  • Anonymous
    October 22, 2014
    @xor88 – Agreed, one of the key things we wanted to convey in this post is that we see ‘database as source of truth’ and ‘model as source of truth’ as both being key scenarios (and are actually both supported today in EF6.x – though we know there are a few things that need improving for incremental updates to the model from database). Even in situations where folks use migrations the production database may be controlled by a DBA (or use some other deployment process) and using migrations directly isn’t an option, so we already have the ability to generate scripts.

  • Anonymous
    October 22, 2014
    @TTRoadHog – You’ll see more blog posts coming out of our team now that we have a more concrete design on how the overall goals are going to be implemented and have a better feel for how the code base is progressing. I’ll let Julie comment for herself on the book topic, feel free to encourage her on Twitter (@julielerman) :). Things will need to settle down in the code base/design quite a bit more before any of us tackles any sort of documentation aside from the 101 getting started content we have on GitHub (which already goes out-of-date very quickly).

  • Anonymous
    October 22, 2014
    @Peter LaComb – Agreed that there are things you can do to avoid unnecessary merge conflicts, glad to hear that worked well for you.  Perf in general is a common complaint on previous versions and something that we are taking into account right throughout the EF7 code base (both in the runtime and the tooling).

  • Anonymous
    October 22, 2014
    @Jon – Yep, perf is a key thing for us on EF7 and we’re building things with performance in mind right from the start. While saying that, I should just note that we know we have some bottlenecks in the current EF code base and they are bits we know we have to come back to and speed up/replace. For most of these we know what we’re going to do, we just wanted to get the end-to-end scenarios lit up before coming back to them. We actually made the move to attributes over the Fluent API in EF6.1 when we integrated Reverse Engineer Code First into the ‘New ADO.NET Entity Data Model’ wizard. We technically have some alphas available now, but that was more about including something in the previews of Visual Studio “14” rather than the EF7 code base actually having alpha-level functionality and quality. I would say we’re at alpha quality now – the mainline scenarios mostly work, but stuff breaks if you stray off that path – and you can try out nightly builds if you want: github.com/.../Getting-Started-with-Nightly-Builds.

  • Anonymous
    October 22, 2014
    @Will Smith – Did you need to clone the model, or was the requirement just to be able to manipulate it? In EF7 the meta data model is mutable, so you can easily manipulate it once it has been calculated/loaded. If cloning was required, we’d love to hear more about the scenario if you could open a GitHub issue - github.com/.../new.

  • Anonymous
    October 22, 2014
    Thanks for the clarification Rowan, after reading the article a second time I get a better idea of how this will work.

  • Anonymous
    October 22, 2014
    You should hire a developer behind: github.com/.../linq2db and rename his project to EF7. This guy has built something alone that the whole EF team cannot do in years and 7 versions.

  • Anonymous
    October 22, 2014
    Once we started a 400+ entity application using EDMX and we faced every single problem you mentioned. I agree that it'll be difficult to please everyone, but this seems like a difficult decision in the right direction. Any serious application with many entities and several developers is very difficult to complete using EDMX. But ever since we changed to code-based modeling (I never liked the name Code First) we've never had any problems of that nature. Thanks, it's always been exciting to read EF announcements!!

  • Anonymous
    October 22, 2014
    I am having performance problems with EF 6.1 because of the compilation time of complex queries (see stackoverflow.com/.../26073248). Will this process be (much!) faster in EF7 or can we "persist the query-cache" in EF7 as it seems to be not possible yet with EF 6.1?

  • Anonymous
    October 23, 2014
    The comment has been removed

  • Anonymous
    October 23, 2014
    @Thijs, @Vitor Canova, @Michael, and @Alireza Haghshenas - Thanks for the feedback, glad the direction sounds good to you.

  • Anonymous
    October 23, 2014
    @Frans Bouma – Totally agree that third party tooling is important and a healthy ecosystem is key to any project. We love that there are great tools for EF already today (LINQPad, EF Prof, Glimpse, LLBLGen, etc.) and we’ve made changes for and taken contributions from the owners of those tools. You’re spot on that all the items such as ‘visualizing a code based model’, ‘read/write designer over code based model’, ‘incremental updates to code-based model’ etc. could come from the EF team, could be contributed to the EF repo from community members, or be owned by third parties. Of course, EF is a free and open source product… so we want folks to have a good set of functionality without having to pay for tools :). Regarding resources, I’m glad we gave you a chance to chuckle… so much better than being angry :). We totally agree that in years past it took a lot of hours to achieve things. Part of this was a need to embrace a more agile approach to development, and our team has made a lot of changes in this area over the last few years. A lot of this was also due to the nature of the core framework we were trying to add features to, hence the changes in EF7 to adopt a new core. I think it’s fair to say that if you look at the progress/pace on EF7 so far that this is paying off. Of course I realize I’m not going to convince you that our team is awesome… just wanted to give you our take on things. Hey, at least we live in an open source ivory tower these days and the door is always open (we even got rid of the dragon that guards the lower courts).

  • Anonymous
    October 23, 2014
    The comment has been removed

  • Anonymous
    October 23, 2014
    The comment has been removed

  • Anonymous
    October 23, 2014
    Good move in my opinion. I've used Code-First as a replacement for NHibernate and Fluent NHibernate for a couple of years now - and was never really interested in EF until Code-First became a first-class citizen.

  • Anonymous
    October 23, 2014
    @Rowan: "You’re spot on that all the items such as ‘visualizing a code based model’, ‘read/write designer over code based model’, ‘incremental updates to code-based model’ etc. could come from the EF team, could be contributed to the EF repo from community members, or be owned by third parties. Of course, EF is a free and open source product… so we want folks to have a good set of functionality without having to pay for tools :)." The thing is, and this might be due to your PR department, not sure, that Microsoft communicates that there's just 1 way to do things, and that's Code first, there's no other way, 3rd party tools don't exist, you (as in team) never talk about them, at all. In that light, if you look at the number of people who have existing databases and use database first or model first with current MS tooling and are now running into a problem, it's clear there are a lot of people who can be helped with 3rd party tooling and still use EF, but are unaware of any 3rd party tooling. I mean, no-one would create a 100+ entity model in the EF designer unless they wouldn't know any better that there are 3rd party tools out there which let you work with models of 1000s of entities without a problem. It's as if you're afraid that people would run away from EF to other ORMs once they have tasted what 3rd party tooling can do, I have no other explanation. But don't you agree it's better for 3rd parties and EF to combine efforts instead of the EF team ignoring what's out there for 3rd party tooling to add features to EF / tooling to work with EF? I mean: I fully understand you want to provide a framework which is usable without 3rd party tooling, e.g. through a code-based mapping api. But you also know that using a code-based mapping api to write everything out by hand isn't how a lot of project teams (are able to/allowed to) work. 
Microsoft now communicates to its EF users that the user is mistaken, there is just 1 way: code-based mappings, also because Microsoft has little choice: the designer for database first/model first is phased out and sub-par to deal with the requirements of database first/model first. But the reality is: there is a choice: the model first/database first scenarios are handled perfectly well by 3rd party tooling.

  • Anonymous
    October 24, 2014
    Sorry but I don't think removing EDMX is a good choice because xml is more expressible than code. For database first approach, the code should be generated from like EDMX this kind of "middle file", not from database.

  • Anonymous
    October 24, 2014
    @ChrFin – We don’t have detailed info yet on exactly what is in/out with complex mappings because we’re still working through the design and that drives how we scope things. We will definitely have capabilities around inheritance mappings, but we still need to dig into exactly what’s going to be supported. You’ll definitely see a detailed blog post on this topic once we make some more progress. Once we know the scope of what we want to ultimately support we’ll then also work out what makes it into the initial RTM. Regarding the query cache, being able to persist and load it probably isn’t going to be trivial based on the nature of what we are caching. To that end, we probably won’t build this into it initially, but I can absolutely see the value in that and it’s something we would consider adding in the future. Everything in EF7 is also based on services and dependency injection, so it would be easy for someone outside of our team to tackle this and provide an alternate cache service that can be loaded (or contribute the change to our code base of course). Glad you find the early info useful.
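    The "services and dependency injection" point can be sketched roughly as follows. This is an illustration only; the interface and class names here are invented, since the actual EF7 service APIs were not final at the time:

```csharp
using System;
using System.Collections.Generic;

// Invented names, illustrating the "replaceable service" idea: internal
// pieces such as a query cache are resolved through dependency injection,
// so an alternate implementation can be registered in place of the default.
public interface IQueryCache
{
    object GetOrAdd(string key, Func<object> create);
}

// Default behaviour: compiled queries live only for the lifetime of the process.
public class InMemoryQueryCache : IQueryCache
{
    private readonly Dictionary<string, object> _cache = new Dictionary<string, object>();

    public object GetOrAdd(string key, Func<object> create)
        => _cache.TryGetValue(key, out var hit) ? hit : _cache[key] = create();
}
```

    Because the service is resolved through DI, someone outside the EF team could supply an `IQueryCache` implementation that persists compiled queries to disk and register it instead, without changes to EF itself.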

  • Anonymous
    October 24, 2014
    @Henning Kilset – Thanks for the feedback, glad the direction sounds good to you.

  • Anonymous
    October 24, 2014
    The comment has been removed

  • Anonymous
    October 24, 2014
    The comment has been removed

  • Anonymous
    October 24, 2014
    The comment has been removed

  • Anonymous
    October 24, 2014
    The comment has been removed

  • Anonymous
    October 24, 2014
    The comment has been removed

  • Anonymous
    October 24, 2014
    Can you share any info about the release date?

  • Anonymous
    October 24, 2014
    The comment has been removed

  • Anonymous
    October 24, 2014
    @Rowan: fair enough. All I want is for MS to acknowledge that there are 3rd party tools which can offer what MS doesn't offer for EF, so people here and elsewhere who see a problem coming toward them with EF7 and no MS designer know there are solutions to that problem and don't have to leave EF. @Darren: you're not in my shoes. You don't know what it's like to compete with a free competitor which is installed on every dev's machine because MS does that. You don't know what it's like to sit in a keynote where a big MS hotshot tells thousands of devs in the hall that the only (yes, only) way to do data access on .NET is by using the Entity Framework. But I'm not asking MS to do my marketing, I just want MS to show their users that there is more to EF than just what MS ships. My posts aren't meant to be disrespectful, and the EF team knows that. On a regular basis I help them out, even with bug reports and feedback on what they're planning to do; I have done that for years. I don't have to do that, mind you.

  • Anonymous
    October 24, 2014
    You guys seem to have forgotten why developers loved "Visual" Studio so much in the first place... With "Visual" being the key word here.

  • Anonymous
    October 25, 2014
    Just to sum it up: NO conceptual models (if you know what CONCEPTUAL means) and NO dynamic ESQL anymore. The only two distinguishing features are gone; welcome "just another code-to-db ORM anyone can write in two weeks". Well done, guys. I'll just leave this here: blogs.msdn.com/.../ef7-new-platforms-new-data-stores.aspx

  • Anonymous
    October 26, 2014
    Hard changes are necessary sometimes. I'm happy with that. I only use code based these days. The cadence of features has been quite slow so I hope that dropping all the unnecessary/edge-case stuff will enable faster feature development. I'll throw in my main pain points in there: view generation performance and lack of batch update (I'm aware of EntityFramework.Extended though). ps: async queries are awesome!

  • Anonymous
    October 27, 2014
    @Yitzhak Khabinsky – Validation is actually a really interesting point. XML does lend itself pretty well to validation within the document. The issue is that for EF the validation needs to involve your CLR classes too. For example, you may have configured a primary key in XML, but does that property actually exist on the class? This validation is much more easily handled by strongly typed APIs and the compiler. You also end up duplicating a lot of info between XML and code to enable full validation, i.e. to verify that the type of a foreign key property matches the type of the target primary key you need to know the data type… but that is already represented in your classes. Scenario 2 is interesting… I have never heard of anyone else doing that :). If that’s an important scenario for you it would be fairly trivial to write code that takes the EF7 metadata model and serializes it out to an xml document.
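    The runtime-vs-compile-time distinction can be illustrated with a small sketch. This is not the EF API, just a contrast between string-based configuration (as in XML) and lambda-based configuration that the compiler checks against the class:

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

// Illustration only (not the EF API).
public class Blog
{
    public int BlogId { get; set; }
    public string Title { get; set; }
}

public static class ModelConfig
{
    // XML-style: the key is just a string, so a typo like "BlogID"
    // is only discovered when the model is validated at runtime.
    public static PropertyInfo KeyFromString<T>(string propertyName)
        => typeof(T).GetProperty(propertyName); // returns null on a typo

    // Code-style: `b => b.BlogID` would simply not compile.
    public static PropertyInfo KeyFromLambda<T>(Expression<Func<T, object>> key)
    {
        // Value-typed properties are boxed, so unwrap the Convert node first.
        var body = key.Body is UnaryExpression u ? u.Operand : key.Body;
        return (PropertyInfo)((MemberExpression)body).Member;
    }
}
```

    Here `KeyFromString<Blog>("BlogID")` silently returns null at runtime, while the equivalent typo in the lambda form is a compile error.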

  • Anonymous
    October 27, 2014
    @Darren – Thanks for the feedback :)

  • Anonymous
    October 27, 2014
    @Vincent – Disclaimer: this is just current thinking and could all change... We don’t have anything concrete for release dates, but we are targeting early-to-mid next year to have the stack in a stable state where it could be used for simple applications that don’t have complex requirements. It will probably be later next year before we have EF7 ready to use as a proper O/RM in more complex applications (i.e. come early next year it’s not going to have enough critical features for us to want to claim EF7 is a ready-to-go O/RM, but it will be useful for simple applications). Of course, that’s all just our current thinking and we’ll keep folks updated as we progress – and we’ll continue updates to EF6.x as we go.

  • Anonymous
    October 27, 2014
    The comment has been removed

  • Anonymous
    October 27, 2014
    The comment has been removed

  • Anonymous
    October 27, 2014
    @Andy – It is really interesting; I also thought that a visual design surface was going to be really key and that code-based modelling would always just appeal to a subset of developers. It’s been surprising to me to see how quickly code has taken over as the way a large percentage of our customers (perhaps even the majority now) prefer to model. I think it’s part of a more general trend toward simple, lightweight frameworks that are easy to use and avoid the layers of abstraction present in designers etc., so that developers are closer to the code, which makes it easier to deal with more complex scenarios that a designer would not handle. Of course, that is a general trend and does not apply to all developers :)... it’s also based on the observations of our team, which is of course a subset of all developers.

  • Anonymous
    October 27, 2014
    @Dennis – We will have some sort of dynamic querying support because we now support the idea of ‘shadow state’, which allows you to have properties/entities that don’t have corresponding CLR properties/types. To address these in queries you can’t write strongly typed LINQ queries against the actual CLR classes, so we are going to have something in this area. Regarding the conceptual model, this is definitely a shift in thinking (that has been happening over the last several years) away from being a product that is focused on EDM and a ubiquitous model and towards a product that is focused on being a great data access technology.
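    A rough sketch of the shadow-state idea follows; the EF7 API was still in flux at the time, so these names (`Property<T>(string)`, `EF.Property<T>`) should be read as illustrative rather than final:

```csharp
// In model configuration: "LastUpdated" exists in the model and the
// database, but there is no corresponding CLR property on the Blog class.
modelBuilder.Entity<Blog>().Property<DateTime>("LastUpdated");

// Because there is no CLR property to reference, a LINQ query needs some
// form of string-addressed accessor, e.g.:
var stale = db.Blogs
    .Where(b => EF.Property<DateTime>(b, "LastUpdated") < cutoff)
    .ToList();
```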

  • Anonymous
    October 27, 2014
    @Clement Gutel – Glad the plan sounds good to you. Totally agree on the performance front, and that is a big part of EF7. Bulk update is definitely a useful scenario; it’s not going to be in the first release and we don’t have definite plans around when we bring particular features online… but for me personally it is something I would love to see us support in the near future.

  • Anonymous
    October 27, 2014
    My scenario is that I use T-SQL to add a column when there is a model change. I then use Update Model from Database to regenerate my POCOs. This practice updates the EDMX so EF6 does not get confused by its metadata. I lose annotations in the model, such as date formatting for the UI, but it is easy enough to copy the changes from a backup copy of the model classes. I am terrified of using Migrations because the DB is already in production. I develop against a dev database, add columns to the live database, and do not allow column drops. It would be nice if I could rely on Migrations to fix up the database without fear of losing data from production.

  • Anonymous
    October 27, 2014
    I tested Code First. It's a little hard to define relationships by hand, but it works great and is better than the old Database First. In a real-world app, though, it's not as stable as classic ad-hoc queries against the database; it has problems managing the connection to the database.

  • Anonymous
    October 28, 2014
    Please take in consideration detached objects graph update...

  • Anonymous
    October 28, 2014
    @bizcad – That workflow of having the database as the source of truth is absolutely supported by code-based modelling – even in EF6.x. BTW if you are nervous about running migrations in production then you can get migrations to generate a SQL script which you can check over before applying. And of course, I would never recommend deploying any database changes to production without testing them in a test environment first (regardless of whether it’s a SQL script, migrations, database project, or some other tool).

  • Anonymous
    October 28, 2014
    The comment has been removed

  • Anonymous
    October 28, 2014
    @elijah – Detached entities are one of the things we really want to address in EF7, we think a good solution requires some behavior changes in the core APIs so EF7 is the right time to make them.

  • Anonymous
    October 31, 2014
    @Rowan Miller Reading the following in the article: "In EF7 all models will be represented in code. There will be tooling to reverse engineer a model from an existing database (similar to what’s available in EF6.x). You can also start by defining the model in code and use migrations to create a database for you (and evolve it as your model changes over time)." I understand that I can still have a "Database First" approach (like with EF4...EF6.x) and the EF7 engine will update my code to progressively reflect the changes made to the database (updating the code instead of the EDMX XML). Could you please confirm whether I have understood correctly? Thank you, Alberto

  • Anonymous
    November 01, 2014
    @Rowan Miller:  I wasn't going to respond as it's not my job to educate people on the security of their systems, though it is my job to ensure that the applications that I create are secure.  Since you work for Microsoft I had assumed that you would have been educated in all of the areas of a security breach, especially in web applications, but that may have been a poor assumption on my part.  If you think that SQL injection is the major reason why people should use SPs, then you seriously have a lot to learn, as it barely makes my list. I am extremely dismayed at your reference to Mr. Sullivan's blog post, as it is very apparent that he knows nothing about which he speaks.  And when Keith Elder tries to bring up just a few of the flaws in his thinking, someone else goes off the handle with a view that equates to that of the EF team.  I'm sorry, but if someone does not know the security risks, they need to learn them before they do any coding.  And, I haven't even started on the performance issues related to EF, though the Obama Care web site does come to mind. EF is an OK tool for prototyping, but people really need to know the serious down sides if they intend to use it in a production environment.  I do write all of my own SQL, and I do periodically consult with our SQL admins to ensure that everything that I do is being done in an efficient way.

  • Anonymous
    November 01, 2014
    P.S.  I forgot the say this before, but the only reason I posted here is because I was told in no uncertain terms by the ADO.NET Project Manager that all new System.Data development would only be done through the EF project.  There would not be any new development work performed on the .NET Framework.

  • Anonymous
    November 01, 2014
    I would suggest that you give us the ability to update only a specific entity. Maybe just a right-click menu option on the class name.

  • Anonymous
    November 03, 2014
    While I eagerly welcome the forced retirement of EDMX, I wonder how easy it will be in practice to migrate our Code First (yes, it was a horrible name from Day 1, and I commend Rowan Miller for saying so) designs to EF7. We have long used code review to drive for simplicity, and I think it's about to pay off for us -- I hope!

  • Anonymous
    November 03, 2014
    I personally hate the performance of the EDMX editor. Having said that, the EDMX file allows me to get to the metadata easily and consistently using a custom XML parser, while the metadata on the object model is constantly broken: classes move from public to protected and vice versa, IsKey on the column object is still not working (I have to build custom resolvers using info from the key collection), and the relationships and their cardinality are impossible to find at all. If you move to code only, please make the metadata a first-class citizen when we generate code using Database First. That is the reality for many of us who work with big distributed databases.

  • Anonymous
    November 04, 2014
    @Piggy – Correct. There are some rough edges in the ‘incremental update’ process when you have made certain customization to the generated code, but we have ideas on how to improve those.

  • Anonymous
    November 04, 2014
    @William Bosacker – I mentioned SQL Injection because Michael mentioned it in response to your comment – actually I think I was agreeing with you that SQL injection can happen regardless of whether you use SPROCs or not (and that parameterizing is the way to solve this for both SPROCs and direct table access). There are definitely differing opinions on who should be responsible for making sure data stays secure, queries are performant, etc. (whether the DBA should lock down the database to ensure it, work with the developer to tune/verify the app, or if it should be up to the developer). There is never going to be universal agreement on that question :). What we do want to do is provide the right tools to help folks be successful in the environment they choose (or are mandated to use). EF isn’t going to be the right tool for all of those situations, but we do want it to be a good tool for folks who want to use the O/RM approach (and we do support the use of stored procedures, though you do lose some of the O/RM functionality when you do that). I’m not sure who mentioned that EF was the only place that development is happening in the data space, but I can assure you that is not the case. We’ve actually been working closely with the team that owns System.Data (and SQL Client) to fit in some work to help us in EF7 alongside the other work they are doing.

  • Anonymous
    November 04, 2014
    @Johan Bisschoff – That’s an interesting idea, rather than having to detect which entities to update we have the developer tell us. It may even be a good interim step towards something more featured.

  • Anonymous
    November 04, 2014
    @Zell T. – For the most part the APIs will be very similar and the set of conventions will be mostly the same too. Of course, the “mostly” in those statements means there will be some code changes needed, but our goal is that things will generally map 1:1 to a new API that does the same thing and has almost the same name. A couple of exceptions to this are the relationship fluent API, which we’re experimenting with some improvements to, and no longer using a pluralization service to pluralize table names. I will just note that the current code base is missing a number of conventions etc. at the moment, so I wouldn’t try to port anything just yet :).

  • Anonymous
    November 04, 2014
    The comment has been removed

  • Anonymous
    November 07, 2014
    Still can't figure out how I can fit Sql Server Data Tools into all this.

  • Anonymous
    November 07, 2014
    Will this mean that T4 templates will no longer have access to metadata? I have a large number of T4 templates that use the metadata from the EDMX file. Will there be another location that the T4 template can look to get the metadata or will I have to roll my own?

  • Anonymous
    November 10, 2014
    @Sven - If you're using SSDT then the database is the 'source of truth' and you would maintain your model based on that (not using migrations etc.). I think one possible optimization would be the ability to generate/update a model based directly on an SSDT project rather than the database. That's something we may look at in the future, but not for the initial release.

  • Anonymous
    November 10, 2014
    The comment has been removed

  • Anonymous
    November 11, 2014
    I started with EF4 Code First new database. I have worked with EDMX Database First for legacy. Most recently working with EF6.x Code first reversed from a very large legacy DB. Basically I support the new direction. However you have not really outlined what other changes you are making and I am curious about that. In using EF6.x it generates an entity class file as well as other things like configurations and table mappings. Then there is a problem of moving all that stuff around to conform, for example I turn the configuration into a static method in the entity class. Ideally I would like to see configuration go completely as in most cases configuration can be achieved via attributes in the entity class. No doubt you can tell me of many more advantages this change will bring.

  • Anonymous
    November 13, 2014
    I have modified the EF .tt files in order to generate some features and other classes (resource, repository, and so on). For me, the EDMX file was a code generator, although I work afterwards with Code First (OK, code-based modeling); then I eliminate the EDMX file from the build. If you eliminate the EDMX file, what will become of my .tt files? (I can generate from the database, though - but this is rework.)

  • Anonymous
    November 13, 2014
    Not to mention also DI ( for context and repositories)

  • Anonymous
    November 15, 2014
    Thank you Rowan. I use T4 templates to generate partial classes containing support routines and unit testing. The templates loop through all classes and properties and identify things like whether the field is a navigation field or a simple property, whether it is a key field, and the data type of the field. I'm using the Edm namespace available through these calls: MetadataLoader loader = new MetadataLoader(this); string inputFile = @"....CoreData.edmx"; EdmItemCollection ItemCollection = loader.CreateEdmItemCollection(inputFile); Will EdmItemCollection still be supported, just using a different source, or will that namespace and its routines be mothballed? My project isn't live yet, so if I need to move away from it, now is the time. Thanks, John

  • Anonymous
    November 16, 2014
    When you start making reverse-engineering tools, please make reference names to classes meaningful. Example: when you have a table Customer with a field KeyAccountManagerId that references the table Employee, the reference in code should not be Customer.Employee but Customer.KeyAccountManager. For LINQ to SQL and EF6 I have written my own tools to make this possible, because reference names like Employee1 and Employee2 will lead to serious business logic bugs.

  • Anonymous
    November 17, 2014
    @Peter Burke – When we consolidated reverse engineering a code model into the EDMX wizard (in EF6.1) we actually moved to using data annotations by default. We now only generate fluent API for things that can’t be done with annotations.
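    The shape of the generated code is roughly the following; the class and column names here are invented for illustration, with annotations carrying the mapping that would otherwise need fluent API calls:

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

// Roughly what the EF6.1 reverse-engineer wizard produces: data
// annotations by default, fluent API only where annotations cannot
// express the mapping. Names are invented for this example.
[Table("Customer")]
public class Customer
{
    [Key]
    public int CustomerId { get; set; }

    [Required]
    [StringLength(100)]
    public string Name { get; set; }

    // A foreign key expressed with an annotation rather than fluent API.
    [ForeignKey(nameof(KeyAccountManager))]
    public int KeyAccountManagerId { get; set; }

    public Employee KeyAccountManager { get; set; }
}

public class Employee
{
    [Key]
    public int EmployeeId { get; set; }
}
```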

  • Anonymous
    November 17, 2014
    @What about ,tt files? – The reverse engineer process for a code-based model actually already uses tt files in EF6. You can add the files to your project and customize them to affect the generated code. We’ll have something similar in EF7.

  • Anonymous
    November 17, 2014
    @Andrei Ignat – I’m not sure what your comment means :). Was it a question?

  • Anonymous
    November 17, 2014
    The comment has been removed

  • Anonymous
    November 17, 2014
    @Sander A – Good feedback, we should just get this right from the start in EF7. Opened this item to make sure we do - github.com/.../1076.

  • Anonymous
    November 22, 2014

  1. When will EF allow retrieving fresh data from the database into a global context variable? I intended to create a multi-user application which is supposed to immediately see the changes made by other users. EF doesn't retrieve new data from the database if the context variable is global; it just continues to use its local cache. I hear everywhere that I should use only a local context variable and dispose of it immediately. But what about data binding? If I retrieve data, bind it to a DataGrid, and dispose of the context, then I wouldn't be able to edit and save the data. This is really frustrating. I wish there were some method which would allow getting fresh data from the database.
  2. Will EF7 support the EntityTypeConfiguration class?
  • Anonymous
    November 22, 2014
    @Rowan Miller Hi Rowan, refer to your comment (@Sven - 10 Nov 2014 1:02 PM). Please consider creating that feature as soon as possible: generating/updating a model based directly on an SSDT project rather than the database would be really GREAT!!! I do love SSDT and now I always design and implement databases using that tool: having EF capable of reading the SSDT project would be a great gift. I am working on business/enterprise level projects, and in the last few years I have felt quite neglected by the new features and enhancements of VS; this feature would make me smile again!!! [I believe all the new technologies are great, but I still cannot adopt anything in my daily business activities for many reasons.] If you can point me to a link to suggest/vote up this feature, I will run to it. Thanks

  • Anonymous
    November 24, 2014
    The comment has been removed

  • Anonymous
    November 25, 2014
    @Sektor - You can use the DbContext.Entry(object).Reload() to get fresh data for an individual entity. You could also look at dropping down to ObjectContext and using MergeOption to merge in changes from the database. In EF7 you will be able to use a pattern like EntityTypeConfiguration, but it probably won't be that class exactly.
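    The first suggestion looks roughly like this (`BloggingContext` and `Blogs` are hypothetical; `Entry(...).Reload()` is the EF6 API mentioned above):

```csharp
using (var db = new BloggingContext())
{
    var blog = db.Blogs.Find(1);   // tracked and cached by the context
    // ... another user changes the row in the database ...
    db.Entry(blog).Reload();       // re-queries the row and overwrites the cached values
}
```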

  • Anonymous
    November 25, 2014
    @Piggy – You can suggest features here - github.com/.../new. Just to set expectations, it’s not something we will tackle in the initial release.

  • Anonymous
    November 25, 2014
    @Jeremy Huppatz – You should vote on the feature request that @Piggy opens :) (see above comment). I can see the value in this, the main thing to take into account is that it would be a significant amount of work and we need to balance it along with the other features etc. In all honesty, I don’t think it’s something we are going to get to in the near future. It is something that could make a good third party tool, or a contribution to our tooling.

  • Anonymous
    November 25, 2014
    @Rowan Miller - Thanks for the quick response.  I'll keep an eye out for the feature request. ;) If an external developer was ambitious enough to check out a branch and build a set of interfaces and implementation code that would open the EF class model to synchronization with other models (e.g. SSDT projects, XMI models, LightSwitch LSMLs, third party tools like Modelio or SparxSystems EA), how open would you be to accepting a pull request to incorporate this feature?  There'd also need to be some work done within the VS SDK to create add-ins to handle the synchronization between different project types, but without the foundational interface, those add-ins wouldn't be possible.  :)

  • Anonymous
    November 26, 2014
    @Jeremy Huppatz - That's hard to answer without knowing what the changes would be. We'd definitely be open to it, but it would be good to have a rough proposal of the changes before spending much time on it. When it comes to synchronizing an EF model though, you're probably talking about updating code that is generated from the SSDT project, so that probably wouldn't require anything special on the EF side (you could probably reuse a lot of the infrastructure we create for the database reverse engineer process).

  • Anonymous
    November 26, 2014
    In response to: "@Jeremy Huppatz - That's hard to answer without knowing what the changes would be. We'd definitely be open to it, but it would be good to have a rough proposal of the changes before spending much time on it. When it comes to synchronizing an EF model though, you're probably talking about updating code that is generated from the SSDT project, so that probably wouldn't require anything special on the EF side (you could probably reuse a lot of the infrastructure we create for the database reverse engineer process)." I understand completely. The trick here is identifying whether this is a tweak for EF, for SSDT projects, or for other forms of model persistence - and finding a way to create the appropriate model tweaks depending on which modelling tool is being used as the source of truth. E.g. someone might be using a tool like Modelio to create class models and/or ERDs and want to push the changes to an existing EF/SSDT project. Or you might initiate a change within EF. Or SSDT. The issue is then how to cascade changes from an initiator to one or more targets. I suspect interfaces are part of this, but there might be more to it. There'd also need to be some configuration entries created to define the targets, the providers to use, and the merge rules. Something like a command table could work... but there'd have to be some kind of parsing layer that interprets the details of the changes each model format requires. Food for thought! :)

  • Anonymous
    November 28, 2014
    @Rowan Miller @Jeremy Huppatz Hi all, A bit late, but finally I opened the feature request issue: github.com/.../1186 I would be very happy to get it, maybe in 7.0.1 ;)

  • Anonymous
    December 15, 2014
    Julia Lerman, in her book "Programming Entity Framework: Code First", describes the Code First workflow this way: "'Code First' is aptly named: the code comes first, the rest follows." EF team member Rowan Miller says "Code First is a bad name", so where does that leave me? Knowing that I can reverse engineer an existing database into POCO classes, or craft my own POCO classes and generate a database from the POCO model, I will give it the name "Code workflow". That way there isn't any confusion between reverse engineering into POCO classes and using POCO classes to generate the database. It is simply a Code (or POCO) workflow.

  • Anonymous
    December 16, 2014
    @Julius Depulla - It's probably just a context or perspective thing, "Code First" makes total sense when you are writing code first and then generating the database. Another way to think about it is Code First and Code Second. Regardless of wording, it sounds like you have a good understanding of the options :).

  • Anonymous
    December 19, 2014
    m wrote on  Thu, Oct 23 2014 12:30 AM: "You should hire a developer behind: https://github.com/linq2db and rename his project to EF7. This guy has built something alone that the whole EF team cannot do in years and 7 versions." Completely agree!

  • Anonymous
    December 22, 2014
    The comment has been removed

  • Anonymous
    December 22, 2014
    An idea for "Update model from database" issue. Maybe it can be handled like Source Control Diff where you see two panes showing current vs. proposed and you can select which pieces get merged.

  • Anonymous
    December 25, 2014
    Hi Rowan, Our current DB enterprise workflow, from a DBA and developer perspective, goes as follows:

  1. DBA creates logical model to represent business needs (Can use EDMX at times for new projects)
  2. DBA creates physical model, changes/add/remove are schema, (table, stored procedures, UDF, Full-Text Indexing, SQL Service Broker changes, Other SQL object changes) using SSDT
  3. Deploys the database to some local DB
  4. DBA or Developer updates the model from the database; they may have to tweak imported functions or sometimes delete an entity from the model so that it is imported correctly. Steps 1-3 are repeated often until the schema & code stabilize as the feature matures. It would be really useful if EF7 could just read the SSDT metadata and create the model. We don't care if it is EDMX (however, it's nice to see the conceptual model sometimes) or C# code, as long as we don't have to manually update from the physical database and potentially introduce errors when the schema and model are not in sync.
  • Anonymous
    December 29, 2014
    @Chris - Interesting idea, especially given that models are purely code-based. Probably not something we'll look at for the initial release but could be a good scenario for us or a third party to look at in the future.

  • Anonymous
    December 29, 2014
    @Haroon Said - Did you want the model to just be purely code generated rather than requiring a manual step to update it? If so, a T4 template that reads the SSDT project and generates a model should be pretty simple (and could probably re-use a lot of the infra we are building for database -> model generation). You mentioned wanting to generate from SSDT project rather than from the physical database a few times. For you, what is the advantage of this (since the SSDT project effectively represents the structure that will be in the physical database)? Just want to make sure I fully understand what you are wanting :).

  • Anonymous
    December 30, 2014
    The Code First process is time-consuming to learn and implement compared to the Database First approach, and it is also difficult to migrate projects from older versions to EF7.

  • Anonymous
    December 31, 2014
    @Shameer Shaik - Agreed, those are valid reasons that moving to EF7 may not be the correct thing to do immediately for all developers/applications. That is the main reason we are going to be investing in EF6.x for some time - so that it remains a valid option for those who want to continue with it.

  • Anonymous
    January 07, 2015
    EF7 code-based model .. would you consider a Data Dictionary approach? Thanks Rowan for the clear description of the direction. We have been users of EF for some time, and we are overall pleased with what it brings and somewhat excited to see it developing to the next major release, EF7. One point that keeps coming back to me is this: would we ever see a Data Dictionary approach in code-based modeling? To explain what this is: consider that if we have something like a "Customer ID" property that is repeated some 40 times in 40 different model entities, all referring to the same physical meaning, i.e. sharing the same common attributes such as type, display format, display label, validation, etc., then with a Data Dictionary approach this property is defined once, with all its related attributes, and then referenced as often as required in the model entities without the need to redefine the attributes in each entity model file. As it is now, we copy the property with all its attributes to all the model files that need it, and this makes maintenance of the model a nightmare. The idea of Data Dictionary based development is not new; in the 1990s of object-oriented 4GL development it proved to be a very productive and clean approach. Just a humble thought I wanted to share.
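    A minimal sketch of what such a data dictionary could look like in a code-based model follows. All names are invented for illustration; this is not an EF feature, just the commenter's idea expressed in code:

```csharp
using System;
using System.Collections.Generic;

// Each shared property is defined once, with its common attributes.
public class DictionaryEntry
{
    public Type ClrType { get; set; }
    public string Label { get; set; }
    public int? MaxLength { get; set; }
}

public static class DataDictionary
{
    public static readonly Dictionary<string, DictionaryEntry> Entries =
        new Dictionary<string, DictionaryEntry>
        {
            { "CustomerId",   new DictionaryEntry { ClrType = typeof(int),    Label = "Customer ID" } },
            { "CustomerName", new DictionaryEntry { ClrType = typeof(string), Label = "Customer Name", MaxLength = 100 } },
        };
}
```

    A model-configuration step could then look up "CustomerId" once per entity and apply the shared type, label, and length, instead of repeating those attributes across 40 entity files.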

  • Anonymous
    January 08, 2015
    The comment has been removed

  • Anonymous
    January 09, 2015
    @Adel Mansour - I see you also posted this question on GitHub, we'll follow up there github.com/.../1370

  • Anonymous
    January 09, 2015
    @IFS_ - There isn't a lot of actionable feedback here, but a few points:

  • There is already someone who is planning to write a SQL CE provider. For EF6 the Microsoft team owned the provider but for EF7 it will most likely be a third party provider (same as Oracle, MySQL, etc. were in the past)
  • Regarding your comments about mapping to a large database, you'll still be able to reverse engineer a model from a large database (in fact the startup time should be better on EF7).
  • Anonymous
    January 14, 2015
    Can this question be solved by using EF7? stackoverflow.com/.../15373090

  • Anonymous
    January 15, 2015
    @Tan - Yes, EF7 supports relationships that target a unique property that isn't a primary key. EF7 is still in pre-release though, so I wouldn't recommend trying to build actual applications with it just yet. The current releases are designed for trying out new features etc.

  • Anonymous
    January 19, 2015
    The comment has been removed

  • Anonymous
    January 26, 2015
    @IFS_ - I understand your concern. We will still be including EF6.x tooling in VS2015 and that tooling can still open and work with models from all previous versions of EF. Correct, you will still be able to point to an existing database, select tables, get a model, and use it. The model will just be represented in code rather than having the choice between code and xml.

  • Anonymous
    January 28, 2015
    The comment has been removed

  • Anonymous
    January 29, 2015
    Here's the thing about doing away with Edmx for us:  we figured out how to get migrations out of the "Model First" approach (it was really easy), and we even went the extra step of figuring out how the T4 templates worked and how to add extensions to Visual Studio so that we could add extra information and spit out code and a database exactly the way we want it to look, even building automatically versioned streaming code (therefore significantly reducing repetitive coding).  Without a model that can be fed into T4 templates, all of this automation won't be upgradeable to EF7.  We even got around the merge problems by requiring an svn lock to modify the Edmx file. Basically, we worked around every complaint (except for confusing errors) listed above about the Edmx approach and turned it into an amazing GUI way of designing classes, where a few minutes in the Edmx editor could wind up spitting out hundreds of lines of code that would otherwise have to be hand-written.  Automating this also meant that there were almost never any bugs in this section of code. So, without a way to feed the model into T4 templates, it won't be possible to upgrade to EF7.

  • Anonymous
    January 29, 2015
    @Phil Morris - First up, I understand the concern. Breaking changes (and other major changes) are always going to cause pain to customers and need to be carefully considered. EF7 isn't a decision we took lightly, but it is a decision that is now made and we are well and truly into the release process. "There are many changes and fixes we need in EF (support for all SQL Server datatypes would be a good start!)" This is actually one of the main reasons behind taking the step to rebuild the core of EF. The majority of the most often requested features would have been extremely expensive to implement on the EF6.x code base (to the point that it wasn't really feasible to implement them). But with a metadata system that is purpose built for an O/RM we can add them to EF7. In fact, a lot of the most often requested features are already added in EF7:

  • Batching on SaveChanges

  • Partial client evaluation of LINQ queries

  • Detached graph improvements

  • Mapping to backing fields of properties

  • Relationships to unique non-primary key properties

  • In-Memory provider (for testing)

  • Extensible key generators (including SQL Server sequences)

  • Database default values

  • Anonymous
    January 29, 2015
    @Hank Schultz - You are right, if you have extended the EF Designer to do more than just generate the POCO classes for the model (sounds like you are probably adding business logic etc. to the classes) then that won't translate to EF7. Honestly, I would probably recommend sticking with EF6 in this scenario. However, another option would be to still use some kind of file to define the shape of your classes (you could perhaps even still use EDMX), but then have it generate the code based model for EF7.

  • Anonymous
    January 29, 2015
    Hi Rowan, in the following session you explain a way to handle soft deletes: channel9.msdn.com/.../DEV-B417. Will this approach, or something similar that can use the same column, be possible in EF7?

  • Anonymous
    January 29, 2015
    The comment has been removed

  • Anonymous
    February 03, 2015
    Having worked with many 'codefirst' data models from various companies now I can say this is one of the worst ideas I've ever come across.  Codefirst models are nearly universally spaghetti.  Giving programmers the ability to define anything into the data model that they please leads to all sorts of bad practices.  I can't believe they are ruining such a good technology by going this route.

  • Anonymous
    February 05, 2015
    @worst idea ever - I'd be interested to hear what makes "Codefirst models nearly universally spaghetti" in your experience. What you can do in Code First in EF6 is actually a subset of what is possible in EDMX. So developers should be able to come up with a more complex model using EDMX compared to Code First. Is the issue that it's harder to track down where a particular thing is configured?

  • Anonymous
    February 05, 2015
    So I guess you'll drop the designer rather than fix the horrible bugs there. So often I changed one thing in one table, and the designer caused ALL my generated models to be dropped from TFS. That, and the first model was always marked as changed even though it never was.

  • Anonymous
    February 09, 2015
    The comment has been removed

  • Anonymous
    February 10, 2015
    @bb - Thanks for taking the time to provide feedback. Your points make total sense and I agree they are things we need to have in reverse engineering. They may not be completely there in the initial RTM, and that would be a totally valid reason to stick with EF6.x for a while until they are added.

  • Anonymous
    February 21, 2015
    The comment has been removed

  • Anonymous
    February 27, 2015
    It doesn't matter to me whether the authoritative source for the models are in my chosen OO language or in an XML file. I'd like to politely disagree that the revision control problem is derived from using XML, though. It's actually an effect of the monolithic EDMX file. That said, what DOES matter to me is the loss of the Designer-First approach. It was an elegant place to construct an architecture that attended to both OO and relational concerns. Strict "Code-First" just makes no sense to me. By that, I mean, designing and creating a performance-sensitive relational back-end in C# (for example) code has many problems. Either we must forget about all the difficult-to-express-in-attributes-yet-still-crucial constraints and other important standard DB features, or we must accept that we've just inserted a tremendous amount of impedance in our development cycle. Back-end data modelling is still deeply relevant to many projects. The ORM presents an abstraction over this, it doesn't replace it. If the real issue is that the vision of the Designer is being retired because it was too difficult to ever fully realize, so be it. If so, then I wish the EF team would admit that. Confess that the expected professional approach now is to design and visualize the back-end entirely in an external tool... and create rock-solid Database-First tooling. Without it, EF will simply become unworkable for my team. Thanks for listening.

  • Anonymous
    March 16, 2015
    Great decision! I really enjoy working with "code first", even in big enterprise projects. Keep it up guys!!

  • Anonymous
    March 16, 2015
    @Darkonekt - A few thoughts based on your comments...

  • Totally agree that in a lot (probably the majority) of applications developers can't just make arbitrary changes to the database. You mentioned "Code First to existing database" and I just want to make sure it's clear that this workflow does not require changes to the database. In fact, in EF6.1 it uses the exact same wizard as Database First, you just select if you want a Code or XML based model during the wizard.
  • The scenario of using a T4 template that reads your metadata and generates some other code (in another project or the same one) will still work. However, instead of loading up the EDMX file to get metadata you would construct the context and get the model from it. If you want the actual entity classes themselves in a different project then I think we just need the option to generate them elsewhere (I opened an issue to track this github.com/.../1836).
  • Regarding 'Update database from model', this is definitely an important scenario. It already works if you don't change the generated code, if you customize it in partial classes, or if you customize the generation template rather than the code itself (in these scenarios you can just regenerate and overwrite). That said, there is definitely still a gap where you want to rename classes or properties (just in the code, without modifying the templates used to generate). The EDMX designer would often overwrite your changes when you updated, but for these simple renames it would leave your changes in place... we do need to make that work better with code-based models.
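    As a rough illustration of the T4 scenario above, a generator could read metadata from the context instead of parsing an EDMX file. This sketch assumes EF Core-style metadata APIs (IModel, GetEntityTypes); the helper and its output format are illustrative.

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public static class ModelDumper
{
    // Walk the model the same way a T4 template once walked the EDMX:
    // entity types, then their properties and CLR types.
    public static void DumpModel(DbContext context)
    {
        foreach (var entityType in context.Model.GetEntityTypes())
        {
            Console.WriteLine(entityType.Name);
            foreach (var property in entityType.GetProperties())
                Console.WriteLine($"  {property.Name}: {property.ClrType.Name}");
        }
    }
}
```

    A code generator would emit classes or mapping files from this metadata rather than console output.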
  • Anonymous
    March 16, 2015
    The comment has been removed

  • Anonymous
    March 16, 2015
    @Adam - thanks, glad it sounds good! We often find folks working on larger projects are hesitant to try Code First, but once they try it they find it much better.

  • Anonymous
    March 18, 2015
    @Rowan : absolutely, and I appreciate your interest. it will take me a couple days to respond.

  • Anonymous
    March 23, 2015
    I'm just wondering... do you guys plan on adding an easy way to read the changes on navigation properties (e.g. for auditing)? You'll agree that doing this through ObjectContext, as it has to be done in EF6 or earlier, is a bit too hellish, and having to model your entities with both the navigation property and the foreign key id pollutes the entity model code greatly (and complicates the data access layer if you want to do it automatically).

  • Anonymous
    March 24, 2015
    Rowan. Thanks for soliciting feedback. I've been short on time, but I have given it a little thought, so will post some now; more if I have time to consider again. I'm fairly confident my sense of "code smell" that made me cry out (you know what I mean) in my previous comment is not limited to what I'm about to describe, so I'll try to return with more info soon. I'm going to use the term "designer-first", for what was originally termed "model-first", for the sake of clarity. First, I'd like to speak to my frustration for a moment. I agree that MS has been building more capabilities into the code-first interface than the designer-first interface. So strictly speaking, your comment about having "more control over the shape of the database" when using code-first is now true. As an extreme case, it will be even more true with the upcoming release, where the designer is removed. However, having found the designer an extremely useful middle ground to model a repository when designing multiple new applications, I've been concerned about it going away for years. The community has always been assured it isn't going away, even while its features languished. The vision for it was obviously always for it to do more than it currently does. Even without going back to some of the initial launch presentations for direct quotes, it is clear from the complexity of the EDMX file. So, to say we should use code-first because it has more features feels to me like a frustrating dying-gasp avoidance of my (possibly the community's) long-standing request to come clean about the future of designer-first. I know, as a business person, that this is water under the bridge. But I also know as a business person that experiences like these are what make or break trust in a product. Ok, now that I've got that off my chest, on to the more technical concerns. (cont.)

  • Anonymous
    March 24, 2015
    Let's consider the most fundamental of fluent statements, such as a relation:
    // Composite primary key
    modelBuilder.Entity<Department>()
        .HasKey(d => new { d.DepartmentID, d.Name });
    // Composite foreign key
    modelBuilder.Entity<Course>()
        .HasRequired(c => c.Department)
        .WithMany(d => d.Courses)
        .HasForeignKey(c => new { c.DepartmentID, c.DepartmentName });
    This is completely opaque to most DBAs. It carries all the ugly... such as the anonymous type to support a composite key. This commonly stumps C# developers when they are learning to use LINQ, because it's just so unintuitive that one would (seem to) new up instances of objects to compare their internal values, without, for example, overloading the comparison. And when trying to predict any SQL code generated, creating the instance might seem to imply that we need to retrieve the data locally to make the comparison. This one simple constraint demonstrated above carries so much required depth of C# and .NET understanding (EF, generics, LINQ, fluent configuration, anonymous types... lambdas) that you'll never have C# and SQL developers collaborating in a workspace that looks like this. Even if the database is considered a second-class citizen in a product, you'll never have a SQL developer use this Single Point Of Truth to understand the product they've come to work with. This definition does not belong here. It creates impedance here. (cont.)

  • Anonymous
    March 24, 2015
    The code can exist there, and it's conceptually beautiful that it can, but using that "design surface" as the single point of truth is arguably just wrong in many cases. I agree, it's a great way for someone who wants to build now and understand data modelling later to get started. I'm sure that much of the community that spends most of their time in Visual Studio think that code-first is ideal, and that it makes sense for MS to build this product for that reason. But my comment was about designing a performance-sensitive database in C#. It was about going into a project with the knowledge that the data tier was going to be as important as any other tier. It's those cases that I was referring to. And until I saw this product direction, I had thought Visual Studio wanted to capture that business. Thanks for listening, I'll try to capture more of my discomfort with this announcement in the near future.

  • Anonymous
    March 25, 2015
    A little more time at the moment. So, the EF team is obviously set on this course of action, which is why you ask "what to improve?". So that question, forgive me if this is blunt, is really about what to improve to address certain diminished capabilities in the next release of the product as I project it, given this announcement. Given the fears I've just elaborated in the previous few comments, we can see a common source with other comments: "how will you handle unexpected schema changes from the DBA", "universally spaghetti" DBs in the hands of C# developers, "database as the primary system... development model must work well", "EF for prototyping" only, etc. SQL has no tools for defining adaptation to C# models. That means as data becomes complex, the "truth" of a SQL database belongs in SQL code. Otherwise, we have, for example:

  • Magic string stored procedures, with no reasonable access to intellisense or other coding tools, tucked inside a DB migration or other C# code. Worse yet, writing it requires imagining the probable translation that will occur, for example pluralization.
  • SQL performance analysis that requires debugging a non-"truth", which the person who created the definition may poorly-understand
  • Acquiring team members that are simultaneous experts in a greater number of verticals. This increases cost greatly... potentially long term.
  • etc. (cont.)
  • Anonymous
    March 25, 2015
    So the best improvement I can imagine given the set trajectory, is to elevate the priority of DB-first. Some examples:
  • Provide tools for detecting backing schema changes and automatically refreshing attributes or refactoring code, perhaps driven by DB project diffs. When you consider "DropCreateApplicationOnSchemaChange" you'll see how one-sided the toolset currently is.
  • Provide code deployment tools that integrate better with database projects, allowing experts to write migrations. Perhaps allow external DB up/downs to be included in other ways. Buck this trend of driving all supported database mutation from fluent. So much ugly there.
  • Provide better SQL modelling integration into Visual Studio.
  • Provide even 1/4 as much documentation and blogosphere investment for DB-first as for code-first.
    There are more examples; I'm sure your team has these ideas and plenty more on a chart already. Look through the list and add the best to release 1, please. Thanks! Shannon
  • Anonymous
    March 25, 2015
    p.s. I can second that I have generally poor experiences working with data in Code-First environments. At the end of the day, a SQL expert would generally prefer to write a SQL constraint in SQL DDL. Logically (literally) that means that someone who generally prefers to write a SQL constraint in Code-First is not a SQL expert. They are back-filling a deficiency with a tool. Great, it's nice that it exists, but it isn't a replacement. I've always thought Code-First was cool. I really wanted to find it useful. But I discovered it's generally not for me. No matter how much time and effort goes into, no matter how beautiful it is or how polished it gets, those are wasted resources for most of my projects.

  • Anonymous
    March 26, 2015
    The comment has been removed

  • Anonymous
    March 30, 2015
    It's a pretty significant cost to organizations like mine, which have hundreds of hours invested in T4 templates based on the EDMX model. Every object has its properties wrapped in change handlers to trigger events in our code, based on code generation from EDMX. Losing EDMX and moving to writing a ton of code by hand will cost us man-months of development with over 290 entities. Maybe we can write a new template to generate the mapping for our classes into Code First maps from what is now our legacy EDMX file, but the fact that you abandoned those of us that are model first is a little unfair.

  • Anonymous
    April 16, 2015
    I just had a brand-new idea for a feature that would help me a lot. I'm so excited that I made a video about it! ;) channel9.msdn.com/.../DEV314

  • Anonymous
    April 16, 2015
    Rowan. I just finished viewing a presentation by you of the forthcoming EF features, and also fired up VS 2015. I repeatedly see two messages here meant to diminish fear/uncertainty/doubt regarding the upcoming release, for people like me:

  1. It's not really "Code-First Only"
  2. It's not really an EF "upgrade" per se, so we needn't upgrade until better DB-first support arrives. I'd like to dispute #1, although I suspect the EF team is aware of this: all of the actual tool-assisted development in this new release really does presume the schema expert does their work in C#. That just really doesn't make sense to me, and I'm very disappointed. Re: #2... well, that is sure looking like a dead end at the moment... I really mean no ill by my message, but please understand my position. I've worked primarily with SQL Server and .NET for a long time now. I'd rather keep VS as my primary tooling, but I also advocate predictive adaptation, which will be necessary without some concrete evidence that strong support for my workflow (schema experts in relational projects work with DDL, not C#) is coming in the very near future. Also, I'm personally very curious: does your market research indicate that I'm an outlier? Does your team think SQL is becoming less relevant? What am I missing? Again, predictive adaptation. If my entire paradigm is wrong, I'm happy to adapt. Thanks again for listening! Shannon
  • Anonymous
    April 18, 2015
    The comment has been removed

  • Anonymous
    May 03, 2015
    Just a reminder that your return commentary would be appreciated. I presume my concerns don't reflect the bulk of your market research, so I'd love to hear a little about your take on my disconnect.

  • Anonymous
    May 11, 2015
    How about this. Pick up efreversepoco.codeplex.com . Enhance it to generate directly from SQL database projects. Provide a visualization tool for SQL Data Projects in VS.

  • Anonymous
    May 12, 2015
    @Javier Campos - In recent versions of EF we've had the Collection and Reference methods that allow you to access navigation property values in a much easier way (i.e. db.ChangeTracker.Entry(post).Reference("Blog").CurrentValue). We will have the same (or a similar) API in EF7. As with past releases, you also won't be required to have the foreign key property in your class. In EF7, if the property isn't in your class (known as a shadow property) then the value can easily be retrieved from the ChangeTracker (context.Entry(post).Property("BlogId").CurrentValue).
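    A hedged sketch of how the shadow foreign key described above might look under EF Core-style APIs (class and property names are illustrative): the Post class carries only the navigation property, and "BlogId" exists in the model but not on the CLR type.

```csharp
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int Id { get; set; }
}

public class Post
{
    public int Id { get; set; }
    public Blog Blog { get; set; }   // navigation only; no BlogId property here
}

public class BloggingContext : DbContext
{
    public DbSet<Post> Posts { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // "BlogId" is a shadow property: part of the model, not the class.
        modelBuilder.Entity<Post>().Property<int>("BlogId");
    }
}

// Reading the shadow FK value via the change tracker:
//   var blogId = context.Entry(post).Property("BlogId").CurrentValue;
```

    This keeps the entity model free of the FK "pollution" mentioned in the question while still letting auditing code see the value.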

  • Anonymous
    May 12, 2015
    @Carl – This sounds like exactly the type of application we would not recommend porting to EF7 unless you have a compelling reason to. EF6 will be a perfectly valid option for some time to come. If you wanted to move to EF7 then, as you said, you could swap to generate the entities from a different source, but we would only recommend that if there were things in EF7 you really wanted to use.

  • Anonymous
    May 12, 2015
    @Dave – Batching of commands in SaveChanges is already in EF7. LINQ style UPDATE/DELETE commands are not, but we want to enable them in the future. Not sure which of those two you were after. Indexes are a first class citizen in EF7, filtered indexes you may need to extend migrations a little to support.

  • Anonymous
    May 12, 2015
    The comment has been removed

  • Anonymous
    May 12, 2015
    The comment has been removed

  • Anonymous
    May 12, 2015
    The comment has been removed

  • Anonymous
    May 26, 2015
    Yeah, was just working on some EF code, and remembering some of the old challenges with EDMX. I really worry that this will be many steps backwards for DB-first development, which was already missing important capabilities available to code-first. Won't there be major obstacles to performing updates to "code-authoritative EF models" after common changes to a DB-first schema? With the definition stored in code, configurable options will also be stored in code. For example, enum types, default values, etc. In a naive implementation, all such configuration will be overwritten during an update. In a less naive implementation, you could just leave untouched any "user-configured" attributes associated with class members that still exist. Or I suppose that by leveraging the existing database project "schema compare" feature, you could create a list of lines of configuration to remove and to add. But this still leaves lots of headaches for teams and individuals. For example, there is no way to refactor a DB-first token. Yet sp_rename is common early in a schema's life, and would result in (silently) lost configuration.

  • Anonymous
    June 09, 2015
    The comment has been removed

  • Anonymous
    June 10, 2015
    My question is this.  Is something being done to improve batch operations in EF?  Having to delete or update multiple records in EF creates a huge performance bottleneck as essentially records are still operated on one by one.  We solve this by mapping to stored procedures and first using xml type parameters and later user-defined table types.  We wrote extensions that allow us to convert IEnumerable to DataTable and pass it to a stored procedure as a Structured parameter.  I guess DB is still king at batch processing.  So will there be any improvement in EF7?
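    For readers curious about the workaround described above, here is a minimal sketch of the table-valued parameter approach using plain ADO.NET: a list of IDs becomes a DataTable passed as a Structured parameter, so SQL Server can delete in one set-based statement. The stored procedure and user-defined table type names are assumptions.

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class BatchOps
{
    // Convert a sequence of ids into the DataTable shape expected by a
    // user-defined table type with a single int column named "Id".
    public static DataTable ToIdTable(IEnumerable<int> ids)
    {
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        foreach (var id in ids)
            table.Rows.Add(id);
        return table;
    }

    // Pass the whole set to a stored procedure in one round trip.
    public static void DeleteOrders(SqlConnection conn, IEnumerable<int> orderIds)
    {
        using (var cmd = new SqlCommand("dbo.DeleteOrders", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var p = cmd.Parameters.AddWithValue("@OrderIds", ToIdTable(orderIds));
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.IdList"; // assumed user-defined table type
            cmd.ExecuteNonQuery();
        }
    }
}
```

    The stored procedure would then do a single set-based DELETE joined against @OrderIds, avoiding the row-by-row pattern the comment describes.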

  • Anonymous
    July 11, 2015
    With all the complaints you probably get about current or previous versions of the framework, it's clear that a lot of thought and hard work has been put into the project. I laughed a little at the admission of "getting the name wrong", but it's good that you can recognize these things and continue to move forward. I believe in the project and look forward to the advances the team continues to make. Thanks.

  • Anonymous
    July 24, 2015
    FWIW, it is entirely possible to do both db first (thereby permitting proper rdbms design) with code first (proper POCO w/Attribute decorated meta data) in the same project if you are otherwise stuck with an edmx based solution.  Just use two contexts, one with an edmx connection string, and one with a standard connection string.  The code first context specifies the domain POCO dbsets you want, just watch out for namespace collisions with the generated set (which are practically useless, in my opinion). That said, if I have to use a non-micro ORM, (and I would never NOT use an ORM to decouple the data from other concerns) I'd much rather use EF than Hibernate, because of IQueryable and LINQ syntax, and I would simply write code first POCO's against an existing database.  You get the full control of the POCO's, a properly designed db, and decoupling of data from business concerns, which IMO is the only valid reason for using an ORM in the first place.

  • Anonymous
    August 16, 2015
    We currently use a model-first / domain-driven design approach using NHibernate. We tried to move over to Entity Framework, but the current migrations stopped us from doing that. The trouble is that you have to remember to create the migrations at the correct time, and to coordinate changes and migrations from colleagues. Normally we would start adding models and extending current models, adding business logic, and testing it with unit tests; no database is involved yet. Then you might pull in some changes from your team mates and keep on making changes. When the functionality is finished we create some integration tests, where the database comes in. Every developer has his or her own database. At that point I would like an option to compare my current model/domain with the current state of my database, and then to update or recreate my database according to the changed model. Note that these changes come from myself and probably my team mates as well. It would be nice to get an update script with all changes, or a recreate script that first throws away everything (in the correct order) and after that recreates all tables. When moving to test, acceptance or production, you could run the tool again to get the script with the differences. We could hand those over to the DBA, who could make additions and improvements before running them.

  • Anonymous
    September 06, 2015
    The comment has been removed

  • Anonymous
    September 24, 2015
    You surely should integrate the designer into Visual Studio. In the past the designer was based on the EDMX file; now it's time to base it on code (some kind of auto-generated code). The designer can be used for most tasks, while modifying code handles advanced tasks. Let's think about what it should be rather than how hard it is to do. I believe you great developers can make it!

  • Anonymous
    October 11, 2015
    I have worked with a model-based approach... so would I be suited to other techniques, and is it better to work in a model-based approach?

  • Anonymous
    November 23, 2015
    @Shannon - I know it is several months since you posted your comments... but I just wanted to follow up to thank you for your detailed responses. We are still thinking through how all of this pieces together and your thoughts have been helpful.

  • Anonymous
    November 23, 2015
    @Craig Williams – In EF7 we made a change so that you start by installing the NuGet package for the provider you want to use (i.e. we stopped special casing SQL Server). This means the package names are a distinct set between EF6/EF7. The upside of this is that you don’t inadvertently upgrade an existing project.

  • Anonymous
    November 23, 2015
    @Edward Kagan – In EF7 we don’t send individual commands for every change anymore, things are batched up during SaveChanges. That said, we do still require that the objects to be changed/deleted be loaded into memory. We do want to add set-based Update/Delete style commands in the future.

  • Anonymous
    November 23, 2015
    @Ewout – I’m not sure if this is exactly what you are after… but in EF7 we now have an InMemory “database” that you can use for testing/prototyping/etc. without needing to worry about schema changes. We also have context.Database.EnsureCreated()/EnsureDeleted() APIs that are good for prototyping.

  • Anonymous
    November 23, 2015
    @F. Ed – Enums are supported (we supported those back in EF5, so maybe there is something in particular about the EF5/6 implementation you found undesirable?). Many-to-many relationships are (at least for the moment) represented with a join entity in EF7, which removes a lot of the issues they had in EF6.
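    A hedged sketch of the join-entity pattern mentioned above, under EF Core-style APIs (class and property names are illustrative): the many-to-many between Post and Tag is modeled through an explicit PostTag entity with a composite key.

```csharp
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

public class Post
{
    public int PostId { get; set; }
    public List<PostTag> PostTags { get; set; }
}

public class Tag
{
    public int TagId { get; set; }
    public List<PostTag> PostTags { get; set; }
}

// The join entity is explicit in the model rather than hidden by the mapper.
public class PostTag
{
    public int PostId { get; set; }
    public Post Post { get; set; }
    public int TagId { get; set; }
    public Tag Tag { get; set; }
}

public class BloggingContext : DbContext
{
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Composite key over the two foreign keys.
        modelBuilder.Entity<PostTag>().HasKey(pt => new { pt.PostId, pt.TagId });
    }
}
```

    Making the join entity visible lets you query and configure it directly, which is the source of the "removes a lot of the issues" remark above.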

  • Anonymous
    November 23, 2015
    @Love.NET Framework – Agreed that we need tooling (we need to keep the “Visual” in “Visual Studio” as many folk have said). We’ll be starting work on it soon.

  • Anonymous
    November 24, 2015
    Thanks EF Team! By reducing the options, we are all starting to get on the same level playing field. We have a number of apps started in each of the versions of EF; migrating between them hasn't been fun, but with each new iteration I NEVER find myself saying "wow, that was so much easier before". The bar keeps being raised and we keep getting more and more productive. I still have one old app that I have left with EDMX, but that is because we have DBAs on that project who keep making changes directly in the DB schema. You are right, the gripes that we all have stem from lack of knowledge or experience in how we should be structuring our data access to get the most out of EF.

  • Anonymous
    November 24, 2015
    The comment has been removed

  • Anonymous
    November 30, 2015
    The comment has been removed

  • Anonymous
    December 02, 2015
    just wanted to say: we're using edmx for our authoritative DB schema. we have customized .tt templates based on both EF4 and EF5. we generate migrations directly from the model changes. we won't be switching to EF7.

  • Anonymous
    December 22, 2015
    This huge change came from recognizing other platforms late; maybe their future was underestimated at first. Just my personal opinion.