D3: Testing, TFS and a Teenager
It has been a couple of months since I’ve put up a new release of D3, but that doesn’t mean I haven’t been working on it. Release 0.0814 is now available. This release is designed to work with VS2010 RC (available starting last week), but it should also work just fine, or with very minimal changes, with VS2010 beta 2 if that’s what you have.
The Teenager
My 14-year-old son, Keith, is developing into quite the programmer. You may have heard the .Net Rocks guys read a letter he wrote them in episode 520 about the new support for complex numbers in .NET 4. Let me assure you that he did that all on his own, and I didn’t even hear about it until he mentioned that he’d gotten a response from them asking for his shirt size so they could send him a .Net Rocks hoodie (which he now wears all the time). You can see a few projects on his website, including the picture to the right, which is a computer-generated image that he created. Anyway, he has been watching me work on DPMud through each of its versions over the last several years, and he keeps me on the straight and narrow by regularly asking what I’ve done on it lately. One of his long-standing requests has been for us to find a way that he could work on the project with me, so in the time since the last release of D3 not only have I been working to evolve the codebase, but I also installed TFS on a server at home and got things set up so that he can begin contributing. The code in the 0.0814 release is still all from me, but he has begun doing code reviews, and in the next release some of the bits will have originated with him. (What a blast it is to code with my son! Go Keith!)
TFS
Before we dive into a discussion of the changes in this release of D3, let me take a moment to mention what a great experience I’ve had using TFS 2010. The new basic installation is AMAZING. I repurposed an old single-core AMD PC I had sitting around the house, upgraded its RAM a little so that it now has 2GB, reformatted it with a clean copy of Win7, installed TFS 2010 beta 2 plus VS2010 beta 2, and then followed the instructions in Jason Zander’s blog post and the post from Brian Harry that he links to. Setting things up was super smooth, which is a huge change. The last time I tried to set up TFS for myself I couldn’t even figure out what software or licenses I would need, and I eventually got disgusted and just decided to use something else for my source control. This time it was as easy as could be, and the final result is a server I can access from home, work or anywhere on the internet; full integration of source control with VS, including simple offline support; a build server that does a continuous integration build whenever one of us makes a check-in; and even a bug/work-item tracking system which we haven’t fully started using yet but is ready to go. Oh yeah, and not only can I use the built-in client in Visual Studio, but if I need to check on something from a machine that doesn’t have VS installed I can just connect to the web interface. Now there’s no excuse for not having good source control.
D3 Changes in this Release
OK. So what have I been doing with D3 this time around? Well, as usual I’ve been making general investments in the code, including things like refactoring and renaming the projects so that everything follows a three-part naming convention starting with D3, ensuring that every project has source analysis, StyleCop and warnings-as-errors turned on and then correcting any issues that turned up, etc. The really interesting things, though, are that I switched from the standard codegen to POCO classes generated using a T4 template, separated the POCO entities into one project and the context and other database-specific bits into another project, and then began porting over some events and actor actions and writing tests for them.
Most of this is pretty straightforward, and you can look at the code in the release to see what’s up. I started with one of the POCO templates we’ve released as part of the EF Feature CTP, but because some things have been changing between beta 2 and RC and such, I copied not only the core template but also the EF.Utility.ttinclude file into the project so that I’d be isolated from changes there. Once those things completely settle down, I’ll look at getting rid of the ttinclude file and just using the standard one. On a similar note, I’m still using my customizations to the model-first workflow to generate C# code for creating the database rather than just a SQL file, but .NET 4 RC now has methods on ObjectContext which will create a database or check for its existence based on just the SSDL used in the context, so I’ll probably switch to using that in a future release of D3. (It’s always great when I can get rid of some code and just use the EF’s built-in mechanisms.)
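For reference, those new ObjectContext methods work roughly like this. This is a sketch rather than code from the release, and D3Context here stands in for the generated context class:

```csharp
using System;
using System.Data.Objects;

// Sketch of the built-in EF 4 database-creation support mentioned above.
// All three methods drive off the SSDL already loaded into the context.
using (var context = new D3Context())
{
    if (!context.DatabaseExists())
    {
        context.CreateDatabase();
    }

    // CreateDatabaseScript returns the DDL as a string if you would
    // rather inspect it or run it yourself.
    string ddl = context.CreateDatabaseScript();
    Console.WriteLine(ddl);
}
```

This is why the custom model-first codegen can probably go away: the context already carries enough store metadata to create the database on its own.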
The really interesting part, though, was when I started porting over the first core DPMud capabilities from the previous version code base. Most of what makes DPMud really work is in the Actor actions and the Event entities. This is how we simulate things happening in the virtual world of DPMud—methods are called on the Actor class for various actions like entering or leaving a room, picking up an item, etc. Those methods do some validation, modify the database state as appropriate and then create one or more Event entities which notify other actors about what happened. The normal workflow for a player client in DPMud is a combination of parsing string input to call these action methods and periodically querying the database for new Events and then displaying a string representation of the Events from the perspective of the player. The Event classes contain the business logic to compute the string describing what the event means so that when I see the event that I picked something up it will return a string saying “You picked up foo.” But when someone else sees that same Event, they will see “Danny picked up foo.” or something like that.
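As a rough sketch of the perspective-sensitive descriptions (the class and property names here are illustrative, not the actual D3 types), an event might render itself like this:

```csharp
// Illustrative only: shows how an event can describe itself differently
// depending on which actor is viewing it. PickupEvent, Item, Actor and
// the SourceActor properties are hypothetical stand-ins for the real model.
public class PickupEvent : Event
{
    public Item Item { get; set; }

    public string Describe(Actor viewer)
    {
        // SourceActorId identifies the actor who performed the action.
        return viewer.Id == this.SourceActorId
            ? string.Format("You picked up {0}.", this.Item.Name)
            : string.Format("{0} picked up {1}.", this.SourceActor.Name, this.Item.Name);
    }
}
```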
The key point is that these action methods and the events that go with them are really the core business logic of the system, and doing a good job of testing them is a critical mission for the D3 rewrite of DPMud. Since we’re going to need a lot of tests, and since we want to isolate the part of the system under test, it seems clear that we need to create a fake data access layer. Further, while I’m a big fan of the new IObjectSet<T> interface which is designed to make this process simpler, in this case we’ve got some tricky dependencies. The action methods are members of entity classes which we need to make sure have no dependencies on the DAL, but at the same time those methods need services from the DAL such as the ability to add new events or modify entity state.
Inversion of Control
The standard way to handle this kind of problem is an inversion of control / dependency injection pattern which has three parts:
- We define interfaces as part of the same assembly containing the entities which describe the capabilities that the entities require from the DAL.
- The DAL implements those interfaces, and we can create fake implementations of those interfaces for use by unit tests.
- We create some mechanism to make an implementation of the interfaces available to the entities at runtime—either the “real” DAL implementation when the app is in production, or the fake implementation for the tests.
The first two aren’t terribly difficult… T4 to the rescue! I created a new interface ID3ObjectSet which is essentially the same as IObjectSet but specific to D3 and part of the D3.Model assembly rather than System.Data.Entity.dll. Then it was easy to make a T4 template which generates an interface representing the context, with an ID3ObjectSet property for each top-level entity set. It was also a simple exercise to modify the template generating the context so that the generated context implements that interface. The one wrinkle is that ObjectSet<T>, which provides the core capabilities for working with the sets in the database, unfortunately does not implement the new ID3ObjectSet interface, so I created a simple class which wraps an ObjectSet<T> instance and implements the interface.
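The shape of those two pieces is roughly this. This is a sketch of the idea rather than the exact code in the release; the member list mirrors IObjectSet<T>:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using System.Data.Objects; // ObjectSet<T> in EF 4

// A D3-specific stand-in for IObjectSet<T>, so the entity assembly has
// no dependency on System.Data.Entity.dll.
public interface ID3ObjectSet<TEntity> : IQueryable<TEntity> where TEntity : class
{
    void AddObject(TEntity entity);
    void DeleteObject(TEntity entity);
    void Attach(TEntity entity);
    void Detach(TEntity entity);
}

// The generated context hands out instances of this wrapper, which
// simply forwards every member to a real ObjectSet<TEntity>.
public class D3ObjectSet<TEntity> : ID3ObjectSet<TEntity> where TEntity : class
{
    private readonly ObjectSet<TEntity> objectSet;

    public D3ObjectSet(ObjectSet<TEntity> objectSet) { this.objectSet = objectSet; }

    public void AddObject(TEntity entity) { this.objectSet.AddObject(entity); }
    public void DeleteObject(TEntity entity) { this.objectSet.DeleteObject(entity); }
    public void Attach(TEntity entity) { this.objectSet.Attach(entity); }
    public void Detach(TEntity entity) { this.objectSet.Detach(entity); }

    // IQueryable members delegate to the wrapped set so LINQ queries
    // still translate to SQL.
    public IEnumerator<TEntity> GetEnumerator() { return ((IEnumerable<TEntity>)this.objectSet).GetEnumerator(); }
    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return this.GetEnumerator(); }
    public Type ElementType { get { return ((IQueryable)this.objectSet).ElementType; } }
    public Expression Expression { get { return ((IQueryable)this.objectSet).Expression; } }
    public IQueryProvider Provider { get { return ((IQueryable)this.objectSet).Provider; } }
}
```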
The third part of the pattern can be done several ways. Sometimes there’s a dependency injection mechanism which supplies the implementation whenever the objects that need it are constructed, but in this case the EF generally creates entity instances as part of materializing queries which makes it harder to inject dependencies at this time. So the approach I took instead was to make a simple IoC container the entities can call to get an implementation from. Often this is done with a general purpose IoC container like Unity or something along those lines, but in this case I created a small special purpose implementation just for the context which I call D3DB.
It has two parts. The core piece is just a thread-local-storage static property whose type is the context interface (ID3Context):
[ThreadStatic]
private static ID3Context current;

public static ID3Context Current
{
    get
    {
        if (current == null)
        {
            throw new InvalidOperationException("Cannot access context without setting it first.");
        }
        return current;
    }

    set
    {
        current = value;
    }
}
This means entities can interact with the DAL (or a fake of it) by calling methods on “D3DB.Current”. I have a support method for actions, for example, which helps with creating new events that looks like this:
public void Log(Event e)
{
    if (e == null)
    {
        throw new ArgumentNullException("e");
    }

    e.SourceActor = this;
    e.SourceRoomId = this.RoomId;
    D3DB.Current.Events.AddObject(e);
}
The second part is a disposable object with a constructor that sets the static property and clears it when the object is disposed. That way we can have a “using” block that sets the implementation and clears it when the block goes out of scope. Also really simple except for the fact that correctly implementing the dispose pattern has several parts:
public class D3DB : IDisposable
{
    // thread-local-static property here...

    public D3DB(ID3Context context)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }
        if (current != null)
        {
            throw new InvalidOperationException("There already is a current DB.");
        }
        Current = context;
    }

    //
    // Dispose pattern
    //

    private bool disposed = false;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!this.disposed)
        {
            if (disposing)
            {
                var context = Current;
                Current = null;
                context.Dispose();
            }
            disposed = true;
        }
    }

    ~D3DB()
    {
        Dispose(false);
    }
}
This means, for instance, that I can create a fake context which uses HashSets instead of real ObjectSets, and then it’s easy to write a test where the action methods will use the fake context in the test but still have them use the real context in production. Something like this:
[TestMethod]
public void Enter_ValidRoom_SetsActorRoomAndLogsEnterEvent()
{
    using (new D3DB(new FakeD3Context()))
    {
        var actor = new Actor { Id = 1, RoomId = 1 };
        var room = new Room { Id = 2 };
        Assert.AreNotSame(room, actor.Room);

        actor.Enter(room);

        Assert.AreSame(room, actor.Room);
        Assert.AreEqual(room.Id, actor.RoomId);
        var e = (EnterEvent)D3DB.Current.Events.Single();
        Assert.AreEqual(actor.Id, e.SourceActorId);
        Assert.AreEqual(room.Id, e.SourceRoomId);
    }
}
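The fake context itself can be as simple as HashSet-backed implementations of the set interface. This is a sketch of what a minimal fake might look like (the real one lives in the test project, and ID3Context/Event are the generated D3 types):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Sketch of a fake, in-memory set for unit tests: no database, no EF.
public class FakeObjectSet<TEntity> : ID3ObjectSet<TEntity> where TEntity : class
{
    private readonly HashSet<TEntity> set = new HashSet<TEntity>();

    public void AddObject(TEntity entity) { this.set.Add(entity); }
    public void DeleteObject(TEntity entity) { this.set.Remove(entity); }
    public void Attach(TEntity entity) { this.set.Add(entity); }
    public void Detach(TEntity entity) { this.set.Remove(entity); }

    // LINQ queries in tests run in memory via LINQ to Objects.
    public IEnumerator<TEntity> GetEnumerator() { return this.set.GetEnumerator(); }
    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return this.GetEnumerator(); }
    public Type ElementType { get { return typeof(TEntity); } }
    public Expression Expression { get { return this.set.AsQueryable().Expression; } }
    public IQueryProvider Provider { get { return this.set.AsQueryable().Provider; } }
}

// The fake context just exposes one fake set per top-level entity set.
public class FakeD3Context : ID3Context
{
    public FakeD3Context()
    {
        this.Events = new FakeObjectSet<Event>();
        // ...and so on for the other entity sets in the model...
    }

    public ID3ObjectSet<Event> Events { get; private set; }

    public void Dispose() { }
}
```

Because D3DB.Dispose calls Dispose on the current context, the fake implements IDisposable through ID3Context with a no-op.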
I guess that’s more than enough for one post. Have a look, and let me know if you have questions or feedback. Next release, hopefully we’ll get more of the end-to-end infrastructure together so we can begin to have a working app rather than just a bunch of tests…
- Danny
Comments
Anonymous
February 25, 2010
Good job. I have a similar situation and I've resolved it using a partial class to implement a custom interface. It works, but if my DAL uses ContextOptions I get a NullReferenceException because the options are null. ObjectContextClass is sealed and hasn't any public constructor... So, how can I mock this? Any idea?
Anonymous
February 26, 2010
I assume you mean that ObjectContextOptions class is sealed. No, there's not a great way to mock that directly, but personally that's not the approach I would take anyway. I would view it more as an implementation detail, and any code that references it directly I would try to push into my concrete implementation of the context rather than up in a layer above that. Usually these are options you tend to set once when the context is constructed anyway, so I would tend to set them in the OnContextCreated partial method for the context so they are initialized that way and then not worry about mocking them except to make sure my fake implementation reflects the behavior of them being set. If you need to change the options during the lifetime of the context and therefore need to expose things up to higher layers, then I would create a new method in the partial class for the context which encapsulates the intended change, put that in the interface I have for the context and then mock that higher order method rather than trying to mock the ObjectContextOptions directly.
- Danny
Anonymous
March 07, 2010
I have a problem trying to use the context's CreateDatabase() in an integration test harness (I'm using VS2010 RC). I have 2 simple tables with a many-to-many relationship. The autogenerated join table has 2 identity columns coming from the entities, which leads to a SQL error of course. i.e.

create table [dbo].[TenantModule] (
    [Tenants_TenantId] [int] not null identity,
    [Modules_ModuleId] [int] not null identity,
    primary key ([Tenants_TenantId], [Modules_ModuleId])
);

The SQL script generated by the designer is correct, but the one returned by ctx.CreateDatabaseScript seems wrong. I see other differences too, i.e. the designer-generated script contains indices on FKs, while the other one doesn't have any index generation.
Anonymous
March 07, 2010
I had other problems related to adding related objects, and after some poking/googling found that there seems to be a bug in the generated edmx for m:n join tables: I had to manually replace StoreGeneratedPattern="Identity" with "None" for the join tables. After that, CreateDatabaseScript was OK and the creation code also works as expected. Please, please fix this for RTM. Is this in the T4 template? Maybe I can "fix" it there?