Rock, Paper, Azure Deep Dive: Part 1
If you’re not sure what Rock, Paper, Azure (RPA) is all about, check out the website or look over some of my recent posts. In this series of posts, I want to go into some of the technical nuts and bolts regarding the project.
First, you can download Aaron’s original projects on github (here and here). The first project is the Compete framework, an extensible framework designed to host games like Rock, Paper, Scissors Pro! (the second project). The idea, of course, is that other games can be created to work within the framework.
Aaron and the other contributors to the project (I remember Aaron telling me others had helped with various pieces, but I don’t recall who did what) did a great job assembling the solution. When moving it to Windows Azure, we ran into a number of issues – the bottom line is, our core requirements were a bit different from what the original solution targeted. When I describe some of these changes in this and other posts, don’t mistake it for criticism of Aaron’s project. Having used it at code camps and as the basis for RPA obviously shows I have a high regard for the concept, and the implementation, in many parts, was quite impressive.
So, if you download those two projects on github, the first challenge is getting everything up and running. You’ll see references to a local path in a few locations – by default, I believe this is “c:\compete”. This is the local scratch folder for bots, games, the db4o database, and the logfile. Getting this to work in Windows Azure was actually pretty straightforward. A Windows Azure project has several storage mechanisms, but when it comes to NTFS disk I/O, you have two options: Local Storage or Azure Drives.
Azure Drives are VHD files stored in Azure Blob Storage and can be mounted by a VM. For our purposes, this was a little overkill because we only needed the disk space as a scratch medium: the players and results were being stored in SQL Azure. The first thing we needed to do to get local storage configured is add a local storage resource:
In this case, we just created a local storage area called compete, 4GB in size, set to clean itself if the role recycles.
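In the service definition, that local storage resource looks something like the following sketch (the surrounding role and service names here are illustrative; the `LocalStorage` element itself matches what the paragraph above describes – 4GB, cleaned on recycle):

```xml
<ServiceDefinition name="Compete"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="Compete.Site">
    <LocalResources>
      <!-- 4GB scratch area, wiped if the role instance recycles -->
      <LocalStorage name="compete" sizeInMB="4096" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WebRole>
</ServiceDefinition>
```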
The next step was to remove any path references. For example, in Compete.Site.Models, you’ll see directory references like this:
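Paraphrasing from memory (this is not the exact source), the hard-coded references looked roughly like this:

```csharp
// Illustrative sketch of the original pattern: a path baked into the code,
// which works on a dev box but not inside an Azure role instance.
public class GameSettings
{
    public string ScratchFolder
    {
        get { return @"c:\compete"; }
    }
}
```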
Because there’s so much disk I/O going on, we created an AzureHelper project to ultimately help with the abstraction, and have a simple GetLocalScratchFolder method that resolves the right place to put files:
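The helper boils down to something like this (names are from memory and the fallback path is illustrative; `RoleEnvironment.IsAvailable` and `GetLocalResource` are the real ServiceRuntime APIs):

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

public static class AzureHelper
{
    // Resolve the scratch folder: use the "compete" local storage resource
    // when running under Azure, otherwise fall back to the old local path
    // so the solution still runs outside the emulator.
    public static string GetLocalScratchFolder()
    {
        if (RoleEnvironment.IsAvailable)
        {
            // RootPath is the VM-local folder Azure allocated for "compete"
            return RoleEnvironment.GetLocalResource("compete").RootPath;
        }
        return @"c:\compete";
    }
}
```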
Now, we inject that call wherever a directory is needed (about a half dozen or so places, if memory serves). The next major change was deciding: to Spark, or not to Spark? If you look at the project references (and in the views themselves, of course), you’ll see the Spark view engine is used:
I’m no expert on Spark but having worked with it some, I grew to like its simplicity:
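For a flavor of that simplicity, a typical Spark view looks something like this (an illustrative snippet, not one of the actual RPA views – the model and property names are made up):

```html
<!-- Spark blends code into markup via attributes: 'each' iterates,
     ${...} emits a value, and [[...]] escapes generic type parameters -->
<viewdata model="IList[[Player]]" />
<ul>
  <li each="var player in Model">
    ${player.Name} - ${player.Wins} wins
  </li>
</ul>
```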
The problem is, getting Spark to work with .NET 4.0 and MVC 2 was, at the time, difficult. That doesn’t appear to be the case today – judging by their web page, Spark has been revived a bit – but we started this a few weeks earlier, before that support existed, and while we recompiled the engine and got it working, we ultimately decided to stick with what we knew best.
The end result is the Bot Lab project. While we’re using RPA with the idea that it can help others learn about Azure while having fun, it’s also a great example of why you’d use Windows Azure. The Bot Lab project is around 1 MB in size, and the Bot Lab itself can be up and running in no time (open the solution, hit F5).
Imagine you wanted to host an RPS-style competition at a code camp. With a deployment package, you could host it locally if you wanted, or upload it to Windows Azure – at the extra small instance rate of $0.05/hour, hosting it for 6 hours at a code camp would cost $0.30. Best of all, there’s no configuration to be done (except for what the application dictates, like a username or password). This, if you ask me, is one of the greatest strengths of platform as a service.