Azure – from July CTP to November 2009 PDC Release

Well, the official PDC 2009 release of the Azure SDK is out and there are a lot of changes.  I’m not going to do a release-notes-style narrative of the changes; instead, I thought it might be interesting to focus on the changes I had to make to get a simple service that was working with the July CTP against cloud table storage to work with the PDC release.

My service is a simple one that displays the next departure time for the ferry I regularly use here in Seattle.  The Washington State Ferry makes available an RSS feed of upcoming sailings, which a worker role consumes and translates into a more easily manipulated form in an Azure table storage entity.  The web role simply retrieves the next ferry time from Azure table storage and updates it on the web page every minute.  Very simple.  By the way, I don’t claim that the way I’m doing this is necessarily the best way – part of my goal in writing this was to learn more about Azure and ADO.NET Data Services, so it may be sub-optimal in some places.

So here goes.

  • The biggest change (for my service) was in StorageClient.  It went from being an “as-is” sample that you included and built with your project to a supported part of the Windows Azure SDK, in the Microsoft.WindowsAzure.StorageClient namespace.  The way you use StorageClient changed as well, in response to feedback on the sample.
    • Note that CloudStorageAccount (which replaces StorageAccountInfo from the July CTP bits), part of what you need to use StorageClient, lives in the Microsoft.WindowsAzure namespace; see the using directives below.
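In practice the storage code ends up with using directives like these (the comments note what comes from where):

using Microsoft.WindowsAzure;                 // CloudStorageAccount, StorageCredentials
using Microsoft.WindowsAzure.StorageClient;   // TableServiceContext and friends
using Microsoft.WindowsAzure.ServiceRuntime;  // RoleEnvironment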
  • I’m using ADO.NET Data Services classes to access a simple table entity.  So I have a class, FerryCrossing, which provides the structure of the table storage entity – no changes there (a minimal sketch follows).
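For reference, here’s roughly what that entity looks like – a minimal sketch, with the standard table keys declared directly on the class; the DepartureTime property is just my illustration of the payload:

public class FerryCrossing
{
    // Standard Azure table storage keys.
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }

    // Illustrative payload: when this sailing departs.
    public DateTime DepartureTime { get; set; }
}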
  • I also have a class, FerryContext, which inherits from the Azure TableServiceContext class.  The constructor here changed from taking no arguments in the July CTP to taking two arguments in the new release (the StorageCredentials instance comes from the CloudStorageAccount’s Credentials property):

public FerryContext(string baseAddress, StorageCredentials credentials)
    : base(baseAddress, credentials)
{
}
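The rest of the context class works as it did before; one common pattern is to expose a typed query root over the table.  This is a sketch – the Crossings property and the "FerryCrossings" table name are my own illustration, not necessarily what the real service uses:

// A strongly typed query root over the crossings table.
public IQueryable<FerryCrossing> Crossings
{
    get { return this.CreateQuery<FerryCrossing>("FerryCrossings"); }
}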

  • Finally, I have a class, FerryDB, which provides the public API for the database actions – things like getting a list of crossings, adding a crossing (called from the worker role based on the RSS data), and so on.  This one changed in a number of ways:
    • It can no longer assume that StorageClient will just read the config file to get the table storage endpoint and account information (AccountSharedKey, AccountName) as it did in the July sample.  Now my code reads those and passes them in (through the FerryContext constructor shown above).
    • The config format has changed a bit.  To use development storage, you use this in your ServiceConfiguration.cscfg file associated with your Azure service:

<Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
<!-- Value for Development Storage -->
<Setting name="TableStorageEndpoint" value="https://ipv4.fiddler:10002/devstoreaccount1/">

    • Notice that for the endpoint for local development storage, I’m using a trick that lets me use Fiddler to spy on the traffic; if you don’t need this or aren’t using Fiddler, just replace ipv4.fiddler with 127.0.0.1 in the URL.
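    • Against real cloud storage, by the way, the same DataConnectionString setting carries the account name and key inline (the values below are placeholders):

<Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey" />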
    • Now in my FerryDB code, I read these config settings and pass them to the ctor for FerryContext:

private CloudStorageAccount _acct;
private string _baseAddr;
private FerryContext _ctx;
private bool _fBatchUpdateInProgress;

private FerryDB()
{
    // FromConfigurationSetting requires the setting publisher set up in OnStart (see below).
    _acct = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    _baseAddr = RoleEnvironment.GetConfigurationSettingValue("TableStorageEndpoint");
    _ctx = null;
    _fBatchUpdateInProgress = false;
}

private FerryContext FerryCtx()
{
    // Reuse the current context while a batch update is in progress;
    // otherwise hand out a fresh one.
    if (!_fBatchUpdateInProgress)
        _ctx = new FerryContext(_baseAddr, _acct.Credentials);
    return _ctx;
}
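With that in place, getting the next crossing inside FerryDB looks something like this (illustrative – it assumes the Crossings query root sketched earlier; table storage supports only a limited set of query operators, so the ordering happens client-side):

// Filter server-side, then sort in memory (table storage has no OrderBy support).
var next = FerryCtx().Crossings
    .Where(c => c.DepartureTime > DateTime.UtcNow)
    .ToList()
    .OrderBy(c => c.DepartureTime)
    .FirstOrDefault();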

In order for this to work, you have to initialize a configuration setting publisher in the startup code for your role; this code goes in the OnStart method:

#region Setup CloudStorageAccount Configuration Setting Publisher

// This code sets up a handler to update CloudStorageAccount instances when their
// corresponding configuration settings change in the service configuration file.
CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
    // Provide the configSetter with the initial value.
    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));

    RoleEnvironment.Changed += (sender, arg) =>
    {
        if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
            .Any((change) => (change.ConfigurationSettingName == configName)))
        {
            // The corresponding configuration setting has changed; propagate the value.
            if (!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
            {
                // The change to the storage account credentials in the service
                // configuration is significant enough that the role needs to be
                // recycled in order to use the latest settings (for example, the
                // endpoint has changed).
                RoleEnvironment.RequestRecycle();
            }
        }
    };
});

#endregion

Finally, one of the other big changes I saw was in RoleEntryPoint (your worker and web role classes inherit from it).  RoleEntryPoint's basic interface has changed (a minimal sketch follows the list):

  • Start -> OnStart
  • The body of your role is in the Run method
  • There is an OnStop method
  • You no longer need the GetHealthStatus method
  • There is a mechanism for informing the role of configuration changes and allowing it to respond to those, including potentially asking to be recycled and restarted.
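Put together, a minimal worker role under the new model looks roughly like this (a sketch – the sleep interval and the work placeholder are mine):

using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // One-time initialization: diagnostics, configuration setting publisher, etc.
        return base.OnStart();
    }

    public override void Run()
    {
        // The body of the role: loop for the lifetime of the role instance.
        while (true)
        {
            // ... consume the RSS feed and update table storage ...
            Thread.Sleep(60000);
        }
    }

    public override void OnStop()
    {
        // Clean up before the role instance is taken down.
        base.OnStop();
    }
}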

In the July CTP, RoleManager handled diagnostic logging through a very simple API, WriteToLog.  There is now a standard diagnostics model built on System.Diagnostics tracing.  Instead of invoking RoleManager.WriteToLog, you have the full range of Trace methods, so you can use:

  • Trace.WriteLine
  • Trace.TraceError
  • Trace.TraceInformation
  • Trace.WriteIf
  • etc.
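For example (the messages here are just illustrations):

// Requires using System.Diagnostics.
Trace.TraceInformation("Updated ferry crossings at {0}", DateTime.UtcNow);
Trace.TraceError("Could not reach the WSF RSS feed; will retry next cycle");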

To do this, set up the Azure diagnostics trace listener to persist your logs.  This is pretty simple.  First, you have a config setting similar to the data connection string – DiagnosticsConnectionString – that points at the table storage for diagnostics; note that this means you could separate your diagnostics logs from your other app data storage if you want.  For development storage it looks just like the data connection string:
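<Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />

Then in the OnStart method for the role, pass the setting name to the DiagnosticMonitor class: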

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start the diagnostic monitor with the named connection string setting.
        DiagnosticMonitor.Start("DiagnosticsConnectionString");
        return base.OnStart();
    }
}

Well, I’m sure I’m missing something, but those were the big things for me.  Look at the samples for other items.

Comments

  • Anonymous
    December 31, 2009
    Thanks!! You saved me a lot of time. Why they don't make this easier is beyond me... Here's hoping for better tooling in the future...