Making a Rich Client Smart: Going Offline
In my Smart Client Architecture Principles session at TechEd I did a very simple demo which took a basic rich client application and made it work offline. I have received a number of requests for this code, but the TechEd folks don’t seem to allow demo and sample code to be made available on the conference DVD. So, as the next best thing, I put together this article, which walks through the code and the rationale behind the design.
The Sample Application
The sample application is a Windows Forms application that just retrieves a product catalog from a web service, displays a list of products that the user can select and add to an order, and then submits the order to the web service. Once the order has been accepted, the web service returns an order ID which is then displayed to the user.
The original version of the sample is based on what I call a typical rich client design: going offline has not been considered at all and there are a few fundamental problems with the design in terms of usability. In this article, we’ll take this simple rich client and turn it into more of a smart client capable of working offline, and we’ll fix a few of the other problems along the way too.
The sample consists of a C# Windows Forms project and a web service project. To get started, you need to create these projects in Visual Studio, add a web reference in the Windows Forms project to the web service project, and then add a typed DataSet called ProductDataSet to the web service project for the Products table in the Northwind database. As we go through the web service and client application details below, you should be able to reconstruct the entire sample application piece by piece.
The Web Service
The web service consists of two web methods: one to return the product catalog as a DataSet, and one to accept a list of ordered items and return a unique order ID. The GetProductCatalog web method looks like this:
[WebMethod]
public ProductDataSet GetProductCatalog()
{
    ProductDataSet products = new ProductDataSet();
    using ( SqlConnection sqlConn = new SqlConnection(
        "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind" ) )
    {
        // Retrieve the full Products table into the typed DataSet.
        SqlCommand sqlCommand = new SqlCommand(
            "SELECT * FROM Products", sqlConn );
        SqlDataAdapter sqlDataAdapter = new SqlDataAdapter();
        sqlDataAdapter.SelectCommand = sqlCommand;
        sqlConn.Open();
        sqlDataAdapter.Fill( products, "Products" );
        sqlConn.Close();
    }
    return products;
}
This sample uses the Northwind database and returns a typed DataSet based on the Products table. To retrieve the catalog data we just open a SQL connection, define the select command and then use a SqlDataAdapter object to retrieve the data and populate the DataSet. You would not implement a real system using this kind of code, but it suffices for the present purpose.
The second web method is even simpler. In this sample, the ordered items are passed in an array containing the names of the products we wish to order. Also, we don’t actually store any of the order information; we only return a dummy order ID. To do this, we just generate a GUID and return it as a string. In a real system, you would need to store the order details, check the inventory and do a whole heap of other checks before accepting the order and returning a meaningful order ID.
[WebMethod]
public string AddOrder( System.Collections.ArrayList items )
{
    // No real order processing here; just return a dummy order ID.
    return Guid.NewGuid().ToString();
}
That’s the boring server side code done; now let’s look at the client side…
The Service Agent Class
The client application uses a service agent class to encapsulate the code which interacts with the web service. The service agent class in turn uses a web service proxy (which is generated by Visual Studio) to actually make the web service calls. But why have two classes instead of just one? Why not just call the web service proxy class directly from the UI?
Using a service agent class is a good way to separate the code that interacts with a web service from the rest of the application, especially the application’s user interface. If you want to call the web service asynchronously (and you really really do want to do this!) then having a service agent class can make it a lot easier to handle this. It also means you can implement new functionality (like being able to work offline!) by making changes to the service agent, keeping the changes to the rest of the application to a minimum.
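To make that concrete, here is a minimal sketch of what an asynchronous wrapper on the service agent might look like, using the Begin/End method pair that Visual Studio generates on the proxy class (which we’ll meet properly below). The delegate and event names here are placeholders of my own, not part of the sample:
public delegate void CatalogReceivedHandler( ProductDataSet catalog );
public class AsyncCatalogAgent
{
    // The generated web service proxy (introduced below).
    private OfflineSampleWebService _proxy = new OfflineSampleWebService();
    // Hypothetical event the UI can subscribe to.
    public event CatalogReceivedHandler CatalogReceived;
    public void BeginGetProductCatalog()
    {
        // Kick off the web service call; control returns immediately.
        _proxy.BeginGetProductCatalog(
            new AsyncCallback( OnCatalogReceived ), null );
    }
    private void OnCatalogReceived( IAsyncResult result )
    {
        // This runs on a thread-pool thread, not the UI thread.
        ProductDataSet catalog = _proxy.EndGetProductCatalog( result );
        if ( CatalogReceived != null )
        {
            CatalogReceived( catalog );
        }
    }
}
The UI never blocks waiting on the network; it just handles the event (taking care to switch back to the UI thread, a detail we’ll come back to later).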
You can of course build all of this logic into the web service proxy class. I don’t like to do this simply because the proxy will get regenerated when the web service changes and I will lose all of my carefully crafted code. The proxy class is really concerned with providing methods which map to the web service and taking care of all of the SOAP details. The service agent is responsible for helping the application interact with the web service.
So what does the service agent class look like?
public class ProductServiceAgent
{
    private OfflineSampleWebService _proxy;

    public ProductServiceAgent()
    {
        _proxy = new OfflineSampleWebService();
        _proxy.Credentials =
            System.Net.CredentialCache.DefaultCredentials;
    }

    public ProductDataSet GetProductCatalog()
    {
        // Call the web service.
        return _proxy.GetProductCatalog();
    }

    public string AddOrder( object[] items )
    {
        // Call the web service.
        return _proxy.AddOrder( items );
    }
}
The Service Agent class is pretty simple at the moment. It has an instance of the web service proxy class and provides two public methods which wrap the two web service methods exposed by the proxy. The default security credentials are initialized in the Service Agent constructor – one less detail for the user interface code to worry about.
In this version of the Service Agent class, the two methods are very simple wrappers around the equivalent methods provided by the web service proxy. It is these two methods that we will be changing shortly to make the application work offline.
The User Interface
The application has a single form with two listboxes and three buttons. The first listbox holds the product catalog; the second holds the list of products that we want to order. A product is selected in the catalog listbox and then placed in the current order listbox using the order button. Of the two other buttons, one retrieves the product catalog from the server, while the other sends the current order up to the server.
Let’s look at the code in the main form class. This code consists mainly of the event handlers for each button click. When the user clicks on the button to download the product catalog the following method is executed.
private void DownloadCatalogClick( object sender, System.EventArgs e )
{
    try
    {
        // Call the web service.
        ProductDataSet products = _serviceAgent.GetProductCatalog();

        // Add the products to the list box.
        this._catalogListBox.DataSource = null;
        this._catalogListBox.Items.Clear();
        if ( products != null )
        {
            this._catalogListBox.DataSource =
                products.Tables[ "Products" ];
            this._catalogListBox.DisplayMember = "ProductName";
            this._catalogListBox.ValueMember = "ProductName";
        }
    }
    catch ( Exception ex )
    {
        MessageBox.Show( ex.Message );
    }
}
The service agent class is used to retrieve the product catalog as a DataSet and this is bound to the list box. Any exceptions that are thrown are displayed to the user in a message box.
When the user selects a product in the catalog list box, it can be added to the current order using the AddProductToOrder button. The event handler for this button is as follows.
private void AddProductToOrderClick( object sender, System.EventArgs e )
{
    string productName =
        this._catalogListBox.SelectedValue as string;
    this._orderListBox.Items.Add( productName );
}
This method simply adds the product name to the order list box ready for sending to the web service. When the user has finished selecting products, the PlaceOrder button is pressed to send the order to the web service. The following code defines the event handler for this button.
private void PlaceOrderClick( object sender, System.EventArgs e )
{
    try
    {
        // Copy the ordered items out of the list box.
        object[] items = new object[ this._orderListBox.Items.Count ];
        this._orderListBox.Items.CopyTo( items, 0 );

        // Call the web service via the service agent.
        string orderId = _serviceAgent.AddOrder( items );
        if ( orderId != null )
        {
            OrderAccepted( orderId );
        }
        this._orderListBox.Items.Clear();
    }
    catch ( Exception ex )
    {
        MessageBox.Show( ex.Message );
    }
}
Again, the service agent is used to call the web service. The AddOrder web method returns a string order ID. In this example, the order ID is a simple GUID. If a non-null order ID is returned, then the OrderAccepted method is called which in this example simply displays the order ID to the user using a message box.
public void OrderAccepted( string orderId )
{
    MessageBox.Show( "Order Accepted! Order ID = " + orderId );
}
Not Too Smart, Is It?
If we run the above application, we can retrieve the product catalog, order a few jars of “Sir Rodney’s Marmalade”, submit our order and see the order ID proudly displayed. We just need to sit back and wait for our marmalade to be delivered. Lovely.
Take down the web server or remove the network connection, though, and the user’s experience is radically different. Since the web service is no longer available, the user will see an error when they try to retrieve the product catalog or order the marmalade (at least once the web service request has timed out). Hmmm, not a very good experience, and worst of all, we won’t get any marmalade unless we remember to re-order it when we go back online. Surely we can do a better job than that.
The code shown above is not what you’d call production quality, but it does illustrate the problem with many applications that are not designed with offline behavior in mind: a lot of applications depend on the server being available and don’t handle it at all well when it isn’t.
Also, I’m sure you’ve noticed that in the code above I have not taken my own advice and called the web service asynchronously. I could have used another thread to make the web service call but that wouldn’t really help when going offline – the thread calling the web service would experience the same error and throw an exception, which would have to be handled by the UI, resulting in pretty much the same user experience. However, as we shall see below, you don’t necessarily have to use a background thread to call a web service asynchronously, and in fact, we can fix this problem at the same time as we fix the offline problem.
The sample application above, despite being extremely simplistic, illustrates the two basic issues that we need to address when we design an application to work well offline. The first is how to handle data that we would normally retrieve from a server. The second is how to handle data that we would normally send to a server.
For our sample application, we can address both of these by making small changes to the two methods of the Service Agent class. And by keeping these changes local to this class, we can keep the rest of the application the same.
Data Caching
We can fix the first of the issues by simply caching the data on the client. Caching data allows the client application to continue to work when the original data provider is not available. Of course, you might also cache data on the client for performance reasons even when the data provider is available. In either case, the key is to cache the data for as long as possible without running into concurrency issues.
For this sample application I am going to use the Patterns and Practices Caching Application Block. This block allows data to be cached on the client in a very flexible and secure way. It is based on a provider model, as most of the Patterns and Practices application blocks are, so that you can either use the default providers that are supplied as part of the block, or use one of your own if they don’t suffice. Using this model you can choose how to store and secure the cached data and how to scavenge expired data from the cache. You just need to specify the required providers in the application configuration file.
The provider model is very flexible, but it makes the code look more complicated than it really is. To make matters worse, this block is not very well documented and the sample applications that are provided with it are, ahem, less than useful. The block is actually not very difficult to use, especially if you use the default providers. So while the caching application block provides many options in terms of storage and security, for the purposes of this sample we’ll just cache the data in memory using one of the default providers to get a feel for how it works.
Most of the functionality provided by the caching application block is accessed through the CacheManager class. This is a singleton class so there is only one per app domain and we get a reference to it by calling the static GetCacheManager method. This class acts a bit like the cache object provided by ASP.NET. We can add items to the cache and specify an expiry time and the cache will automatically remove the items when they expire. We can also specify a callback for an item so that we can be notified when the item has been removed from the cache. This can be very useful in some situations; for example we could use this event to retrieve fresh data and put it back into the cache, or to disable a menu item to show the user that a particular action is no longer available.
Using the caching application block, the GetProductCatalog method on the service agent class looks like this:
public ProductDataSet GetProductCatalog()
{
    ProductDataSet catalog = null;
    lock ( typeof( CacheManager ) )
    {
        CacheManager cacheManager = CacheManager.GetCacheManager();

        // See if the product catalog is already in the cache.
        catalog = cacheManager.GetData( "Products" ) as ProductDataSet;

        // If not, download it and put it into the cache.
        if ( catalog == null )
        {
            // Call the web service.
            catalog = _proxy.GetProductCatalog();

            // Add the data to the cache with a 10 minute sliding expiry.
            SlidingTime expiration = new SlidingTime(
                new TimeSpan( 0, 10, 0 ) );
            cacheManager.Add( "Products", catalog,
                new SlidingTime[] { expiration },
                Microsoft.ApplicationBlocks.Cache.CacheItemPriority.Normal,
                null );
        }
    }
    return catalog;
}
The code above first obtains a reference to the CacheManager object and then checks to see if we have the data in the cache. If we do, we return it to the caller. If the data is not in the cache, then we call the web service (via the proxy) and put the data into the cache along with an expiry time. Once the data has expired it will be automatically removed from the cache. In this case, the next time that the GetProductCatalog method is called, it will find that the data is not in the cache and will call the web service to retrieve it and then put it back into the cache.
How long should the product catalog be cached for? In the code above, the data is cached for 10 minutes. Of course, in a real-world application, the data is likely to be valid for a lot longer than 10 minutes, but for the purposes of this sample, 10 minutes will suffice. The bigger question is: is the client in the best position to decide that the catalog will be valid for a particular length of time?
It turns out that the provider of the data is in a much better position than the client to make this sort of decision – after all, the data provider owns the data and should be able to provide information about the length of time that it will be valid. So how do we get this information from the web service? Sounds like an ideal job for a SOAP header.
The first thing we need to do on the server side is define a SOAP header class which will hold the catalog expiry time. In this sample, we’ll use a simple integer to specify the number of minutes that the catalog can be cached for.
public class ExpiryHeader : SoapHeader
{
    public int SlidingExpiry;
}
Once we have defined the ExpiryHeader class, we need to define a public member of this type in the web service class:
public ExpiryHeader expiryHeader;
We also need to add the necessary attributes to the GetProductCatalog web method so that the ASP.NET runtime associates this SOAP header with the web method. Finally, we need to populate the header when we return the catalog data. The GetProductCatalog web method now looks like this:
[WebMethod]
[SoapHeader( "expiryHeader", Direction=SoapHeaderDirection.Out )]
public ProductDataSet GetProductCatalog()
{
    ProductDataSet products = new ProductDataSet();
    using ( SqlConnection sqlConn = new SqlConnection(
        "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind" ) )
    {
        SqlCommand sqlCommand = new SqlCommand(
            "SELECT * FROM Products", sqlConn );
        SqlDataAdapter sqlDataAdapter = new SqlDataAdapter();
        sqlDataAdapter.SelectCommand = sqlCommand;
        sqlConn.Open();
        sqlDataAdapter.Fill( products, "Products" );
        sqlConn.Close();

        // Set expiry time for the catalog to 10 minutes.
        expiryHeader = new ExpiryHeader();
        expiryHeader.SlidingExpiry = 10;
    }
    return products;
}
OK, so now that we have some information about the expiry time of the data, we can use it when we cache the data on the client. The GetProductCatalog method becomes:
public ProductDataSet GetProductCatalog()
{
    ProductDataSet catalog = null;
    lock ( typeof( CacheManager ) )
    {
        CacheManager cacheManager = CacheManager.GetCacheManager();

        // See if the product catalog is already in the cache.
        catalog = cacheManager.GetData( "Products" ) as ProductDataSet;

        // If not, download it and put it into the cache.
        if ( catalog == null )
        {
            // Call the web service.
            catalog = _proxy.GetProductCatalog();

            // Cache the data using the expiry time supplied by the service.
            SlidingTime expiration = new SlidingTime( new TimeSpan(
                0, _proxy.ExpiryHeaderValue.SlidingExpiry, 0 ) );
            cacheManager.Add( "Products", catalog,
                new SlidingTime[] { expiration },
                Microsoft.ApplicationBlocks.Cache.CacheItemPriority.Normal,
                null );
        }
    }
    return catalog;
}
In one sense, the web service is making a ‘contract’ with the client, stating that the data it provides will be valid for a specific length of time. In our case, the service is implicitly stating that any submitted orders based on a non-expired catalog will probably be honored. Providing this information makes the web service much more client-friendly, and it gives the client information which helps it to operate better when offline. Of course, the service has the final say in the matter. If, for some business reason, a particular catalog is revoked and is no longer valid, then the service may refuse to accept orders based on that catalog. The client should be prepared to handle this situation, but if the client and the service both know what assurances have been made then they’ll be in a much better position to handle it if the situation changes.
If the service is not prepared to make any statements about the validity of the data, then it puts the client application in a bit of a bind. It will have to make a decision about the data which it might not be qualified to make. In these cases, the client may end up getting into trouble by caching data for longer than it should, or by refreshing the cached data more frequently than is strictly necessary.
The code above allows the client to operate when the web service is not available, but only as long as the data is available in the cache. In this simple example, the cache is not persisted, so all of the data will be lost once the user exits the client application. To provide a more robust data cache, you will have to configure the caching application block to use a persistent storage provider. That way, the data will still be available when the application is re-run while offline.
Making Web Service Requests When Offline
Caching inbound data is relatively straightforward, but what about handling outbound service requests? The Offline Application Block, again provided by Patterns and Practices and available for download from the MSDN site, provides the necessary infrastructure for deferring web service requests until the client goes back online. It basically implements a store-and-forward mechanism, storing service requests in a queue and then replaying them to the web service when it becomes available.
This block is also based on a provider model. You can choose providers for network detection, queue management, service request storage and for data encryption, specifying the ones you want in the application configuration file. Again, this block is not that difficult to use but the apparent complexity of all the pieces can make it a little daunting. The samples aren’t easy to follow either but you don’t actually need to do much to integrate this block into your code. We’ll go through a step-by-step list of the things you need to do in a while. Before we do that though, it’s useful to understand the main components of the block so you can see where your code fits in:
- The Queue – This is where the service request details are stored while the client is offline. The queue is a FIFO queue and can be persistent so that the service requests are still available after the client has been restarted.
- The Executor – This is the part of the block that takes the service requests off the queue and ‘executes’ them. This component uses a worker thread to execute the service requests so it will operate in the background leaving your client application to maintain a responsive user interface.
- The Service Agent – This component supplies the code that actually executes the service request. The block does not mandate that you use web services (though you will probably need a good reason not to), so you can use .NET Remoting, or direct database access or whatever to make the actual request to the remote service. The service agent also processes the results from the service request once it’s been successfully executed.
- The Online Detection Component – This component is used to detect whether the client is online or not. You can use whatever strategy suits your situation best. The block comes with a simple detection component which just checks for a network connection. This component is not suitable for a production system, since having a network connection is a necessary but insufficient condition for actually being online, but it provides an easily understood sample and is good for demo purposes. You can also use it as the basis for a real one for your particular situation; a rough sketch of a more direct check follows this list.
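By way of illustration, here is a rough sketch of that more direct check – this is my own example rather than anything shipped with the block (you would plug something like it into the block’s detection provider), and the class and method names are made up:
public class ServiceReachabilityDetector
{
    // Hypothetical helper: treat the client as online only if the web
    // service endpoint answers a lightweight HTTP request in time.
    public static bool IsServiceReachable( string serviceUrl )
    {
        try
        {
            System.Net.HttpWebRequest request =
                (System.Net.HttpWebRequest)
                System.Net.WebRequest.Create( serviceUrl );
            request.Method = "HEAD"; // we only want a response, not a body
            request.Timeout = 3000;  // fail fast rather than hang the check
            System.Net.WebResponse response = request.GetResponse();
            response.Close();
            return true;
        }
        catch ( System.Net.WebException )
        {
            // No response (or an HTTP error) - treat the client as offline.
            return false;
        }
    }
}
A check like this is still only a heuristic, of course, but it tells you considerably more than the presence of a network cable does.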
So what code do you actually need to write? You only need to write four short pieces of code – you need to write the code to initialize the block, the code to put service requests into the queue, the code that actually executes the service request (the service agent), and the code to process the results.
The main object in the block is the OfflineBlockBuilder object. Again, this is a singleton, accessed through its static Instance member, which provides access to all of the various parts of the block such as the queue and the network detection component. To initialize the block, we simply need to get hold of the singleton instance and call the Start method. Our Main method in the client now looks like this:
static void Main()
{
    // Initialize the Offline Block.
    OfflineBlockBuilder.Instance.Start();

    ProductForm productForm = new ProductForm();
    Application.Run( productForm );

    // Shut down the Offline Block.
    OfflineBlockBuilder.Instance.Dispose();
}
Once the main form exits, we just need to call Dispose on the OfflineBlockBuilder instance.
The rest of the changes we need to make are in the service agent class. The first thing we need to do is to change the AddOrder method to put the service request on the queue.
public string AddOrder( object[] items )
{
    string assemblyName = "OfflineSampleClient";
    string className =
        "Microsoft.Samples.OfflineSampleClient.ProductServiceAgent";
    string methodName = "UploadOrder";
    string callback = "OrderAcceptedCallback";

    // Identify the method that executes the service request...
    OnlineProxyContext proxyMethodContext =
        new OnlineProxyContext( assemblyName, className, methodName );

    // ...and the method that processes the results.
    ServiceAgentContext serviceAgentContext =
        new ServiceAgentContext( callback );

    // Bundle everything up and put it on the queue.
    Payload serviceRequest = new Payload( proxyMethodContext,
        this.Guid, serviceAgentContext, items );
    OfflineBlockBuilder.Instance.PayloadConsumer.Enqueue( serviceRequest );

    return null;
}
This isn’t as complicated as it looks. There are two pieces of information that the block needs to put into the queue: the method on the service agent to call to make the actual service request, and the method to call to process the results when the service request has been executed. The former is encapsulated in the OnlineProxyContext class, and the latter in the ServiceAgentContext class; intuitive names, I think you’ll agree.
How does the executor actually execute these two methods? By using reflection. This is what makes the whole thing a little verbose and complicated. To specify the methods, you need to identify the name of the assembly, the class, and the actual method on that class. The four strings at the start of the above method store this information. We use these strings to construct instances of OnlineProxyContext and ServiceAgentContext. Finally, we put these two together into a Payload object which keeps everything in one place.
The second to last line is where we put this information in the queue by calling the Enqueue method of the PayloadConsumer instance. The PayloadConsumer is really the queue but you can plug in any type of consumer here.
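To demystify the reflection step, here is roughly the kind of thing an executor has to do with those strings. This is a simplified sketch of the general technique, not the block’s actual code, and it assumes the target class has a usable default constructor:
private Payload ExecuteViaReflection( string assemblyName,
    string className, string methodName, Payload payload )
{
    // Load the assembly, find the class, create an instance, and
    // invoke the named method, passing the payload through.
    System.Reflection.Assembly assembly =
        System.Reflection.Assembly.Load( assemblyName );
    Type type = assembly.GetType( className );
    object instance = Activator.CreateInstance( type );
    System.Reflection.MethodInfo method = type.GetMethod( methodName );
    return (Payload)method.Invoke( instance, new object[] { payload } );
}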
Once the service request is enqueued, we can return from this method back to the client. Now, the AddOrder method was supposed to return the order ID that was obtained from the web service. Since we won’t have this information until the service request has actually been executed, what do we do? We have a number of options here. We could make up a tentative order ID (or pull one from a pool of pre-allocated order IDs) and reconcile it once the actual order has been submitted. We could take an even simpler approach and just return a null ID or a ‘pending’ indicator. The client should be able to handle this situation and inform the user that the order is still pending. For an application that has been designed for synchronous behavior this can be a little tricky. For our sample application, returning null has no side effects, since we check for a null return and only report the order ID to the user when we have obtained a valid one.
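If you want something a little friendlier than a null return, a tentative ID along the lines described above might look like this. This is purely illustrative – the method name, the ‘PENDING-’ convention and the reconciliation table are all inventions for the sketch:
private System.Collections.Hashtable _pendingOrders =
    new System.Collections.Hashtable();

public string AddOrderTentative( object[] items )
{
    // ...enqueue the service request exactly as shown above...

    // Hand back a client-generated placeholder immediately; the UI can
    // show the order as 'pending' until the real ID arrives.
    string tentativeId = "PENDING-" + Guid.NewGuid().ToString();
    _pendingOrders[ tentativeId ] = items;
    return tentativeId;
}
When the real order ID comes back in the callback, you would look up the tentative entry, reconcile it, and update the UI accordingly.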
The actual service request is handled in the UploadOrder method which looks like this:
public Payload UploadOrder( Payload payload )
{
    // Call the web service.
    string orderID = _proxy.AddOrder(
        (object[])payload.RequestData );
    payload.Results = orderID;
    return payload;
}
This method simply extracts the required request data from the payload object and calls the web service. Once we have obtained the order ID, it is placed back into the payload object so we can process it later on in the OrderAcceptedCallback method.
public void OrderAcceptedCallback( Payload payload )
{
    string orderID = payload.Results as string;
    InvokeMethodOnUIThread( "OrderAccepted", orderID );
}
This method retrieves the order ID from the payload and then passes it up to the user interface using a helper method, InvokeMethodOnUIThread. This method does just what you’d expect and invokes a method on the main form on the UI thread. Remember that the OrderAcceptedCallback method is called by the executor, which works on a background thread, so we need to switch back to the UI thread before we touch any user interface controls.
private void InvokeMethodOnUIThread( string methodName, object param )
{
    if ( _parentForm != null )
    {
        // Create a delegate bound to the named method on the main
        // form and invoke it on the UI thread.
        _parentForm.Invoke(
            Delegate.CreateDelegate( typeof( OrderAcceptedDelegate ),
                _parentForm, methodName ),
            new object[] { param } );
    }
}
This method simply uses the Control class’s Invoke method to invoke the OrderAccepted method on the main form. The _parentForm member is a reference to the main form that was passed in via the service agent’s constructor. There are other ways to do this; the above example uses an explicit reference to the main form, but you can also implement a standard event on the service agent class and fire that event on the UI thread.
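For completeness, here’s a sketch of that event-based alternative. The event name is my own, and I’m assuming OrderAcceptedDelegate is declared as a void delegate taking the order ID string (consistent with how it’s used above):
// In the service agent: raise an event rather than holding a form reference.
public event OrderAcceptedDelegate OrderAcceptedEvent;

public void OrderAcceptedCallback( Payload payload )
{
    if ( OrderAcceptedEvent != null )
    {
        // Note: this still fires on the executor's background thread.
        OrderAcceptedEvent( payload.Results as string );
    }
}

// In the main form: the handler marshals itself onto the UI thread.
public void OnOrderAccepted( string orderId )
{
    if ( this.InvokeRequired )
    {
        this.Invoke( new OrderAcceptedDelegate( OnOrderAccepted ),
            new object[] { orderId } );
        return;
    }
    MessageBox.Show( "Order Accepted! Order ID = " + orderId );
}
With this approach the service agent doesn’t need to know anything about the form at all; any interested party can subscribe.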
You might be wondering why we didn’t pass the order ID to the main form in the UploadOrder method. The offline application block splits the execution of the service request and the processing of the results for flexibility reasons. For example, you can store the results and then process them later on. This situation is common when you implement a ‘synchronization service’ where the executor lives in a service process on the client and not inside the client application. This lets your application synchronize without having to be explicitly run when online.
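As a hypothetical sketch of that idea, the executor could be hosted in a Windows service that starts and stops the block, so that queued requests get synchronized even when the client application isn’t running. The class below is illustrative only and glosses over the details of sharing the queue between processes:
public class SyncService : System.ServiceProcess.ServiceBase
{
    protected override void OnStart( string[] args )
    {
        // Same initialization the client application performed in Main().
        OfflineBlockBuilder.Instance.Start();
    }

    protected override void OnStop()
    {
        OfflineBlockBuilder.Instance.Dispose();
    }
}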
When the above application is run, the user can select the products to order, and when they press the submit button the order will be queued, waiting for the application to go back online. When it does, the service requests will be executed, the order will be submitted, and the order ID will be presented to the user.
All of this will of course happen virtually instantaneously if the application is already online. The nice thing about this is that the actual service request is now asynchronous with respect to the user interface thread, so even when connectivity changes unexpectedly, the user interface is in no danger of freezing!
Conclusion
The simple example above shows two basic techniques for allowing an application to operate nicely while offline. Caching data on the client and queuing service requests allow the application to keep working when the services it normally interacts with are no longer available.
Of course, there are other issues that you will need to consider, especially with respect to handling data that your client app needs to change. In this case, you will need to carefully keep track of any tentative changes that you make until the request for the change has been accepted by the service. This issue and more are covered in some detail in the Patterns and Practices Smart Client Architecture Guide.
Copyright © David Hill, 2004.
THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
Comments
- Anonymous, July 12, 2004
[The comment has been removed.]
- Anonymous, July 13, 2004
An interesting scenario.
Unfortunately, hosting Windows Forms controls in Internet Explorer is not a good way to provide offline support in a smart client application – there are a few problems with this approach, including the fact that the download cache, which is managed by IE and is where all of the .NET assemblies are stored on the client, can get purged at any time so your application might not actually be available when the user is offline.
Your application needs both the data and the UI to be available when the user is offline. By storing the data within a web page as an XML data island, you are requiring the user to manually save the page and then navigate back to the saved page when they want to ‘run’ the application again, or are relying on IE to cache the page and data for you. Either way, this approach does not give you much control over how the data is stored and managed on the client. An alternative is to use isolated storage, and to access this directly from the Windows Forms controls themselves, but you will still have the problem of caching and accessing the web page to run the application.
Usually, hosting an application within IE does not buy you anything that ClickOnce or No-Touch Deployment (NTD) can’t provide, and these mechanisms provide a number of advantages – using NTD, you can access the application through a web link but it will run pretty much like a traditional Windows application giving you more control over the application’s user interface and behavior. However, NTD suffers from the same cache purging problem described above, and by default will run in partial trust (as do Windows Forms controls in a web page) so you won’t be able to do anything too fancy in terms of data caching and storage on the client. ClickOnce deployment, which is available in the next version of the .NET Framework, solves these problems but if you are using the current version of the Framework you will run into these issues.
The best way to provide robust offline support and flexible data caching and storage is to provide a client application which provides the user interface and client logic, accesses the data and services it needs via web services, and uses something like the Application Updater block to provide seamless deployment and update. It really depends on what user experience you want to achieve – if your users aren’t really expecting the application to be available at all times whether they are online or offline, and are willing to suffer the poor performance and peculiarities of a web interface (sorry, is my bias showing there?), then hosting your application within IE may be appropriate.
- Anonymous, July 13, 2004
Sorry - wasn't clear on one thing - it's not a windows forms control, it's a web page. Thanks for the response, could use a bit more clarification if you have the cycles
User interface is standard asp.net stuff with 3 new usercontrol derivatives delivered as client side C# and embedded as objects using fusion (and some dll installs for our core) - We have to use IE because this will be a revenue generating application for the medical school where docs sign up, pay, use the site to do interactive training on surgical procedures, and get continuing education credits - yup, we're using windows and directx to do medical training - be afraid, very afraid :-)
The catch is that we have to provide a single system that provides web based training for the customer docs, and also provides the ability to work offline when all the content is downloaded - we don't have a choice here. Has to be web based, has to work offline (within reason, of course). Dominant use is offline, but the client org wants to be able to do presentations offline, and the presentations need to support interactivity.
We do require that the user identify our site as trusted to .net, so we've got full access to the machine. We aren't at the click once or no touch promise of fusion, but it certainly has made life a whole heck of a lot nicer. Fusion makes the update of the application layer C# stuff a dream, as opposed to a living nightmare (the bad old days of COM & Java)
Here's the basic interface
1) An interactive 3D viewport - this is an active region, i.e. it steals all mouse motion and keyboard input when it has focus - it sends events to ie to let it know whats happening
2) An interactive 2D ultrasound display. This is also an active region, but it is weird - it sends all its events to #3 (the button bar) which then dispatches them, which could cause events to be triggered in C# client (often from C++ core) and result in a repaint of this display
3) A bar of buttons and other winform ctls that also serves as the 'root' of all the embedded client side controls - this is a weird one, because it does things like let the core C# client side stuff (what sits on top of our C++ engine) know how to cross wire the elements (e.g. the core engine generates the 2D ultrasound images, renders the 3D viewport, and jams the 2D image into the ultrasound display)
4) Web content and htc controls that make a web page look an awful lot like a windows application
Questions
1) The xml data island representing the model - I get the impression you think we'd do better to get this out of the page and stored locally using some defined protocol - so should we follow your approach above, and have our client side stuff look for the local file and fail over to a web service if it isn't there ?
2) Should we perhaps install a 'local page' when we put the core dlls into place ? I.e., instead of hoping the user themselves saves the page for offline use, or the page stays in the cache (already been worrying about that), would we do better to just save the thing ourselves ?
3) Right now we install our generic core C++ dlls and the service layer C# dlls using an installer - the C# side goes to the gac - we install the application layer C# code via fusion to the download directory in the gac - would we do better to not use fusion, and instead install everything to our program files directories and the gac and use the application updater block ?
Thanks again
Mark
- Anonymous, July 14, 2004
[The comment has been removed.]
- Anonymous, July 14, 2004
OK - I'll digest this some - I don't really have a choice on the web, it's not a technical decision so much as a market decision - the browser offers a comfort level to the user base at a lower development cost
One thing - we're not wrapping active-x, the core C++ dlls are called directly through a C# interface layer via pinvoke, but I don't see that affecting your fundamental points.
Since it's the 'nature' of the web interface thats important, p'raps we could distribute an app that contains an embedded explorer control and do everything that way - it is just a thought
thanks for your input, it's been extremely valuable