Real World Windows Azure: Optimizing PHP Applications for the Cloud

June 7, 2012 update: The Microsoft Windows Azure team has released a new Windows Azure SDK for PHP. This release is part of an effort to keep PHP client libraries up to date with new Windows Azure features and to make PHP a first-class citizen in Windows Azure. The latest client libraries are on GitHub: https://github.com/WindowsAzure/azure-sdk-for-php. While the SDK hosted on CodePlex will continue to work for the foreseeable future, it is strongly recommended that new PHP/Windows Azure applications use the SDK hosted on GitHub.

The work done by Maarten Balliauw and other contributors in building the SDK hosted on CodePlex was critical in unifying the PHP developer experience for Windows Azure. The Windows Azure team is grateful to these contributors for their pioneering work and looks forward to their continued support (and yours!) in adding to the new SDK on GitHub.

Thanks,

      
The Windows Azure Team


This is a guest post written by Ken Muse, Vice President of Technology at ecoInsight. He has written an amazingly in-depth post about optimizing PHP applications for Windows Azure based on his experience in doing a mixed PHP (Joomla!) and .NET deployment. ecoInsight is using Joomla! running on Windows Azure to manage, maintain, and publish informative articles and industry news to its users. The Joomla! server is used by the company’s content teams to aggregate industry content and publish it to end users in several different vertical markets as AtomPub feeds. The advertisements displayed to users of the ecoInsight Energy Audit & Analysis desktop and mobile platforms are also published using Joomla!. In the future, ecoInsight plans to utilize the Joomla! platform for building a collaborative social network and to provide content storage services.

Mr. Muse is responsible for the overall technical architecture of ecoInsight’s software solutions. He was previously a senior architect and team lead at SAP, focusing on security, globalization standards, and distributed systems integration. He has a breadth of software and hardware knowledge, including a broad skill set in enterprise and distributed network architecture and multi-platform software development. He is a member of MENSA and IEEE (Institute of Electrical and Electronics Engineers), and a full member of ASHRAE (American Society of Heating, Refrigerating and Air‐Conditioning Engineers).

Optimizing PHP Applications for the Cloud

As a technology company, you are often forced to look for the right balance between available options. Even though our primary product is written using the .NET Framework, we still find it important to take advantage of existing applications and platforms whenever possible. In several cases, the applications we use are written in PHP. While a number of efforts have been made to simplify the task of using PHP on Windows Azure, it is still important to realize that all server-based applications require some level of adjustment to get the best possible performance and reliability. In this article, I wanted to share some of the optimizations and considerations we made to get the best performance from the PHP applications we are hosting on Windows Azure.

PHP Configuration

The master configuration for the PHP runtime is the PHP.INI file. This file controls the numerous settings used by PHP when processing a request. It’s important to make sure you start with a file containing the proper configuration for an IIS deployment. If PHP is installed using the Web Platform Installer (WebPI), this will be automatically configured. If not, the Learn IIS web site provides all of the details here.

One improvement you can make in this file is to remove unnecessary extensions. Each referenced extension library is loaded by the PHP runtime when it is initializing. These libraries then have to register the various functions and constants that are available to the PHP runtime and perform any necessary initialization. Removing any extension which is not used eliminates this overhead.

An important consideration with your PHP configuration is that these settings may need to change based on the PHP deployment. Settings for the server name, database connectivity, and external resources can change quite quickly in the cloud. The use of Windows Azure does not guarantee zero downtime.

On February 29, 2012, Windows Azure experienced a partial service outage, the “Leap-Day Outage”. The partial outage impacted a large number of servers. The Windows Azure team was incredibly transparent about the details of the scope, cause, and fixes involved in this outage. I highly recommend reading their blog article for more information. You can find it at https://blogs.msdn.com/b/windowsazure/archive/2012/03/09/summary-of-windows-azure-service-disruption-on-feb-29th-2012.aspx.

In order to account for this inevitability, your application must be fault tolerant and easily reconfigured. During the leap-day outage, we were able to recover and minimize the impact of the outage. We quickly redeployed our web roles and a read-only snapshot of our database instance to a different data center until the problem was resolved. While some portions of the application were forced to use this read-only database snapshot, users were still able to use the entire application with minimal disruption. If settings are hard-coded into a PHP file, these kinds of redeployments are difficult. You would be forced to modify your application and rebuild the deployment package. Then, you would need to undo those changes if you needed to relocate your resources back to the original servers. If your settings are obtained dynamically, you can redeploy the package to another server with very little difficulty.

There are several strategies which an application can use for configuring its settings or to configure PHP.INI:

  • The application can receive its settings externally from the csdef using the Windows Azure SDK for PHP. The current implementation of the Windows Azure SDK for PHP uses a command line tool in order to read the service definition settings from Windows Azure. It is therefore important to make sure to take advantage of the caching features in PHP to minimize the frequency of calls to those SDK functions if you use this approach.
  • Rewrite the PHP and application configuration files using a startup task. Combined with the previous method, this can minimize the number of times the SDK is used to retrieve values. The most common way to perform this task is to use a batch file or Windows PowerShell script, although nothing prevents you from invoking the PHP command line.
  • Configure the command-line parameters for the PHP process to pass additional “defines” on the command line. By scripting the FastCGI configuration to pass additional defines (-D) into the PHP runtime with these settings when the role is starting, it is possible to get the flexibility of using external configuration values without the overhead of executing the SDK tools each time a value is needed.
  • Use the PHP Contrib extension, php_azure.dll. This provides a native method for retrieving configuration settings using the published Windows Azure native library APIs. The azure_getconfig() function retrieves configuration settings directly with minimal overhead; it is the equivalent of calling RoleEnvironment.GetConfigurationSettingValue() from the .NET runtime. It is important to be aware that calls to this function require the PHP runtime to have %RoleRoot%\base\x86 in the PATH. This allows the native and diagnostics libraries to be loaded by the PHP runtime.
    The extension and its documentation are available from https://phpazurecontrib.codeplex.com. A short usage sketch follows this list.
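
For illustration, here is a minimal sketch of the last approach using the PHP Azure Contrib extension. The setting name DbConnectionString and the environment-variable fallback for local development are assumptions for this example, not part of the extension itself.

    <?php
    // Read a value defined in the service configuration via the native extension.
    if (function_exists('azure_getconfig')) {
        $connectionString = azure_getconfig('DbConnectionString');   // hypothetical setting name
    } else {
        // Fallback for local development outside of Windows Azure (assumption).
        $connectionString = getenv('DB_CONNECTION_STRING');
    }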

Whenever possible, you should try to make sure your application can retrieve its configuration settings directly from the role. This provides you the greatest control over the configuration and minimizes any special handling that might be required to deal with restarts or redeployments. Since the machines are stateless, you are not guaranteed a consistent configuration between any two starts.

By using one or more of these techniques, the application can be dynamically pointed to appropriate Windows Azure resources with minimal effort.

Configuring the Web Role

As mentioned previously, the Windows Azure instances require some configuration. There are many ways to deploy PHP, including using the scaffolding framework or the Web Platform Installer (WebPI) command line. Both of these are covered in some detail in numerous blog articles. You can also manually install and configure PHP as part of your deployment.

In our case, we realized that we wanted to ensure that the version of the PHP runtime we were using was completely configured and under our control. By using a specific runtime version, we could adjust the settings more specifically to our needs. In addition, owning the installation would allow us to verify that the extensions we are using will each behave as expected. In more extreme circumstances, this would also allow us to patch the components for any discovered issues. We package the PHP runtime with the deployment and use startup scripts to configure the settings required by IIS, including configuring FastCGI and handlers for the *.php file extension.

There are two important things to remember in the configuration process:

  • The configuration scripts must be idempotent. These startup scripts are not just executed when the Windows Azure instance is first deployed; they can be executed any time Windows Azure restarts or reconfigures the instance. This means that it is important to test the scripts and to make sure that the scripts can handle being run multiple times. Many sample startup scripts for configuring IIS forget this. This can lead to instances failing to launch or suddenly becoming unavailable and refusing to reload. If you are using appcmd to deploy settings to IIS, make sure that the script is either detecting an existing configuration or deleting the configuration section and recreating it.
  • The configuration of the instance can change any time the instance is redeployed, and this configuration change can occur without the server rebooting. When deploying an upgrade, this is especially noticeable. Windows Azure can disconnect the drive hosting the current deployment and connect a new drive containing the upgraded deployment. When this occurs, the paths to the application and any CGI values may need to change in order for the instance to remain usable. This is why idempotent startup scripts are necessary – the scripts may be configuring the PHP runtime to a new location on a completely different drive letter from the previous location. Several early failures on our system could be traced to this particular situation.

Web.Config

Because we are running under IIS, we have to make sure that the application is properly configured. This involves making sure to include a valid web.config file with any additional definitions or settings required by the application. If you do not include a web.config with your application, one is automatically created. If you include a web.config, Windows Azure will modify the file slightly to include various settings used on the server. In some cases, these settings may also reflect changes made using the appcmd tool.

When hosting PHP under IIS – including with Windows Azure – you might find that your code occasionally fails with the generic 500 error message. This message makes it difficult to troubleshoot the issues. In these cases, adding the line <httpErrors errorMode="Detailed" /> can make the initial troubleshooting significantly easier. This will allow the error message returned by PHP to be displayed to the client. Important: It is not recommended that you leave this setting enabled in production since it can expose significant details about your application.

It’s important to remember that this is a base feature of IIS, so you can take advantage of this file to customize the behavior of your application on IIS. If you understand this file and the related configuration settings available on IIS you can further improve the application behavior. For example, in our deployments we commonly enable dynamic and static compression to reduce the bandwidth required by our application. Occasionally we enable the caching mechanisms to fine-tune the caching of documents and files hosted on IIS. These settings and more can be configured in one of two popular ways.

The first is to use the web.config packaged with your deployment. This will allow you to configure most of the common settings associated with your application. For example, one of the more important settings you can configure is the defaultDocument. If this is not properly configured, performance suffers whenever IIS attempts to resolve each request that does not specify a file. IIS is automatically configured with a default list which must be searched in order until either a match is made or a 404 occurs. By explicitly setting your defaultDocument – normally index.php – you can eliminate this search and improve the performance. For more details on configuring the default document, see https://www.iis.net/ConfigReference/system.webServer/defaultDocument.

The second method is to use the appcmd command line management tool for IIS. This is a powerful way to fine-tune your configuration. This method gives you the ability to configure the web application (web.config) as well as the IIS instance (applicationHost.config). Using appcmd allows you to configure logging, dynamic and static compression, FastCGI, and many other aspects of the web role. This is still a very common way to configure the FastCGI settings to host the PHP runtime. If you’re interested in understanding this tool, I would recommend reading https://support.microsoft.com/kb/930909, and https://learn.iis.net/page.aspx/114/getting-started-with-appcmdexe/. In addition, you can refer to https://www.iis.net/ConfigReference/system.webServer/fastCgi for details on configuring FastCGI for PHP.

There are two considerations when using appcmd in your startup script. First, the deployment process may have modified some settings of your web.config file during the configuration of the server. As an example, the deployment process will configure the <machineKey> element. For this reason, do not assume that you will know the exact state of the web.config file. Second, be aware that your startup tasks may be run multiple times. This means that you must make sure any scripts using appcmd are idempotent.

Debugging

When developing or testing in PHP, it’s common to use a debugger extension, such as XDebug. Make sure that you remove these extensions from the production PHP configuration. Debugging extensions can impair the performance of a production server, so they should be used with care.

One recommendation in this regard is to place a debug flag in the service configuration (cscfg) file. You can then use one of the methods described in Configuring the Web Role to modify the PHP.INI at startup to configure the debugging extensions if this flag has been set. This method can also be used to install, enable, and configure other debugging components such as WebGrind.

Don’t forget that it is also possible to use Remote Desktop to connect to an instance and enable debugging manually. If you do this, remember to remove the debugging extensions from PHP.ini or rebuild the instance when you are done!

Logging

Logging can make it much easier to understand the behavior of your application, but it can also slow down the execution of your program. Make sure that you minimize logging on your production instance unless you need it for diagnosing an issue. You can enable logging dynamically from the service configuration file using the same method described in Debugging.

Session Management

In case I haven’t mentioned it – Windows Azure Compute instances are stateless. This means that you cannot guarantee that any two requests will be handled by the same server. In addition, this means that the state on the instance is not guaranteed to be consistent with the state of any other instance. Since a load balancer can direct requests to any instance, it would be ill-advised for an instance to use any form of in-memory session management.

By nature, sessions provide state. This state must be shared and consistent across multiple instances. In order to make this work, you need a consistent and reliable means of storing the shared state. At the moment, Windows Azure Caching does not yet have support for PHP. This means that it is not currently an option for session management with PHP. The current recommendation is to use Windows Azure Table Storage. I would highly recommend reading Brian Swan’s excellent article on this topic, https://blogs.msdn.com/b/silverlining/archive/2011/10/18/handling-php-sessions-in-windows-azure.aspx. Make sure to also read the follow-up article which explains the importance of batching the data being inserted, https://blogs.msdn.com/b/silverlining/archive/2012/01/25/improving-performance-by-batching-azure-table-storage-inserts.aspx. It’s also important to remember that when storing session data, you must account for transient failures. Since Windows Azure uses shared resources, it is possible that resources will be briefly unavailable for short periods of time.
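
To make the shape of that approach concrete, here is a structural sketch of registering a custom session handler. The handler bodies are stubs; the Table Storage calls described in the articles above would go where the TODO comments are, and the function names are placeholders.

    <?php
    // Structural sketch only - storage calls are intentionally stubbed out.
    function sess_open($savePath, $sessionName) { return true; }
    function sess_close() { return true; }

    function sess_read($id) {
        // TODO: retrieve the entity for $id from a 'sessions' table and return its data.
        return '';
    }

    function sess_write($id, $data) {
        // TODO: insert or update (batched, per the follow-up article) the entity for $id.
        return true;
    }

    function sess_destroy($id) {
        // TODO: delete the entity for $id.
        return true;
    }

    function sess_gc($maxLifetime) {
        // TODO: remove entities older than $maxLifetime seconds.
        return true;
    }

    session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                             'sess_write', 'sess_destroy', 'sess_gc');
    session_start();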

Caching

There is not always a need to dynamically create content for the user. In many cases, resources change infrequently. Caching provides a way of quickly responding to a request with content that already exists. It also provides a way of storing content closer to the user; in some circumstances, the content can be stored in the user’s browser, reducing the number of server requests.

Static Content

If a file is not changing often, it’s a perfect candidate for caching. This type of content can be placed in Windows Azure Blob Storage and served using the Windows Azure CDN. This reduces the load on the server and places the content closer to the user. By default, content on the Windows Azure CDN will be cached for 72 hours. If you are not familiar with using the Windows Azure CDN, a hands-on lab is available at https://msdn.microsoft.com/en-us/gg405416.
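
As a rough illustration, here is a minimal sketch of pushing a static file into blob storage using the (CodePlex) Windows Azure SDK for PHP. The account name, key, container, and file names are placeholders, and the container is assumed to already exist with public read access.

    <?php
    require_once 'Microsoft/WindowsAzure/Storage/Blob.php';

    // Placeholder credentials - normally read from the service configuration.
    $storage = new Microsoft_WindowsAzure_Storage_Blob(
        'blob.core.windows.net', 'youraccount', 'youraccountkey'
    );

    // Upload a local file into the 'static' container.
    $storage->putBlob('static', 'img/logo.png', '/local/path/img/logo.png');

    // The file can now be served from the storage (or CDN) endpoint instead of the web role.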

Output Caching

Output caching preserves dynamic content for a period of time. When the content is requested before the cache expires, IIS will automatically serve the cached content and will not execute the PHP script again. This reduces the number of times a script has to be executed, improving overall performance. This topic is covered in depth in a blog, https://www.microsoft.com/web/post/performance-tuning-php-apps-on-windowsiis-with-output-caching.

File-based Caching

Of course, caching sometimes requires more advanced control. One strategy which we have seen used is to generate the content to a file on the server (or directly to Windows Azure Blob Storage). If this file is being presented directly to the user, the page can redirect the user’s browser to this cached content. If the data is one or more database objects, then PHP serialization can be used to store and retrieve the object. Until circumstances change that require the code to create new output, the application can continue to use this cached document.
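
Here is a minimal sketch of the serialization variant with a simple time-to-live. The cache path, TTL, and the load_articles_from_database() helper are assumptions for illustration; as noted below, a file written locally like this belongs to one instance only.

    <?php
    $cacheFile = sys_get_temp_dir() . '/latest_articles.cache';
    $ttl = 300; // seconds to keep the cached copy

    if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        // Reuse the cached, serialized result set.
        $articles = unserialize(file_get_contents($cacheFile));
    } else {
        // Rebuild the content and refresh the cache.
        $articles = load_articles_from_database();   // hypothetical helper
        file_put_contents($cacheFile, serialize($articles));
    }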

A word of warning here – if you are using this method, make sure that you are not inadvertently exposing the data publicly. Also, keep in mind that unless you are using Windows Azure Blob Storage, these caches are local to the server. If you are using Windows Azure Blob Storage, remember that there is a chance of transient issues, so you must have some form of retry logic to ensure persistence. Finally, don’t cache anything which relies on synchronization between the instances of the role. Remember that each role is independent and stateless.

If all of this seems quite challenging and risky, that’s because it can be. Of course, this is PHP so there are always more options for caching this data. Two of the most common are Wincache and Memcached.

Wincache

Most PHP veterans with any experience using Windows are familiar with Wincache (everyone else is most likely already using something similar, such as APC). This extension increases the speed of PHP applications by caching the scripts and the resulting byte code in memory. This improves the overall performance and reduces the I/O overhead associated with reading the script files. Configuring the extension is quite simple, and a short example of its user cache follows the steps below:

  1. Copy php_wincache.dll to the PHP extensions folder
  2. Register the extension in the php.ini file: extension = php_wincache.dll
  3. Optionally, enable the WinCache Functions Reroutes as described here: https://www.php.net/manual/en/wincache.reroutes.php. This improves the performance of certain file-system related calls.
  4. Deploy the application with the new settings. For local testing, restart the Application Pool in IIS so that the change is applied.
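
In addition to the opcode cache, the extension exposes a user cache (the wincache_ucache_* functions) for keeping application data in shared memory on a single instance. A minimal sketch, where the key name and the load_settings_from_database() helper are assumptions:

    <?php
    $settings = wincache_ucache_get('site_settings', $found);
    if (!$found) {
        // Rebuild the value and cache it for ten minutes on this instance.
        $settings = load_settings_from_database();   // hypothetical helper
        wincache_ucache_set('site_settings', $settings, 600);
    }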

You will notice that session.save_handler was not configured to use Wincache. This is one change for Azure that is quite easy to miss! Remember that Windows Azure instances are stateless. The Wincache session management relies on using local memory. As a result, the session state would not exist across multiple servers. For this reason, session management has to use one of the persistence mechanisms provided by the platform.

Memcached

Many PHP applications cache data using Memcached to minimize database access. On Windows Azure this is even more important. SQL Azure is a shared service which has limits on the overall resource utilization. In addition, SQL Azure throttles client connections.

Memcached can be deployed on both Web and Worker roles, and it can be accessed in cluster mode from PHP applications. Maarten Balliauw has made a scaffolder available for use with the Windows Azure SDK for PHP at https://github.com/interop-Bridges/Windows-Azure-PHP-Scaffolders/tree/master/Memcached. This site includes details about the implementation and usage instructions.
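
Once the Memcached instances are running, using them from PHP follows the standard client pattern. In this sketch the endpoints, key name, and load_articles_from_database() helper are placeholders; in practice the scaffolder resolves the server list from the role configuration.

    <?php
    $cache = new Memcached();
    $cache->addServer('10.0.0.4', 11211);   // placeholder endpoints
    $cache->addServer('10.0.0.5', 11211);

    $articles = $cache->get('front_page_articles');
    if ($articles === false) {
        // Cache miss - rebuild from the database and cache for two minutes.
        $articles = load_articles_from_database();   // hypothetical helper
        $cache->set('front_page_articles', $articles, 120);
    }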

PHP and SQL Azure

The performance of an application can be limited by the performance of the slowest operation. With our Joomla instance, we are taking advantage of SQL Azure for the database support. The performance when using this resource – which is external to the application server – can directly impact the ability of the web role to perform its job. A poorly tuned query can quite easily make the difference between a sub-second response time and an activity timeout when dealing with large amounts of data. When a PHP application is performing slowly, in many cases the database queries can be the root of the problem. In a distributed system, this is even more likely to be true since there is more latency. To get the best performance out of your application, you must ensure that any database access is performed as efficiently as possible.

The Driver

If you need to access a database, you need to use drivers. In the past, you might have used the bundled extension to access the mssql functions. Starting with PHP version 5.3, these functions are no longer included with the PHP installation for Microsoft Windows. These functions have been replaced by a new driver from Microsoft which is open-source and available at https://sqlsrvphp.codeplex.com. The new sqlsrv functions are more efficient and vendor-supported. This driver supports both SQL Server and SQL Azure. In general, I recommend trying to keep your drivers up to date so that you can take advantage of bug fixes and platform enhancements. Like any other extension, you will need to place the non-thread safe (NTS) version of the files in your extensions folder and include any required configuration in the PHP.INI file. Full instructions are included with the Getting Started guide. Be aware that version 3.0 of the drivers does not include support for PHP 5.2 and earlier. For that, you will require the older version 2.0 drivers.

We found it quite simple to port legacy code to the new driver; the APIs are nearly identical with the exception of changing the prefix from mssql to sqlsrv. The biggest difference tends to be improved error handling methods in the new APIs: you will need to replace mssql_get_last_message (which returns a single result) with the method sqlsrv_errors (which returns an array of arrays).
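
A minimal connection sketch using the sqlsrv functions; the server, database, and credentials are placeholders:

    <?php
    $conn = sqlsrv_connect('tcp:yourserver.database.windows.net,1433', array(
        'Database' => 'yourdb',
        'UID'      => 'youruser@yourserver',
        'PWD'      => 'yourpassword',
        'Encrypt'  => true,
    ));

    if ($conn === false) {
        // sqlsrv_errors() returns an array of arrays, unlike mssql_get_last_message().
        foreach (sqlsrv_errors() as $error) {
            error_log($error['SQLSTATE'] . ' ' . $error['code'] . ': ' . $error['message']);
        }
        exit('Database connection failed.');
    }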

From version 2.0 onwards, the drivers include support for PHP Data Objects (PDO). PDO provides an abstraction layer for calling database driver functions. You can read more about PDO in the PHP Manual. While a discussion of PDO is outside the scope of this article, the principles discussed here apply equally whether you are using PDO or not.
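
For completeness, the same connection through the PDO_SQLSRV driver looks roughly like this; the DSN values, table, and column names are placeholders:

    <?php
    $db = new PDO(
        'sqlsrv:Server=tcp:yourserver.database.windows.net,1433;Database=yourdb',
        'youruser@yourserver',
        'yourpassword',
        array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
    );

    $categoryId = 1;   // placeholder value
    $stmt = $db->prepare('SELECT id, title FROM articles WHERE category_id = ?');
    $stmt->execute(array($categoryId));
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);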

Understanding SQL Cursors

To understand how to optimize the PHP code, you must first understand a bit about the cursor types you see in PHP. Each type is optimized for specific uses and has certain performance tradeoffs. Selecting the right type for the job is therefore very important. In most cases, the default cursor type (SQLSRV_CURSOR_FORWARD) is the preferred choice. For more details on these cursor types, I recommend reading the MSDN article: https://msdn.microsoft.com/en-US/library/ee376927(v=SQL90).aspx.

Static

A static cursor (SQLSRV_CURSOR_STATIC) will generally make a copy of the data that will be returned. This is done by creating a work table to store the rows used by the cursor in a special database called tempdb. If there is enough data, it also starts an asynchronous process to populate the work table to improve the performance. This process has performance cost since the database server must retrieve and copy the records.

Dynamic

By comparison, a dynamic cursor (SQLSRV_CURSOR_DYNAMIC) works directly from the tables, avoiding the overhead of copying the data at the cost of an increase in the time it takes to find the data for a single row. Dynamic cursors typically do not ‘snapshot’ the data, meaning it is possible for the underlying data to change between the time the cursor is created and the time the values are read.

Forward Only

The forward-only cursor (SQLSRV_CURSOR_FORWARD) is a specialized type of dynamic cursor which improves the performance by eliminating the need for the cursor to be able to navigate both forwards and backwards through a result set. As a result, a forward-only cursor can only read the current row of data and the rows after it. Once the cursor has moved past a row, that row is no longer accessible. This is the default cursor type and is ideal for presenting grids or lists of data. Because the results are materialized and sent to the client dynamically, dynamic and forward-only cursors cannot use the sqlsrv_num_rows function.

Keyset

A keyset cursor (SQLSRV_CURSOR_KEYSET) behaves similarly to a static cursor, but it only copies the keys for the rows into a keyset in tempdb. This improves performance, but it also allows the non-key values to be updated. Those changes are visible when moving through the data set, similar to a dynamic cursor. If one or more tables lack a unique index, a keyset cursor will automatically become a static cursor. Both keyset and static cursors can use sqlsrv_num_rows to retrieve the number of records in the result set. This means that when calling sqlsrv_num_rows, the database server will be working with a copy of the data stored in tempdb.

Using the Right Cursor

In one of the third-party software components we utilize, the application needed to count the total number of records on the server to implement a paged display. A second query would then request the current page of data for display. Paging was being used to limit how many records were returned since the dataset could be quite large. This is a fairly common scenario for displaying results in a grid. Both of these queries were originally configured to use a static cursor so that sqlsrv_num_rows could be called. This pattern is quite common in many PHP scripts. Unfortunately, this pattern has a serious flaw.

When the query to determine the total number of records was invoked, the large dataset was copied into tempdb. As the number of records grew, the time required for this processing also increased. This query was not used to generate the actual data view, so none of the data copied into tempdb was used by the client. Before long, this query was taking several seconds to process. It didn’t take long before users began to complain about the time required to view each page.

Fixing this issue is surprisingly simple. First, we converted both queries to use a forward-only cursor. This allowed us to work with the dataset more efficiently since we were no longer copying the records into tempdb. This was also the ideal cursor type for returning the paged results since the grid view was rendering each row in order. For the query which determined the total number of records, the call to sqlsrv_num_rows was replaced by a standard SELECT COUNT query. The modified queries took only a few milliseconds to return their results.
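
A sketch of the corrected pattern is shown below. The table and column names are placeholders and the paging predicate is deliberately simplified; the important parts are the SELECT COUNT query replacing sqlsrv_num_rows and the forward-only cursor (the default, shown here explicitly) for the page itself.

    <?php
    // 1) Total row count without copying rows into tempdb.
    $stmt = sqlsrv_query($conn, 'SELECT COUNT(*) AS total FROM content_items');
    $row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC);
    $totalRows = (int) $row['total'];

    // 2) Current page, streamed in order with a forward-only cursor.
    $lastSeenId = 0;   // id of the last row on the previous page (placeholder)
    $pageSql  = 'SELECT TOP (20) id, title FROM content_items WHERE id > ? ORDER BY id';
    $pageStmt = sqlsrv_query($conn, $pageSql, array($lastSeenId),
                             array('Scrollable' => SQLSRV_CURSOR_FORWARD));
    while ($row = sqlsrv_fetch_array($pageStmt, SQLSRV_FETCH_ASSOC)) {
        // render the row ...
    }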

Prepared Statements

Another way to improve your PHP code is to take advantage of prepared statements. Prepared statements basically provide the database server a cacheable template of your query which can then be executed multiple times with different parameters. This reduces the time it takes for the database server to parse the query since the server can reuse the cached query plan. More importantly, prepared statements automatically escape the query parameters. By eliminating string concatenation and the need to escape the query parameters manually, prepared statements can significantly reduce the risk of a SQL injection attack if used correctly. A final benefit is that prepared statements separate the query template from the actual parameters. This can improve the maintainability of the code. It can also be very beneficial if you are trying to support multiple databases!

This feature is more than just a best practice recommendation. It is considered to be so important that it is the only feature PDO drivers are required to emulate if it is not actually supported by the database. You can view a complete example of how to use prepared statements with the SQL Server driver here.
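
As a brief sketch of the pattern with the sqlsrv driver (the table and column names are placeholders; note that sqlsrv_prepare binds its parameters by reference so the same statement can be re-executed with new values):

    <?php
    $price = 0.0;
    $sku   = '';
    $stmt  = sqlsrv_prepare($conn, 'UPDATE products SET price = ? WHERE sku = ?',
                            array(&$price, &$sku));
    if ($stmt === false) {
        exit(print_r(sqlsrv_errors(), true));
    }

    foreach ($updates as $update) {   // $updates is a hypothetical list of changes
        $price = $update['price'];
        $sku   = $update['sku'];
        if (sqlsrv_execute($stmt) === false) {
            // Inspect sqlsrv_errors() and decide whether to retry or abort.
        }
    }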

The Tools of the Trade

The Microsoft SQL Server platform provides several tools which can be helpful for maintaining and optimizing your code when dealing with SQL Azure. You can download the current set of tools from https://www.microsoft.com/sqlserver/en/us/get-sql-server/try-it.aspx.

SQL Server Data Tools (SSDT)

SQL Server Data Tools is an integrated collection of tools for managing, maintaining, and debugging SQL scripts. It provides a GUI editor for table definitions, enhanced query debugging, and a system for managing and maintaining your scripts in source control. In addition, SSDT has the ability to create a data-tier application (DAC). A DAC provides a way of managing, packaging, and deploying schema changes to SQL Azure. When a DAC is deployed to SQL Azure, it can perform most of the common schema migration tasks for you automatically, drastically reducing the effort required to update your application. In addition, a DAC project can make managing scripts for SQL Azure easier; it provides Intellisense and identifies invalid SQL statements. A hands-on lab is available which walks through creating, managing, and deploying schema changes to SQL Azure using this tool. You can find it at https://msdn.microsoft.com/en-us/hh532119.

You can learn more about SSDT from https://msdn.microsoft.com/en-us/data/gg427686.

SQL Server Profiler

The SQL Server profiler provides a way of capturing and analyzing the queries being sent to a SQL Server instance. By capturing the queries being sent and examining the execution times for each query, you can more easily identify the queries that are taking the most time. Not only that, you can identify additional issues such as repetitive calls and excessive cursor usage.

Using this tool, we noticed that one particular application made a database call for each row that was going to be displayed on the screen. Looking more closely, we noticed that this query was being used to return a single value from another table. This was accounting for almost 80% of the time required to render that page. By modifying the original query to include a JOIN, we were able to eliminate this overhead and reduce the number of calls to the database.

SQL Server Query Analyzer

Once you’ve identified a query that is taking an excessive amount of time, you need to learn why. The Query Analyzer provides the tools for executing and debugging SQL queries and analyzing the results. This tool also provides a graphical visualization of the query plan to enable you to more effectively find the bottlenecks in your query. This tool is now part of SSDT.

Database Engine Tuning Advisor

If you’ve taken the time to profile your application, you might be interested in learning about optimizations you can make to your schema which might improve the overall performance. The Database Engine Tuning Advisor (DTA) examines your queries and suggests indexes, views, and statistics which can potentially improve the overall performance of your application. Of course you still need to examine whether the proposed changes provide real improvement, but this can be a great help in identifying changes which can improve the overall performance of the application.

Because both SQL Server and SQL Azure have the ability to identify optimal query execution plans, you may find some surprising suggestions. SQL Server can use indexes for schema structures that are not part of the actual query, so this can open up the potential for unexpected optimizations. In our case, a particularly slow query was able to be optimized by creating an indexed view. Because this index provided coverage for the query, SQL Azure was able to use the index and view when retrieving the requested data.

A tutorial for the 2008 R2 edition is available at https://technet.microsoft.com/en-us/library/ms166575(v=sql.105).aspx.

SQL Azure Federations

Federations are the newest addition to the SQL Azure platform. Federated databases provide a way to achieve greater scalability and performance from the database tier of your application. If you’ve done any large scale development on PHP, you will be familiar with the concept of “sharding” the databases. Basically, one or more tables within the database are distributed across multiple databases (referred to as federation members). By separating the data across multiple databases, you can potentially improve the overall performance of your database system. Using SQL Azure Federations via PHP covers this topic in greater detail.

For users that are more familiar with the SQL Server platform and tools, it is worth mentioning that the current tools – Microsoft Visual Studio and SQL Server Data Tools – do not yet support this feature if you are deploying the database as a data-tier application (DAC). As a result, federations must be managed manually at the current time.

Resiliency

The SQL Azure database is a shared resource which can be throttled or restricted based on how it is being used. Because it is a shared resource, it can also be unavailable for short periods of time. While this is not always the case, it is the reality of the cloud. For that reason, your code must not blindly assume that every connection will be successful or that every query will succeed. Two strategies will help you in this regard.

The first strategy is to use a retry policy with effective error handling. That is, if the connection or query fails due to a transient condition, you may create a new connection and attempt to perform the SQL query again. Don’t forget that SQL Azure can become unavailable during a query (closing the connection with an error) or between two queries. Poor error handling can lead to data corruption very easily in a distributed environment.
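
A minimal retry sketch is shown below. The back-off, attempt count, and server details are assumptions; a production policy would also distinguish transient error codes from permanent failures before retrying.

    <?php
    function query_with_retry($server, array $connInfo, $sql, array $params = array(), $maxAttempts = 3) {
        for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
            $conn = sqlsrv_connect($server, $connInfo);
            if ($conn !== false) {
                $stmt = sqlsrv_query($conn, $sql, $params);
                if ($stmt !== false) {
                    return array($conn, $stmt);   // caller fetches rows, then frees both
                }
            }
            error_log('SQL Azure attempt ' . $attempt . ' failed: ' . print_r(sqlsrv_errors(), true));
            sleep($attempt);   // simple linear back-off before trying again
        }
        return false;   // give up after $maxAttempts
    }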

The other strategy is to isolate your read logic and write logic as much as possible. By separating these two concerns, it becomes possible to continue using your application even if there is a major service outage. In many applications, a large portion of the code is devoted to allowing the user to review stored information. It tends to be a smaller portion of the code which is involved in modifying the data. In these types of applications, a clean separation makes it possible to allow reading to continue independently from write operations. This allows users to continue to use your application, although possibly at a reduced capacity.

During the leap day outage, the SQL Azure instance responsible for serving news feeds to our users had availability issues. During this time, we reconfigured the compute instances which were hosting these services. The new configuration retrieved the data from a copy of the database in a different datacenter and did not allow updating or storing new data. Although we could not create new content or modify existing feeds during this time, our users were able to continue receiving the news feeds.

Sizing the Servers

When deploying to Windows Azure one of the first considerations you have to make is the server size. Windows Azure is not a one-size-fits-all solution. It is a highly configurable set of many different types of services and servers. One of those configuration options is the size of the virtual machine instances that will host the web role. The size of the instance controls the availability of resources such as memory, CPU cores, drive space, and I/O bandwidth; you can read more on the configurations of the available virtual machine sizes at https://msdn.microsoft.com/en-us/library/windowsazure/ee814754.aspx.

This is an important decision for optimizing the performance of your PHP application. The various sizes each have limitations on the available resources – CPU, RAM, disk space, bandwidth, and overall performance – which have to be balanced with the application’s needs. For example, if the application requires extensive network bandwidth, a large server instance may be necessary in order to keep up with the system’s demands. On the other extreme, if the application requires mostly CPU resources and spends significant time in small, blocking operations, it may be advantageous to use multiple small instances so that Windows Azure can load-balance the incoming requests. In each case, the performance is very dependent on how the system is being used.

We have observed that an application running on 2 medium instances can behave very differently from the same application on 4 small instances. In at least one case, the small instances were able to more effectively balance the loads. This appears to be due to more effective load balancing for the application. The load balancer would send each request to the next available instance, preventing the CPU of any one instance from becoming saturated. I would caution that this was a specific case and that you should examine how the resources are being used by your application to understand the most efficient size for your web role. If you are unsure, start with a small instance and work up from there.

Our Stateless World

Remember that the Windows Azure servers are currently designed to be stateless. This means that any changes made locally on the server are not guaranteed to be available. The only way to have persistent storage is to commit the storage to one of the available persistence stores such as a Windows Azure drive, SQL Azure, or blob storage. Windows Azure servers can be reallocated and restarted at any time, so any manual configuration adjustments or changes can be lost unless they are part of your deployment package. For this reason, you cannot make any assumptions about how long local changes will persist. This can be confusing for new developers who might assume that changes made through a Remote Desktop session will continue to work. We have seen cases in which a server suddenly stopped working because a manual change was made to one or more files, and those changes were lost when the instance was redeployed suddenly by the Windows Azure controller in the middle of the night. The same thing can be said for any log files or other persisted content which does not use the proper Windows Azure storage mechanisms – you can’t guarantee the content will not be removed. The servers are stateless.

In one case, we discovered a component of our Joomla installation which was storing image content in a local folder on the server. This had two immediate side effects. First, not all of the servers had copies of this content. This meant that any time we increased the instance count of our Windows Azure deployment, customers would begin to receive 404 errors in the event they were routed to an instance which did not contain the physical file. Since local folders are not synchronized between instances, only the server which initially received the image would be able to serve that content. Second, we observed that the content was permanently lost if the Windows Azure instance was recreated, upgraded, or restarted. The fix to both problems was to make sure that any user provided content was stored directly and subsequently retrieved from Windows Azure’s blob storage. By making this adjustment, we also noticed a substantial performance change when accessing the content. The content was no longer using bandwidth on the server and we could now serve it using the Windows Azure CDN. One additional benefit – the content was now protected by the redundancy built-in to Windows Azure Storage.

Replacing local content storage with remote storage does not come without some cost. It can take longer to transfer a file to remote storage than to the local file system, especially if the server is acting as a gateway to the resources. This can require changes to the end-user experience, or configuring the code to allow the content to be directly uploaded into blob storage. This tradeoff is minor compared to the benefits you can gain. It is also a minor tradeoff compared to the risk of data loss from incorrectly assuming that your content will continue to persist.

Other Windows Azure Features

There are a number of other services available in Windows Azure which can greatly benefit your application. Taking some time to understand these other features can help you to get more out of Windows Azure.

Worker Role

Up until this point, we’ve discussed compute instances using the Web Role almost exclusively. For more advanced development, a Worker Role is also available. The Worker Role allows you to create processes and services which are continuously running and can provide additional features. Worker Roles can be used for calculations, asynchronous processing, notification, scheduling, and numerous other tasks. In short, if you need the equivalent of a background process or daemon, then the Worker Role is ideally suited for performing that job. Worker Roles are a type of compute instance, so they are billed the same way as a Web Role. This means that you still have to consider the related costs.

Windows Azure Blob Storage and CDN

As discussed earlier, one of the first considerations on Windows Azure is the server sizing. Part of sizing the server correctly is to understand the way the server is being used. A server which provides exclusively dynamic content has a completely different usage profile than a server which serves mostly static content. The smaller instances of Windows Azure offer much lower bandwidth than the larger instances. One approach is to use a larger server instance if you find you need more bandwidth. This certainly solves the problem, but you are now left with an under-utilized (and rather expensive!) server. A more scalable solution is to move the static content to Windows Azure Blob storage and enable the CDN. This places the content closer to the end-user, improving the delivery characteristics. It also reduces the amount of network and I/O bandwidth required by the Role. Images, style sheets, static HTML pages, JavaScript, Silverlight XAP files, and other static content types can be placed onto blob storage to allow significantly larger scale at a lower total cost. For applications which are mostly static content, it is possible to use Extra Small instances and effectively host your web role.

Integrated Technologies (Advanced)

Remember that you’re running on a platform based on Windows Server 2008 technology. This means that any of the technologies available on that platform are available to you on Windows Azure. From within a compute instance, it is possible to leverage the platform to take advantage of additional features. Be forewarned that if you’re using some of these additional technologies, you must account for the stateless nature of the server, the need for idempotent scripts, and the fact that redeployments can result in your resources moving to a different drive or location.

Some of the features available to you:

  • ASP.NET. If you need a lightweight integration into Windows Azure or have components that run using the .NET runtime, you can use that technology side-by-side with PHP without any issue. For users comfortable with .NET, you can even create startup tasks and handle Role events through event handlers. You’ll likely want the Windows Azure SDK for .NET.
  • Background Startup Tasks. If you don’t need the resiliency and restarting features of a service, but you do need a simple background (daemon) task, then you can use a background startup task in Windows Azure to run a script or executable in the background of your deployment. Background startup tasks are started with the instance and (especially .NET based tasks) can respond to the RoleChange and RoleChanging events. This is the basis of several of the plugins provided with the Windows Azure SDKs.
  • Windows Azure Entry Point. By default, an entry point exists for every Web Role and Worker Role. You are allowed to provide a .NET based DLL containing a customized entry point based on the RoleEntryPoint class in your deployment and to include an <EntryPoint> element in your Service Definition (csdef) which provides the details required by Azure to use the entry point. This allows you to respond to the RoleChanged/RoleChanging events and to control the state of your role. For a VM role, this is not available and the recommendation in the Azure SDK is to use a Windows Service. For more advanced cases, you can even override the Run() method to run background tasks. When the Run() method ends, the Role is restarted. This provides you another mechanism for executing background tasks from a Web Role.
  • IIS. This technology is at the heart of every Web Role and provides the web hosting environment. As a result, all of the features of this platform (including Smooth Streaming support) are available to you. To really make the most of this feature, you’ll want to become intimately familiar with the appcmd tool described in Configuring the Web Role. You’ll also want to make sure to explore WebPI.

Comments

  • Anonymous
    February 17, 2014
    Great Article. Can you add project code(Joomla/php specific) on codeplex / github ? Thanks.