

User's Perspective of Software Performance

Software performance is usually defined as the responsiveness and scalability of the software*, and it is a quality factor. Simply put, responsiveness is how fast the software responds to an event, and scalability is how well the software maintains that responsiveness under different loads. Note that both of these software characteristics can be precisely defined (a goal is set for each) and measured; therefore, they are quantitative.
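To make this concrete, here is a minimal sketch of how both characteristics reduce to plain numbers we can set goals against. The `handle_event` function and the load levels are hypothetical stand-ins, not anything from the project described below:

```python
import time

def handle_event(payload):
    # Hypothetical work the software does in response to an event;
    # here it simply takes longer as the payload grows.
    time.sleep(0.001 * len(payload))

def response_time(payload):
    """Responsiveness: how fast the software responds to one event."""
    start = time.perf_counter()
    handle_event(payload)
    return time.perf_counter() - start

def worst_response_time(load, samples=5):
    """Scalability probe: worst observed response time at a given load."""
    return max(response_time("x" * load) for _ in range(samples))

# Both characteristics are now measurable quantities.
print(f"light load: {worst_response_time(1):.4f}s")
print(f"heavy load: {worst_response_time(50):.4f}s")
```

Once the measurement exists, a goal like "respond in under 100 ms at 50 concurrent requests" is just a comparison against these numbers.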

Now let me tell you a story about an experience I had in college. I worked on a project that was supposed to make me rich (you know, "student rich", not "business rich"!). There was demand for a system that could transfer live video over the phone line. Different industries were interested in such a system because managers and top engineers wanted to monitor what was going on in manufacturing plants, usually located in remote areas with no high-speed internet, fiber optics, or satellite equipment.

My friend and I were offered the job of developing software to stream video captured by different types of industrial cameras over the phone line. Basically, the software should take the live video captured by the camera in station A, encode it, and send it through the phone line to another instance of the software sitting at the other end of the line, station B. The software in station B should decode the stream and play it. Everything looked straightforward until the customers introduced, and insisted on, a set of requirements:

  1. The software should stream and play the video very fast (live, without a delay)
  2. The quality of the video should not change during the transmission
  3. The video should play smoothly in station B


We struggled for a couple of months and reduced the delay between the time the video was fed to the software and the time it was played to six seconds. The quality changed during transmission due to the encoding/decoding method we chose to reduce the delay. The video also played with a few hiccups per minute because of the type of line (a phone line) used to transfer the stream.

When we demoed the software for the first time, one of the users, who had never experienced video streaming before, got excited and gave two thumbs up. He thought the delay was not really bad, the quality was acceptable, and the hiccups could be ignored. The other one, who regularly used high-speed internet video chat, did not feel the same way; he wanted all the requirements met together... I did not become a rich student after all!


OK, what we missed was defining proper performance expectations. I believe setting the correct set of performance goals, i.e. defining what our performance expectations are, is halfway to having performant software. Sometimes the goals are clear. For instance, software that monitors a nuclear reactor should calculate different criteria, such as pressure and temperature, and alert on abnormal cases in less than one second (I made up the number just for the sake of the example), or the consequences could be severe. This goal can be calculated scientifically, and there are no compromises. However, in most cases, unlike the nuclear reactor, customers just want the software to be fast, like our customers did. Some study, and some discussion with the customer, needs to happen before setting the goals.


From the user's standpoint, performance is mostly a perception: the user's impression of how fast the software responds. How a user perceives performance depends on many factors, including:

  • The user's previous experiences and how the user used to do the task the software now does: just compare our two users in the example above. One had used similar products (although for different purposes), so his expectations were different from the other user's.
  • What the software does, and what end users do, while the software is trying to respond to an event: users usually perceive software that shows some sort of visual indication of what it is doing as faster. The reason is that users do not like the "Not responding" state. Responding to the same event, software that sits in the "Not responding" state for five seconds looks slower than software that shows a progress bar for seven seconds. Do you think our customers would have considered the six-second delay bad performance if they had not been sitting in front of the monitor watching the live video?
  • The scalability of the software: imagine that our software streamed the video with no delay for two minutes and then started streaming with a five-second delay afterward. Users would still describe the software as "slow".


We should definitely translate the users' feelings into numbers to be able to measure them. For instance, the responsiveness goal for the software in the above project should have been set to something like "no more than a six-second delay".
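A goal phrased that way becomes directly checkable. As a sketch, here is one way to test measured delays against the six-second goal; the delay samples and the 95% threshold are hypothetical assumptions for illustration, not numbers from the project:

```python
GOAL_SECONDS = 6.0

def meets_goal(delays, goal=GOAL_SECONDS, fraction=0.95):
    """True if at least `fraction` of the observed delays meet the goal."""
    within = sum(1 for d in delays if d <= goal)
    return within / len(delays) >= fraction

# Made-up end-to-end delay measurements, in seconds.
observed = [5.8, 5.9, 6.0, 5.7, 6.2]
print(meets_goal(observed))  # one slow sample out of five fails a 95% target
```

The exact threshold (all samples, 95% of samples, an average) is itself part of the goal and should be agreed on with the customer, not assumed.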


The next question we need to answer is how the end users, and more importantly the customer, benefit from the software reaching a specific performance goal. When we asked our customers why there should be no delay in the video streaming, we did not hear any valid reason. There was no real-time evaluation or analysis of the video in station B; thus, there was no business value in reaching zero delay. So you should always talk to your customer about the value that achieving a performance goal delivers. Most customers do not do a cost-benefit analysis of reaching a certain performance level. In addition, to achieve a performance goal we might need to sacrifice other requirements. To reduce the streaming delay, we had to choose an encoding method that compressed the video to a level that impacted its quality, contradicting another objective. The decision maker here is again the customer, considering the business value each objective delivers.


Another important aspect to consider is the environment and the hardware equipment the users are going to run the software on. You can probably feel the performance difference when running the same software on different machines or operating systems. Had we had a fiber optic infrastructure, we would not have struggled to reduce the delay; so there was a restriction on the equipment available to us. We also managed to reduce the video hiccups by changing the video card. However, the customer could not afford to buy the video card for all the users.


Last but not least, there are alternatives that can compensate for lower performance. A user who loves classical music could live with a two-minute response time if the software plays a piece by Mozart for her (an exaggeration, but hopefully you get my point!). So open your mind and be innovative.


In summary, before thinking about how to develop performant software, first try to understand how performance can impact the users. Spend some time with your customer discussing the performance objectives and the value they deliver; help the customer do a cost-benefit analysis. Also pay attention to contradicting objectives and restricting requirements.


Good day to you

Hamed


--

* C. U. Smith and L. G. Williams, Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software. Boston, MA: Addison-Wesley, 2002.