Wilfred Jamison

J2EE Journal: Article

Practicing Software Performance Engineering

Information technology is changing rapidly right before our eyes, and so are user requirements. Along with the paradigm shift brought about by a worldwide communication infrastructure and the advancement of distributed computing, software performance has evolved from a mere consideration into a primary requirement. This is especially true for those developing mission-critical applications on the IBM WebSphere Application Server. This article challenges existing software engineering methodologies, asking whether they are capable of delivering the expected performance level of the applications under development.

Make It Run, Then Make It Run Fast
As software engineers, we were indoctrinated with this "golden rule." There is indeed a valid reason to adhere to the rule since it emphasizes one basic principle - that every software engineer makes sure that a product fulfills its functional requirements and specifications first. However, there is also an undeniable truth that more often than not, we only satisfy the first part of this rule and rarely the second, which is to make it run fast.

Managing the development of a large software application is never an easy task. That is why the discipline of software engineering was introduced - to offer us a systematic and methodical approach to writing software that is functional, reliable, robust, maintainable, and - most of all - useful to the end users. If you are in the business of writing software that is expected to deliver high performance such as middleware programs (application servers, operating systems, communication tools) or even mission-critical Web applications (financial applications, video conferencing, intelligent search engines, etc.), then this rule is just not going to work. In addition, waiting until an application is running before addressing performance is an expensive proposition.

So implementing the second part of the rule is the harder problem. We software engineers know very well that oftentimes we scramble to meet tight schedules, spending a lot of time understanding somebody else's code, fixing bugs, overcoming unexpected roadblocks, and the list goes on. In this competitive business, there is no time left to make it run fast by the time we get the product up and running.

Make It Run Fast
I propose that we change the rule by making it run fast from the outset. In other words, why not do it now instead of later? I believe that this can be achieved only if we treat performance as a function itself, rather than as a feature. The tenet of this article is that we should be able to adjust our existing software engineering methodologies to be more performance-oriented, so that the first running version of the application we write actually does run fast.

Be Proactive, Not Reactive
My discussion is not complete without a description of how most organizations manage performance in their software development process. In simplest terms, I describe this approach as being reactive. Figure 1 is a typical model used in many organizations.

In the reactive model, the traditional software engineering methodology is followed, but performance-related issues and concerns are discovered only toward the end of the cycle. This is based on the belief that nothing can be measured until a running product is built. Thus, performance work typically starts during system quality assurance testing. At this time, some benchmarking activities are performed by a designated "performance team" using a test workload. Based on the results, the performance of the application is analyzed and verified against a set of performance requirements or criteria. Often, bottlenecks are identified only after the application is in production, and a solution is proposed or written up. There are two possibilities:

  • The owner of the component that is identified as the cause of the problem is notified. A solution is proposed or the owner finds a way to fix the problem. The fix is then put into production and the cycle repeats.
  • A solution is formulated by the performance team and prototyped to measure its performance gain. Once an acceptable solution is found, it is handed down to the owner for final implementation. Note that during this time, communication between the component owner and the performance team is very important.

Depending on the type and nature of the application, performance analysis can also produce tuning parameter settings that are used for performance improvements. Tuning is a fundamental practice in performance engineering, especially in a complex system with many interacting components, where it is harder to find an absolute solution that works for all cases. It is also typically appropriate for systems whose overall behavior depends on the incoming workload. Finally, the performance team oftentimes produces a performance report that describes the resulting performance level of the application. It can be used both for end-user reference and as input to the next development cycle.

This mode of operation is very reactive in nature because technically the performance team's work is to monitor the latest build of the application, raising a flag whenever something is not right and making sure that the problem is fixed. It is reactive because the team's actions are always in response to what it is given, with minimal influence, if any, on the overall design of the application from a performance perspective. Furthermore, the responsibility for making sure that performance is looked after is left to a single person or group.

There are serious problems with this model, in which performance analysis and improvements are done toward the end of the development cycle:

  • Cost: Imagine that a team of sports car engineers design their car to make sure that it actually runs before attempting to make it go fast. Once convinced that it will run, they enhance it with all the necessary high-performance features. Undoubtedly, this is the wrong way of doing it - not just because the car is supposed to run fast to begin with, but because it is a very costly approach. It will cost them time and money, as they have to repeat the process.

    Similarly, software often goes through many revisions because of the lack of performance-enhancing features. Consequently, this entails more testing cycles as well as the risk of functional regression. In the end, the product is shipped with poor performance due to lack of time. Revenues go down because of customer dissatisfaction and therefore more hours are spent on improving performance to pump up revenues.

  • Suboptimal solution: Imagine again that our car engineers realized that the performance enhancement they designed cannot be easily incorporated into the car because of the way it was originally designed. (Obviously, they did not think ahead.) So what options do they have? (1) Disassemble the car and redesign it so that the new feature can be accommodated. (2) Forget about the cool enhancement and think of some other performance enhancements.

    Similarly, when software is not designed with performance in mind, there is a very high chance that a given performance solution will not be viable because of some architectural restrictions or implementation-dependent constraints - and the cost to revise all these things will be significant in terms of time and money. Thus, the solution is rejected and a weaker alternative is devised. The lesson is that once an architectural design or infrastructure is in place for a sufficiently long period of time, major changes are rarely made. More often we get into the "patches" syndrome where ad hoc solutions are provided until the whole application becomes a crazy quilt of patches.

  • The tuning trap: Tuning has a negative aspect. We have already mentioned what tuning is good for. However, when performance is on the edge of being acceptable and time is ticking away, the last recourse is often to hope that the system can be tuned rather than making application design or code changes. In general, tuning helps, but it does not solve the real problem of the underlying application.

Thus, the better way to approach these problems is to practice software performance engineering and be proactive in providing performance solutions from top to bottom. The software engineer needs to embrace performance as part of the overall requirements and therefore address it right from the very beginning. The car engineers should have made performance the focal point of their design. Like all aspects of the application being developed, project management should be able to track, control, and verify its performance status.

Who Should Address Performance?
Everyone needs to consider performance - analysts, architects, developers, testers, document writers, etc. Thus, everyone is a software performance engineer. Software performance engineering is a concerted effort - a methodology that everyone in the organization should be involved with. Figure 2 shows a proactive model of this methodology. As we can see, the original methodology is kept intact, with additional bubbles surrounding it.

Performance requirements should be identified at the outset, typically by the architects or the marketing organization. For Web applications, requirements focus on user load and response time - for example, how much load the system must handle while still responding in no more than 5 seconds. The key steps are:
1.   Define the performance requirements of the application, in both qualitative and quantitative terms.
2.   Define a set of measurable criteria to verify these performance requirements.
3.   Define a set of specific test scenarios to use during the test stage.
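To make step 2 concrete, a measurable criterion can be expressed directly as a small executable check. The sketch below is my own illustration, not from the article: the 95th-percentile statistic, the 500 ms target, and the sample data are all assumed values chosen for demonstration.

```java
import java.util.Arrays;

public class ResponseTimeCriterion {
    // 95th-percentile latency over a set of observed response times (ms).
    static long percentile95(long[] samplesMs) {
        long[] sorted = samplesMs.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(0.95 * sorted.length) - 1;
        return sorted[index];
    }

    // The requirement "respond within 500 ms at the 95th percentile"
    // becomes a boolean check that a test run can pass or fail.
    static boolean meetsRequirement(long[] samplesMs, long targetMs) {
        return percentile95(samplesMs) <= targetMs;
    }

    public static void main(String[] args) {
        long[] observed = {120, 180, 250, 300, 410, 95, 220, 330, 470, 480};
        System.out.println(meetsRequirement(observed, 500)); // prints "true"
    }
}
```

Phrasing the requirement this way means it can be verified automatically in every build, rather than judged informally at the end of the cycle.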

Once the performance requirements are in place, designing the architecture of the application and the individual components must involve viewing them from a performance perspective. Typically, an experienced engineer with performance expertise should be included in the design process. Another very important inclusion in this model, during the design and implementation stages, is the consideration of current performance technologies, such as caching, performance design patterns, and fast algorithms. An expert in this field must be identified to help in the process.
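As a taste of the caching technology mentioned above, here is a minimal sketch (my own illustration, not from the article) of a bounded LRU cache built on java.util.LinkedHashMap, the kind of structure Java applications often place in front of an expensive lookup; the capacity is an assumed example value.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);   // access-order: recently used entries stay
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least-recently-used entry
    }
}
```

A component could consult `new LruCache<String, Result>(100)` before performing a costly database or remote call, trading a bounded amount of memory for reduced path length on repeated requests.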

I consider performance analysis to be the most important part of this model. It is done throughout the development cycle. During the design review, for example, a design document specification must be given to a group of performance analysts for the purpose of catching potential bottlenecks. They may also provide recommendations on how to modify the design for better performance.

Performance experts typically look for problems in communication pathways and protocols, path lengths (both in design and implementation), resource requirements such as memory and CPU cycles, critical nodes, etc. If the operational environment is also specified, they analyze the characteristics of the components in the environment, such as network bandwidth, storage devices, gateways, etc. Expertise is thus crucial in performance analysis. The performance analysts may also go back to the requirements specification if necessary.

During the implementation stage, developers should be made aware of the significance of high-performance programming. They need to be trained to think in terms of fast algorithms and to use any language-specific features that can boost performance. It is also productive to provide them with a list of best practices for high-performance application programming, especially in the areas of Java and J2EE. The code review also includes performance analysis (not necessarily by the same group that did the design review). This group is composed of engineers with excellent technical skills who can pinpoint poor algorithms as well as recommend better solutions. This process should be conducted repeatedly until all are satisfied.
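One example of the kind of Java best practice such a list might contain (my own illustration): building strings in a loop with StringBuilder rather than repeated concatenation, which copies the entire intermediate string on every iteration.

```java
public class StringConcat {
    // O(n^2): each += allocates a new String and copies all prior characters.
    static String slowJoin(String[] parts) {
        String result = "";
        for (String p : parts) {
            result += p;
        }
        return result;
    }

    // O(n): StringBuilder appends into a growable internal buffer.
    static String fastJoin(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }
}
```

Both methods produce identical output; the difference shows up only in performance, which is exactly why such habits must be taught rather than discovered through functional testing.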

Testing is an integral part of the performance engineering process. Performance-specific test scenarios must be defined early on and carried out during this stage. Importantly, every functional unit must be tested for performance. Thus, testers and developers are trained to do performance testing and to use tools such as profilers and performance analyzers. They should also be trained to perform micro-benchmarking on their own code. This leads me to my principle that performance is distributable; that is, it can be modularized such that the performance of every unit contributes to the whole.
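A micro-benchmark of the kind described can be as simple as the following sketch (my own illustration): warm up first so the JIT compiles the hot path, then time many iterations. The workload and iteration counts are assumed values, and a dedicated harness would additionally guard against pitfalls such as dead-code elimination that this sketch does not.

```java
public class MicroBench {
    // Average nanoseconds per call of `work`, after a JIT warm-up phase.
    static long timePerCallNanos(Runnable work, int warmup, int iterations) {
        for (int i = 0; i < warmup; i++) {
            work.run();          // warm-up: let the JIT compile the hot path
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            work.run();
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        long perCall = timePerCallNanos(
            () -> Math.sqrt(System.currentTimeMillis()), 10_000, 100_000);
        System.out.println("~" + perCall + " ns per call");
    }
}
```

Even a rough number like this, tracked per unit across builds, lets a developer see whether a code change made their module slower before it ever reaches system testing.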

Benchmarking can be done once a running product is built. In both benchmarking and system testing, performance analysts may get involved once again to help diagnose problems found. Any problems and/or solutions should be communicated either at the design or implementation level, depending on the case. Performance problems encountered should be documented and monitored by management.

Benchmarking is a discipline in itself, and numerous things can be written about it, such as the proper way of conducting benchmarks; I will not discuss them in this article. Suffice it to say that benchmarking efforts do help in discovering the problems and weak points of the application. Benchmarking also helps to keep track of where the application stands in terms of performance at any given point in time. Typically, benchmarking results in performance reports as well as tuning guidelines.

Another important aspect of testing, as well as maintenance, is performance regression. As the application evolves and different builds are created, performance must be monitored to see whether any regression occurs. Regressions need to be reported to the appropriate people - designers, analysts, developers, architects, etc. - and corrected.
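Such regression monitoring can be automated with a simple guard that compares each build's measurement against a stored baseline. The sketch below is my own illustration; the throughput metric and the 10% tolerance are assumed example values.

```java
public class RegressionCheck {
    // True if the current build's throughput has dropped more than
    // `tolerance` (e.g., 0.10 for 10%) below the recorded baseline.
    static boolean isRegression(double baselineOpsPerSec,
                                double currentOpsPerSec,
                                double tolerance) {
        return currentOpsPerSec < baselineOpsPerSec * (1.0 - tolerance);
    }

    public static void main(String[] args) {
        System.out.println(isRegression(1000.0, 850.0, 0.10)); // prints "true"
        System.out.println(isRegression(1000.0, 950.0, 0.10)); // prints "false"
    }
}
```

Wiring a check like this into the build process turns regression reporting from an after-the-fact discovery into an automatic flag raised on the offending build.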

The Bottom Line Is Team Performance
Contrary to what others may think, software performance is a broad area - it encompasses issues such as scalability, high availability, and capacity planning, to name a few. Although the metrics may be as simple as throughput and response time, there are hundreds of factors that affect performance. The responsibility for making sure that good performance is achieved belongs to the entire organization. By executing performance analysis early on - from requirements analysis to maintenance - we cut down the total development cost by catching potential problems early. Another advantage is that we are able to design the architecture in a performance-oriented manner and therefore get the optimal performance boost. Making changes later on will not be as hard.
