Posted by: Ian Molyneaux | September 5, 2013

A rough guide to extrapolating capacity from Performance Testing

I was asked an interesting question recently about how to extrapolate capacity from performance test results.

I have pondered this question many times, and I don't think that in most cases you can end up with more than a rough approximation; after all, this is a major reason why we run performance tests in the first place. Here's my reply (feel free to agree or disagree as you see fit).

Hi Supreeth,

Not sure there is an easy answer to creating a reliable extrapolation model. There are frankly too many variables. However, you might consider the following as a starting point:

  1. Take a single “leg” deployment of the application infrastructure, i.e. if this is normally 3 web servers, 3 app servers and a DB cluster, then configure 1 web, 1 app and the DB cluster. Arguably you could do this with any subset of the full deployment model, but I prefer to use the smallest practical subset.
  2. You then need to instrument the servers and network with appropriate monitoring.
  3. Run a progressive ramp-up performance test against this deployment until you reach a point where performance and/or availability is unacceptable.
  4. Repeat this process several times so you have a representative sample.

At a basic level this gives you the server capacity limits for the infrastructure deployment subset, together with the network footprint. Computing capacity rarely scales in a linear fashion, so assuming that 3 times the deployment subset equals 3 times the capacity is quite a leap of faith; however, it is at least a starting point based on real load.

What you can be more certain of is the amount of network bandwidth consumed by a given number of users.
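
By way of illustration, here is a back-of-an-envelope sketch in Python of the sort of arithmetic involved. The numbers and the scaling-efficiency factor are invented assumptions, so treat it as a starting point for discussion rather than a capacity model:

    # Back-of-an-envelope capacity extrapolation from a single-leg test.
    # The numbers and the scaling_efficiency factor are illustrative assumptions,
    # not measurements; real scaling behaviour has to be confirmed by testing.

    def extrapolate_capacity(single_leg_users, legs, scaling_efficiency=0.7):
        """Crude estimate of concurrent-user capacity for a multi-leg deployment.

        single_leg_users   -- users sustained by 1 web + 1 app + DB cluster before
                              performance and/or availability became unacceptable
        legs               -- number of legs in the full deployment
        scaling_efficiency -- assumed penalty for non-linear scaling (0 < x <= 1)
        """
        return single_leg_users * legs * scaling_efficiency

    def bandwidth_for_users(users, bytes_per_user_per_sec):
        """Network footprint scales far more predictably with user count."""
        return users * bytes_per_user_per_sec

    if __name__ == "__main__":
        # e.g. the single leg topped out at 800 users and the full build has 3 legs
        est_users = extrapolate_capacity(800, 3)            # ~1680, not 2400
        # e.g. 20 KB/s measured per user during the ramp-up test
        est_bw = bandwidth_for_users(est_users, 20 * 1024)  # bytes per second
        print(f"Estimated capacity: {est_users:.0f} users, "
              f"~{est_bw / (1024 * 1024):.1f} MB/s of bandwidth")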

It’s interesting to review the many discussion threads concerning automation support for Mobile Technology. By this I mean within Functional and Performance Testing tool sets. There seems to be a bit of one-upmanship going on at the moment between tool vendors trying to provide the “best” level of overall support.

I am starting to wonder what all the fuss is about…

If you look at the performance testing requirements for Applications deployed on Mobile Technology, a couple of things stand out:

  1. True mobile applications (i.e. not m.mywebsites or those browser apps masquerading as mobile apps to get round app store costs) are effectively fat clients. The tech stack is pretty much a closed shop as far as simple record/playback is concerned, unless you instrument the code-base in some fashion or redirect the device comms via some sort of proxy.
  2. Most mobile apps need to exchange data with external services. This typically uses one or more middlewares and some sort of API. (In my experience often poorly documented.)

So as you can see, there really isn't a lot of difference from performance testing any other variety of service-based application.
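
To illustrate the proxy route mentioned in point 1, here is a minimal capture sketch using mitmproxy. That tool is my choice for the example rather than anything prescribed here, and for HTTPS traffic the proxy's CA certificate would also need to be trusted on the device. The addon simply logs every request the app makes so the API calls can later be scripted:

    # capture_api_calls.py -- minimal traffic-capture sketch using mitmproxy
    # (my example choice of proxy; any intercepting proxy would do). Point the
    # device's Wi-Fi proxy settings at the host running:
    #     mitmdump -s capture_api_calls.py
    # Each request the app makes is appended to a file so that the API calls
    # can later be re-scripted as a load test against the external service.

    import json

    def request(flow):
        record = {
            "method": flow.request.method,
            "url": flow.request.pretty_url,
            "headers": dict(flow.request.headers),
            "body": flow.request.get_text(strict=False),
        }
        with open("captured_calls.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")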

Having carried out a number of successful performance testing projects involving Mobile clients, I find that by far the most straightforward approach is as follows (feel free to disagree):

Step 1

Testing the external services

  • Confirm the use cases. (No change here.)
  • Get hold of the API documentation. (Good luck with this.)
  • Sort out the test data requirements. (Always required.)
  • Script the API calls generated by the use cases. (I have done this many times, well before Mobile turned up, and it is almost always a manual process.)
  • Build the performance test scenarios. (Straightforward enough.)

Doing the above gives you the ability to load up the API(s) used by the mobile app, so you can see whether the service can cope with peak demand. (So far so good.)
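
As a rough illustration of what the scripted API load boils down to, here is a minimal Python sketch against a hypothetical endpoint. The URL, payload, user count and think time are all invented, and a proper load testing tool would handle ramp-up, pacing and reporting far better; this just shows the shape of it:

    # Minimal sketch of loading an external service API directly (no mobile
    # device in the loop). Endpoint, payload, user count and think time are
    # hypothetical; a dedicated load tool would build the real ramp-up scenario.

    import json
    import threading
    import time
    import urllib.request

    API_URL = "https://api.example.com/v1/orders"   # hypothetical endpoint

    def one_virtual_user(results, iterations=10):
        for _ in range(iterations):
            payload = json.dumps({"items": [{"sku": "ABC-123", "qty": 1}]}).encode()
            req = urllib.request.Request(
                API_URL, data=payload, headers={"Content-Type": "application/json"})
            start = time.monotonic()
            try:
                with urllib.request.urlopen(req, timeout=10) as resp:
                    resp.read()
                results.append(time.monotonic() - start)
            except Exception:
                results.append(None)   # errors need counting separately in real analysis
            time.sleep(1)              # crude think time

    if __name__ == "__main__":
        results, threads = [], []
        for _ in range(25):            # 25 concurrent "users" -- scale as required
            t = threading.Thread(target=one_virtual_user, args=(results,))
            t.start()
            threads.append(t)
        for t in threads:
            t.join()
        timings = [r for r in results if r is not None]
        if timings:
            print(f"{len(timings)}/{len(results)} calls ok, "
                  f"average {sum(timings) / len(timings):.3f}s")
        else:
            print("all calls failed")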

Step 2

Now introduce the Mobile Client(s)

  • Decide how many different types of Mobile device you need to take into account. (Usually at least one iOS and one Android device, but the choice is yours.)
  • Stub out the external service calls from the Mobile App. (This probably requires code instrumentation, but there are some increasingly clever stubbing solutions out there; a throwaway example is sketched after this list.)
  • Repeat and time the use cases by device. (This doesn't necessarily require automation unless the app is very complex, although automation will greatly simplify the collection of performance metrics.)
  • This gives you a performance benchmark for a single user per device type, with the API(s) working in simulated (stubbed) BAU mode.
  • Now remove the stubbing and repeat the individual use case tests by device.
  • Compare with the results from the stubbed tests and note the delta. This should represent the latency and propagation delays introduced by the cellular and application infrastructure (and the service itself, of course).
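
For the stubbing step, something as simple as the throwaway stub below can be enough to get going, assuming the app (or the device's proxy/DNS) can be pointed at an alternative host; dedicated service-virtualisation tools do far more, of course, and the canned response here is purely illustrative:

    # Throwaway stub of the external service: returns a canned response to any
    # request so the mobile app can be exercised with the real service taken
    # out of the picture. Assumes the app or device can be pointed at this
    # host; the response body is obviously hypothetical.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED = json.dumps({"status": "OK", "orders": []}).encode()

    class StubHandler(BaseHTTPRequestHandler):
        def _reply(self):
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(CANNED)))
            self.end_headers()
            self.wfile.write(CANNED)

        do_GET = _reply
        do_POST = _reply

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), StubHandler).serve_forever()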

Step 3

With these benchmarks in place you can now repeat the volume tests against the service API(s) whilst at the same time repeating the use cases on each device and observing any performance regressions. The fact that you have already established the capacity limits of the external service(s) should make it easier to identify how the mobile app reacts to data starvation and time-out events, and how well it conceals them from the end user.

You can do this one device at a time, so it is still not essential to automate the device; however, the ability to combine performance and functional device automation has obvious benefits.
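
The comparison itself can be as simple as checking the under-load timings against the Step 2 baselines and flagging anything that has drifted. A minimal sketch, with purely illustrative numbers and tolerance:

    # Sketch of the Step 3 comparison: per-device use case timings captured
    # while the API(s) are under load, checked against the Step 2 baselines.
    # All numbers and the 20% tolerance are illustrative assumptions.

    BASELINE_SECS = {            # from Step 2: unstubbed, no background API load
        ("iOS", "login"): 1.4,
        ("iOS", "place_order"): 2.1,
        ("Android", "login"): 1.6,
        ("Android", "place_order"): 2.4,
    }

    UNDER_LOAD_SECS = {          # same use cases repeated during the volume test
        ("iOS", "login"): 1.5,
        ("iOS", "place_order"): 3.9,
        ("Android", "login"): 1.7,
        ("Android", "place_order"): 2.6,
    }

    TOLERANCE = 1.2              # flag anything more than 20% slower than baseline

    for (device, use_case), baseline in BASELINE_SECS.items():
        measured = UNDER_LOAD_SECS[(device, use_case)]
        if measured > baseline * TOLERANCE:
            print(f"Regression: {use_case} on {device}: "
                  f"{measured:.1f}s vs baseline {baseline:.1f}s")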

I think this approach works pretty well, as you are loading the service API in a realistic manner (assuming your API scripts are accurate) whilst observing mobile app performance with the app actually connecting over the full cellular stack, rather than trying to emulate apps on mobile devices over a LAN/WAN. After all, a bad mobile end-user experience is going to be down to any or all of the following:

  • Lousy app design (Think ASYNC!)
  • Poor app device compatibility
  • Transient cellular network / ISP latency (Pretty much out of your control as always unfortunately)
  • Capacity problems with the external service(s) either hosted in-house or by 3rd party. (No reason why you can’t nail this one.)

The interesting piece in all this is that much of the device testing in Step 2 is, of course, what should already have come out of QA as part of Functional testing sign-off. However, I would argue that there is benefit in repeating the process, as your use case focus will be different and hopefully you will be using a different environment/service sandbox to carry out performance testing.

As a final thought: while I have found that the above process usually sits well with clients, when push comes to shove they often defer the mobile device testing piece altogether and are happy enough to just do some informal mobile app prodding and poking on their own. As I said at the start... performance test tooling support for Mobile apps – what's all the fuss about? Oh well.

Happy (mobile) performance testing!

Posted by: Ian Molyneaux | June 11, 2013

Which role is most key to a successful IT performance project?

In delivering highly successful performance projects, I would divide the marketplace into Performance Testers and Performance Consultants. To deliver a performance testing project effectively, both skill sets are required.

At the end of the day, Performance Testers are primarily focused on building and validating the test scripts from the use cases provided, and then doing the same for the performance test scenarios. They should be working from a statement of work put together by the Performance Consultant. Testers will generally execute tests and may do some basic analysis, but the Consultant does (or should do) the heavy lifting when it comes to analysing results.

I would further divide performance testers into two categories:

  • Those that have strong development experience, which by extension implies knowledge of how software is designed, coded, tested and deployed, and
  • Those that don’t.

Development-centric performance testers have generally used multiple test tools and are comfortable extending and enhancing code beyond vanilla scripting requirements. (“Adapt and overcome”, to quote a military cliché.)

Non-development-centric performance testers can still be very effective; however, they will often have based their career knowledge around a single toolset such as LoadRunner. There's nothing necessarily wrong with this approach; however, it tends to shape the requirement around the tool, which may not always be the best choice (often another tooling option works better from a technological and/or cost perspective).

Whatever your background I would strongly advise becoming familiar with at least a couple of mainstream toolsets.

Consultants gather and align the business and technical requirements and function as subject matter experts for the project. They generally do the deep-dive analysis and come up with recommendations. You only become a good performance consultant through experience: you can load up on tech stack knowledge through study, but there is no substitute for learning on the job. You need to be application, server and network savvy, ideally with experience in Dev, QA and Ops and as many related disciplines as you can cram into your career.

Posted by: Ian Molyneaux | May 21, 2013

The case for the CPO

Achieving and maintaining performance assurance at the Enterprise level is a difficult enough challenge for most organisations. The time has definitely arrived to augment the “C”-level head-count with a new role: the CPO, or Chief Performance Officer. Some companies are doing this already on an informal basis, but I doubt that there are many who have appointed someone whose sole responsibility is to make sure that infrastructure and applications remain available and performant.

You might argue that this already sits with the CTO; however, the CTO's job is hectic enough, and visibility of cross-silo performance can easily slip under the radar. The key requirement is for someone to have overall responsibility for application performance, across all business units and all projects, whether they are in discovery, in flight or in production.

Fundamentally underpinning this role is the need to correctly align infrastructure and application KPIs. This means that an application's footprint within the IT Estate is always a known quantity, both in terms of resource provisioning and consumption and in terms of how it does (or will) interact with other applications and services, internal and external.

I see the CPO as working with the CTO/CIO to provide governance on business expectations for performance and what IT can realistically deliver. They should also be involved in the software procurement process to ensure that appropriate performance SLAs are part of every supplier contract.

The dawn of a new age in IT or just common sense?

Posted by: Ian Molyneaux | April 4, 2012

Podcast: Achieving Performance Excellence

To ensure the performance of business-critical systems, it is vital to implement a strategic approach across your entire IT estate. Join Ian Molyneaux as he presents Intechnica's Performance Assurance Framework, which complements his discussion of the importance of good performance, how to achieve it, and how automation tools can be used to implement these strategies.

What do you think about Ian’s podcast? Leave your comments below.

Posted by: Ian Molyneaux | March 2, 2012

The Challenge of Service Proliferation

Something interesting that has come out of several recent consultancy projects is how quickly the scope of service consumption within your IT Estate can lose visibility to Operations. A service that may have been originally created to address a specific requirement suddenly gains universal appeal, so the scope of (undocumented) consumption extends far beyond what was originally intended.

This then slips under the radar of Operations and is only discovered when an apparently innocent server or network change impacts multiple applications, rather than just the original consumer, and creates unintended (and expensive) chaos. It all comes back to understanding your applications and having that central repository of Application Performance Statements.

This helps to ensure that any change in service consumption is documented and visible to all stake-holders, especially the team in Operations.

Posted by: Ian Molyneaux | February 1, 2012

Performance Assurance through really understanding your Applications

In my experience, a lot of performance problems stem from an incomplete understanding of how an application interacts with the environment it is deployed into.
Something I always recommend is to create a performance “statement” for every core application in your IT Estate. Think of this as describing the characteristics, footprint and touch-points of an application (a sketch of such a statement as structured data follows the outline below). This should include:

Design:
The way an application uses memory, CPU and I/O.

Operations:
The deployment model for the application
How it’s monitored
How it’s configured

KPIs:
What metrics need to be collected to monitor performance, for both the application and the infrastructure?
What thresholds should be set so that infrastructure and application KPIs are correctly aligned?

Interaction:
What services the application interacts with, external and internal
What other applications this application interacts with

Business:
What BI metrics are collected?
The typical usage profile, day to day and peaks
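
One way to keep such statements useful is to hold them as structured, machine-readable records rather than prose that nobody reads. The skeleton below is purely illustrative; the field names and values are assumptions, not a prescribed schema:

    # Illustrative skeleton of an Application Performance Statement held as
    # structured data. All field names and example values are assumptions;
    # the point is that the statement is a living, queryable record.

    performance_statement = {
        "application": "order-management",
        "design": {
            "memory": "deliberately grabs a large heap at the app-server layer",
            "cpu": "bursty during batch pricing runs",
            "io": "chatty against the DB cluster",
        },
        "operations": {
            "deployment": "3 web / 3 app / shared DB cluster",
            "monitoring": "APM agent plus infrastructure metrics",
            "configuration": "config held in version control",
        },
        "kpis": {
            "response_time_p95_secs": 2.0,
            "app_server_heap_alert_pct": 95,   # aligned with the design note above
            "cpu_alert_pct": 85,
        },
        "interaction": {
            "services": ["payment gateway (external)", "stock service (internal)"],
            "applications": ["warehouse dispatch"],
        },
        "business": {
            "bi_metrics": ["orders per hour", "basket abandonment rate"],
            "usage_profile": "steady 09:00-17:00, Monday-morning peaks",
        },
    }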

With this information at hand it becomes a much easier task to determine whether your application is behaving normally or has regressed in terms of performance and/or capacity. In other words, you minimise the false positives and negatives. For example, it may be a conscious design decision that an application grabs as much memory as it can at the application server layer. This means that, for this application, generic memory usage KPIs may have completely inappropriate thresholds and generate redundant alerts.

A little knowledge can be a dangerous thing but not when it comes to maintaining application performance!
