Posted by: Ian Molyneaux | June 27, 2013

Mobile support in Performance Test tools – What does it really offer?

It’s interesting to review the many discussion threads concerning automation support for Mobile Technology. By this I mean within Functional and Performance Testing tool sets. There seems to be a bit of one-upmanship going on at the moment between tool vendors trying to provide the “best” level of overall support.

I am starting to wonder what all the fuss is about…

If you look at the performance testing requirements for applications deployed on mobile technology, a couple of things stand out:

  1. True mobile applications (i.e. not m.mywebsites or those browser apps masquerading as mobile apps to get round app store costs) are effectively fat clients. The tech stack is pretty much a closed shop as far as simple record/playback is concerned, unless you instrument the code-base in some fashion or redirect the device comms via some sort of proxy (see the sketch below).
  2. Most mobile apps need to exchange data with external services. This typically involves one or more middleware layers and some sort of API. (In my experience, often poorly documented.)
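
To make the proxy route in point 1 concrete, here is a minimal sketch using mitmproxy's current addon API. The setup is hypothetical: you point the device's Wi-Fi proxy settings at the machine running mitmproxy, and the addon simply logs each call the app makes so it can be scripted later.

    # capture_addon.py -- run with: mitmproxy -s capture_addon.py
    # Logs every request the mobile app makes via the proxy so the
    # API calls can be scripted afterwards. Log file name is arbitrary.
    import time
    from mitmproxy import http

    class CaptureTraffic:
        def request(self, flow: http.HTTPFlow) -> None:
            # Record timestamp, method and URL for later scripting
            line = f"{time.strftime('%H:%M:%S')} {flow.request.method} {flow.request.pretty_url}"
            with open("captured_calls.log", "a") as log:
                log.write(line + "\n")

    addons = [CaptureTraffic()]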

So as you can see, there really isn't a lot of difference from performance testing any other variety of service-based application.

Having carried out a number of successful performance testing projects involving mobile clients, I find by far the most straightforward approach to be as follows. (Feel free to disagree.)

Step 1

Testing the external services

  • Confirm the use cases. (No change here.)
  • Get hold of the API documentation. (Good luck with this.)
  • Sort out the test data requirements. (Always required.)
  • Script the API calls generated by the use cases. (I have done this many times, long before mobile turned up, and it is almost always a manual process.)
  • Build the performance test scenarios. (Straightforward enough.)

Doing the above gives you the ability to load up the API(s) used by the mobile app so you can see if the service can cope with peak demand. (So far, so good.)
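
By way of illustration, here is a minimal sketch of replaying one scripted API call under concurrent load in plain Python. The /api/login endpoint, payload, and volumes are hypothetical placeholders for whatever your own use cases actually generate.

    # load_api.py -- naive load sketch: N concurrent users replaying one API call.
    # Endpoint, payload and volumes are hypothetical placeholders.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    BASE_URL = "https://perf-test.example.com"  # performance test environment
    CONCURRENT_USERS = 50
    REQUESTS_PER_USER = 20

    def one_user(user_id):
        session = requests.Session()
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            resp = session.post(f"{BASE_URL}/api/login",
                                json={"user": f"perf_user_{user_id}", "pin": "0000"},
                                timeout=30)
            resp.raise_for_status()
            timings.append(time.perf_counter() - start)
        return timings

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            per_user = pool.map(one_user, range(CONCURRENT_USERS))
        all_timings = [t for user in per_user for t in user]
        print(f"median {statistics.median(all_timings):.3f}s, "
              f"p95 {statistics.quantiles(all_timings, n=20)[18]:.3f}s")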

Step 2

Now introduce the Mobile Client(s)

  • Decide how many different types of mobile device you need to take into account. (Usually at least one iOS and one Android device, but the choice is yours.)
  • Stub out the external service calls from the mobile app, as shown in the sketch after this list. (This probably requires code instrumentation, but there are some increasingly clever stubbing solutions out there.)
  • Repeat and time the use cases by device. (This doesn't necessarily require automation unless the app is very complex, although automation will greatly simplify the collection of performance metrics.)
  • This gives you a performance benchmark for one user by device type with the API(s) working in simulated business-as-usual (BAU) mode.
  • Now remove the stubbing and repeat the individual use case tests by device.
  • Compare against the results from the stubbed tests and note the delta. This should represent the latency and propagation delays introduced by the cellular and application infrastructure (and the service, of course).
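
A stub for the external service calls can be as simple as a local HTTP server returning canned responses, with a test build of the app pointed at it instead of the real endpoint. A minimal sketch, with hypothetical paths and payloads:

    # stub_service.py -- canned responses standing in for the external service.
    # Paths and payloads are hypothetical; mirror whatever the real API returns.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED = {
        "/api/login":   {"token": "stub-token", "status": "ok"},
        "/api/account": {"balance": 100.0, "currency": "GBP"},
    }

    class StubHandler(BaseHTTPRequestHandler):
        def _respond(self):
            known = self.path in CANNED
            body = json.dumps(CANNED.get(self.path, {"error": "unknown"})).encode()
            self.send_response(200 if known else 404)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def do_GET(self):
            self._respond()

        def do_POST(self):
            # Drain the request body so keep-alive connections stay in sync
            self.rfile.read(int(self.headers.get("Content-Length", 0)))
            self._respond()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), StubHandler).serve_forever()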

Step 3

With these benchmarks in place you can now repeat the volume tests against the service API(s) whilst at the same time repeating the use cases on each device and observing any performance regressions. The fact that you have already established the capacity limits of the external service(s) should make it easier to identify how the mobile app reacts to data starvation and time-out events, and how well it conceals them from the end user.

You can do this one device at a time, so it is still not essential to automate the device; however, the ability to combine performance and functional device automation has obvious benefits.
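
The regression check itself is simple bookkeeping against the Step 2 benchmarks. A trivial sketch, with made-up figures standing in for your measured per-device timings:

    # regression_check.py -- compare use-case timings measured while the API(s)
    # are under load against the unloaded Step 2 benchmarks for one device.
    # All figures are made-up placeholders; substitute your own measurements.
    benchmark  = {"login": 2.9, "view_account": 1.5, "make_payment": 4.4}  # seconds, unloaded
    under_load = {"login": 3.1, "view_account": 2.4, "make_payment": 7.0}  # seconds, API at peak

    THRESHOLD = 1.25  # flag anything more than 25% slower than the benchmark

    for use_case, base in benchmark.items():
        ratio = under_load[use_case] / base
        flag = "REGRESSION" if ratio > THRESHOLD else "ok"
        print(f"{use_case:15s} {base:4.1f}s -> {under_load[use_case]:4.1f}s ({ratio:.2f}x) {flag}")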

I think this approach works pretty well, as you are loading the service API in a realistic manner (assuming your API scripts are accurate) whilst observing mobile app performance over the full cellular stack, rather than trying to emulate apps on mobile devices over a LAN/WAN. After all, a bad mobile end-user experience is going to be down to any or all of the following:

  • Lousy app design (Think ASYNC!)
  • Poor app/device compatibility
  • Transient cellular network / ISP latency (Pretty much out of your control, as always, unfortunately)
  • Capacity problems with the external service(s), whether hosted in-house or by a third party. (No reason why you can't nail this one.)

The interesting piece in all this is that much of the device testing in Step 2 is, of course, what should already have come out of QA as part of functional testing sign-off. However, I would argue that there is benefit in repeating the process, as your use case focus will be different and hopefully you will be using a different environment/service sandbox to carry out performance testing.

As a final thought: while I have found that the above process usually sits well with clients, when push comes to shove they often defer the mobile device testing piece altogether and are happy enough to just do some informal mobile app prodding and poking on their own. As I said at the start: performance test tooling support for mobile apps – what's all the fuss about? Oh well.

Happy (mobile) performance testing!


Responses

  1. […] his post “Mobile support in Performance Test tools – What does it really offer?“, Ian Molyneaux makes some useful (and thought-provoking) points.  As always, however, […]

  2. Hey, I am trying to understand if there is value in testing mobile apps with turnkey solutions that provide real devices to execute tests on. Please advise.

