
Addressing fragmentation in mobile testing

On Friday we conducted the first in a series of 4 webinars that we’re running about the challenges of mobile testing and how eggPlant tools can help you address those challenges.

We decided to run this series of 4 webinars because, like many of you, we hear a lot of presentations about the difficulties of mobile testing, but very few of them talk about the real issues that affect us in our own testing, and that we hear about from our customers. These presentations tend to focus on things like low-power mode or threading, which are genuine technical differences between mobiles and PCs, but they aren’t the issues we see impacting test teams.

The real challenges of mobile testing aren’t created by inherent technical differences; they are mostly created because we use mobile devices differently to how we use PCs, and because the smartphone industry has evolved differently to the PC industry. They are use-case differences rather than technology differences. So we decided to build this series around the key issues we actually see impacting testing teams, and how eggPlant can help you address them.

Friday’s webinar was all about fragmentation, a very well known issue in mobile development and testing, but again one for which there isn’t a huge amount of good practical advice. The other three topics we’ll be talking about over the next month are:

* End-to-end test-cases; i.e. test-cases that go across multiple systems, from mobile, to desktop, to server, to legacy back-end systems, and how you can easily test those.
* Performance testing; which is so much more common in mobile testing than it ever was on desktop.
* Network behaviour; i.e. testing how your app behaves under different network conditions.

So we’ll be covering those topics over the next month, but today I’m going to talk about fragmentation.

Fragmentation is all about the explosion of our test matrix. 10 years ago most of us only had one environment we needed to test (typically Windows XP SP1/3); and a lot of our approaches are based on this very simple world. But on mobile we have multiple operating systems, significantly different versions of those operating systems, hundreds of devices which may have customised the operating system, and operators who again may have customised the system and who push out updates.

Now not all of these thousands of variants are significant or applicable to everyone, but even with some reasonable pruning there are still tens of combinations that most teams should be testing, and that’s literally 10, 20, or 30 times more testing effort required. Clearly that’s just not viable, and a different approach is needed.
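To make the explosion concrete, here is a minimal sketch of how the matrix grows and how even sensible pruning still leaves dozens of combinations. The platform counts below are illustrative, not real market data:

```python
from itertools import product

# Illustrative, made-up counts - not real market data.
os_versions = ["Android 4.4", "Android 5.0", "iOS 7", "iOS 8", "WP 8.1"]
devices = ["Phone A", "Phone B", "Tablet C", "Phablet D"]
carriers = ["Carrier X", "Carrier Y", "Unlocked"]

# The raw test matrix: every OS version on every device on every carrier.
full_matrix = list(product(os_versions, devices, carriers))
print(len(full_matrix))  # 5 * 4 * 3 = 60 combinations

# Prune: suppose only carrier-customised Android builds actually differ,
# so non-Android combinations collapse to a single "Unlocked" variant.
pruned = [combo for combo in full_matrix
          if combo[0].startswith("Android") or combo[2] == "Unlocked"]
print(len(pruned))  # 36 combinations - still dozens of environments
```

Even this toy pruning leaves 36 environments to cover, where a desktop team of ten years ago had one.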

So how are people dealing with fragmentation today?

Well honestly the most common situation we still see today is people who are frankly not dealing with it. They are still doing manual testing for their mobile apps but they can’t keep up. They can’t test releases as fast as the development team is producing them, so they are becoming a bottleneck in the release process, and at the same time they are sending out low quality apps. Relying on manual testing is just not a viable position in the face of fragmentation, but still we see a lot of people – too many people – trying to continue this way, and failing.

An increasing number of people, however, are starting to use automation to address the fragmentation issue. And automation has to be the foundation for any reasonable approach to fragmentation. The issue we see here though, is that traditional testing tools (like Selenium or QTP) weren’t designed with cross-platform in mind. Typically they interact with the system-under-test at the UI Framework API level, i.e. they control and validate the system-under-test by calling operating system APIs to interact with specific code-level widgets. Now while this approach can make sense when you’re working with a single platform, the moment you’re working with multiple platforms you start having to add a whole lot of platform-specific differences into your test scripts, and in many cases you simply end up with separate test scripts for each platform, which increases creation and maintenance effort.

So automation is the key to dealing with fragmentation, but we need a better approach.

eggPlant Functional, in contrast, was originally designed specifically to deal with cross-platform issues; it was actually created to help port applications from one platform to another.

eggPlant Functional is able to do this because it is an image-based test tool. The key difference between ePF and Selenium or QTP is that we are entirely image based, not code based: we work entirely through image search, image recognition, and OCR; we don’t talk to the UI framework API. We also inject events via the operating system rather than directly into the application. So, unlike other tools, we really automate the way a user interacts with the application and device. An application absolutely cannot tell whether it is being driven by a real user or by ePF.
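As a toy illustration of what “image search” means here (a naive sketch for intuition only, not eggPlant’s actual algorithm, which handles tolerance, scaling, and OCR): the tool holds a small reference image and scans the current screen capture for a matching region.

```python
# Naive template matching over a greyscale "screen" represented as a
# 2D list of pixel values. Illustrative only; real tools use far more
# robust matching than exact pixel equality.
def find_image(screen, template):
    th, tw = len(template), len(template[0])
    for y in range(len(screen) - th + 1):
        for x in range(len(screen[0]) - tw + 1):
            if all(screen[y + dy][x + dx] == template[dy][dx]
                   for dy in range(th) for dx in range(tw)):
                return (x, y)  # top-left corner of the match
    return None  # image not on screen

screen = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
button = [[9, 9], [9, 9]]
print(find_image(screen, button))  # (1, 1)
```

The point is that the search works on what is drawn, not on any widget identifiers, which is exactly why it transfers across platforms.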

OK – so why is image-based testing the right approach for cross-platform and dealing with fragmentation?

For example, the Financial Times app runs on Android, iOS, and Windows devices; on mobiles, laptops, and tablets. Each of these devices has a hugely different technology stack, including the UI framework and the language used to implement the app, from C# to Java to Objective-C. The code level is completely different, but the UI level is the same, because the Financial Times actively seeks to give its readers a consistent user experience across all their devices, and so keeps the UI the same. The same is true of really any app: all companies want to give a consistent user experience across all devices.

So when we talk about “fragmentation” we mean “technology fragmentation” and “platform fragmentation”, but not “UI fragmentation”. The UI of most applications is not fragmented; it’s very consistent. And that’s why eggPlant Functional, as a UI/image-based tool rather than a technology/code-based tool, is much better suited to dealing with technology fragmentation.

So that’s the theory! If you want to see eggPlant Functional testing across devices in action, please check out our demonstration videos and look for the recording of our Fragmentation webinar which will be posted later this week.

Addressing the real challenges of mobile testing

Everyone talks about “the challenges of mobile testing”, but few people actually describe clearly what these challenges are, and even fewer give you tangible, practical advice on how to address them. Over the next 4 weeks we will be running a series of webinars clearly describing the major challenges impacting teams creating mobile apps, and showing how eggPlant tools can help you address them. Please register NOW for these webinars using the links below.

But why the lack of clarity around mobile testing challenges? I believe it is because people assume that these challenges must be inherent technical differences between mobile devices and PCs, so they look for differences of this type, and end up talking about low-power mode and threading differences. But these aren’t really causing problems for the vast majority of testers. The real challenges of mobile testing aren’t created by inherent technical differences; they are mostly created because we use mobile devices differently to how we use PCs, and because the smartphone industry evolved differently to the PC industry.

So what are the key challenges of mobile testing that we see?

(1) Fragmentation. This is a well-known problem, but still few teams know how to address it effectively.

(2) End-to-end test-cases. Mobile apps are almost all part of a distributed software product. So testing a mobile app usually involves test-cases that touch multiple mobile devices, client PCs, servers, databases, admin consoles, and legacy systems.

(3) Performance testing. Less than 10% of PC client applications have performance test-cases, but over 60% of mobile apps do. People typically say this is due to mobile apps being more consumer-facing, where user experience is very important, whereas PC applications are mostly enterprise applications. Whatever the reason, ‘mobile’ has made performance testing something that every tester needs to know about.

(4) Network behaviour. Precisely because they are mobile, mobile devices are constantly attaching to different networks, from bad congested 3G networks to high-speed LANs (via a PC and USB). It is this variation in connectivity, and the constant dropping of network connections, that makes network behaviour so hard to test for mobile apps.

Each week in April we will be taking one of these topics, giving an overview of the difficulties, and then showing live examples of how our users address these difficulties. Please register now for these webinars using the links below!

 Thanks and please send us feedback about your major mobile challenges.

Not yet convinced by image based test automation? Here are six arguments that may change your mind…

There are two common objections to eggPlant Functional that we come across from time to time. They relate to aspects of the tool that are unique and which I definitely believe are strengths, so I’d like to tackle them head on.

The first objection is to image-based automation, which lies at the core of eggPlant’s unique two-system approach to GUI automation. Objectors have a presumption in favour of object-based GUI automation, which is by far the most common technique among functional testing tools. This may be because they are familiar with object-based automation, or perhaps they have a sensible scepticism of technology that differs from the mainstream. It is also true that some attempts at image-based automation were (and are) less than convincing. But it isn’t just the superior implementation of eggPlant Functional that counts – it is also aligned with the evolution of technology. For instance, Optical Character Recognition (OCR) has advanced by leaps and bounds, and the processing power for speedy OCR execution is readily available and cheap.

So here are six arguments for image based GUI test automation as delivered by eggPlant Functional:

1. Test your application running on (almost) any device

eggPlant Functional can test almost any GUI technology including mobile devices, desktop systems and custom devices with bespoke displays. All that is required is either a VNC server on the target system (we can help supply this), RDP access or the use of a suitable KVM switch.

Web based applications can be tested on any browser and device combination. The day of a single device type and a single UI is fast disappearing and object based functional testing tools are struggling to keep up.

2. Testing from the user’s perspective catches more bugs

So an (object based) functional test passes because GUI component az2315B is enabled and the title of the open document is correct. What the test does not check is what the user will see! eggPlant verifies what ultimately matters: what is presented to the end user.

3. Easier to script 

There is no need to identify object identifiers or to navigate through deeply nested object hierarchies. Instead, scripts directly reflect the user’s view of the application: actions are performed on images, and screen content is verified by image matching or by reading text.


4. Reduced maintenance effort

Straightforward, easy-to-understand scripts are easier to maintain. You will probably have one set of scripts applicable to all device and client types – that also reduces the maintenance burden.

But what about application changes? How much re-scripting will be required? Whether the internal GUI objects or the appearance of the UI is more volatile will depend on the implementation technology and the nature of the changes. For changes in appearance, only images may need to be replaced (they are held externally to the scripts). It is true that some changes in appearance will not require changes to an object-based script. To counter that apparent advantage, eggPlant Functional has effective mechanisms to rapidly “repair” a script, including an interactive image doctor tool.

5. No need to understand client-side internals

This reduces the burden on testers and increases the pool of testers who can develop automated tests. It also avoids the need for training and retraining when the GUI technology changes or multiple device types are targeted.

6. Realistic end to end response times

Here there are two advantages to image based automation. The first is minimal intrusiveness. VNC, RDP or hosting via a KVM switch do not have any practical impact on client-side performance. That is not true of tools that employ an automation object situated within the client platform. The second advantage is that the complete client-side stack is being included in a true end to end measurement rather than taking timings at an internal interface.

Are there any drawbacks? Potentially there are:

1. Additional hardware needed

This is a small and diminishing price to pay for the advantages! Also, when eggPlant Functional is part of a load test (via eggPlant Performance) or a continuous development environment then multiple instances can execute simultaneously on a single system.

2. What if significant text processing is required during testing?

eggPlant incorporates world-leading OCR software from ABBYY. In most situations there is no gap in functionality between eggPlant Functional and other leading functional testing tools. eggPlant Functional also has excellent text processing and mathematical functions for checking data values.

3. What if I need to check the internal client state?

There are a number of features and techniques that can address this but it is too large a topic for this article! Please contact us to discuss your requirements.

And the second objection that is sometimes made? That’s opposition to our innovative high-level scripting language SenseTalk. I will discuss this in a future blog article.

Mobile World Congress – getting back the buzz!

“Mobile World Congress – 10 years in Barcelona” greeted us at the airport. Inevitably it made us think about how much ‘mobile’ has changed in the last 10 years, but it also made us think how little MWC had changed in the last 5 years. Since “convergence” around 2006 and the release of the first iPhone it feels like MWC has been searching for “the next BIG THING” without much success. And while commercially it’s always been a great show for TestPlant, some recent MWCs have certainly lacked that industry buzz that was so intoxicating in the years leading up to 2006.

But this year the buzz was back! It’s difficult to put my finger on one BIG THING, but the buzz was definitely back, and I’m glad. Here are my key take-aways:

(*) Things! As expected the “Internet of Things” is invading MWC; and perhaps freeing itself up from pure “mobile” is what helped the event. All the stands in Hall 3, and many more throughout the rest of the event, had “things” on show. LG had a full connected kitchen, a mirror with head-up display, Sony had connected speakers, Qualcomm had everything from connected lamps to robots, everyone had connected cars, everyone and their dog had a smart watch, and there were probably more health wrist-bands than mobile phones on show. So everyone was showing that they are ready for the “Internet of Things”, but actually I thought only LG showed any real vision about how IoT was going to be useful.

(*) Phones! OK, that may sound a little unsurprising, but we were taken aback by the number of new handset vendors. Most of the handset vendors I was working with 10 years ago are gone (Sony Ericsson, Siemens, Panasonic, Sagem, even Nokia), and reports tell us that, other than Apple and Samsung, the ones that remain are losing money (and Samsung aren’t looking great). So in this fairly bleak market for handset OEMs, and with the lack of differentiation between smartphones these days, why are people deciding to enter this market? I can’t answer that one.

(*) Nokia? I went to the Nokia stand (really just out of habit), but of course it’s the Microsoft stand now. Wow!

(*) Security. Security was everywhere. The digitalisation of retail and banking (and others), BYOD, and the continual high-profile attacks highlighting that security concerns are real, mean that security is a real, huge, market.

(*) Near-field networks. Presumably motivated by the IoT and ubiquitous computing, there were tens of vendors selling local ad-hoc networking hardware (especially in Hall 5). The bandwidth and latency required for the IoT are driving a fundamental re-think of local networking, and that was on show this year. Interesting technologies, vital enablers, but conceptually just faster bit pipes.

(*) Payments. Mobile payments has been a hot area for about 4 years, but there was a reality about it this year that was missing in previous years. People can now point at deployed systems and significant levels of transactions, and the next generation of services based on real-world experience are starting to appear. This reality was a key ingredient in bringing back the buzz.

(*) Testing! Last but not least. From our perspective we definitely felt that quality and testing are now a top priority for the mobile software industry. 5 years ago we rarely encountered a test manager at MWC, and those responsible for delivery/development didn’t have quality as a priority. But with fast and brutal customer reaction to poor user experience, and the measurable commercial impact of this, quality and testing are now definitely a top priority for most mobile software vendors.

So that’s it. On reflection I think that the buzz has come back because MWC has allowed itself to move beyond pure “mobile” (and maybe that IS the next big thing) and there is finally reality behind the topics that have been evangelised for the last few years (e.g. payments). In any case, for the first time in a while, I’m already looking forward to next year.

eggPlant Network Launch day + 1

I’m really excited: At Mobile World Congress yesterday we launched our brand new eggPlant Network product.  You spend such a lot of time planning how a product will work and then it’s out there.

It’s been enlightening to see the questions we get asked.  Many of them we thought about during the design, but real users always put an interesting twist on it.

What’s eggPlant Network about?  [Ah, the fundamental question of life, the universe, and everything; well, from my current standpoint it is!]  I wasn’t going to trouble you with the answers to most of these questions, but that one’s fundamental, so here goes: we’re aiming to solve the problem of “How do I test a [mobile] application against a real-world network in an easy way?”

Then we have all the usual questions:  Can you control it from eggPlant Functional (our functional test tool)? Yes, next…

All easy and straightforward, until: Can you emulate a 3G network?  [I hesitate for a moment, because usually I know much more about this than the questioner. The answer our sales folk would give is: YES. But I know that the problem is with the question, not the answer. It cannot be answered as stated! Sometimes, to prove it, I ask if people believe that 3G (or any ‘G’) performance is the same on one side of a building as on the other… Ah, now you get it. There’s no such thing as 3G performance; it’s highly variable.]

As they said in “A Few Good Men”, in reply to a demand for the truth: mostly “You can’t handle the truth!” Well, not the full truth anyway, so a simplified version will have to do. Knowing that, we have provided lots of different 3G network experiences in eggPlant Network, from good through to terrible. So I (truthfully) say: Yes.

A couple of milliseconds later we move on, and the questioner never realises my quandary.  But now you do: can you handle the truth?

Our top 4 takeaways from IBM InterConnect 2015

InterConnect is IBM’s new customer and partner event, running for the first time in 2015. It is the consolidation of three existing IBM events – Rational ‘Innovate’ (which TestPlant has been supporting for years), ‘Pulse’ for cloud computing, and the WebSphere ‘Impact’ event. So: 20,000 people in Las Vegas talking about all things IBM.

So what are they talking about and what aren’t they talking about? Here are my 4 takeaways from the event so far.

(1) Since the 1960s enterprise IT has been focussed on improving the efficiency of internal operations, and since the 1980s vendors have been selling to CIOs. A couple of years ago Gartner highlighted that IT was increasingly being used for customer engagement, and that CMOs had IT budgets as big as CIOs’. This realisation has since been spreading around the industry and is front-and-centre at InterConnect. IBM are talking a lot about both “systems of engagement” (for CMOs) and “systems of record” (for CIOs).

(2) These different systems are managed and updated in very different ways: “systems of engagement” respond to app-store feedback and are updated monthly (or faster), whereas “systems of record” update cycles are measured in years. How do you manage these two very different development processes? That’s a big topic being discussed here under the headings of “bi-modal” development, and “hybrid” development, where the same team operates in both modes. Fortunately we can easily answer which test tools are already widely used for both – eggPlant!

(3) ‘Watson’ is clearly the big new innovation that IBM is keen to promote. IBM CEO Ginni Rometty has said that Watson will be generating $10bn in revenue per year within the next 10 years. And it is pretty cool. The main take-away for me is that the key innovation in Watson seems to be less about the underlying artificial intelligence (though clearly that is there), and more about the system’s ability to execute queries articulated in natural language. So it’s less about the analytical ability of the system and more about making that analytical power available to people. It’s also interesting that IBM are leading new innovation in the area of natural language interpretation, something that had so far been Google’s domain.

(4) And finally, what they are NOT talking about. No-one from IBM is talking about ‘Rational’. At the beginning of this year IBM (very quietly) reorganised the software team, and ‘Rational’ as a brand and as an organisation no longer exists. The existing tools have been moved into different (and separate) teams; e.g. UrbanCode is now part of the “Cloud” team, whereas RFT is part of the “Systems” team. So RIP Rational – but long live eggPlant, as we help IBM and its customers test and monitor in IoT, mobile, and systems engineering.

‘Dynamic test control’ and the new approach to performance and stress testing

The traditional approach to performance and stress testing is:

(1) Define objective
(2) Create test
(3) Run test
(4) Analyse results
(5) Go back to #2 and repeat until clear conclusion

But modern load testing tools such as eggPlant Performance have rich ‘dynamic test control’ functionality which allows the tester to change the test at run-time. For example, increase the number of virtual users or start monitoring memory on a particular server. ‘Dynamic test control’ enables a very different approach to performance and stress testing which is far less structured. A tester can now run up the system-under-test, start the load testing tool, and “play about” with the system mixing-desk-style until they have achieved their objective and clearly understood the system behaviour. This more flexible approach to load testing is increasingly popular, with almost 50% of our customers’ testing being done this way, but is that a good thing?
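As a sketch of the idea (hypothetical code, not eggPlant Performance’s actual API): imagine a controller object the tester can poke at run-time, changing the load and the monitoring while the test is in flight.

```python
# Hypothetical sketch of run-time ("dynamic") test control: the load
# loop reads its parameters from a controller that the tester can
# change mid-run, rather than from a fixed, pre-defined test plan.
class LoadController:
    def __init__(self, virtual_users=10):
        self.virtual_users = virtual_users
        self.monitored_servers = set()

    def ramp_to(self, n):
        # Change the number of virtual users while the test is running.
        self.virtual_users = n

    def monitor(self, server):
        # Start monitoring a server mid-test.
        self.monitored_servers.add(server)

def run_step(controller):
    # One iteration of the load loop: one request per current virtual user.
    return [f"request from vu-{i}" for i in range(controller.virtual_users)]

ctl = LoadController(virtual_users=2)
print(len(run_step(ctl)))   # 2 requests
ctl.ramp_to(5)              # the tester turns the dial mid-run
ctl.monitor("db-server-1")  # ...and adds monitoring on the fly
print(len(run_step(ctl)))   # 5 requests
```

The mixing-desk analogy is exactly this: the test plan is no longer fixed before the run starts.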

‘Dynamic test control’ was primarily introduced to deal with limited test windows against production systems. Performance and stress testing, for various reasons, often has to be done against the production system, and obviously no-one will allow testers to do this while their customers (or employees) are also trying to use the system. So testers are often given a short window (usually in the middle of the night) to do their testing. In such environments a tester needs the quick flexibility that ‘dynamic test control’ gives to adapt to new information – and ‘dynamic test control’ is a very good thing.

People now use ‘dynamic test control’ in all situations, and it can be a great way to speed up the iterations of #2–#4 in the traditional approach described above. The problem I see is that too many people are now using the flexibility to skip #1. They don’t define their objectives, i.e. what it is they expect the system to do and what they are trying to verify. They just run some tests against the system, look for anything “strange” or “bad”, and otherwise mark the test as passed. But if you don’t know what scenarios you need to test, and you don’t know what problems you’re looking for, then performance/stress testing is like looking for a needle in a haystack. You need to know what you’re testing, the failure criteria, and the key risk areas.

So ‘dynamic test control’ is great functionality that can speed up performance/stress testing and help you isolate root causes more quickly. But you need to ensure that you still clearly define your test objectives (requirements, criteria, risks), or it provides a very false level of assurance.

Remember – the most important thing is to prevent defects in production!

The most famous and overused graph in the testing world is the one below. In everyone’s mind it simply says that the earlier you find defects in the lifecycle, the cheaper they are to fix.


Most test process change project proposals start with this graph and use it to justify the majority of their recommendations. I never find such proposals convincing, and over the years I have identified four reasons why not.

(*) What the graph really says to me is that by far the most important thing to do is not let defects get into production. That is what everyone should be focussed on; and only when we’re awesome at that should we start worrying about moving from “Implementation” to “Design”. Sure, effective QA requires activities throughout the lifecycle, but those activities should be focussed on preventing defects getting into production, not moving detection up the (internal) lifecycle for efficiency reasons while many defects are still being found by customers. But most proposals I see are focussed on moving detection from “Testing” to “Implementation” or “Design” while a huge number of defects are still being found in production.

(*) Most of the recommendations actually don’t follow at all from the graph. The most common one these days is “we will save money by testing earlier so we are only going to do unit testing from now on”. Huh? Why does earlier testing mean unit testing? Just because on a Waterfall or V-model it comes first? Aren’t we all a bit more educated than that these days?

(*) The graph lacks a key piece of data – the cost of finding defects. In production, the cost of finding defects is $0 (your customers find them for you). But finding defects at the requirements stage is rather expensive, especially if your process applies the finding effort to requirements that may never be implemented. I’m not saying that the cost of finding a defect at the requirements stage balances out the cost of fixing one at the maintenance stage – but maybe it does in some environments. Either way, this is an important bit of data that’s missing.

(*) The defects that are showing up in production – where were they injected? If they are all late-stage simple code bugs, then doing a whole lot of requirements reviews isn’t going to help much. Few people include data about where their defects are injected.

So the graph is interesting, but I find it’s massively overused these days, and most often it’s used when people haven’t really thought through what they are doing.

eggPlant Manager 4.0 – you can’t manage without it

eggPlant Manager has always given you a central location for scripts, let you run tests concurrently, and provided a high-level graphical display of your results. With the release of eggPlant Manager 4.0 we have built on this core functionality to deliver an even more effective test management tool. We have added…

  • The ability to run external jobs, which includes Selenium and JUnit tests.

We understand that customers may already have lots of testing assets in other tools and languages, so we have now added the ability to execute any arbitrary command that you can run through the command line. This means you can use eggPlant Manager to run a full end-to-end test that might incorporate eggPlant Functional, Selenium, and JUnit tests, and then have the results of all of those pushed back into eggPlant Manager, giving you a central place for all of your results. Essentially, we allow you to execute any command-line call from eggPlant Manager.

  • Integration with eggCloud.

If you already use eggCloud, then eggPlant Manager just became a whole lot easier. You can now import all of your eggCloud connections into eggPlant Manager, and then schedule tests to run against those devices. This means that as part of executing your tests, eggPlant Manager can book the device you want to use.

  • The ability to run the table tests you have created in eggPlant Functional v15.

In eggPlant Functional v15 we added the ability to create tables, making keyword-driven testing possible. Understanding that people want to schedule table tests to run whenever they like, we’ve now added the ability to run tables from eggPlant Manager.

If you are already a TestPlant customer you can download eggPlant Manager here and contact your Account Manager for a trial key. Otherwise, fill out a form here and one of our team will be in touch shortly.

What I learned from a bunch of bugs

Like many test managers, I track some metrics, some of which are helpful some of the time. My favorite tends to be Customer-Reported Unique Defects (CRUD, aptly), which tells me how many bugs have made it into the current version of a product and been found by an unfortunate user. Recently, however, we had a software release that felt quite smooth to me, but one month after the release the CRUD count was higher than I had expected: we had eight more CRUDs than we’d had a month into the previous release. Rather than panic (or at least in addition to panicking), my team sat down to figure out where these bugs had been hiding. What we found was two parts humbling, nine parts encouraging, and three parts liberating, which was not a half-bad mix.

Our first step was to go through the issues and divide our bugs into categories (UI, Connectivity, Memory, and so on), and then rate each bug on the following scale:

Simple: 3. We should have caught these.
Complex: 5. We might have caught these with deeper testing.
Time Intensive: 2. We could have caught this by letting our tests run longer.
Resource Intensive: 1. We could have caught this by stressing the system harder.
Unexpected: 3. We could not have predicted these.

What we ended up with was a quite comforting table. We identified some extremely easy ways we can do better:

1. Run our nightly regression longer: same tests, more repetition. This would have revealed a memory leak in the last release, and now that we think of it, probably a couple of other issues over the past year. (Go, go automation!)
2. Use lots of resources. For some users, this would mean large volumes of data. In our case, it means big test suites. Some performance problems just don’t present until you tax the system.
3. Try to ward off last-minute changes. OK, maybe not so easy, but worth a friendly mention to development. Of the three bugs we identified as simple, one was introduced at the 11th hour, and there was no time to catch it.
4. Mind the low-profile functionality. This is the one I hate to admit, but two of our simple bugs involved functionality that, frankly, just doesn’t get much buzz around the office. No excuse here.

Five of our bugs were needles in a haystack – specific sequences of actions that don’t follow any predictable workflow. We might never have found them, but to stand a chance we’d have needed to do some seriously intense exploratory testing, or run more finely modularized and randomized automated tests. We’re opting for the automated tests. This will take some time, but it will be an investment.

Finally, there were three bugs (our favorite three bugs) that we absolved ourselves from in good conscience. One came from a user running in a unique environment that we couldn’t have anticipated, and we’re just happy that we can help him out now. Two bugs came from small features that weren’t tagged in the release; we simply didn’t know they were there. Civilized, productive conversation ensued.

Of course, no one would ever tell you that you shouldn’t look back on your bugs, but for me, this exercise was a reminder to do it regularly and deliberately. Fourteen bugs are fourteen great teachers. 

Happy testing,