It’s the holiday season which means it’s the retail season! I’m definitely not much of a shopper, but even I’ve been on-line buying all kinds of Arduino gadgets to give to my friends. And as I see facts such as “Black Friday on-line sales up 17%” I naturally start thinking about how many on-line retailers are doing proper load and performance testing and how many are just hoping everything is fine. I suspect that fewer than 10% are really doing load/performance testing.
It’s a bit of a mystery to me why so few on-line retailers (and other digital businesses) do proper load/performance testing. I can only assume that they don’t believe it’s really that important or sensitive. That as long as the performance is “OK” when they randomly try it one day then optimising the performance isn’t going to translate into more sales. But if this is the case then they are wrong. Very wrong.
Gartner recently published an article with some great facts about the impact of website performance.
- Amazon has measured that a 100ms performance delay costs them 1% in revenue.
- Yahoo measured that a 1s delay in their search results cost them 3% revenue.
- Google have measured that a 100ms to 400ms increase in search result times costs them about $90M per year.
- Microsoft have measured that a 100ms slow down in Bing search result times costs them 0.6% of their revenue.
Wow! These figures show mere hundreds of milliseconds costing serious revenue. But how many companies do you know that are doing load/performance testing anywhere near this granularity?
So perhaps I’m missing something, but most retailers don’t yet seem to have understood just how sensitive on-line shoppers are, and they are losing lots of revenue due to poor website performance. And then there’s mobile, which adds a whole new set of places where delays can be introduced. Personally I think 2015 is going to be a huge year for on-line retail, since any retailer without a first-class on-line story is now seeing it directly impact their share price in a major way. So it will be interesting to revisit this post at this time next year and see how much has really changed.
Test automation tools are all about helping people make higher quality software with a faster time-to-market. This is why I’m really excited about the keyword driven testing framework that has been introduced in eggPlant Functional v15. It’s one small step for eggPlant Functional, but could be a giant leap forward for many of our users!
Keyword driven testing solves two pain points that I see many teams encounter. First, it makes scripting and maintenance simpler by encouraging you to modularize scripts using easy-to-understand keywords and phrases, like “Login” and “Search”. Second, it puts the tool into the hands of product experts such as business analysts (BAs), who have very high product knowledge but perhaps not much technical knowledge. If BAs can quickly write tests then you have real TDD!
Keyword driven testing allows you to create tests in a simple Excel-style table format. In the first column you put the name of the UI object, in the second column you put the action (e.g. “tap”), and in the third column you put any additional information (e.g. for a “TypeText” command you would put the text you want to type). The second column is a drop-down list of all the primitive actions, plus any custom keywords that you’ve defined, which is what encourages modularization. If there is a “Login” action already sitting in the drop-down why would you write another one? The approach is so simple and natural that even our CFO is now writing test scripts!
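To make the format concrete, here is what a small test table might look like (the object names and values are illustrative, not taken from a real test suite):

```
UI object       Action     Additional information
-------------   --------   ----------------------
UsernameField   TypeText   alice
PasswordField   TypeText   secret
LoginButton     Tap
SearchField     TypeText   arduino kit
SearchButton    Tap
```

If the first three rows are saved as a custom “Login” keyword, it then appears in the drop-down alongside the primitive actions, and later tests can invoke the whole sequence as a single row.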
Suppose, for example, that a new promotion screen changes the user journey for login. You can go directly to the Login keyword, adjust its code to work for the new screen, and the adjustment will apply wherever that keyword is used. You only need to edit the code in one place, even if it is used in many different tests.
Are you testing what counts with your mobile apps?
Research from Gartner and Forrester shows that mobile app users are applying different quality standards. Performance, user experience, and portability are just as important as functionality when it comes to engaging and retaining users (and sometimes more important).
Mobile apps are also now part of distributed software systems that include desktops, servers, tablets, and mobiles. Real use-cases interact with all these platforms.
eggPlant is a range of test tools designed for mobile software and digital businesses. It covers mobile and desktop (all real devices), performance and functional, agile and traditional. The eggPlant range is recognised as “visionary” by Gartner in the Magic Quadrant for Integrated Software Quality Suites 2014, which highlights eggPlant’s ease-of-use, completeness, and continual stream of innovation.
If mobile is a significant part of your business and you care about user experience, then join us for this webinar on Wednesday 10 December.
| | |
|---|---|
| Date | Wednesday, December 10, 2014 |
| Session 1 | 3:00pm IST, 09:30am GMT |
| Session 2 | 9:00am PST, 5:00pm GMT |
Register now: https://attendee.gotowebinar.com/rt/3781712332503911682
One of my standard sayings these days is that “there’s no such thing as a mobile application”. What I’m getting at is that (apart from games) there are almost no mobile apps that deliver interesting functionality to users by themselves. They are all part of a distributed application that typically comprises a mobile component, a desktop and/or web client component, an administrative component, a database, etc. Nationwide, Bloomberg, and the Financial Times are all popular mobile apps that can only be fully tested in conjunction with non-mobile components. So I think we have lots of mobile software, but very few mobile-only applications (and actually very few desktop-only applications for that matter). In our multi-channel, collaborative, social world it simply must be this way.
So mobile apps are part of heterogeneous distributed applications. This has a lot of implications for testing which we’ve been talking about for a while:
- Integration and system test-cases must be able to execute across multiple computers. Consider a mobile banking test-case for transferring money from your account to mine. The test-case should interact with your Android phone, my iOS phone, the Chrome browser running on my desktop (since that should see the transaction in real time), the Internet Explorer browser running on the bank’s administrative screens, and the bank’s AS/400 transaction server.
- Almost all applications should be performance and load tested.
- Testing needs to deal with a lot of different technologies.
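As a rough illustration of the first point, here is a minimal sketch of how the money-transfer test-case above might be orchestrated across channels. This is not a real eggPlant API; the `Session` class and all names are hypothetical stand-ins for connections to devices under test.

```python
# Hypothetical sketch of a cross-device integration test.
# Each Session stands in for a connection to one device or browser.

class Session:
    def __init__(self, name):
        self.name = name
        self.actions = []

    def do(self, action):
        # A real tool would drive the UI here; we just record the step.
        self.actions.append(action)

def transfer_test(amount):
    sender = Session("Android phone (sender)")
    receiver = Session("iOS phone (receiver)")
    desktop = Session("Chrome on receiver's desktop")
    admin = Session("IE on the bank's admin screens")

    sender.do("log in")
    sender.do(f"transfer {amount} to receiver")

    # Every other channel should observe the same transaction in real time.
    observers = [receiver, desktop, admin]
    for session in observers:
        session.do(f"verify transaction of {amount} is visible")

    return {s.name: s.actions for s in [sender] + observers}
```

The key point is that a single test-case holds handles to all the devices at once and asserts the end-to-end behaviour, rather than testing each component in isolation.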
Mobile apps tested this way, i.e. as part of a wider distributed application, will be more robust, faster, and have better user experience than those that are tested in isolation.
Finally, there is one part of the puzzle that we don’t normally talk about, the network that connects all the parts. This is because the network is generally outside your control so you’re not going to test it, but the behaviour of the network has a huge impact on your application:
- Will our mobile app work gracefully on a 2.5G network?
- Will our mobile app work gracefully when someone loses connection for 30 seconds (e.g. driving through a tunnel)?
- When someone is using our mobile app on their fast corporate LAN will it take advantage of the bandwidth?
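The questions above aren’t ones a functional test on a fast office network will answer. As a toy illustration of the kind of behaviour worth testing, here is a minimal retry-with-backoff loop (the function and parameter names are my own, not from any TestPlant product) of the sort an app needs if it is to ride out a 30-second connection drop gracefully:

```python
import time

def fetch_with_retry(fetch, attempts=4, base_delay=1.0):
    """Call fetch(), retrying with exponential backoff on connection
    errors -- the pattern an app needs to survive a brief outage
    (e.g. driving through a tunnel) without failing the user's action."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError as exc:
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise last_error
```

A network-aware test would exercise this code path by simulating the dropped connection and then asserting that the app recovers, rather than only ever testing on a clean, fast link.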
We’ve been working on these questions for a few months now at TestPlant and will have some exciting announcements in January.
A lot of companies I talk to are “shifting left”. This is primarily driven by the question of how testing fits into agile, and it feels good because it aligns with the classic fact that defects are cheaper to fix the earlier you find them.
That makes sense, and personally I’m a big fan of shift left; but lately it seems to me that “shift left” has taken on a very waterfall form. This concerns me because I can see our industry about to take a massive step backwards in quality (in both effectiveness and efficiency), similar to the naive test outsourcing of the early 2000s. It took the industry at least four years to right itself and get a net benefit from outsourcing, and the same could easily happen again; if nothing else, that makes my job (and the work of everyone genuinely trying to advance software engineering) frustrating and boring. This waterfall form appears most commonly in phrases such as: Read more…
This week we are exhibiting at the Gartner Symposium in Barcelona! At about 6,000 attendees it’s certainly not the largest show we attend (Mobile World Congress and Dreamforce are both over 100,000 people), but in terms of understanding what’s going on in the world of enterprise IT, nothing beats it. How are retailers going multi-channel? How are banks and insurance companies engaging with customers? How are utilities shifting to continuous deployment with safety critical systems? The answers (or at least the current thoughts) are all here as the CIO/CTOs of all these companies are here. Read more…
I’m writing in the midst of a fun and fascinating week at the Software Test Professionals Conference (STPcon) in Denver. For testers, this is a week to connect with the community: learn, teach, and make friends with people who think your job is cool. For test-tool vendors, it’s an opportunity to share the coolness of your products with a huge room full of super testers. Living in the intersection of the tester/vendor Venn diagram, I am too excited to sleep. Read more…
Performance and load testing are becoming more popular as digital business drives the testing focus towards user experience. Amazon recently stated that a 100ms increase in the response time of their website reduces revenues by 1%. That’s $700M! So this popularity is justified. But traditional tools for performance and load testing are now entirely inadequate for testing the user experience of mobile and web software and a new approach is needed. This post is all about that new approach, which we call “application-level load testing”. Read more…
Gartner, the world’s leading information technology research and advisory company, has recognized the eggPlant range as a “visionary” in its ‘Magic Quadrant (MQ) for Integrated Software Quality Suites’! We’re excited to be included as one of the top vendors in the world, and especially by the specific comments made by Gartner who highlighted the “ease-of-use” of the complete eggPlant range combined with a “consistent stream of innovation” to deliver “a full complement of functionality including industry specific solutions”.
Ease-of-use and innovation are exactly what we focus on so it’s great to be recognised for this. It’s also what our users want. Our recent survey again highlighted that ease-of-use in all its forms (user experience, productivity, maintenance, reliability, and quick to learn) is the #1 thing our users want from their tools.
Probably my favourite comment in our customer survey this year however was “I see that my requests for improvement are taken onboard”. It’s a cliche, but our product development really is driven by feedback from our demanding and passionate users. They challenge us on innovation and they demand ease-of-use. So again we’d just like to say thank you to everyone who uses eggPlant and especially those who take the time to send us feedback.
Houston, Tranquility base here. The Eagle has landed. No, TestPlant may not have put man on the moon, but they have done the next best thing and released eggPlant Performance 5.4, which builds on all of the powerful features of 5.3 and makes things much easier to use. This means that people are less dependent on specialist performance testers and can use the testing team they already have to carry out the test script development. Read more…