
eggPlant Manager 4.0 – you can’t manage without it

eggPlant Manager has always given you a central location for your scripts, the ability to run tests concurrently, and a high-level graphical display of your results. With the release of eggPlant Manager 4.0 we have built on this core functionality to deliver an even more effective test management tool. We have added…

  • The ability to run external jobs, including Selenium and JUnit tests.

We understand that customers may already have lots of testing assets in other tools and languages, so we have added the ability to execute any arbitrary command that you can run from the command line. This means you can use eggPlant Manager to run a full end-to-end test that incorporates eggPlant Functional, Selenium, and JUnit tests, and have the results of all of those pushed back into eggPlant Manager, giving you a central place for all of your results.
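As a rough illustration of the kind of external job you could wire up, here is a minimal Python wrapper (entirely hypothetical; the Maven invocation and test class name are placeholders, not anything shipped with eggPlant Manager) that runs a JUnit suite from the command line and reports pass/fail through its exit code:

```python
#!/usr/bin/env python3
"""Hypothetical external-job wrapper: any script like this that is
runnable from the command line could be scheduled as an external job."""
import subprocess
import sys

def run_junit_suite(test_class):
    # Run a JUnit suite through Maven; the class name is an example.
    result = subprocess.run(
        ["mvn", "test", f"-Dtest={test_class}"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    # A non-zero exit code signals failure to whatever scheduled the job.
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_junit_suite("CheckoutFlowTest"))
```

The only contract that matters here is the one every command-line tool already honors: success or failure is communicated through the exit code and output.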

  • Integration with eggCloud.

If you already use eggCloud, then eggPlant Manager just became a whole lot easier. You can now import all of your eggCloud connections into eggPlant Manager, and then schedule tests to run against those devices. This means that as part of executing your tests, eggPlant Manager can book the device you want to use.

  • The ability to run the table tests you have created in eggPlant Functional v15.

In eggPlant Functional v15, we added the ability to create tables so that you can do keyword-driven testing. Of course, understanding that people want to schedule table tests to run whenever they want, we’ve now added the ability to run tables from eggPlant Manager.

If you are already a TestPlant customer you can download eggPlant Manager here and contact your Account Manager for a trial key. Otherwise, fill out a form here and one of our team will be in touch shortly.

What I learned from a bunch of bugs

Like many test managers, I track some metrics, some of which are helpful some of the time. My favorite tends to be Customer-Reported Unique Defects (CRUD, aptly) which tells me how many bugs have made it into the current version of a product and been found by an unfortunate user. Recently, however, we had a software release that felt quite smooth to me, but one month after the release, the CRUD count was higher than I had expected: we had eight more CRUDs than we’d had a month into the previous release. Rather than panic (or at least in addition to panicking) my team sat down to figure out where these bugs had been hiding. What we found was two parts humbling, nine parts encouraging, and three parts liberating, which was not a half-bad mix.

Our first step was to go through the issues and divide our bugs into categories (UI, Connectivity, Memory, and so on), and then rate each bug on the following scale:

Simple: 3. We should have caught these.
Complex: 5. We might have caught these with deeper testing.
Time Intensive: 2. We could have caught these by letting our tests run longer.
Resource Intensive: 1. We could have caught it by stressing the system harder.
Unexpected: 3. We could not have predicted these.

What we ended up with was a quite comforting table. We identified some extremely easy ways we can do better:

1. Run our nightly regression longer: same tests, more repetition. This would have revealed a memory leak in the last release, and now that we think of it, probably a couple of other issues over the past year. (Go, go automation! A rough sketch of what this looks like follows this list.)
2. Use lots of resources. For some users, this would mean large volumes of data. In our case, it means big test suites. Some performance problems just don’t present until you tax the system.
3. Try to ward off last-minute changes. OK, maybe not so easy, but worth a friendly mention to development. Of the three bugs we identified as simple, one was introduced at the 11th hour, and there was no time to catch it.
4. Mind the low-profile functionality. This is the one I hate to admit, but two of our simple bugs involved functionality that, frankly, just doesn’t get much buzz around the office. No excuse here.
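To make point 1 concrete, here is a minimal soak-loop sketch in Python (the application command and test-pass function are placeholders, and psutil is a third-party package; this is an illustration, not our actual harness):

```python
import subprocess
import psutil  # third-party: pip install psutil

def soak_test(app_cmd, run_one_pass, iterations=200):
    """Launch the application under test, run the same test pass over
    and over, and log resident memory after each pass. Steady growth
    across identical passes is the classic signature of a leak."""
    app = subprocess.Popen(app_cmd)
    proc = psutil.Process(app.pid)
    try:
        for i in range(iterations):
            run_one_pass()  # same tests, more repetition
            rss_mb = proc.memory_info().rss / 1e6
            print(f"pass {i:4d}: rss = {rss_mb:.1f} MB")
    finally:
        app.terminate()
```

Nothing clever is happening: the value comes purely from repetition and from writing down a number you would otherwise never look at.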

Five of our bugs were needles in a haystack: specific sequences of actions that don’t follow any predictable workflow. We might never have found them, but to stand a chance we’d have needed to do some seriously intense exploratory testing, or run more finely modularized and randomized automated tests. We’re opting for the automated tests. This will take some time, but it will be an investment.
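For the curious, here is roughly what we mean by finely modularized and randomized tests, as a Python sketch (the actions are empty placeholders; a real version would verify each step’s outcome):

```python
import random

# Each action is a small, self-contained test step; the names are
# illustrative placeholders for real, self-verifying steps.
def login(): ...
def search(): ...
def open_settings(): ...
def logout(): ...

ACTIONS = [login, search, open_settings, logout]

def random_walk(steps=1000, seed=None):
    """Execute small test actions in random order, so that unusual
    sequences get exercised over time."""
    if seed is None:
        seed = random.randrange(2**32)
    print(f"seed = {seed}")  # log the seed so any failure can be replayed
    rng = random.Random(seed)
    for _ in range(steps):
        rng.choice(ACTIONS)()

if __name__ == "__main__":
    random_walk()
```

The seed is the important design detail: random exploration only pays off if a failing sequence can be reproduced.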

Finally, there were three bugs (our favorite three bugs) for which we absolved ourselves in good conscience. One came from a user running in a unique environment that we couldn’t have anticipated, and we’re just happy that we can help him out now. Two bugs came from small features that weren’t tagged in the release; we simply didn’t know they were there. Civilized, productive conversation ensued.

Of course, no one would ever tell you that you shouldn’t look back on your bugs, but for me, this exercise was a reminder to do it regularly and deliberately. Fourteen bugs are fourteen great teachers. 

Happy testing,
Pamela

Who, what, and when?

Almost any discussion about software development process could be improved by clearly separating the questions – who, what, and when. That may seem ridiculously obvious, but I can’t remember the last discussion I had where someone didn’t assume the answer to one of these questions from the answer to another (most commonly assuming “who” and “what” from “when”). Read more…

Our requirements are chaos!

“Our requirements are chaos.” It’s something I’m sure you’ve all heard and many of you have said: an exclamation of frustration and a self-deprecating admission. But it’s only this morning that I realised it’s also a correct statement of fact. Requirements are indeed chaotic in the mathematical sense, that is, a small variation to the inputs (i.e. the requirements) can lead to a dramatic change in the outputs (i.e. the product). This is sometimes called the “butterfly effect”, since theoretically a butterfly beating its wings at a certain moment could cause a hurricane on the other side of the world (weather is the most studied chaotic system). Read more…
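To see what “chaotic in the mathematical sense” looks like, here is the textbook example (the logistic map, nothing to do with requirements tooling): two starting values differing by one part in a billion become completely uncorrelated within a few dozen iterations.

```python
def logistic(x, r=4.0):
    # The logistic map x -> r*x*(1-x), fully chaotic at r = 4.
    return r * x * (1 - x)

a, b = 0.3, 0.3 + 1e-9  # inputs differing by one part in a billion
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.2e}")
```

By around step 30 the two trajectories bear no resemblance to each other, which is precisely the sensitivity to inputs described above.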

Why aren’t more retailers doing performance testing?

It’s the holiday season, which means it’s the retail season! I’m definitely not much of a shopper, but even I’ve been on-line buying all kinds of Arduino gadgets to give to my friends. And as I see headlines such as “Black Friday on-line sales up 17%” I naturally start thinking about how many on-line retailers are doing proper load and performance testing and how many are just hoping everything is fine. I suspect that fewer than 10% are really doing load/performance testing.

It’s a bit of a mystery to me why so few on-line retailers (and other digital businesses) do proper load/performance testing. I can only assume that they don’t believe it’s really that important or sensitive: that as long as the performance is “OK” when they randomly try it one day, optimising the performance isn’t going to translate into more sales. But if this is the case then they are wrong. Very wrong. Read more…

Keyword driven testing with eggPlant Functional

Test automation tools are all about helping people make higher quality software with a faster time-to-market. This is why I’m really excited about the keyword driven testing framework that has been introduced in eggPlant Functional v15. It’s one small step for eggPlant Functional, but could be a giant leap forward for many of our users!

Keyword driven testing solves two pain points that I see many teams come across. First, it makes scripting and maintenance simpler by encouraging you to modularize scripts using easy-to-understand keywords and phrases, like “Login” and “Search”. Second, it puts the tool into the hands of product experts like business analysts (BAs) who have very high product knowledge, but perhaps don’t have much technical knowledge. If BAs are able to quickly write tests then you have real TDD! Read more…
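For readers new to the idea, keyword-driven testing boils down to a table of keywords plus a dispatch layer mapping each keyword to an implementation. Here is a bare-bones sketch in Python (eggPlant Functional uses its own table format rather than this; the keywords and functions below are made up for illustration):

```python
# A test "table": each row is a keyword plus its arguments. A business
# analyst edits rows like these; an engineer maintains the implementations.
TEST_TABLE = [
    ("Login", "alice", "s3cret"),
    ("Search", "quarterly report"),
    ("Logout",),
]

def login(user, password):
    print(f"logging in as {user}")

def search(term):
    print(f"searching for {term!r}")

def logout():
    print("logging out")

# The dispatch layer: keyword -> implementation.
KEYWORDS = {"Login": login, "Search": search, "Logout": logout}

def run_table(table):
    for keyword, *args in table:
        KEYWORDS[keyword](*args)

run_table(TEST_TABLE)
```

The division of labor is the whole point: the table is readable and writable by non-programmers, while the keyword implementations stay with the engineers.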

Webinar: Cross-platform mobile test automation with eggPlant

Are you testing what counts with your mobile apps?

Research from Gartner and Forrester shows that mobile app users are applying different quality standards. Performance, user experience, and portability are just as important as functionality when it comes to engaging and retaining users (and sometimes more important). Read more…

There is no such thing as a mobile application

One of my standard sayings these days is that “there’s no such thing as a mobile application”. What I’m getting at is that (apart from games) there are almost no mobile apps that deliver interesting functionality to users by themselves. They are all part of a distributed application that typically comprises a mobile component, a desktop and/or web client component, an administrative component, a database, etc. Nationwide, Bloomberg, and the Financial Times are all popular mobile apps that can only be fully tested in conjunction with non-mobile components. So I think we have lots of mobile software, but very few mobile-only applications (and actually very few desktop-only applications for that matter). In our multi-channel, collaborative, social world it simply must be this way. Read more…

“Shift left” has become “drop right”

A lot of companies I talk to are “shifting left”. This is primarily driven by the question of how testing fits into agile, and it feels good because it aligns with the classic fact that defects are cheaper to fix the earlier you find them.

Makes sense, and personally I’m a big fan of shift left; but lately it seems to me that “shift left” has taken on a very waterfall form. This concerns me because I can see our industry about to take a massive step backward in quality (in both effectiveness and efficiency), similar to the naive test outsourcing done in the early 2000s. It took at least four years for the industry to right itself and get a net benefit from outsourcing, and the same could easily happen again; and if nothing else that just makes the job of anyone genuinely trying to advance software engineering (mine included) frustrating and boring. This waterfall form appears most commonly in phrases such as: Read more…

Gartner Symposium – technology themes, testing challenges, and eggPlant

This week we are exhibiting at the Gartner Symposium in Barcelona! At about 6,000 attendees it’s certainly not the largest show we attend (Mobile World Congress and Dreamforce are both over 100,000 people), but in terms of understanding what’s going on in the world of enterprise IT, nothing beats it. How are retailers going multi-channel? How are banks and insurance companies engaging with customers? How are utilities shifting to continuous deployment with safety-critical systems? The answers (or at least the current thinking) are all here, because the CIOs and CTOs of all these companies are here. Read more…