TestPlant CTO, Antony Edwards, was interviewed by Mobile World Live at Mobile World Congress in Barcelona this year. Antony talks about the key trends in mobile and IoT, and how testing needs to change in order to be more focused on UX.
Hope you enjoy the interview!
Over the past few weeks I’ve spent a lot of time playing with the various “deep learning” libraries that are available, as we prototype the best ways to apply this exciting new technology to testing.
For those of you who haven’t used “deep learning” libraries here’s a quick summary of how they work:
- You define a decision/arithmetic formula. Maybe something like, “Will I have a salad for lunch” is determined by today’s weather, what I had for dinner last night, what my lunch buddy Chris is having for lunch, and whether I read an article about healthy eating in the last 3 days.
- You then run a large number of examples through the formula; e.g. today it was 22C, I had a healthy dinner last night, Chris had a salad, and I haven’t read any articles in the last 3 days, and I did NOT have a salad.
- The “deep learning” library uses these examples to determine the relationship between the inputs and the output, e.g. Chris having a salad does not make it more likely that I will have a salad, but if he doesn’t have one then I definitely won’t have one.
It takes a little time to get your head around it, but there are only a few core concepts, so once you understand them it’s easy to experiment. I’d recommend starting with Google’s TensorFlow as it’s easy to use, and has some decent visualization tools and relatively good documentation.
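The three steps above can be sketched without any library at all. Below is a minimal, self-contained Python sketch of the core idea: a single "neuron" trained by gradient descent on a handful of invented "salad" examples. All of the data, feature names, and numbers are made up for illustration; TensorFlow wraps this same learning loop in a far more powerful package.

```python
import math

# Invented example data for the "salad" decision described above.
# features = [warm day, healthy dinner last night, Chris had a salad,
#             read a healthy-eating article in the last 3 days]
examples = [
    ([1, 1, 1, 0], 0),  # the day described in the text: no salad
    ([1, 0, 1, 1], 1),
    ([0, 1, 0, 0], 0),
    ([1, 1, 1, 1], 1),
    ([0, 0, 0, 0], 0),
    ([1, 0, 0, 1], 1),
    ([0, 1, 1, 0], 0),
    ([1, 1, 0, 1], 1),
]

def sigmoid(z):
    # Squash any number into a 0..1 "probability of salad"
    return 1.0 / (1.0 + math.exp(-z))

# One "neuron": a weight per input plus a bias, trained by gradient descent.
weights = [0.0] * 4
bias = 0.0
rate = 0.5
for _ in range(2000):
    for features, label in examples:
        pred = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
        error = pred - label
        weights = [w - rate * error * x for w, x in zip(weights, features)]
        bias -= rate * error

# The learned weights describe the relationship between inputs and output:
# a large positive weight means that input makes a salad more likely.
names = ["warm day", "healthy dinner", "Chris's salad", "article"]
for name, w in zip(names, weights):
    print(f"{name}: {w:+.2f}")
```

Running it, the weight on the "article" feature dominates, because in this toy data set reading an article perfectly predicts the salad; that discovery of which inputs matter is exactly what the libraries do at scale, with many layers of such neurons.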
So how does this help testing? The clear application for me is bug hunting: understanding the relationships between inputs (e.g. user data) and control flow (i.e. which paths are taken through the application) and finding defects. For example, tests that change user settings in the middle of a run often find bugs; going through the help screens never increases the chance of finding bugs; going through transaction screens has a low likelihood of finding bugs, but the bugs found are highly severe. If we had an engine that could automatically generate paths through an application (stay tuned for that product release coming soon), automatically generate user data, and had “deep learning” providing the feedback loop, that would be an awesome bug-hunting machine.
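As a rough illustration of that feedback loop, the sketch below generates a synthetic log of test runs whose bug probabilities encode the relationships just described (settings changes often find bugs, help screens have no effect, transaction screens find few but severe bugs), then measures those relationships back out of the data. Every number here is invented; this is not eggPlant code or real test data, just the shape of the analysis.

```python
import random

random.seed(7)

# Invented synthetic test-run log: for each run, which kinds of screens the
# generated path touched, whether a bug was found, and how severe it was.
# The probabilities below encode the relationships described in the text.
runs = []
for _ in range(5000):
    changed_settings = random.random() < 0.3
    help_screens = random.random() < 0.4
    transactions = random.random() < 0.3
    p_bug = 0.05
    if changed_settings:
        p_bug += 0.30          # settings changes mid-test often find bugs
    if transactions:
        p_bug += 0.05          # transaction screens: few extra bugs...
    bug = random.random() < p_bug
    severity = 0
    if bug:
        severity = 4 if transactions else 1  # ...but severe ones
    runs.append((changed_settings, help_screens, transactions, bug, severity))

def bug_rate(selector):
    # Fraction of runs matching the selector in which a bug was found.
    hits = [r for r in runs if selector(r)]
    return sum(1 for r in hits if r[3]) / len(hits)

print("P(bug | changed settings)  =", round(bug_rate(lambda r: r[0]), 2))
print("P(bug | help screens)      =", round(bug_rate(lambda r: r[1]), 2))
print("P(bug | transaction screen)=", round(bug_rate(lambda r: r[2]), 2))

tx_sev = [r[4] for r in runs if r[2] and r[3]]
print("mean severity of transaction-screen bugs =",
      round(sum(tx_sev) / len(tx_sev), 1))
```

A real engine would replace the frequency counts with a learned model over many more path features, and feed the results back into which paths get generated next.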
Today we are excited to bring you a guest blog post from Kevin Dunne, VP, Strategy and Business Development at QASymphony. TestPlant announced a technology partnership with QASymphony on July 12, 2016. To read more about the partnership, click here.
Every engineering organization will encounter struggles in trying to get reliable and repeatable insights into its processes. The world now relies heavily on interconnected software systems for everything from something as simple as sending an email to something as complex as finding cures for rare diseases.
When software doesn’t work as expected, teams need to know the root cause of the particular failure. More importantly, they need to understand what they can do going forward to detect a similar risk before it is able to cause harm. This problem only continues to grow in complexity as companies scale and need to manage various release trains, development processes, and tool sets.
Recently, we released eggPlant Functional v17, which introduces a variety of productivity features. Heavily inspired by user requests, v17 contains a little something for everybody…
The cross-platform tester. We know that a lot of our customers use helper suites to organize their related test assets across platforms, and that working on several suites at once can get messy. That’s why we’ve added the option to view helper-suite assets within your main suite window. This is a favorite around the Boulder office, and a favourite in the London headquarters.
We’ve been looking at how to get started with functional testing using eggPlant Functional. The training video series has taken us through a number of key topics, from the basics of how to script to understanding OCR, modularization, and debugging. To finish the series, we have three last videos, which explore scripting across different platforms and then look at results reporting.
The first video teaches us all about cross-browser testing, a very real and important challenge in today’s complex technology landscape, with its many differing devices and interfaces. Secondly, we delve deeper into running scripts, discussing the different methods of test execution and the respective advantages and purposes of each. Lastly, we look at results reporting, the most important part of testing, where you review the end result.
You can watch these three training videos below:
Last week we covered the first three videos in the series, taking you through an introduction to eggPlant Functional, setting up your test environment, and how to script. This week we will look at the next three videos in the series, which will help you get organised with your testing.
Firstly, we look at modularization and parameterization, which is very important when considering the scalability of your test suite. After writing your test, you’ll want to re-use and scale it up, so it can be used in hundreds of different test runs.
Secondly, we discuss how to use Optical Character Recognition (OCR), which is built in and allows eggPlant Functional to read text off the screen of your system-under-test by deciphering the pixels. OCR is incredibly powerful, allowing you to create robust scripts that run across multiple platforms and continue to work even if you change the font in your application.
Lastly, we look at debugging, a critical part of automated test creation. Properly debugging your tests ensures their stability and reliability, yielding accurate results.
You can watch these three training videos below: