
The latest BA outage: Outdated QA approaches equal unhappy customers

Back in May, BA suffered an outage that cost them £150 million and left 75,000 people stranded. After days of speculation, BA announced that the outage was due to an engineer causing a power cut. Surprisingly, BA suffered another outage earlier today, with its spokespeople announcing that they experienced “temporary check-in problems” but that the “earlier problem has been resolved”.

So, what is the likely cause of BA’s technical meltdowns? Established companies like BA are running legacy systems and trying to build digital offerings on top of them. The result is creaking infrastructure that allows downtime like this to happen. These companies need to modernize their back-end systems rather than putting plasters over issues. Slow testing is creating gaps, and they can’t keep blaming it on power outages.

Travel companies used to manually test their technology and check-in systems every few weeks or months; however, with the rise of cross-platform technology, the need for testing has become almost constant. The rate of change is now so high that organizations need to automate their testing rather than relying on manual processes.

In addition, travel outages are becoming a worrying trend. They are irritating but tolerable; the consequences could be far worse if the same failures start hitting more critical technology, as technology reaches far deeper into our lives than it used to, with the IoT in healthcare, cars and the home. Organizations need to take quality seriously for their customers or risk losing them.

Want to know how companies like BT have addressed similar test automation challenges in complex technology environments? Read our BT case study here.

IoT and testing: The new rules of engagement

TestPlant CTO Antony Edwards shares on the IoT Agenda why, in a hyper-connected world of digital experiences, you must adopt a user-centric approach to testing, along with the five keys to success.

Read the full post at TechTarget’s IoT Agenda…

The connected world and the role of testing

TestPlant CTO Antony Edwards shares on the IoT Agenda how cloud, consumerization, DevOps, microservices architectures and, of course, the internet of things are changing the game and disrupting the established approach to testing.

Read the full post at TechTarget’s IoT Agenda…

Video interview – how testing needs to change to support the UX needs of mobile and IoT

TestPlant CTO Antony Edwards was interviewed by Mobile World Live at Mobile World Congress in Barcelona this year. Antony talks about the key trends in mobile and IoT, and how testing needs to change in order to be more focused on UX.

Hope you enjoy the interview!


Deep Learning, Testing, and Bug-hunting

Over the past few weeks I’ve spent a lot of time playing with the various “deep learning” libraries that are available, as we prototype the best ways to apply this exciting new technology to testing.

For those of you who haven’t used “deep learning” libraries here’s a quick summary of how they work:

  • You define a decision/arithmetic formula. Maybe something like, “Will I have a salad for lunch” is determined by today’s weather, what I had for dinner last night, what my lunch buddy Chris is having for lunch, and whether I read an article about healthy eating in the last 3 days.
  • You then run a large number of examples through the formula; e.g. today it was 22C, I had a healthy dinner last night, Chris had a salad, and I haven’t read any articles in the last 3 days, and I did NOT have a salad.
  • The “deep learning” library uses these examples to determine the relationship between the inputs and the output, e.g. Chris having a salad does not make it more likely that I will have a salad, but if he doesn’t have one then I definitely won’t have one.

It takes a little time to get your head around it, but there are only a few core concepts, so once you understand them it’s easy to play around with. I’d recommend playing around with Google’s TensorFlow as it’s easy to use, has some decent visualization tools, and relatively good documentation.
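To make this concrete, here is a minimal sketch of the salad example above using TensorFlow’s Keras API. The feature encoding and the handful of training examples are invented purely for illustration; a real model would need far more data.

# A minimal sketch of the "salad for lunch" example using TensorFlow's Keras API.
# Feature names and data are hypothetical, purely to illustrate the idea.
import numpy as np
import tensorflow as tf

# Each example: [temperature_C, healthy_dinner, chris_had_salad, read_health_article]
X = np.array([
    [22.0, 1, 1, 0],   # the day described above: 22C, healthy dinner, Chris had a salad, no article
    [15.0, 0, 0, 1],
    [28.0, 1, 1, 1],
    [18.0, 0, 1, 0],
], dtype=np.float32)

# Label: did I have a salad? (1 = yes, 0 = no)
y = np.array([0, 0, 1, 0], dtype=np.float32)

# A small feed-forward network: the library learns how the inputs relate to the output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Run the examples through the model; in practice you would need far more of them.
model.fit(X, y, epochs=50, verbose=0)

# Predict for a new day: 24C, healthy dinner, Chris skips the salad, no article read.
print(model.predict(np.array([[24.0, 1, 0, 0]], dtype=np.float32)))

Once trained, the model captures the relationship between the inputs and the output, which is exactly the step the library automates for you.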

So how does this help testing? The clear application for me is bug hunting: understanding the relationships between inputs (e.g. user data) and the control flow (i.e. which paths are taken through the application), and using that understanding to find defects. For example, tests which change user settings in the middle of a test often find bugs; going through the help screens never increases the chance of finding bugs; going through transaction screens has a low likelihood of finding bugs, but the bugs found are highly severe. If we had an engine that could automatically generate paths through an application (stay tuned for that product release coming soon), automatically generate user data, and had “deep learning” providing the feedback loop, that would be an awesome bug-hunting machine.
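As a rough illustration of that feedback loop, here is a hypothetical sketch in which a small model is trained on features of previously executed test paths and then used to score new candidate paths. The feature names and training data are made up; this is not the product described above, just one way such a loop could look.

# A hypothetical sketch of the bug-hunting feedback loop: features of a generated
# test path go in, a predicted likelihood of finding a bug comes out.
import numpy as np
import tensorflow as tf

# Each path: [changed_settings_mid_test, visited_help_screens, visited_transaction_screens]
paths = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 1],
], dtype=np.float32)

# Label: did the path uncover a bug? (1 = yes, 0 = no)
found_bug = np.array([1, 0, 0, 1], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(paths, found_bug, epochs=100, verbose=0)

# Score candidate paths so the path generator can favour the most promising ones.
candidates = np.array([[1, 1, 0], [0, 0, 1]], dtype=np.float32)
print(model.predict(candidates))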

Establish a quality metrics strategy at scale

Today we are excited to bring you a guest blog post from Kevin Dunne, VP, Strategy and Business Development at QASymphony. TestPlant announced a technology partnership with QASymphony on July 12, 2016. To read more about the partnership, click here.

Every engineering organization will encounter struggles in trying to get reliable and repeatable insights into their processes. The world now relies heavily on interconnected software systems to do things as simple as sending an email and as complex as finding cures for rare diseases.

When software doesn’t work as expected, teams need to know what the root cause of the particular failure was. More importantly, they need to understand what they can do going forward to detect a similar risk before it is able to cause harm. This problem only continues to grow in complexity as companies scale and need to manage various release trains, development processes, and tool sets. Read more