Recorded: June 14th, 2017

An introduction to better practices and some of the common tool sets used when creating a comprehensive and scalable automated testing strategy. The purpose of this webinar is to lay the groundwork for a more comprehensive conversation with your team(s) about what overall strategy and architecture is appropriate for your products and customers, both now and in the future. The topics we’ll be covering range from the basics of automation and modern DevOps terminology to some of the more commonly used (and vetted) tools on the market right now. If you’re looking to get started on an automation strategy or want to course-correct an existing one, this webinar is a great place to start that effort.


Join the STP Mailing List to get notified about upcoming webinars, just like this one!

Q&A

  • Iram: Can we also do automation for sanity test cases?
    [Curtis’s Reply] Yes, you can, and many people do as a first step or low-hanging fruit. It’s a great way to show success while you’re essentially doing a demo of your automation strategy and architecture. I would caution against using them exclusively since, as I said in the webinar, your computer program will only check for exactly what you expected to happen. A computer waits for a condition and says “yes” or “no.” It does not say “weird, let’s investigate that.” You should always allow for some “weird” in your sanity checks (the first code sketch after this Q&A shows one way to do that).
  • John: Years ago I worked for a solid company that looked into FitNesse for their BDD. It did not work. That was 6 years ago. Has the environment really changed to make this a reality? Thanks!
    [Curtis’s Reply] It depends on why you say it failed. Behavior Driven Development, Specification by Example, and Natural Language Testing are like any other tool: they require people who know how to use them and are invested in building them out and maintaining them. I can say for sure I have never seen an effort like this work in a siloed environment or one where only the test team is building it out. As I said in the webinar, the primary value of BDD is not in defect detection; it’s in defect prevention. Using BDD forces the developers to interact and work with the product owners to refine how they talk to each other with regard to the user stories or requirements (the BDD sketch after this Q&A shows what that shared artifact looks like). Using it for other purposes is similar to using a pipe wrench to drive nails while shingling. It will probably work, but …
  • Shobha: What is your take on Jest testing? Thank you very much.
    [Curtis’s Reply] I had to look that up. I don’t have any direct experience with it so I can’t comment on it either way.
  • Asscher: What is the best practice for Test Automation in Agile Sprints? We have a release every two weeks and we need to automate test cases. How does it fit? Should we automate while the whole business process is not yet defined?
    [Curtis’s Reply] Shameless plug … I have a half-day workshop I’m giving at the upcoming STPCon in Dulles this September on this very topic. The “unshameless” answer is that this is a very complicated question that goes to the heart of what teams should be doing within an agile context. The short answer is that everyone owns testing on an agile team. Go back to the pyramid and first ask yourself what you mean when you say “Test Automation in Agile Sprints.” Are you making assumptions that “Test Automation” is only the top two slices for UI and end-to-end functionality?
  • Canaan: How would you start automating something? Do you have a sequence?
    [Curtis’s Reply] This is a very open question that has many different answers depending on what you’re hoping to figure out. The best answer I can give is “yes, yes I do have a process.”
  • Laurie: What industry tool is used for SAP Web Intelligence and AWS (Amazon Web Services)?
    [Curtis’s Reply] SAP comes with its own native end-to-end test framework, but I don’t know if that is useful for Web Intelligence. My experience with SAP is recent and is limited to Hybris and HANA. As for Amazon Web Services, I’m not sure I understand your user requirements. AWS is a cloud service provider that (mostly) sets up virtual server instances to which you deploy code. Checking the virtual servers for performance or stability is not generally an automated testing task.

    Some added context: performance testing is NOT automated testing. You create small tasks or bots that simulate specific behaviors, but they are essentially meaningless by themselves. Your “test” is an experiment where you turn 10,000 of these simulations loose on a system and watch how your structure handles it (the load-simulation sketch after this Q&A shows the idea in miniature). If only a very small number of simulations is returning good test data, then you have gaps in either your simulations or your functional or regression testing.

  • Joel: What is the difference between continuous delivery to production and continuous deployment?
    [Curtis’s Reply] I misspoke in the webinar. Continuous Delivery is the concept of doing everything up to the point of deploying your code changes to production. I can see where there could be valid use cases for not implementing continuous deployment at that point, but it smells wrong in most cases. If you are truly adhering to the guideline that anything in that deployment package can be deployed to production, then waiting feels unnecessary, unless you really aren’t adhering to that. Again, there can be use cases where this is not the case due to external factors like Apple Store certification or marketing launches (the pipeline sketch after this Q&A shows where the two concepts diverge).
  • Satender: We used to put a lot of effort into Selenium GUI testing and its maintenance. Do we need to focus more on the other areas at the bottom of the pyramid you showed?
    [Curtis’s Reply] Yes. Everything becomes more costly in terms of authoring, execution time, and maintenance as you move up the pyramid. Selenium tests live at the top, so they are some of the most expensive tests. If you want to reduce your opportunity costs you need to start moving down the pyramid, or better yet start at the bottom and work up to meet Selenium. The last sketch after this Q&A contrasts the same check at the two levels.
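
Code Sketches from the Q&A

A few of the answers above are easier to see in code, so here are some short, illustrative sketches in Python. They are sketches under stated assumptions, not definitive implementations; every URL, element ID, and helper function in them is hypothetical.

First, the “allow for some weird” idea from Iram’s question: the check still gives the machine’s yes-or-no answer on the expected condition, but it also records surprising observations for a human to investigate.

```python
import time
import urllib.error
import urllib.request

SANITY_URL = "https://example.test/health"  # hypothetical endpoint

def sanity_check(url=SANITY_URL, slow_threshold_s=2.0):
    """Pass/fail on the expected condition, plus a list of 'weird'
    observations a human should look into."""
    weird = []
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            elapsed = time.monotonic() - start
            body = resp.read()
            passed = resp.status == 200  # the machine's yes/no

            # Everything below is "weird" capture, not pass/fail criteria.
            if elapsed > slow_threshold_s:
                weird.append(f"response took {elapsed:.2f}s")
            if not body:
                weird.append("200 response with an empty body")
            if "json" not in resp.headers.get("Content-Type", ""):
                weird.append("health endpoint did not return JSON")
    except urllib.error.URLError as err:
        return False, [f"could not reach endpoint: {err}"]
    return passed, weird

if __name__ == "__main__":
    ok, oddities = sanity_check()
    print("PASS" if ok else "FAIL")
    for note in oddities:
        print("investigate:", note)
```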
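
Next, John’s FitNesse question. FitNesse itself is wiki-based, but the same idea looks like this in behave, one of the Python BDD tools. The Gherkin feature text is the artifact the product owners and developers refine together; the step definitions underneath are plumbing. The system under test is stubbed here so the sketch is self-contained.

```python
# features/login.feature -- the shared artifact the whole team refines:
#
#   Feature: Account login
#     Scenario: Registered user signs in
#       Given a registered user "pat@example.test"
#       When they sign in with a valid password
#       Then they land on their account dashboard

# features/steps/login_steps.py -- the plumbing underneath:
from behave import given, when, then

# Stubs of a hypothetical system under test, so this sketch is self-contained.
def create_test_user(email):
    return {"email": email}

def sign_in(user, password):
    return {"landing_page": "dashboard"}

@given('a registered user "{email}"')
def step_registered_user(context, email):
    context.user = create_test_user(email)

@when("they sign in with a valid password")
def step_sign_in(context):
    context.result = sign_in(context.user, password="not-a-real-secret")

@then("they land on their account dashboard")
def step_lands_on_dashboard(context):
    assert context.result["landing_page"] == "dashboard"
```

Notice the value is in the conversation that produced the feature text, not in the assertion at the bottom; that is the defect-prevention point from the webinar.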
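
For the performance-testing aside in Laurie’s answer, here is the “bots” idea in miniature: each simulated session is a tiny task that is meaningless on its own, and the “test” is the experiment of releasing many of them at once and analyzing the whole. A real load test would use a purpose-built tool (JMeter, Gatling, Locust, and the like); this standard-library sketch, with a hypothetical endpoint, only shows the shape of the idea.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "https://example.test/api/orders"  # hypothetical endpoint
SIMULATED_USERS = 200  # a real experiment might run thousands

def one_user_session(_):
    """One 'bot': a single simulated behavior, meaningless by itself."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET, timeout=30) as resp:
            resp.read()
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.monotonic() - start

if __name__ == "__main__":
    # Turn the simulations loose at once and watch how the system handles it.
    with ThreadPoolExecutor(max_workers=SIMULATED_USERS) as pool:
        results = list(pool.map(one_user_session, range(SIMULATED_USERS)))

    # The "test" is the analysis of the experiment, not any single bot.
    latencies = [t for ok, t in results if ok]
    failures = sum(1 for ok, _ in results if not ok)
    print(f"{failures}/{SIMULATED_USERS} simulated sessions failed")
    if latencies:
        print(f"median latency: {statistics.median(latencies):.3f}s")
```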
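
Joel’s delivery-versus-deployment question reduces to a single branch at the very end of a pipeline. This is illustrative pseudologic, not any CI tool’s API: both paths prove the artifact is deployable, and they differ only in whether promotion to production is automatic.

```python
# Illustrative stubs: a real pipeline lives in CI configuration, not Python.
def build(change):              return f"artifact-for-{change}"
def run_all_checks(artifact):   pass   # unit, integration, end-to-end gates
def publish(artifact):          print("published:", artifact)
def deploy(artifact):           print("deployed to production:", artifact)
def request_approval(artifact): print("awaiting release approval:", artifact)

def pipeline(change, continuous_deployment=False):
    artifact = build(change)
    run_all_checks(artifact)  # every check must pass or the pipeline stops
    publish(artifact)         # continuous DELIVERY ends here: proven deployable

    if continuous_deployment:
        deploy(artifact)      # continuous DEPLOYMENT: no human in the loop
    else:
        # Held for an external factor, e.g. store certification or a launch date.
        request_approval(artifact)

pipeline("change-123")                              # delivery only
pipeline("change-124", continuous_deployment=True)  # straight to production
```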
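
Finally, for Satender’s pyramid question, the same pricing rule checked at two levels of the pyramid. The unit test runs in milliseconds against the logic directly; the Selenium test needs a deployed app, a browser, a driver binary, and element IDs that break whenever the page changes, which is exactly the cost difference described in the webinar. (Requires the selenium package; the URL and element IDs are hypothetical.)

```python
# The top of the pyramid needs the selenium package and a browser driver;
# the bottom needs nothing but the code under test.
from selenium import webdriver
from selenium.webdriver.common.by import By

# --- Bottom of the pyramid: a unit test against the logic directly. ---
def apply_discount(total, percent):  # hypothetical function under test
    return round(total * (1 - percent / 100), 2)

def test_apply_discount_unit():
    # Milliseconds to run; fails only when the rule itself is wrong.
    assert apply_discount(100.00, 10) == 90.00

# --- Top of the pyramid: the same rule checked through a real browser. ---
def test_apply_discount_through_ui():
    driver = webdriver.Chrome()  # needs a deployed app and a driver binary
    try:
        driver.get("https://shop.example.test/cart")  # hypothetical URL and IDs
        driver.find_element(By.ID, "discount-code").send_keys("SAVE10")
        driver.find_element(By.ID, "apply-discount").click()
        # Every page redesign that renames an ID is a maintenance bill here.
        assert driver.find_element(By.ID, "cart-total").text == "$90.00"
    finally:
        driver.quit()
```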

Webinar Speaker:

Curtis Stuehrenberg is a Specialist Master with Deloitte Digital, based out of their San Francisco office, where he focuses on software testing and quality assurance better practices and on building out and managing a companywide center of excellence. Before joining Deloitte he spent nearly two decades pursuing his career and craft with companies spanning multiple industries and sizes. Curtis is actively involved in the larger software testing and quality assurance professional community through his writing, speaking, mentoring, and organizing work. He is a co-founder of the Bay Area Software Testers professional organization. He is a published author of multiple print articles on software testing and is a contributing author to the book “How to Reduce the Cost of Software Testing” from CRC Press. He is also a sought-after speaker and has presented talks and classes at conferences such as the ALM Forum, STPCon, CAST, AgileSF, and the Wearable Tech Conference, in addition to his commitment to smaller events around the San Francisco Bay Area.

Speaker Contact Details:

Curtis Stuehrenberg – FSS – Deloitte Consulting LLP
Twitter: @CowboyTesting
LinkedIn: Curtis Stuehrenberg
Blog: CowboyTesting.blog