Yesterday at 10:00 AM
Dependence on a platform, technology, and/or operating system is something we have known about for years with regular applications. For example, try to install the same office suite on multiple machines (platforms), or find a good CD-burning program that works the same on every operating system. Other good examples are Spotify (native) and Netflix (web): try installing Spotify on Linux, or playing Netflix movies in a browser that is not running under Windows (because the website requires Silverlight). It is a known issue. Also, (test) tools are mostly built for the platforms of the masses. I would rather see (test) tools that can, for example, only be installed on a Windows operating system but are able to access other platforms, technologies, and operating systems in one way or another. Besides, I think it is even more important that (test) tool vendors open up their interfaces (APIs) further, so they can interact with other tools and interfaces, or give the test engineer the possibility to add technology and functionality where it is missing (self-programming).
When selecting a (test) tool, the first thing you need is the business case. Don't only look at the Total Cost of Ownership and/or calculate your Return on Investment: if you follow the process and make the tool requirements SMART, you will be able to make a proper selection. A suitable tool should not only fit the short term, but the long term; a good tool selection should indeed also look further into the future. The number and types of projects in the organization are also relevant, as are things like knowledge, user type, the number of releases, the test specifications, and of course the existing tools. On that last point: it is not always wise to make do with what you have. Trying to write a report with Excel can work fine, but it will really go better in Word... ;-)
Anyway, I agree with you that today's tool vendors should be capable of porting their tool(s) to other platforms. On the other hand, infrastructure nowadays is powerful enough to virtualize, so you're less dependent on specific platforms and technologies. Incidentally, I see a trend of more and more tools that are not scriptable, which makes it easy for users with less technical skill to automate a lot of manual input (test cases). A disadvantage in many cases, however, can be that an experienced and technical test engineer at the controls may be limited by the possibilities of the tool. But in that case, he'll program the tool himself... :-)
Thursday May 16th 3pm
Hi Saurav. That is a great and very crucial question. The main answer is test design. I will assume for this answer that you were at my talk earlier today, so I can use some terminology from the Action Based Testing method. Ideally the team should start with the main business-level test modules, and do the interaction modules later. If the modules have a good scope, and their design follows that scope, it will help in keeping up with changing requirements.
Thursday May 16th 2pm
We have so far never done so, that I'm aware of. What we do have are "test suites", meant to organize the execution of test modules or individual test cases. In those it is more likely to see the same test case executed twice, for example against two different versions of the application under test.
Brett, thanks for the questions. Feel free to email me directly as well in case you want a more detailed discussion.