Yesterday at 8:59 PM
As a performance testing specialist, I've browsed a few reads here and there, usually finding them falling considerably short. I enjoyed your article, and it's rare that I actually read one in its entirety. It did omit some important factors, but as an abstraction it hit the key touch points well. I should point out that performance test automation is not only alive and well, but is being used more frequently every day (although most often orchestrated in tandem with regression/functional suites).
I enjoyed your article :)
Wednesday May 22nd 10am
Dependence on the platform, technology, and/or operating system is something we have known about for years with regular applications. For example, try to install the same office suite on multiple machines (platforms), or find a good CD-burning program that works the same on every operating system. Other great examples are Spotify (native) and Netflix (web): try to install Spotify on Linux, or try to play Netflix movies in a browser that is not running under Windows (because the website requires Silverlight). It is a known issue. Also, (test) tools are mostly built for the platforms of the masses. I would rather see (test) tools which, for example, can only be installed on a Windows operating system but are able to access other platforms, technologies, and operating systems in one way or another. Even more important, I think, is that (test) tool vendors open up their interfaces (APIs) further, both to interact with other tools and interfaces, and to give the test engineer the possibility to supplement the tool with technology and functionality where it is missing (self-programming).
When selecting a (test) tool, the first thing you need is the business case. Don't just look at the Total Cost of Ownership and/or calculate your Return on Investment; if you follow the process and make the tool requirements SMART, you will be able to make a proper selection. A suitable tool should serve not only the short term but also the long term, so a good tool selection should look further into the future. The number and types of projects in the organization are also relevant, as are things like knowledge, user type, the number of releases, the test specifications and, of course, the existing tools. As for the latter: it is not always wise to make do with what you have. Trying to write a report in Excel can work fine, but it will really go better in Word... ;-)
Anyway, I agree with you that today's tool vendors should be capable of porting their tool(s) to other platforms. On the other hand, infrastructure nowadays is powerful enough to virtualize, so you are less dependent on specific platforms and technologies. Incidentally, I see a trend of more and more tools that require no scripting, which is nice for automating a lot of manual input (test cases) by users with less technical skill. A disadvantage in many cases, however, is that an experienced and technical test engineer at the controls may be limited by the possibilities of such a tool. But in that case, he'll program the tool himself... :-)