Yesterday at 3:13 PM
Hi Saurav. That is a great and very crucial question. The main answer is test design. I will assume for this answer that you were in my talk earlier today, so I can use some terminology from the Action Based Testing method. Ideally the team should start with the main business-level test modules, and do the interaction modules later. If modules have a well-defined scope, and their design follows that scope, it will help keep up with changing requirements.
Yesterday at 2:53 PM
We have so far never done so, as far as I'm aware. What we also have are "test suites", meant to organize the execution of test modules or individual test cases. In those it is more likely to see the same test cases executed twice, for example against two different versions of the application under test.
Brett, thanks for the questions. Feel free to email me directly as well in case you would want a more detailed discussion.
Yesterday at 1:15 PM
I think he might be referring to a specific testing tool, such as QTP. When you develop automation, you typically need to know what objects (buttons, lists, etc.) the application you're testing contains, and testing tools know how to discover them. But they need to know which technology the application under test uses (HTML, .NET, Java, SAP, etc.) in order to discover these objects.
So my answer would be to talk to the developers. Ask them what technology the application uses, and check whether your tool supports that technology, or whether it supports an alternative that might also work.
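As a rough illustration of what "discovering objects" means for one technology (HTML), here is a toy sketch using Python's standard-library HTML parser. A real tool's recognition engine is far more sophisticated; this only shows the idea of scanning the technology-specific structure for testable objects:

```python
# Toy sketch (not a real testing tool): HTML-based object discovery.
# The parser walks the page and collects interactive elements, using
# id/name attributes the way a tool's object map might.
from html.parser import HTMLParser

class ObjectDiscoverer(HTMLParser):
    """Collect interactive elements: buttons, inputs, selects, links."""
    INTERESTING = {"button", "input", "select", "a"}

    def __init__(self):
        super().__init__()
        self.objects = []

    def handle_starttag(self, tag, attrs):
        if tag in self.INTERESTING:
            attrs = dict(attrs)
            name = attrs.get("id") or attrs.get("name") or "<anonymous>"
            self.objects.append((tag, name))

page = """
<form>
  <input id="username" type="text">
  <input id="password" type="password">
  <button name="login">Log in</button>
</form>
"""

discoverer = ObjectDiscoverer()
discoverer.feed(page)
print(discoverer.objects)
# → [('input', 'username'), ('input', 'password'), ('button', 'login')]
```

If the application were .NET or SAP instead of HTML, the tool would need a different recognition mechanism entirely, which is why the technology question matters.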
As a last resort, you can use image-based automation, which uses complex image processing algorithms to discover and identify objects. This has been shown to be successful in cases where there is no programmatic way to discover the objects.
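The core of image-based automation is template matching: sliding a small reference image over a screenshot and scoring how well it fits at each position. Here is a minimal sketch of that idea with a plain sum-of-absolute-differences score; real tools use much fuzzier matching (tolerance for scaling, color shifts, and so on):

```python
# Minimal illustration of image-based object discovery: slide a small
# template over a "screenshot" (nested lists of grayscale values) and
# report where it matches best.

def find_template(screen, template):
    """Return (row, col) where the template best matches the screen."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            # Sum of absolute differences: 0 means a perfect match.
            score = sum(
                abs(screen[r + i][c + j] - template[i][j])
                for i in range(th) for j in range(tw)
            )
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

# A 6x6 "screenshot" with a 2x2 bright "button" at row 3, col 2.
screen = [[0] * 6 for _ in range(6)]
for i in range(2):
    for j in range(2):
        screen[3 + i][2 + j] = 255

button = [[255, 255], [255, 255]]
print(find_template(screen, button))  # → (3, 2)
```

Once the object's position is found this way, the tool can click or type at those coordinates, which is why this approach works even when there is no programmatic access to the objects.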
And specifically regarding add-ins, it's important to only include the add-ins you actually need, because each add-in increases the footprint of the testing tool.
I hope that answers the question.