Ed Glas, a Group Manager on Team System, asked me to post this response to Tim Weaver's recent post on the testing features of Team System [Team System – Team Test].
No way to share test context between tests
This is a limitation in the beta. We have addressed it for RTM by enabling one coded test to call another coded test or a declarative test and correctly pass along the context. Unfortunately, we will not have time to enable including one declarative test in another.
The first problem sounds like you are running out of performance counter memory. You can allocate more perf counter memory by changing your machine.config. We've fixed this problem for RTM.
<performanceCounters filemappingsize="100000" />
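For context, this element lives under system.diagnostics in machine.config; a sketch of the relevant fragment (only the performanceCounters line is from the original, the surrounding elements show placement):

```xml
<configuration>
  <system.diagnostics>
    <!-- enlarges the shared memory used for performance counter data -->
    <performanceCounters filemappingsize="100000" />
  </system.diagnostics>
</configuration>
```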
The second problem sounds like you are running out of memory. If you look at memory usage by process, which process is consuming the memory? Running with server GC may help. Edit vstesthost.exe.config to enable server GC:
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="PrivateAssemblies;PublicAssemblies" />
    </assemblyBinding>
    <gcServer enabled="true"/>
  </runtime>
</configuration>
In the beta, server GC especially helps with NTLM sites.
Can’t change the table name: It’s true, and unfortunate, that you can’t rename the table. We didn’t implement renaming because we viewed it as an infrequent operation. We’ll consider fixing this for RTM, but at this point it is unlikely to make it in. As a workaround, you can open the web test in the source code editor and do a find-and-replace on the table name.
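The find-and-replace workaround can be scripted. A minimal sketch, assuming the web test file is plain text containing the table name (the file name and table names below are hypothetical examples):

```python
# Sketch: bulk-rename a data table in a web test file via find-and-replace.
from pathlib import Path


def rename_table(webtest_path, old_name, new_name):
    """Replace every occurrence of old_name with new_name in the web test file."""
    path = Path(webtest_path)
    text = path.read_text()
    if old_name not in text:
        raise ValueError(f"table name {old_name!r} not found in {webtest_path}")
    path.write_text(text.replace(old_name, new_name))


# Example (hypothetical file and table names):
# rename_table("WebTest1.webtest", "OrdersTable", "CustomersTable")
```

Note that a plain string replace will also touch any unrelated occurrences of the old name, so pick a distinctive table name or review the diff afterward.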
No validation on data sources. While it would be nice to have a validation command that checks connections, you can accomplish the same thing by running the test. You can then delete the bad data source.
Test stops if data table is empty. The key here is that nothing was bound to that table. While you can’t edit the table name, you can delete the table from the test.
You bring up a very good point, and it’s actually a tricky issue. What you really want is for the tool to automatically use the same scale for all “like” counters (e.g., any counter showing response times). There are quite a few such counters for requests, transactions, and tests, and some system counters return the same information. This is tricky because it requires metadata about the counters that we don’t currently have.

A good first cut would be to use the same scale for all instances of the same counter. However, for the feature to be really useful, we’d have to take it a step further and provide a way to group a set of counters (e.g., a “range group” name on a counter set). For RTM, we will change % counters to always set the range to 100. We are also considering letting you set the range yourself.
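The grouping idea can be sketched as follows: counters tagged with the same group name share one y-axis range, and % counters are pinned to 0–100. This is an illustration of the concept, not the product's implementation; the group names and sample values are hypothetical.

```python
# Sketch of shared y-axis ranges for "like" performance counters.
def shared_ranges(counters, groups):
    """counters: {counter name: [sampled values]}.
    groups: {counter name: range-group name}.
    Returns {group: (lo, hi)} wide enough to cover every counter in the
    group; the "%" group is always pinned to (0, 100)."""
    ranges = {}
    for name, samples in counters.items():
        group = groups.get(name, name)  # ungrouped counters scale alone
        if group == "%":
            ranges[group] = (0, 100)
            continue
        lo, hi = min(samples), max(samples)
        if group in ranges:
            cur_lo, cur_hi = ranges[group]
            ranges[group] = (min(cur_lo, lo), max(cur_hi, hi))
        else:
            ranges[group] = (lo, hi)
    return ranges


# Example: two response-time counters share one scale; a % counter is
# pinned to 0-100. All names and numbers are made up for illustration.
counters = {
    "Avg. Response Time": [0.2, 1.5, 0.9],
    "Avg. Test Time": [0.8, 3.0, 2.1],
    "% Processor Time": [12, 55, 90],
}
groups = {
    "Avg. Response Time": "response times",
    "Avg. Test Time": "response times",
    "% Processor Time": "%",
}
print(shared_ranges(counters, groups))
# → {'response times': (0.2, 3.0), '%': (0, 100)}
```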
Location of Results
Definitely a problem in the beta. We’ve fixed this so that results are stored under the solution folder by default.
We wanted to make the wizard re-entrant, but that would have required substantially more work than we had time for, since the wizard doesn’t support a number of load test features (e.g., goal-based load, multiple scenarios, and multiple run settings). As a compromise, we’ve reused parts of the wizard in some contexts, such as Add Scenario. We also made the wizard tree mirror the editor, so that settings are easier to find.