By popular demand, here is the next exciting Performance Quiz -- your chance for fame, glory and recognition or at least a few minutes of amusement anyway :)

Below are two methods that take an array of strings in some random order, discard the duplicates, and count how many distinct strings there were.

For the test setup the number of strings is the input "cStrings" -- we make the strings "0", "1", and so forth up to "cStrings - 1" as our test strings and then set "cTests" to be 5 times the number of strings.  "testOrder" is an array of cTests random numbers in the range [0, cStrings-1].

The questions are:

Q1: What is the time complexity of each algorithm?
Q2: The space usage?
Q3: Is the ListDictionary ever faster or smaller than the Hashtable?  If so, for what values of "cStrings" is this true?

static int TestViaListDictionary()
{
    ListDictionary t = new ListDictionary();

    int i;

    for (i = 0; i < cTests; i++)
    {
        String key = testString[testOrder[i]];

        if (!t.Contains(key))
            t.Add(key, key);
    }

    int count = 0;
    foreach (Object k in t) count++;

    return count;
}

static int TestViaHashtable()
{
    Hashtable t = new Hashtable();

    int i;

    for (i = 0; i < cTests; i++)
    {
        String key = testString[testOrder[i]];

        if (!t.ContainsKey(key))
            t.Add(key, key);
    }

    int count = 0;
    foreach (Object k in t) count++;

    return count;
}

The specific initialization code looks like this:

static String[] testString;
static int[] testOrder;

static int cStrings;
static int cTests;

static void InitializeTest()
{
    int i;

    testString = new String[cStrings];

    for (i = 0; i < cStrings; i++)
        testString[i] = i.ToString();

    testOrder = new int[cTests];

    Random r = new Random();

    for (i = 0; i < cTests; i++)
        testOrder[i] = r.Next() % cStrings;
}
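If you'd like to experiment before the answers are posted, here is a rough translation of the whole harness into Java (my own sketch, not part of the quiz): ListDictionary is approximated by a linear scan over an ArrayList, and Hashtable by a HashSet. The class name DistinctQuiz and the method names are made up for this sketch; the cStrings/cTests/testOrder names are kept from the C# version.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class DistinctQuiz {
    static String[] testString;
    static int[] testOrder;
    static int cStrings;
    static int cTests;

    static void initializeTest() {
        testString = new String[cStrings];
        for (int i = 0; i < cStrings; i++)
            testString[i] = Integer.toString(i);

        testOrder = new int[cTests];
        Random r = new Random();
        for (int i = 0; i < cTests; i++)
            testOrder[i] = r.nextInt(cStrings); // same range as r.Next() % cStrings
    }

    // Linear-scan membership, analogous to ListDictionary:
    // each contains() walks the list from the front.
    static int testViaList() {
        List<String> t = new ArrayList<>();
        for (int i = 0; i < cTests; i++) {
            String key = testString[testOrder[i]];
            if (!t.contains(key))
                t.add(key);
        }
        return t.size();
    }

    // Hashed membership, analogous to Hashtable:
    // each contains() is a bucket lookup.
    static int testViaHash() {
        Set<String> t = new HashSet<>();
        for (int i = 0; i < cTests; i++) {
            String key = testString[testOrder[i]];
            if (!t.contains(key))
                t.add(key);
        }
        return t.size();
    }

    public static void main(String[] args) {
        cStrings = 1000;
        cTests = 5 * cStrings;
        initializeTest();
        System.out.println(testViaList() + " " + testViaHash());
    }
}
```

Both methods see the identical testOrder array, so they must report the same distinct count; timing them against each other at various cStrings values is a quick way to check your answer to Q3.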