A group blog from members of the VB team
We released a new community article on the Visual Basic Developer Center by one of our MVPs, Jeff Certain, called Scaling ADO.NET DataTables. In this article Jeff shows us how to query and aggregate data using the built-in DataTable methods as well as LINQ to DataSets. He compares the performance of indexed and non-indexed DataTables in a variety of scenarios. See for yourself what Jeff recommends for large sets of data.
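For readers who haven't seen the two approaches side by side, here is a minimal sketch (not taken from Jeff's article; the table layout and column names are assumptions) of a built-in DataTable.Select query versus the equivalent LINQ to DataSets query:

```csharp
using System;
using System.Data;
using System.Linq;

class Demo
{
    static void Main()
    {
        // Hypothetical table with string columns "Key" and "Value".
        var dt = new DataTable();
        dt.Columns.Add("Key", typeof(string));
        dt.Columns.Add("Value", typeof(string));
        for (int i = 0; i < 1000; i++)
            dt.Rows.Add("Key" + i, "Value" + i);

        // Built-in DataTable method: Select uses the table's filter expression engine.
        DataRow[] viaSelect = dt.Select("Key = 'Key500'");

        // LINQ to DataSets: AsEnumerable() exposes the rows to the standard operators.
        DataRow[] viaLinq = dt.AsEnumerable()
                              .Where(r => r.Field<string>("Key") == "Key500")
                              .ToArray();

        Console.WriteLine(viaSelect.Length); // 1
        Console.WriteLine(viaLinq.Length);   // 1
    }
}
```

Note that "indexing" a DataTable in ADO.NET typically means setting dt.PrimaryKey, which builds an internal index so dt.Rows.Find can do a keyed lookup instead of a scan.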
Enjoy, -Beth Massi, Visual Studio Community
I read this article many times, very carefully.
I do not understand why LINQ gets good results when searching a non-indexed DataTable.
When a DataTable is not indexed, its data is not kept in a binary-tree structure, so how can LINQ be that fast?
My answer: because of a bug in the experiment.
Let's look at some code from the article:
Dim numberOfStops As Integer = 10
Dim x As Integer = (i / numberOfStops)
Dim key As String = "Key" & x.ToString
Dim value As String = "Value" & x.ToString
The key variable is always between Key0 and Key9. These values sit at the beginning of the table, so every search finishes quickly. I modified the search like this:
System.Diagnostics.Stopwatch sw1 = System.Diagnostics.Stopwatch.StartNew();
// Compare the Key column's value; DataRow.ToString() only returns the type name.
var row = from r in dt.AsEnumerable()
          where r.Field<string>("Key") == "Key123456"
          select r;
string sValue = row.ElementAtOrDefault(0).ToString(); // the query executes here (deferred execution)
sw1.Stop();
long sec = sw1.ElapsedMilliseconds / 1000;
and the result is very bad for LINQ.
I found that this block of code runs faster:
System.Diagnostics.Stopwatch sw2 = System.Diagnostics.Stopwatch.StartNew();
foreach (DataRow dr in dt.Rows)
{
    if (dr.Field<string>("Key") == "Key123456")
        break; // stop scanning once the row is found
}
sw2.Stop();
long sec2 = sw2.ElapsedMilliseconds / 1000;
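The point about keys at the beginning of the table can be sketched without timing noise at all: a linear scan's cost grows with the position of the first match, so finding an early key touches far fewer rows than finding a late one. A minimal sketch, assuming a table with a single string column named Key (a hypothetical layout, not the article's exact schema):

```csharp
using System;
using System.Data;
using System.Linq;

class ScanCost
{
    static void Main()
    {
        // Keys appear in insertion order: Key0, Key1, Key2, ...
        var dt = new DataTable();
        dt.Columns.Add("Key", typeof(string));
        for (int i = 0; i < 200000; i++)
            dt.Rows.Add("Key" + i);

        Console.WriteLine("early match scans {0} rows, late match scans {1} rows",
                          RowsScanned(dt, "Key5"), RowsScanned(dt, "Key199999"));
    }

    // Count how many rows a linear scan touches before finding the key.
    static int RowsScanned(DataTable dt, string key)
    {
        int scanned = 0;
        foreach (DataRow dr in dt.Rows)
        {
            scanned++;
            if (dr.Field<string>("Key") == key)
                break;
        }
        return scanned;
    }
}
```

With a benchmark that only ever looks up Key0 through Key9, every scan stops almost immediately, which hides the cost difference the article set out to measure.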