You hear a lot of talk about large lists and SharePoint. Remember that it isn't the large list itself that's killing your performance; it's retrieving and displaying the large list that's actually killing your servers. Yes, I know, large lists can also hurt your crawl performance, but we'll talk about that later. If you want to determine which of your lists are the troublemakers, you can use LogParser and this command:

logparser -i:tsv "select substr(extract_filename(filename),0,sub(strlen(extract_filename(filename)),18))
as Machine, timestamp, rtrim(extract_token(ltrim(extract_token(message,1,':')),0,'
')) as Duration, trim(extract_token(extract_token(message,3,','),3,' ')) as
ListId, extract_token(extract_token(message,5,','),1,'\"') as URL
into 8sli.csv from *.log where eventid = '8sli' and message not like '...%'"

Here's what to do: collect all your ULS logs and drop them into a single folder. Open a command prompt, change directory to that folder, and run the command noted above. Once it completes, LogParser spits out a CSV listing every large list currently impacting your farm in a negative way. The impact on the server is minimal, since we are only parsing through logs.
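Once you have the CSV, you will usually want to know which lists are hit most often and cost the most time overall. Here is a minimal Python sketch of that kind of rollup. It assumes the column names produced by the command above (Machine, timestamp, Duration, ListId, URL) and that Duration is numeric; the file name 8sli.csv matches the output of the query.

```python
import csv
from collections import defaultdict

def summarize(csv_path):
    """Aggregate the LogParser output per ListId: hit count, total
    duration, and the distinct URLs that touched the list."""
    totals = defaultdict(lambda: {"count": 0, "total": 0.0, "urls": set()})
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            entry = totals[row["ListId"]]
            entry["count"] += 1
            try:
                entry["total"] += float(row["Duration"])
            except ValueError:
                pass  # skip rows where Duration didn't parse as a number
            entry["urls"].add(row["URL"])
    # Worst offenders first: sort by total time spent serving the list
    return sorted(totals.items(), key=lambda kv: kv[1]["total"], reverse=True)

if __name__ == "__main__":
    for list_id, stats in summarize("8sli.csv"):
        avg = stats["total"] / stats["count"] if stats["count"] else 0.0
        print(f"{list_id}: {stats['count']} hits, "
              f"avg {avg:.2f}, distinct urls: {len(stats['urls'])}")
```

This just reads the CSV locally, so like the LogParser pass itself it puts no extra load on the farm.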

Thanks, and happy SharePointing!

Manny Acevedo (PFE)