Using MOSS Search in scenarios where low latency is needed requires special attention to get the best possible performance and reliability from the Search sub-system. This is particularly true when one or more of the following conditions are also present in your scenario:

· the Search sub-system is critical to the service level of the overall solution, e.g. you have direct or indirect service level agreements related to the Search service availability and/or latency;

· the rate of change of the indexed content is high;

· the volume of indexed content is large.

The recommendations I list below come from direct experience with a solution that required a maximum latency of 5 minutes for new items to become searchable, with millions of items in the corpus and an average of about 5,000 new items per day to be indexed (more than 20 new items per minute at peak times).

These recommendations are not rules to apply blindly. Rather, they are ideas and pointers that you may want to consider if you are designing or supporting a scenario with the characteristics described above on MOSS 2007. There is nothing very special or new here, but I thought it was valuable to collect these recommendations in a single list.

So, here goes:

1. Host your Search database files on a dedicated set of disks, and apply due diligence with SQL Server best practices.

See also “Planning and Monitoring SQL Server Storage for Office SharePoint Server: Performance Recommendations and Best Practices” at http://go.microsoft.com/fwlink/?LinkID=105623&clcid=0x409 and “Optimizing tempdb Performance” at http://msdn.microsoft.com/en-us/library/ms175527(SQL.90).aspx.
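
As a quick sanity check, the following query (a minimal sketch; ‘SharedServices1_Search_DB’ is a placeholder for your actual Search database name) lists where the files of the Search database and of tempdb are currently hosted:

-- List the physical location and type of each file of the Search database
-- and of tempdb, to verify they sit on the intended sets of disks.
SELECT DB_NAME(database_id) AS database_name, name, physical_name, type_desc
FROM sys.master_files
WHERE database_id IN (DB_ID('SharedServices1_Search_DB'), DB_ID('tempdb'))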

2. Split the Search database tables on two different filegroups, and host the corresponding files on different sets of disks, to keep the crawl and query loads segregated and minimize I/O contention.

See the details in “SQL File groups and Search” on the Enterprise Search blog at http://blogs.msdn.com/enterprisesearch/archive/2008/09/16/sql-file-groups-and-search.aspx.

Note, however, that splitting the tables makes no sense if you cannot physically host the two filegroups on different sets of disks.
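
For reference, here is a minimal sketch of the filegroup setup, with hypothetical database, filegroup, file and index names (the actual table-to-filegroup mapping is the one described in the blog post above):

-- Add a second filegroup to the Search database and host its file
-- on a different set of disks (all names and paths are illustrative).
ALTER DATABASE [SharedServices1_Search_DB] ADD FILEGROUP [CrawlFileGroup]

ALTER DATABASE [SharedServices1_Search_DB]
ADD FILE
(
    NAME = N'SearchDB_Crawl',
    FILENAME = N'E:\SQLData\SearchDB_Crawl.ndf',
    SIZE = 10GB,
    FILEGROWTH = 10%
)
TO FILEGROUP [CrawlFileGroup]

-- A table is then moved by rebuilding its clustered index on the new
-- filegroup, along these lines (index and key definitions are hypothetical):
-- CREATE UNIQUE CLUSTERED INDEX [IX_CrawlTable] ON [dbo].[CrawlTable] (DocID)
--     WITH (DROP_EXISTING = ON) ON [CrawlFileGroup]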

3. Schedule regular (daily or weekly) index defragmentation for the Search database. The following query can be used on SQL Server 2005 or higher to obtain all the indexes with a fragmentation level higher than 10%:

USE <SearchDB>

DECLARE @currentDbId int
SELECT @currentDbId = DB_ID()

SELECT DISTINCT
    i.name,
    st.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (@currentDbId, NULL, NULL, NULL, 'SAMPLED') st
INNER JOIN sys.indexes AS i
    ON st.object_id = i.object_id
    AND st.index_id = i.index_id
WHERE st.avg_fragmentation_in_percent > 10

Indexes returned by the above query should be defragmented, for example as sketched below.
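
A minimal defragmentation sketch, with hypothetical index and table names (the common rule of thumb is to reorganize moderately fragmented indexes and rebuild heavily fragmented ones, e.g. those above roughly 30%):

-- Reorganize a moderately fragmented index (online, low overhead).
ALTER INDEX [IX_Example] ON [dbo].[ExampleTable] REORGANIZE

-- Rebuild a heavily fragmented index (more thorough, heavier operation).
ALTER INDEX [IX_Example] ON [dbo].[ExampleTable] REBUILD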

See also “Database Maintenance for Microsoft SharePoint Products and Technologies” at http://go.microsoft.com/fwlink/?LinkId=111531&clcid=0x409 and “SQL Index defrag and maintenance tasks for Search” on the Enterprise Search Blog at http://blogs.msdn.com/enterprisesearch/archive/2008/09/02/sql-index-defrag-and-maintenance-tasks-for-search.aspx.

4. Configure the appropriate SQL maintenance plans. SQL Server maintenance plans should be configured with the following guidelines:

a. Search database

   i. Check database integrity using the ‘DBCC CHECKDB WITH PHYSICAL_ONLY’ syntax to reduce the overhead of the command. This should be run on a weekly basis during off-peak hours. Any error returned by DBCC should be analyzed and resolved proactively. The full ‘DBCC CHECKDB’ command should be run at a lower frequency (e.g. once per month) to provide a deeper analysis. A minimal sketch of both commands follows this list.

   ii. Do not shrink the Search database.

   iii. Index defragmentation should be executed following the recommendation above.

b. Content databases

   i. Check database integrity, including indexes.

   ii. Shrink database: shrink when the database grows beyond the maximum expected size of your content database + 20%; leave 10% of free space after the shrink; return freed space to the operating system.

   iii. Reorganize indexes: compact large objects and change the free space per page percentage to 70%.

   iv. Configure a Maintenance Cleanup Task.
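
As referenced above, here is a minimal sketch of the two integrity checks (‘SharedServices1_Search_DB’ is again a placeholder for your actual Search database name):

-- Weekly, lightweight check: physical consistency only, lower overhead.
DBCC CHECKDB (N'SharedServices1_Search_DB') WITH PHYSICAL_ONLY

-- Monthly, deeper check: full logical and physical validation.
DBCC CHECKDB (N'SharedServices1_Search_DB')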

See also “Database Maintenance for Microsoft SharePoint Products and Technologies” at http://go.microsoft.com/fwlink/?LinkId=111531&clcid=0x409 and “SQL Index defrag and maintenance tasks for Search” on the Enterprise Search Blog at http://blogs.msdn.com/enterprisesearch/archive/2008/09/02/sql-index-defrag-and-maintenance-tasks-for-search.aspx.

5. Pre-size your databases. Avoid the auto-growth behavior for content databases by pre-setting the size to the maximum expected size (ALTER DATABASE … MODIFY FILE … SIZE property), as sketched below. Configure the autogrowth values to a fixed percentage (e.g. 10%) instead of a fixed amount of space.
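
A minimal sketch of the pre-sizing, with a hypothetical database name, file name and size:

-- Pre-size the content database data file to its maximum expected size,
-- and configure a percentage-based autogrowth as a safety net.
ALTER DATABASE [WSS_Content] MODIFY FILE
(
    NAME = N'WSS_Content',
    SIZE = 100GB,
    FILEGROWTH = 10%
)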

See also “Planning and Monitoring SQL Server Storage for Office SharePoint Server: Performance Recommendations and Best Practices” at http://go.microsoft.com/fwlink/?LinkID=105623&clcid=0x409.

6. Configure and test the crawler impact rules for best performance. If you have only one host to index, start with a crawler impact rule to “request 64 documents at a time” in order to maximize the number of threads the crawler can use to index content, with the ultimate goal of increasing crawl speed. Resource usage on the indexer, dedicated front-end, and search database boxes should be monitored, and if the crawler generates too much activity, the crawler impact rule should be tuned by decreasing the parallelism.

See also “Manage crawler impact rules” at http://technet.microsoft.com/en-us/library/cc261720.aspx, and “Creating crawl schedules and starvation - How to detect it and minimize it” on the Enterprise Search Blog at http://blogs.msdn.com/enterprisesearch/archive/2008/05/09/creating-crawl-schedules-and-starvation-how-to-detect-it-and-minimize-it.aspx.

7. Configure a separate box as the dedicated web front-end for indexing. Using a dedicated WFE for indexing that is physically separate from the indexer box is recommended, in order to avoid contention between front-end and crawling activities on the same server.

There are some restrictions for this configuration; see the details at http://technet.microsoft.com/en-us/library/cc261810.aspx.

8. Fine-tune the Gatherer. The three registry keys below (under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\12.0\Search\Global\Gathering Manager) can be changed for better performance (a restart of the Search service is required for the changes to take effect).

a. FolderHighPriority: represents the number of high-priority folders that can be processed at one time. If this is too high, the cache in the daemons will constantly run out of space; if it is too low, the crawl will be throttled while waiting for more items to process.

b. FilterProcessMemoryQuota: represents how much memory can be consumed by the search daemon process before it gets killed by the crawler. The out-of-the-box default was chosen based on 4 GB of memory on the indexer; if the indexer has more RAM, this value can be increased to cache more data during the crawl.

c. DedicatedFilterProcessMemoryQuota: same as FilterProcessMemoryQuota, except that it applies to the single-threaded daemons.

As an example, if your indexer box is 64-bit with 16 GB of RAM, the following values have been tested successfully:

a. FolderHighPriority: set to 500

b. FilterProcessMemoryQuota: set to 208857600

c. DedicatedFilterProcessMemoryQuota: set to 208857600

9. Configure multiple content sources for the same web application, if applicable. This depends, of course, on the information architecture of the content to be indexed. In many cases, however, you can identify one subset of content in your web application where most of the changes happen (“fresh” content), and another subset that is mostly static (archived or older content). If you can, configuring more than one content source to target the “fresh” content areas separately from the “static” areas will give you more flexibility in crawl scheduling. The correct balance needs to be identified, though, to avoid ending up with too many content sources (the maximum number of content sources per SSP is 500). Multiple content sources also help mitigate the impact of long-running crawler operations (like a full crawl) on the latency of fresh content in search results, because you can selectively activate crawling on the desired content sources only, postponing less important crawl activities to off-peak hours.

10. Implement a “stand-by” SSP. It is well known that SharePoint Server 2007 does not support redundancy on the indexer role with automatic failover. However, multiple SSPs can be implemented to provide a failover strategy in case the main index becomes corrupted, or when it needs to perform long-running operations, like full crawls, that affect the latency of fresh content in search results.

By configuring a secondary, or “stand-by”, SSP in the same farm, and having it index the same content as the main SSP (or just the most important subset of your content to keep the stand-by index catalog smaller), it will be possible to switch from one index catalog to the other when needed, just by changing the SSP association of the web application from the Central Administration UI.

To be able to switch the association back and forth as needed without losing index content, the content source must not target the whole web application (as the default content source does, through the option “Everything under the host name for each start address”), but must instead target the specific site collections (through the option “Only the SharePoint site of each start address”).

Be aware that multiple SSPs sharing the same indexer box will contend for server resources. This is why most of the benefits of configuring a stand-by SSP can only be obtained by using a separate indexer machine.

See also “Plan for availability” at http://technet.microsoft.com/en-us/library/cc748824.aspx.

If there is interest in this topic, I will post the procedure we used to “clone” an existing SSP to create the stand-by SSP in the same farm.