Problem: While reviewing the design and performance of an ISV ODBC application, I turned on SQL Server Profiler and noticed that the application was using server cursors. The application simply sent a SELECT query to the server and read all the results into its local cache. So I asked the application developer why he was asking for a server cursor instead of a default result set. It turned out that he had not explicitly asked for a server cursor. But because he did block fetches, the SQL Server ODBC driver requested a server cursor as a side effect...that was unexpected! So I went to my favorite search engine, queried on “SQL_ATTR_ROW_ARRAY_SIZE firehose”, and found I was not alone in asking the question: how can I do block fetches over a default result set (firehose cursor) instead of a server cursor?

Solution: It turns out that the SQL Server ODBC driver decides between a server cursor and a default result set at prepare/execute time. If array binding is in effect before the prepare/execute (that is, SQL_ATTR_ROW_ARRAY_SIZE > 1), the driver uses a server cursor. It may even re-prepare the statement if a row array size greater than 1 is still in effect at the next execute. So the solution is to temporarily set the row array size to 1 before each prepare/execute/re-execute, and then reset it to the larger value before fetching. That way you get a firehose cursor and still keep the block-fetch behavior; see the sample code below.

Results: When we made this change to the ISV application and executed a query returning 500K rows, we measured a 33% reduction in the elapsed time needed to process the result set - the change eliminated all 1,130 round-trip server cursor fetch calls.

Sample Code:
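The following is a minimal sketch of the technique, not the original ISV code. The table and column names (dbo.Orders, OrderId), the rowset size, and the function name are illustrative, and error handling is trimmed to the essentials:

/* Sketch: block fetches over a default result set (firehose) instead of
   a server cursor. Keep SQL_ATTR_ROW_ARRAY_SIZE at 1 while executing,
   then raise it before fetching so SQLFetch returns a block of rows.
   dbo.Orders, OrderId, and ROWSET_SIZE are illustrative only.          */

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

#define ROWSET_SIZE 100   /* rows copied into the bound arrays per SQLFetch */

void FetchBlocksOverFirehose(SQLHSTMT hstmt)
{
    SQLINTEGER  orderIds[ROWSET_SIZE];
    SQLLEN      orderIdInd[ROWSET_SIZE];
    SQLULEN     rowsFetched = 0;
    SQLRETURN   rc;

    /* 1. Row array size = 1 at execute time, so the driver sends the
          query over a default result set instead of a server cursor.   */
    SQLSetStmtAttr(hstmt, SQL_ATTR_ROW_ARRAY_SIZE, (SQLPOINTER)1, 0);

    rc = SQLExecDirect(hstmt, (SQLCHAR *)"SELECT OrderId FROM dbo.Orders", SQL_NTS);
    if (!SQL_SUCCEEDED(rc))
        return;

    /* 2. Now raise the rowset size and bind column arrays before fetching. */
    SQLSetStmtAttr(hstmt, SQL_ATTR_ROW_ARRAY_SIZE, (SQLPOINTER)ROWSET_SIZE, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_ROWS_FETCHED_PTR, &rowsFetched, 0);
    SQLBindCol(hstmt, 1, SQL_C_SLONG, orderIds, 0, orderIdInd);

    /* 3. Each SQLFetch fills up to ROWSET_SIZE rows in a single call.   */
    while (SQL_SUCCEEDED(SQLFetch(hstmt)))
    {
        for (SQLULEN i = 0; i < rowsFetched; i++)
            printf("%ld\n", (long)orderIds[i]);
    }

    SQLCloseCursor(hstmt);
}

Because the row array size is 1 at execute time, the driver uses a default result set; raising SQL_ATTR_ROW_ARRAY_SIZE afterwards only changes how many rows SQLFetch copies into the bound arrays on each call. If the statement will be re-executed, remember to drop the row array size back to 1 before the next execute.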

Siva Raghupathy
ISV Program Manager
SQL Server Development Team