Recently, a colleague asked me to look at a network capture of a .NET client application whose communication with a web service was not meeting his performance goals. In particular, he noted that this was primarily a problem on high-latency networks; each of the dozens of requests took hundreds of milliseconds, even when existing connections were being reused.

Even when he took a capture locally on a fast network, most of the time was spent in an unexpected place:


You can see that the client and server were both reusing connections (the session above took place on a client connection established for an earlier session 4 seconds before, and on a server connection established 63 seconds before). In particular, note that while the entire session took 468 milliseconds, the time between ClientBeginRequest and ClientDoneRequest was 359 milliseconds, nearly 77% of the total. As Fiddler is a locally-running proxy, and it took Fiddler only 93 milliseconds to send the request to the remote server and get back the first byte of the response, it’s surprising that reading the request from the client program on the same computer took nearly four times as long.

So, why is the client taking so long to send the request to Fiddler?

I noticed that each of these requests was an HTTP POST, uploading a small SOAP XML body, each just over 1 KB. So, it’s unlikely that the client was taking a long time to generate the request body or load it from disk. However, each request did contain an Expect header.


Section 8.2.3 of RFC 2616 explains what the Expect header is used for. The basic idea is that a client about to send a POST body may first send an Expect: 100-continue request header to the server. The server is expected to immediately evaluate the (incomplete) request’s headers, and if they seem reasonable (e.g. of a MIME type the server supports, and with an appropriate Content-Length), the server sends back a non-final response with a status code of 100, indicating that the client’s plan to send the body is acceptable. Alternatively, if the headers suggest that the pending POST is unacceptable (e.g. the client needs to authenticate itself), the server may immediately return an error code (e.g. an HTTP/401 with an authentication challenge), saving bandwidth by allowing the client to abort the request before sending the body.

In the case where the request is acceptable, the client receives the HTTP 100 response and then transmits the body as it had promised. The traffic for this scenario looks[1] somewhat like this:

POST /Page HTTP/1.1
Content-Type: text/plain
Content-Length: 26 
Expect: 100-continue

HTTP/1.1 100 Continue
Server: Microsoft-IIS/7.5
Date: Wed, 02 Nov 2011 18:51:56 GMT

Here is my request body...

HTTP/1.1 200 OK
Server: Microsoft-IIS/7.5
Date: Wed, 02 Nov 2011 18:51:58 GMT
Content-Type: text/html


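The exchange above can be sketched end-to-end with raw sockets. This is a minimal illustration, not a production client: the loopback server, port selection, and 26-byte payload are all made up to mirror the trace.

```python
# Sketch of the Expect: 100-continue handshake over raw sockets.
# The server and payload here are illustrative, matching the trace above.
import socket
import threading

def run_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        # Read the request headers (everything up to the blank line).
        data = b""
        while b"\r\n\r\n" not in data:
            data += conn.recv(4096)
        # The headers look acceptable, so send the non-final 100 response...
        conn.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")
        # ...then read the promised body (Content-Length: 26 in this sketch).
        body = data.split(b"\r\n\r\n", 1)[1]
        while len(body) < 26:
            body += conn.recv(4096)
        # Finally, send the real (final) response.
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")

server_sock = socket.create_server(("127.0.0.1", 0))
port = server_sock.getsockname()[1]
threading.Thread(target=run_server, args=(server_sock,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(
    b"POST /Page HTTP/1.1\r\n"
    b"Host: example\r\n"
    b"Content-Type: text/plain\r\n"
    b"Content-Length: 26\r\n"
    b"Expect: 100-continue\r\n\r\n"
)
# Wait for the interim 100 response before committing to the body.
interim = client.recv(4096)
assert interim.startswith(b"HTTP/1.1 100")
client.sendall(b"Here is my request body...")   # exactly 26 bytes
final = client.recv(4096)
print(final.split(b"\r\n")[0].decode())         # prints "HTTP/1.1 200 OK"
```

Note that the client transmits nothing after the blank line until the interim 100 arrives; that pause is exactly the gap between ClientBeginRequest and ClientDoneRequest in the capture.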
Importantly, HTTP’s Expect feature isn’t supported by all clients and servers (e.g. Internet Explorer doesn’t use Expect, instead using an authentication-only heuristic) and thus a client generally will only wait a short period for a reply from the server before proceeding to transmit the request body anyway. In the .NET Framework implementation, that wait is capped at 350 milliseconds.

The Fiddler Web Debugger buffers all HTTP requests completely before transmitting the request to the server (in order to enable request tampering), so the .NET application was always waiting the full 350 milliseconds between sending the request headers and the request body when it was running behind Fiddler. In my colleague’s production scenario, he wasn’t running Fiddler, but the connection from the client to the server was high-latency, so the client didn’t rapidly get back the HTTP/100 response from the server, and thus it was forced to wait hundreds of additional milliseconds on each request.

There are some cases in which the Expect feature is useful: namely, scenarios where the client expects to transfer a large request body (e.g. a multi-megabyte upload). In my colleague’s case, using Expect isn’t really appropriate, because the requests are small enough that performance is significantly impacted while the Expect feature provides no real savings. The .NET Framework allows callers to disable the Expect behavior using the ServicePointManager.Expect100Continue property (or by setting an option in the application’s configuration file[2]). You can read a bit more about this property here.

If you'd simply like to have Fiddler itself return the 100 Continue, you can do that by clicking Rules > Customize Rules. Add the following function inside the Handlers class:

static function OnPeekAtRequestHeaders(oSession: Session) {
  if (oSession.HTTPMethodIs("POST") && oSession.oRequest.headers.ExistsAndContains("Expect", "continue"))
  {
    oSession["ui-backcolor"] = "lightyellow";
    if (null != oSession.oRequest.pipeClient)
    {
      oSession.oRequest.pipeClient.Send(System.Text.Encoding.ASCII.GetBytes("HTTP/1.1 100 Continue\r\nServer: Fiddler\r\n\r\n"));
    }
  }
}


[1] Note that HTTP 100 responses are handled specially by Fiddler because they are an odd architectural quirk of HTTP: they’re non-final responses, which means that a server that sends the 100 Continue will then send a final response header set (usually an HTTP/200). Representing a single session with multiple sets of response headers would be both complicated and confusing, so Fiddler does not attempt to do so. By default (the controlling preference defaults to True), Fiddler returns any 100 responses it receives to the client and notes this using a Session Flag named x-fiddler-Stream1xx.

[2] Add the following to clientappname.exe.config:

    <configuration>
      <system.net>
        <settings>
          <servicePointManager expect100Continue="false" />
        </settings>
      </system.net>
    </configuration>