EWS ServerBusyException - The server is too busy - for you

So… you got a ServerBusyException, eh?

You might have seen it thrown as this in your code:

Microsoft.Exchange.WebServices.Data.ServerBusyException: The server cannot service this request right now. Try again later.

The server is not overloaded overall; the call is reaching EWS, and EWS is responding with a throttling error. When you see ServerBusyException in the response from an EWS call, it usually means the server is too busy… for you. More precisely, it's too busy for the application account making the EWS call. Exchange sees the calls from that service account as putting too much load on the server, so it's telling you to back off and come back later. An exception to this is when the call returns too many items for a search operation, which is covered later in this document.

When you see a throttling error, don't panic or assume there is some issue with the server – this is a common error which can be worked around. What it means is that the code was not written to self-throttle.

It's best to change your code to self-throttle rather than change a throttling policy. Servers such as Exchange Online servers have throttling limits which you cannot get changed. You can change throttling settings on on-premises servers and possibly on dedicated Office 365 servers; however, it's usually a bad idea to do so.

History and perspective:

In older versions of Exchange (2007 and prior) there was almost no throttling. Customers had to pay attention to the applications they were running against Exchange and make sure those applications were not dominating servers or otherwise causing performance issues. When there were few applications going against Exchange this usually worked out. However, the world changed, and many more applications started going against Exchange through Exchange Web Services (EWS), Exchange ActiveSync (EAS) and other APIs. Some of those applications were written with only the performance of that application in mind, without considering the impact on the servers overall. Performance issues were often not found because developers were not testing against real-life production servers and checking the performance impact, or issues were introduced later by updates. The traffic from those applications was enough to bring Exchange servers to their knees and effectively take them down. Something had to change to improve the situation and to help prevent call traffic that would have such an undesirable impact on Exchange servers. In Exchange 2010, throttling policies were added to force applications to behave better so as to not cause such issues. Exchange 2013 added more policies and improvements. Now applications need to behave by self-throttling so that Exchange won't have to.

Please review my blog article on throttling and each of the links it contains:

Exchange throttling is your friend… well, more like a police officer.
https://blogs.msdn.microsoft.com/webdav_101/2015/11/18/exchange-throttling-is-your-friend-well-more-like-a-police-officer/

What ErrorServerBusy usually is:

An ErrorServerBusy error can be caused by many things. So, when the error occurs, look at the full request and response (headers and bodies) and check the overall pattern of calls being made.

In Exchange 2013 and later, details on the throttling error will be in the response. There will be a BackOffMilliseconds value which indicates when Exchange will next accept a request from the application account. This timeout is often 3-5 minutes. You will also see the throttling error logged to the Application event log on the Exchange server as a 2915 event.

The article below goes into the details of when the error is thrown:

EWS throttling in Exchange
https://msdn.microsoft.com/en-us/library/office/jj945066(v=exchg.150).aspx

When throttling kicks in it will usually be because one of three throttling policies has been exceeded:

EWSPercentTimeInMailboxRPC:   Defines the percentage of time per minute during which a specific user can execute mailbox RPC requests.
EWSPercentTimeInCAS:  Defines the percentage of time per minute during which a specific user can execute Client Access server code.
EWSPercentTimeInAD:   Defines the percentage of time per minute during which a specific user can execute Active Directory requests.

Those throttling policies cover the overall load the application service account is putting against the server.  Different EWS calls with different request payloads going against different data in the mailbox produce different loads on the server.  As an example, a call resolving a lot of recipients will result in a load against Active Directory (AD).  MSDN says this about what happens when those limits are exceeded:

ErrorServerBusy: Occurs when the server is busy. The BackOffMilliseconds value returned with ErrorServerBusy errors indicates to the client the amount of time it should wait until it should resubmit the request that caused the response that returned this error code.

You can get an ErrorServerBusy in the response body with either a 500 or a 200 HTTP response code.   I pulled the following from the MSDN article above:

200:  Contains an EWS schema-based error response with an ErrorInternalServerError error code. An inner ErrorServerBusy error code may be present. This indicates that the client should delay sending additional requests until a later time.

500:  Indicates an internal server error with the ErrorServerBusy error code. This indicates that the client should delay sending additional requests until a later time. The response may contain a back off hint called BackOffMilliseconds. If present, the value of BackOffMilliseconds should be used as the duration until the client resubmits a request.

503:  Indicates that EWS requests are queuing with IIS. The client should delay sending additional requests until a later time.

When you get this error with Exchange 2013 and later there will be a back-off value in the response telling you how many milliseconds to wait – it is often 4-5 minutes.  There will also be a 2915 event written to the Application event log on the server noting the throttling event.
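
The EWS Managed API surfaces this as a ServerBusyException with a BackOffMilliseconds property. Here is a minimal sketch, assuming the EWS Managed API and an already-configured ExchangeService (the Inbox folder, the view and the retry count are illustrative placeholders), of honoring the back-off hint before retrying:

// Minimal retry sketch for the EWS Managed API.
// The Inbox folder, the view and the attempt count are placeholder choices for illustration.
using System;
using System.Threading;
using Microsoft.Exchange.WebServices.Data;

static FindItemsResults<Item> FindInboxWithBackOff(ExchangeService oExchangeService, ItemView oView, int nMaxAttempts)
{
    for (int nAttempt = 1; ; nAttempt++)
    {
        try
        {
            return oExchangeService.FindItems(WellKnownFolderName.Inbox, oView);
        }
        catch (ServerBusyException oEx)
        {
            if (nAttempt >= nMaxAttempts) throw;        // Give up after the last attempt.
            Thread.Sleep(oEx.BackOffMilliseconds);      // Wait as long as Exchange asked before retrying.
        }
    }
}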

Look at each of the three policies in the context of the call that threw the error, and, as important if not more so, look at the overall load the code is putting against Exchange.  The solution to throttling will depend upon what is being sent to the server. Below are some suggestions:

  • Check the calls being made.  See if you can optimize them to ask for less data, do less processing, etc.  Sometimes small changes to calls can make a big difference not only in throttling but in performance.  Any calls returning lots of properties, resolving recipients or doing searches should be looked at carefully.  Lots of long-running calls take up time on the servers and can trigger throttling.  So, by lessening the load you may get throttled less. Be sure to set the X-AnchorMailbox header for every call.  For public folder access also set the "X-PublicFolderMailbox" header.  These steps should be taken even if you are not being throttled and can help with throttling issues.

  • Heavy searches are often throttled and should be optimized carefully. Filtering on indexed fields, limiting the information returned, and retrieving only IDs first and then going back later for the other needed properties all help. Another thing which will reduce the load on the server and help with a faster result is to sort on a field used in the filter criteria. As an example, if you filter on ClientSubmitTime then also sort using ClientSubmitTime. With many/most databases you would want to sort on the client side as much as possible; however, Exchange has optimizations around indexing and sorting that provide a performance boost when searching with a sort on the server side. (See the search sketch after this list.)

  • If you have multiple applications using the same service account, then consider using different accounts for each application. This is the easiest change to make.

  • Some have worked around such issues by adding new servers running the application under different accounts.

  • One possibility is to put a delay in the code to slow down the calls to Exchange; however, that only works in some scenarios.  Sometimes there is just too much work to be done – so this usually only works well for light to medium loads and might not work for larger ones. For a large load with many threads making calls you should look at the round-robin account pool suggestion below.

  • Your code could alternate accounts when doing calls. This works in some cases; however, some find they have to keep adding code to add additional accounts.

  • A processing queue could be used. This goes back to old-school queue processing. The idea is to put objects into a queue for each EWS call to be made. An application would then read the queue and do EWS calls at a specified rate. This method can be used for limiting calls made at a time and over time and can be used for working around several throttling limits. It's not a popular approach, however it is an option.

  • Commonly recommended: The industrial-strength work-around which works well is to use a pool of accounts in a round-robin queue for making the EWS calls using EWS Impersonation.  By doing this, each individual account appears to the server to be carrying a lesser load.  This is the usual recommended approach and should be strongly considered – especially when the other suggestions above don't work. It is the most scalable and will handle anything from a small load to a massive one.  Small companies all the way up to the largest companies use this approach. (A minimal sketch of this pattern follows this list.)

  • Notifications related: Throttling is a bit different for notifications. Read the suggestions in the ErrorServerBusy section in the following article:

    Handling notification-related errors in EWS in Exchange
    https://msdn.microsoft.com/en-us/library/office/dn458788%28v=exchg.150%29.aspx
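
Tying the search suggestion above to code: here is a minimal sketch, assuming the EWS Managed API and an existing ExchangeService named oExchangeService, of a restricted search that sorts on the server using the same field used in the filter and asks only for IDs (the Inbox folder, the DateTimeReceived field, the 7-day window and the page size are placeholder choices):

// Sketch of a lighter-weight search: filter + matching server-side sort + IDs only.
// The Inbox folder, the 7-day window and the page size of 100 are illustrative placeholders.
using System;
using Microsoft.Exchange.WebServices.Data;

SearchFilter oFilter = new SearchFilter.IsGreaterThanOrEqualTo(
    ItemSchema.DateTimeReceived, DateTime.UtcNow.AddDays(-7));

ItemView oView = new ItemView(100);                                        // Small pages keep each call cheap.
oView.PropertySet = new PropertySet(BasePropertySet.IdOnly);               // Ask only for IDs; fetch details later.
oView.OrderBy.Add(ItemSchema.DateTimeReceived, SortDirection.Descending);  // Sort on the same field used in the filter.

FindItemsResults<Item> oResults = oExchangeService.FindItems(WellKnownFolderName.Inbox, oFilter, oView);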

     
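For the round-robin account pool mentioned above, here is a minimal sketch, assuming the EWS Managed API and service accounts that have already been granted impersonation rights; the account names, the URL and GetPasswordFor (a hypothetical credential-lookup helper) are placeholders:

// Sketch of rotating EWS calls across a pool of service accounts using EWS Impersonation.
// The account names, the URL and GetPasswordFor (a hypothetical credential lookup) are placeholders.
using System;
using Microsoft.Exchange.WebServices.Data;

string[] oServiceAccounts = { "svc-ews-01@contoso.com", "svc-ews-02@contoso.com", "svc-ews-03@contoso.com" };
int nNextAccount = 0;

ExchangeService GetServiceForNextAccount(string sTargetMailbox)
{
    string sAccount = oServiceAccounts[nNextAccount];
    nNextAccount = (nNextAccount + 1) % oServiceAccounts.Length;    // Round-robin through the pool.

    ExchangeService oService = new ExchangeService(ExchangeVersion.Exchange2013_SP1);
    oService.Credentials = new WebCredentials(sAccount, GetPasswordFor(sAccount));
    oService.Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx");
    oService.ImpersonatedUserId = new ImpersonatedUserId(ConnectingIdType.SmtpAddress, sTargetMailbox);
    oService.HttpHeaders.Add("X-AnchorMailbox", sTargetMailbox);    // Route the call to the target mailbox.
    return oService;
}

string GetPasswordFor(string sAccount)
{
    return "placeholder-password";   // Hypothetical: replace with your own secure credential store.
}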

Streaming Notifications:

Code doing streaming notifications should use separate accounts for handling the subscription versus processing the notifications.  As an example, the subscription thread would spawn worker threads which use a different account to process the notification.  The account handling the subscription falls under special throttling handling, while the account(s) which use the notification information returned from Exchange fall under normal throttling policies.  A pool of accounts could be used for the subscription accounts and a separate pool for the notification-processing accounts.
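
As a rough sketch of that split with the EWS Managed API: the subscription below runs under one ExchangeService (oSubscriptionService) while items are fetched under a second ExchangeService (oProcessingService) configured with a different account; both service objects, the Inbox folder and the NewMail event type are assumptions for illustration:

// Sketch: one account owns the streaming subscription, a different account processes the events.
// oSubscriptionService and oProcessingService are assumed ExchangeService instances using different accounts.
using Microsoft.Exchange.WebServices.Data;

StreamingSubscription oSubscription = oSubscriptionService.SubscribeToStreamingNotifications(
    new FolderId[] { new FolderId(WellKnownFolderName.Inbox) }, EventType.NewMail);

StreamingSubscriptionConnection oConnection = new StreamingSubscriptionConnection(oSubscriptionService, 30);  // 30-minute lifetime.
oConnection.AddSubscription(oSubscription);

oConnection.OnNotificationEvent += (sender, args) =>
{
    foreach (NotificationEvent oEvent in args.Events)
    {
        ItemEvent oItemEvent = oEvent as ItemEvent;
        if (oItemEvent != null)
        {
            // Bind and process under the processing account, not the subscription account.
            Item oItem = Item.Bind(oProcessingService, oItemEvent.ItemId);
        }
    }
};
oConnection.OnDisconnect += (sender, args) => oConnection.Open();   // Re-open when the connection lifetime expires.
oConnection.Open();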

Also read the section covering ErrorServerBusy in the following article:

Handling notification-related errors in EWS in Exchange
https://msdn.microsoft.com/en-us/library/office/dn458788%28v=exchg.150%29.aspx

Here is related information on throttling:

How to: Maintain affinity between a group of subscriptions and the Mailbox server in Exchange
https://msdn.microsoft.com/en-us/library/office/dn458789(v=exchg.150).aspx#bk_throttling

A Note on EWSMaxConcurrency:

Don't confuse the ServerBusyException error or the EWSPercentTimeInMailboxRPC, EWSPercentTimeInCAS, EWSPercentTimeInAD and EWSFindCountLimit policies with EWSMaxConcurrency, which limits the number of simultaneous connections to the server at any one time and is a totally different policy.  Using EWS Impersonation is usually the way to work around the EWSMaxConcurrency limit; however, it won't help with the ErrorServerBusy throttling policies.

Here is some information from the above article on EWSMaxConcurrency:

Defines the number of concurrent open connections that a specific user can have against an Exchange server that is using EWS at one time. The default value for Exchange 2010 is 10. The default value for Exchange 2013 and Exchange Online is 27.

This policy applies to all operations except for streaming notifications. Streaming notifications use the HangingConnectionLimit to indicate the number of open streaming event connections that are available.

Why you might not have been throttled before:

Many applications which use EWS may not see any throttling issues for a long time and then start to encounter them. This is most likely due to changing conditions, such as changes in the number of items being processed and in the type of data involved. Mailboxes grow and companies add users – as this happens, applications which service mailboxes will usually generate a higher load against Exchange. Code modifications can of course also increase load. Exchange is always watching for performance issues related to the calls being made to it, so at some point limits get exceeded and an application gets throttled. This is why writing code to be highly efficient in its calls and doing things like using a round-robin queue of service accounts are important from the start.

EWSFindCountLimit:

An ErrorServerBusy can also be returned when a search returns a number of items over the EWSFindCountLimit when calling FindItem or FindFolder. The EWSFindCountLimit policy controls how many items can be returned at one time from a call using one of those operations.  This limit varies between servers and depends on whether the search uses an AQS search string or search criteria targeting specific properties.  An AQS search with EWS is like what OWA does when you type in something to search for.  The limit can run between 250 and 1,000 items.  It can be worked around easily by doing a paged search.  A paged search can return up to the maximum number of items allowed by the EWSFindCountLimit policy and then lets you pull the next set of results up to the limit, then the next set, and onward.  It's best to write code to do a paged search so that you don't get throttled.  Even if the code works without implementing a paged search, you may find later that it gets throttled if conditions change (e.g. more items might start to come back as more items are added to the mailbox).  The difference between paged and non-paged code is not much, but it can make a world of difference.
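
Here is a minimal paged-search sketch, assuming the EWS Managed API and an existing ExchangeService named oExchangeService; the Inbox folder and the page size of 100 are placeholder choices:

// Sketch of a paged FindItems loop; the offset is advanced until MoreAvailable is false.
using Microsoft.Exchange.WebServices.Data;

ItemView oView = new ItemView(100);                            // Page size of 100 is a placeholder choice.
oView.PropertySet = new PropertySet(BasePropertySet.IdOnly);   // Keep pages cheap; fetch details later as needed.

FindItemsResults<Item> oResults;
do
{
    oResults = oExchangeService.FindItems(WellKnownFolderName.Inbox, oView);
    foreach (Item oItem in oResults.Items)
    {
        // Process or queue each item ID here.
    }
    oView.Offset += oResults.Items.Count;                      // Advance to the next page.
} while (oResults.MoreAvailable);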

Overall Best Practices to help with troubleshooting and prevent issues:

When there is a failure in an application there should be the ability to turn on detailed logging at any time. This is critical for troubleshooting. Implementing detailed logging and adding things to requests to help with tracking calls can make a huge difference in resolving issues. An issue which might be resolvable in hours might take days or longer if such things are not implemented from the start.

All code should have the ability to log full information on the EWS call.  This includes at least the following (a logging sketch follows this list):

  • The request body and headers.
  • The response body and headers.
  • Error message.
  • The error call stack.
  • The HTTP response code.
  • The time and time zone of each call.
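
One way to capture most of that with the EWS Managed API is a custom ITraceListener; this is a minimal sketch, and writing to the console is only a placeholder for your own logging store:

// Sketch of request/response logging through the EWS Managed API tracing hooks.
// Console output is a placeholder; route the text to your own log store and add your own call context.
using System;
using Microsoft.Exchange.WebServices.Data;

class EwsTraceListener : ITraceListener
{
    public void Trace(string traceType, string traceMessage)
    {
        // traceType will be values such as EwsRequest, EwsResponse, EwsRequestHttpHeaders, EwsResponseHttpHeaders.
        Console.WriteLine("{0:o} [{1}] {2}", DateTimeOffset.Now, traceType, traceMessage);
    }
}

// Wire it up on the service object:
oExchangeService.TraceListener = new EwsTraceListener();
oExchangeService.TraceFlags = TraceFlags.EwsRequest | TraceFlags.EwsResponse
                            | TraceFlags.EwsRequestHttpHeaders | TraceFlags.EwsResponseHttpHeaders;
oExchangeService.TraceEnabled = true;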

The following should always be set to provide better efficiency and tracing of the calls:

Each request should always have the User-Agent header set to identify it in traffic.

oExchangeService.UserAgent = "MyApplication";   

The "client-request-id" and "return-client-request-id" headers (ClientRequestId and ReturnClientRequestId on the EWS Managed API ExchangeService object) should be set for every call and logged.  Those headers identify a call to the server and back to the client and are often needed for troubleshooting – especially with Exchange online.  "client-request-id" will need to be set to a new GUID per call.

oExchangeService.ClientRequestId = Guid.NewGuid().ToString();  // Set a new GUID on each call.

oExchangeService.ReturnClientRequestId = true;

The "X-AnchorMailbox" header with the SMTP address of the target mailbox should always be added.  "X-PublicFolderMailbox" and "X-AnchorMailbox" should always be set when accessing a public folder.  "X-AnchorMailbox" this may make a difference in performance and is sometimes needed for EWS calls.   Examples:

oExchangeService.HttpHeaders.Add("X-AnchorMailbox ", "mytargetmailbox@contoso.com");  // Set on every call – be sure to set it to the target mailbox or the Anchor mailbox for Streaming notifciations.

oExchangeService.HttpHeaders.Add("X-PublicFolderMailbox",  "mypublicfolder@contoso.com");  // For a call to a public folder

Those headers might seem like unnecessary work; however, they are important to set. The X-headers can increase performance and are often needed to get calls to work. The User-Agent, "client-request-id" and "return-client-request-id" headers are important as they help identify the traffic against the server and are critical for finding the servers which handled the call. The response headers contain information like the names of the servers which processed a request. Logged time and time zone information is often critical for finding the server logs and the entries in them related to EWS calls. Mailbox servers are often in different time zones from an application, and many servers fail over to sites in other time zones.

IIS logs are seldom helpful for resolving issues with APIs/protocols like EWS – they just don't have enough information. So, be sure you are able to log the needed information on the fly.

References:

Search operations can be very expensive resource-wise on the server.  Servers will often suffer when there is no throttling of searches.

How to: Perform paged searches by using EWS in Exchange
https://msdn.microsoft.com/en-us/library/office/dn592093(v=exchg.150).aspx

How to: Perform an AQS search by using EWS in Exchange
https://msdn.microsoft.com/en-us/library/office/dn579420(v=exchg.150).aspx

Throttling Policies and the EWSFindCountLimit
https://blogs.msdn.com/b/exchangedev/archive/2010/03/12/throttling-policies-and-the-ewsfindcountlimit.aspx

EWS Best Practices – Searching
https://blogs.msdn.microsoft.com/webdav_101/2015/05/03/ews-best-practices-searching/

Best Practices – EWS Authentication and Access Issues
https://blogs.msdn.microsoft.com/webdav_101/2015/05/11/best-practices-ews-authentication-and-access-issues/