In IIS6, is there a way to have a top-level filter run in a process space separate from each website's process space? Under IIS5, our filter has been used to store a large data cache to reduce the number of round trips to our database. Now, in IIS6, this large data cache is unfortunately duplicated in every one of our hundreds of different website processes, because the filter is loaded into each website's w3wp.exe process instead of into the single IIS5 inetinfo.exe process. To avoid having to rewrite our filter to access the cached data cross-process from some new custom cache manager process, I wonder if there is some way to get IIS to do that work for us, as in IIS5, where the filter ran in the single inetinfo process even when requests were farmed off to websites in different processes.
Initially, research into this has led me to David's blog entry, which is helpful even though it is not intended to address this particular issue:
His entry mentions that in IIS6, filters run in w3wp, rather than inetinfo. Is this configurable? Is there some way to run a global filter in a single process separate from each website's process space in IIS 6 (while not using IIS5 compatibility mode)?
Alas. It sounds like IIS5 Compatibility mode with each website running in High Isolation protection is the closest functional equivalent to what you desire.
IIS6 supports two process model "modes": IIS5 Compatibility Mode and IIS6 Worker Process Isolation Mode.
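For reference, switching between those two modes is just a metabase setting. A minimal sketch, assuming the default AdminScripts install path (the path, the site restart step, and having admin rights are assumptions about your machine, not guarantees):

```shell
REM Enable IIS5 Compatibility Mode; set FALSE to return to
REM the default IIS6 Worker Process Isolation Mode.
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/IIs5IsolationModeEnabled TRUE

REM Restart IIS so the process-model change takes effect.
iisreset
```

The setting is global to the server; you cannot run some websites in one mode and some in the other.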
There are some corner cases for filter configuration/loading which violate the above statements; ISAPI Filters and Application Pools are not exactly congruent concepts. But let's leave that discussion for another day; I am more interested in the intended behavior at the moment. :-P
In particular, IIS6 does not provide any built-in mechanism to load global ISAPI Filters in a single process and route all requests through the ISAPI Filters in that process prior to execution by the Application Pool's w3wp.exe.
Maybe I misunderstand what you want, but what you are asking for seems odd to me: if all websites share a single process space for global ISAPI Filters, doesn't that defeat the whole purpose of having Worker Process Isolation? If an ISAPI Filter in that single process goes down, it takes every website down with it.
I realize that in a shared hosting scenario, the Hoster may want to run his trusted code somewhere to filter/cache all requests and leave the Application Pools running untrusted end-user code. However, please understand that from an IIS perspective, the Hoster's code is no different than the untrusted end-user code - it is all "user code" from our perspective - and we trust none of it with IIS6 Worker Process Isolation Mode.
Sorry... we have probably all been burned too many times by bad ISAPI Filters or misbehaving applications we are "supposed to trust" running in Low/Medium Isolation. We took a stand in IIS6's core design.
Now, the IIS6 mode for running trusted code globally across all requests in a single process already exists - IIS5 Compatibility Mode. I know, I know, it gives up a lot of Application Pool benefits, but you do get some of the monitoring services of COM+.
And before you ask - no, we shot down the idea of a hybrid IIS mode where ISAPI Filters load in a global process like inetinfo.exe, requests route through that process prior to reaching the individual w3wp.exe, and WAS monitors the w3wp.exe. That is basically IIS5 Compatibility Mode with COM+ replaced by WAS, and we had to choose between emulating IIS5 Compatibility Mode or creating this hybrid, along with supporting the native IIS6 Worker Process Isolation Mode... and compatibility won.
IIS5 Compatibility mode with each website running in High Isolation protection isolates each website into its own dllhost.exe (with a configurable process identity) and allows global ISAPI Filters, all running in inetinfo.exe, to examine every bit of incoming/outgoing data for every website.
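High Isolation is likewise a per-application metabase setting. A sketch, assuming the Default Web Site (site ID 1) and the default AdminScripts path - both are assumptions about your particular install:

```shell
REM AppIsolated: 0 = Low (in-process in inetinfo.exe),
REM               2 = Medium (pooled),
REM               1 = High (isolated, own dllhost.exe)
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/AppIsolated 1
```

This setting only has the IIS5-style meaning when the server is running in IIS5 Compatibility Mode.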
Yes, I realize this makes memory sharing/efficiency more difficult because you have to write your own data-sharing manager process, but isn't that how it goes? Resource utilization and resource isolation tend to be opposites: one wants to share everything for efficiency and avoid duplication, while the other wants reliable, individual copies of everything so that no one affects anyone else.