Hello, my name is Roy. I’m an Escalation Engineer on the CPR platforms team. I’ll be doing a four-part series on LPC over the coming month. You’re sure to find this interesting. That being said, let’s get started.
Disclaimer: The purpose of this blog is to illustrate debugging techniques with LPC. Please do not rely on this information as documentation for the LPC APIs when writing your own code. As always, this is subject to change in the future.
LPC (Local Procedure Call) is a high-speed, message-based communication mechanism implemented in the NT kernel. LPC can be used for communication between two user mode processes, between a user mode process and a kernel mode driver, or between two kernel mode drivers. One example of two user mode processes communicating via LPC is CSRSS.exe talking to SMSS.exe over SmssWinStationApiPort while creating a logon session, or any process talking to LSASS.exe over LsaAuthenticationPort for security operations. An example of a user mode process communicating with a kernel mode driver is KSecDD.sys talking to LSASS.exe for EFS key encryption and decryption during reads and writes of an encrypted file.
LPC uses two different mechanisms for passing data between the client and the server process. It uses the LPC message buffer (for data sizes less than 304 bytes) or a shared memory section mapped into both the client and server address spaces (for data sizes greater than 304 bytes).
Apart from being the protocol of choice for Remote Procedure Calls between processes running on the same system, LPC is used throughout the system, e.g. for Win32 applications’ communication with CSRSS.exe, the Security Reference Monitor’s communication with LSASS.exe, WinLogon’s communication with LSASS.exe, etc.
LPC enforces a synchronous communication model between the client and the server processes. Windows Vista deprecates LPC in favor of a new mechanism called Advanced Local Procedure Call (ALPC). ALPC has an inherent advantage over LPC in that calls from the client to the server can be asynchronous, i.e. the client does not need to block and wait for the server to respond to a message. In Vista, legacy application calls to the LPC APIs are automatically redirected to the newer ALPC APIs.
LPC APIs are native APIs, i.e. they are exported in user mode by NTDLL.dll and in kernel mode by NTOSKRNL.exe. The LPC APIs are not exposed at the Win32 level, hence Win32 applications cannot use the LPC facility directly. Win32 applications can, however, use LPC indirectly through RPC by specifying LPC as the underlying transport via the protocol sequence “ncalrpc”. All LPC API names end in the word "Port", which refers to an LPC communication endpoint.
The LPC APIs are:
NtCreatePort( ) - Used by the server to create a connection port
NtConnectPort( ) - Used by the client to connect to a connection port
NtListenPort( ) - Used by the server to listen for connection requests on the connection port
NtAcceptConnectPort( ) - Used by the server to accept connection requests on the connection port
NtCompleteConnectPort( ) - Used by the server to complete the acceptance of a connection request
NtRequestPort( ) - Used to send a datagram message that does not have a reply
NtRequestWaitReplyPort( ) - Used to send a message and wait for a reply
NtReplyPort( ) - Used to send a reply to a particular message
NtReplyWaitReplyPort( ) - Used to send a reply to a particular message and wait for a reply to a previous message
NtReplyWaitReceivePort( ) - Used by the server to send a reply to the client and wait to receive a message from the client
NtImpersonateClientOfPort( ) - Used by a server thread to temporarily borrow the security context of a client thread
The following diagram illustrates the steps taken by an LPC server process to listen for inbound connection requests from potential clients, and the steps taken by clients to connect to listening servers.
Figure: LPC Client Server Connection Establishment Sequence
NOTE: Many server processes use the NtReplyWaitReceivePort( ) API instead of NtListenPort( ). NtListenPort( ) drops all LPC messages except connection requests. Hence NtListenPort( ) can only be used for the first connection. For later connection requests NtReplyWaitReceivePort( ) is used.
The following diagram illustrates the steps taken by an LPC client to send a request to an LPC server that it has already established a connection to, and the steps taken by the server to respond to the message.
Figure: LPC Client Server Data Transfer Sequence
LPC Data Structures
LPC Port Data Structure
LPC endpoints are referred to as ports. The LPC implementation uses the same port structure to represent the various types of ports: Server Connection Ports, which are named ports created by the server process to accept incoming connections from clients; Client Communication Ports, which are created by the client process to connect to a server process; and Server Communication Ports, which are created by the server process to communicate with a particular client.
Figure: LPC Port types and their relationships
LPCP_PORT_OBJECT is the internal data structure used by LPC to represent an LPC port. LPCP_PORT_OBJECTs are allocated from paged pool with the tag ‘Port’.
kd> dt nt!_LPCP_PORT_OBJECT
+0x000 ConnectionPort : Ptr32 _LPCP_PORT_OBJECT
+0x004 ConnectedPort : Ptr32 _LPCP_PORT_OBJECT
+0x008 MsgQueue : _LPCP_PORT_QUEUE
+0x018 Creator : _CLIENT_ID
+0x020 ClientSectionBase : Ptr32 Void
+0x024 ServerSectionBase : Ptr32 Void
+0x028 PortContext : Ptr32 Void
+0x02c ClientThread : Ptr32 _ETHREAD
+0x030 SecurityQos : _SECURITY_QUALITY_OF_SERVICE
+0x03c StaticSecurity : _SECURITY_CLIENT_CONTEXT
+0x078 LpcReplyChainHead : _LIST_ENTRY
+0x080 LpcDataInfoChainHead : _LIST_ENTRY
+0x088 ServerProcess : Ptr32 _EPROCESS
+0x088 MappingProcess : Ptr32 _EPROCESS
+0x08c MaxMessageLength : Uint2B
+0x08e MaxConnectionInfoLength : Uint2B
+0x090 Flags : Uint4B
+0x094 WaitEvent : _KEVENT
ConnectedPort - Points to the Server Communication Port
ConnectionPort - Points to the Server Connection Port
MsgQueue.Semaphore - Used to signal the server thread about the presence of a message in MsgQueue.ReceiveHead
MsgQueue.ReceiveHead - Head of a doubly linked list containing all the messages that are waiting to be dequeued by the server.
MsgQueue.NonPagedPortQueue - Points to the LPCP_NONPAGED_PORT_QUEUE structure for the client communication port, used for tracking lost replies.
LpcReplyChainHead - Head of a doubly linked list containing all the threads that are waiting for replies to messages sent to this port.
LPC Message Data Structure
LPC messages are data structures that carry information from the LPC client to the LPC server and can be of various types, such as connection, request, and close messages.
LPCP_MESSAGE is the internal data structure used by LPC to represent a message. LPCP_MESSAGE structures are allocated from a system wide lookaside list with the tag ‘LpcM’.
kd> dt nt!_LPCP_MESSAGE
+0x000 Entry : _LIST_ENTRY
+0x000 FreeEntry : _SINGLE_LIST_ENTRY
+0x004 Reserved0 : Uint4B
+0x008 SenderPort : Ptr32 Void
+0x00c RepliedToThread : Ptr32 _ETHREAD
+0x010 PortContext : Ptr32 Void
+0x018 Request : _PORT_MESSAGE
Request.MessageId - Generated from the value of a global epoch, LpcpNextMessageId; used to uniquely identify a message.
SenderPort - Points to the LPCP_PORT_OBJECT of the client communication port.
Entry - The list entry used to queue the message to the Server Communication/Connection Port’s MsgQueue.ReceiveHead.
Request - A copy of the message buffer that was provided as the Request parameter to the call to NtRequestWaitReplyPort( ), or a copy of the message buffer that was provided as the Reply parameter to NtReplyWaitReceivePortEx( ).
LPC related fields in Thread Data Structure
kd> dt nt!_ETHREAD -y Lpc
+0x1c8 LpcReplyChain : _LIST_ENTRY
+0x1f4 LpcReplySemaphore : _KSEMAPHORE
+0x208 LpcReplyMessage : Ptr32 Void
+0x208 LpcWaitingOnPort : Ptr32 Void
+0x228 LpcReceivedMessageId : Uint4B
+0x23c LpcReplyMessageId : Uint4B
+0x250 LpcReceivedMsgIdValid : Pos 0, 1 Bit
+0x250 LpcExitThreadCalled : Pos 1, 1 Bit
The following table describes the fields of the ETHREAD data structure that are used for communication between LPC Client and Server process.
LpcReplyMessageId - Contains the ID of the message sent to the server while the client thread is waiting for the response.
LpcWaitingOnPort - Used by the LPC client thread to store the LPC client communication port on which it is waiting for a reply.
LpcReplySemaphore - Used to block the LPC client thread while the server is processing the LPC message.
LpcExitThreadCalled - Used by the LPC APIs to determine if the thread is currently in the process of exiting; if so, the APIs return STATUS_THREAD_IS_TERMINATING.
LpcReplyChain - Used to queue the thread to the list of threads waiting for replies from the server communication/connection port. The list head of this list is LPCP_PORT_OBJECT->LpcReplyChainHead.
Important global LPC connection ports
SmssWinStationApiPort - This port is used by CSRSS to manage Window Stations, i.e. multiple sessions.
ErrorLogPort - This port is used by the I/O manager and other components to send error log entries to the EventLog service.
Important LPC port fields in the Process
SecurityPort - LSA uses this port for EFS (Encrypting File System) operations and authentication.
ExceptionPort - When a thread does not catch an exception, an LPC_EXCEPTION message is sent to this port. A subsystem can register an LPC port in this field to receive second-chance exception information for processes running in the subsystem. The default action of CSRSS is to terminate the process.
DebugPort - The kernel dispatches user mode exceptions to the process's debugger over this port. Debug output sent via OutputDebugString( ) is also passed to this port as DBG_PRINTEXCEPTION_C.
Important LPC port fields in the Thread
TerminationPort - This is not a single port but rather a chain of ports registered with the process manager. The process manager notifies all registered ports when the thread terminates.
LpcWaitingOnPort - This field holds the address of the client communication port while the client thread waits for a reply to an LPC message from a server thread.
Stay tuned...in the next post we will roll our sleeves up and dig into the debug extensions for LPC and some common states where you may find these calls stuck.
I'm mostly a win32 guy, but this stuff tickles my brain, can't wait for the next post!
This article is great, and I cannot wait for the next post.
Would like to see the rest of these parts. Especially more details on the APIs (params, synopsis, etc.)
You can try to redefine LPC to be some kind of IPC if you like but you won't succeed. LPC is Local Procedure Call. Period.
[Thanks for the feedback, you're right about the definition of LPC being Local Procedure Call. Also, ALPC is Advanced and not Async. We've changed our internal article review system since this was published back in 2007.]