Lately we’ve seen a spate of issues come up on 64-bit platforms within the Developer Division around P/Invoke signatures that declare a parameter as type “byref byte” where the developer really means “byte[]” (the corresponding native parameter type being something like LPBYTE). Usually when something works on 32-bit and doesn’t work on 64-bit we quickly get a phone call or email indicating that this must be a CLR problem, and this case was no different.
I received an email which pointed me to the following P/Invoke signature:

[DllImport("kernel32.dll")]
public static extern int ReadProcessMemory(
    IntPtr hProcess,
    IntPtr lpBaseAddress,
    ref byte lpBuffer,
    IntPtr nSize,
    IntPtr lpNumberOfBytesWritten);
Looking at MSDN we can see that the C prototype for this function is:

BOOL ReadProcessMemory(
    HANDLE hProcess,
    LPCVOID lpBaseAddress,
    LPVOID lpBuffer,
    SIZE_T nSize,
    SIZE_T* lpNumberOfBytesRead);
There are a number of problems with the P/Invoke declaration (its return type should be bool, for instance, and the nSize parameter should probably be a UIntPtr instead of an IntPtr). Those aside, the real problem is that the lpBuffer parameter shouldn’t be declared as a byref byte. The intended usage was:
byte[] b = new byte[…];
ReadProcessMemory(…, …, ref b[0], …, …);
The expectation was that this would result in a pointer to the beginning of the byte array being handed to the native code to play with. However, that wasn’t happening, and ReadProcessMemory was returning a failure (something that was actually very convenient in tracking down this bug). In the end, though, this isn’t a CLR problem; it is a usage problem with the P/Invoke signature declaration. If that’s the case, you might ask: why did it work on 32-bit in the first place? Because of an “optimization” (I put it in quotes for a reason) in the x86 P/Invoke code: for a “byref byte” we just happen to pin the byte reference that is passed through the P/Invoke layer, and we pass that pinned original reference on to the native code.
This means that if you pass in a reference to the first byte (or any byte, for that matter) of an array of bytes, then we will pass a pointer to it, and the native code can party on the rest of the array just as if we had passed an interior pointer into the object (well, we did). This probably makes a lot of sense to those C++ programmers out there who have become accustomed to a reference and a pointer being the same thing, and to doing fancy pointer math on references just by casting them to pointers.
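Concretely, the pattern that happened to work on x86 looked something like this (a sketch only; hProcess and baseAddress are hypothetical placeholders, and this is exactly the pattern not to use):

```csharp
// Buggy pattern: relies on the x86 marshaler pinning the interior
// reference and passing it straight through as a raw pointer.
[DllImport("kernel32.dll")]
static extern bool ReadProcessMemory(
    IntPtr hProcess,
    IntPtr lpBaseAddress,
    ref byte lpBuffer,        // declared as a single byte by ref...
    UIntPtr nSize,
    IntPtr lpNumberOfBytesRead);

byte[] buffer = new byte[256];
// ...but a reference to element 0 is handed in, expecting the native side
// to treat it as a pointer into the whole array. On 64-bit, only a single
// byte round-trips, via a copy on the interop layer's stack.
bool ok = ReadProcessMemory(hProcess, baseAddress,
                            ref buffer[0],
                            (UIntPtr)buffer.Length, IntPtr.Zero);
```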
It turns out that in the 64-bit implementation of P/Invoke (which under the covers is radically different from the 32-bit implementation) we decided to represent a “byref byte” more accurately as a reference to a single byte: we allocate a byte on the interop layer’s stack and pass a reference to that to the native code. On the way back to managed code we copy that byte back into the GC heap, wherever the managed object identified by the incoming object reference is currently living (in this case, some byte in a byte array). This decision was also made as an “optimization,” to avoid some of the frequent pinning that the CLR does during interop (pinning can be rather hard for the GC to deal with, especially for very small objects, and generally the less of it the better).
We do this for small native types that we can move around with an instruction or two. For larger types, however (like an actual byte array, specified as a “byte[]” in a P/Invoke signature, or a “byref byte” identified by an array marshaling attribute of sorts), we still go ahead and pin the reference in the GC heap and pass the pinned reference along for the native code to party on. This is what the developer of the above code intended to happen.
The correct P/Invoke signature would be (conveniently, this can be found on http://www.pinvoke.net):

[DllImport("kernel32.dll")]
public static extern bool ReadProcessMemory(
    IntPtr hProcess,
    IntPtr lpBaseAddress,
    [Out] byte[] lpBuffer,
    UIntPtr nSize,
    IntPtr lpNumberOfBytesWritten);
Given this fixed signature we will pin the byte array and pass a pointer along to the unmanaged code, and everything will work as expected. Fortunately for the group that wrote this code, ReadProcessMemory was able to return a failure when it received what it deemed to be a bad pointer for lpBuffer; in most cases you will probably just end up seeing spectacular failures when the native code that you’re P/Invoking out to starts overwriting your application’s stack. So it is very important to remember to get your P/Invoke signatures right! It will save you some serious debugging later.
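Putting it together, a complete sketch of the corrected pattern might look like the following. It reads the current process’s own memory as a sanity check (so there is a known-good address to read from); note that I’ve taken the liberty of declaring the last parameter as an out IntPtr rather than a bare IntPtr, a variation on the signature above, and GetCurrentProcess returns a pseudo-handle that does not need to be closed:

```csharp
using System;
using System.Runtime.InteropServices;

class Demo
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool ReadProcessMemory(
        IntPtr hProcess,
        IntPtr lpBaseAddress,
        [Out] byte[] lpBuffer,     // marshaled as a pinned pointer to lpBuffer[0]
        UIntPtr nSize,
        out IntPtr lpNumberOfBytesRead);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentProcess();  // pseudo-handle; no CloseHandle needed

    static void Main()
    {
        // Pin a source array so it has a stable address, then read it
        // back through ReadProcessMemory into a separate buffer.
        byte[] source = { 1, 2, 3, 4 };
        GCHandle pin = GCHandle.Alloc(source, GCHandleType.Pinned);
        try
        {
            byte[] buffer = new byte[source.Length];
            bool ok = ReadProcessMemory(
                GetCurrentProcess(),
                pin.AddrOfPinnedObject(),
                buffer,
                (UIntPtr)buffer.Length,
                out IntPtr bytesRead);
            Console.WriteLine(ok ? "read succeeded" : "read failed");
        }
        finally
        {
            pin.Free();
        }
    }
}
```

With the byte[] declaration, the whole buffer comes back filled on both 32-bit and 64-bit, rather than just its first byte.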