December, 2010

  • The Old New Thing

    How do I delete bytes from the beginning of a file?

    • 29 Comments

    It's easy to append bytes to the end of a file: Just open it for writing, seek to the end, and start writing. It's also easy to delete bytes from the end of a file: Seek to the point where you want the file to be truncated and call SetEndOfFile. But how do you delete bytes from the beginning of a file?
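
    (For concreteness, here's a minimal Win32 sketch of those two easy cases; hFile is assumed to be a handle opened for writing, and data, dataSize, and newLength are placeholders.)

    LARGE_INTEGER pos;
    DWORD written;

    // Append: move the file pointer to the end and write the new bytes.
    pos.QuadPart = 0;
    SetFilePointerEx(hFile, pos, NULL, FILE_END);
    WriteFile(hFile, data, dataSize, &written, NULL);

    // Truncate: move the file pointer to the desired length and chop.
    pos.QuadPart = newLength;
    SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN);
    SetEndOfFile(hFile);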

    You can't, but you sort of can, even though you can't.

    The underlying abstract model for storage of file contents is in the form of a chunk of bytes, each indexed by the file offset. The reason appending bytes and truncating bytes is so easy is that doing so doesn't alter the file offsets of any other bytes in the file. If a file has ten bytes and you append one more, the offsets of the first ten bytes stay the same. On the other hand, deleting bytes from the front or middle of a file means that all the bytes that came after the deleted bytes need to "slide down" to close up the space. And there is no "slide down" file system function.

    One reason for the absence of a "slide down" function is that disk storage is typically not byte-granular. Storage on disk is done in units known as sectors, a typical sector size being 512 bytes. And the storage for a file is allocated in units of sectors, which we'll call storage chunks for lack of a better term. For example, a 5000-byte file occupies ten sectors of storage. The first 512 bytes go in sector 0, the next 512 bytes go in sector 1, and so on, until the last 392 bytes go into sector 9, with the last 120 bytes of sector 9 lying unused. (There are exceptions to this general principle, but they are not important to the discussion, so there's no point bringing them up.)

    To append ten bytes to this file, the file system can just store them after the last byte of the existing contents, leaving 110 bytes of unused space instead of 120. Similarly, to truncate those ten bytes back off, the logical file size can be set back to 5000, and the extra ten bytes are "forgotten."

    In theory, a file system could support truncating an integral number of storage chunks off the front of the file by updating its internal bookkeeping about file contents without having to move data physically around the disk. But in practice, no popular file system implements this, because, as it turns out, the demand for the feature isn't high enough to warrant the extra complexity. (Remember: Minus 100 points.)

    But what's this "you sort of can" tease? Answer: Sparse files.

    You can use an NTFS sparse file to decommit the storage for the data at the start of the file, effectively "deleting" it. What you've really done is set the bytes to logical zeroes, and if there are any whole storage chunks in that range, they can be decommitted and don't occupy any physical space on the drive. (If somebody tries to read from decommitted storage chunks, they just get zeroes.)

    For example, consider a 1MB file on a disk that uses 64KB storage chunks. If you decide to decommit the first 96KB of the file, the first storage chunk of the file will be returned to the drive's free space, and the first 32KB of the second storage chunk will be set to zero. You've effectively "deleted" the first 96KB of data off the front of the file, but the file offsets haven't changed. The byte at offset 98,304 is still at offset 98,304 and did not move to offset zero.
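
    To make the mechanism concrete, here's a rough sketch of how a program could decommit those first 96KB with the sparse-file control codes (hFile is assumed to be a read/write handle to a file on an NTFS volume; error handling is omitted, and the FSCTL definitions come from winioctl.h):

    DWORD bytesReturned;

    // Mark the file as sparse so that all-zero ranges can be decommitted.
    DeviceIoControl(hFile, FSCTL_SET_SPARSE, NULL, 0,
                    NULL, 0, &bytesReturned, NULL);

    // Set the first 96KB to logical zeroes; any whole storage chunks in
    // that range are returned to the drive's free space.
    FILE_ZERO_DATA_INFORMATION zeroRange;
    zeroRange.FileOffset.QuadPart = 0;
    zeroRange.BeyondFinalZero.QuadPart = 96 * 1024;
    DeviceIoControl(hFile, FSCTL_SET_ZERO_DATA, &zeroRange, sizeof(zeroRange),
                    NULL, 0, &bytesReturned, NULL);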

    Now, a minor addition to the file system would get you that magic "deletion from the front of the file": Associated with each file would be a 64-bit value representing the logical byte zero of the file. For example, after you decommitted the first 96KB of the file above, the logical byte zero would be 98,304, and all file offset calculations on the file would be biased by 98,304 to convert from logical offsets to physical offsets. So when you asked to see byte 10, you would actually get byte 98,314.

    So why not just do this? The minus 100 points rule applies. There are a lot of details that need to be worked out.

    For example, suppose somebody has opened the file and seeked to file position 102,400. Next, you attempt to delete 98,304 bytes from the front of the file. What happens to that other file pointer? One possibility is that the file pointer offset stays at 102,400, and now it points to the byte that used to be at offset 200,704. This can result in quite a bit of confusion, especially if that file handle was being written to: The program writing to the handle issued two consecutive write operations, and the results ended up 96KB apart! You can imagine the exciting data corruption scenarios that would result from this.

    Okay, well another possibility is that the file pointer offset moves by the number of bytes you deleted from the front of the file, so the file handle that was at 102,400 now shifts to file position 4096. That preserves the consecutive read and consecutive write patterns but it completely messes up another popular pattern:

    off_t oldPos = ftell(fp);
    fseek(fp, newPos, SEEK_SET);
    ... do stuff ...
    fseek(fp, oldPos, SEEK_SET); // restore original position
    

    If bytes are deleted from the front of the file during the do stuff portion of the code, the attempt to restore the original position will restore the wrong original position since it didn't take the deletion into account.

    And this discussion still completely ignores the issue of file locking. If a region of the file has been locked, what happens when you delete bytes from the front of the file?

    If you really like this "simulate deleting from the front of the file by decommitting bytes from the front and applying an offset to future file operations" technique, you can do it yourself. Just keep track of the magic offset and apply it to all your file operations. And I suspect the fact that you can simulate the operation yourself is a major reason why the feature doesn't exist: Time and effort are better spent adding features that applications couldn't simulate on their own.
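
    A minimal sketch of that do-it-yourself approach might look like this (BIASEDFILE and BiasedRead are made-up names, not an existing API):

    typedef struct BIASEDFILE {
        HANDLE file;           // the real file handle
        ULONGLONG logicalZero; // bytes "deleted" from the front so far
    } BIASEDFILE;

    BOOL BiasedRead(BIASEDFILE *bf, ULONGLONG logicalOffset,
                    void *buffer, DWORD size, DWORD *bytesRead)
    {
        LARGE_INTEGER pos;
        pos.QuadPart = (LONGLONG)(bf->logicalZero + logicalOffset); // apply the bias
        if (!SetFilePointerEx(bf->file, pos, NULL, FILE_BEGIN)) return FALSE;
        return ReadFile(bf->file, buffer, size, bytesRead, NULL);
    }

    "Deleting" N bytes from the front is then just decommitting them as described above and adding N to logicalZero.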

    [Raymond is currently away; this message was pre-recorded.]

  • The Old New Thing

    ZOMG! This program is using 100% CPU!1! Think of the puppies!!11!!1!1!eleven

    • 98 Comments

    For some reason, people treat a program consuming 100% CPU as if it were unrepentantly running around kicking defenseless (and cute) puppies. Calm down already. I get the impression that people view the CPU usage column in Task Manager not as a diagnostic tool but as a way of counting how many puppies a program kicks per second.

    While a program that consumes 100% CPU continuously (even when putatively idle) might legitimately be viewed as an unrepentant puppy-kicker, a program that consumes 100% CPU in pursuit of actually accomplishing something is hardly scorn-worthy; indeed it should be commended for efficiency!

    Think of it this way: Imagine if your CPU usage never exceeded 50%. You just overpaid for your computer; you're only using half of it. A task which could have been done in five minutes now takes ten. Your media player drops some frames out of your DVD playback, but that's okay, because your precious CPU meter never went all the way to the top. (Notice that the CPU meter does not turn red when CPU usage exceeds 80%. There is no "danger zone" here.)

    Consider this comment where somebody said that they want their program to use less CPU but still get the job done reasonably quickly. Why do you want it to use less CPU? The statement makes the implicit assumption that using less CPU is more important than getting work done as fast as possible.

    You have a crowd of people at the bank and only ten tellers. If you let all the people into the lobby at once, well, then all the tellers will be busy—you will have 100% teller utilization. These people seem to think it would be better to keep all the customers waiting outside the bank and only let them into the lobby five at a time in order to keep teller utilization at 50%.

    If it were done when 'tis done, then 'twere well / It were done quickly.

    Rip off the band-aid.

    Piss or get off the pot.

    Just do it.

    If you're going to go to the trouble of taking the CPU out of a low-power state, you may as well make full use of it. Otherwise, you're the person who buys a bottle of water, drinks half of it, then throws away the other half "because I'm thinking of the environment and reducing my water consumption." You're making the battery drain for double the usual length of time, halving the laptop's run time because you're trying to "conserve CPU."

    If the task you are interested in is a low priority one, then set your thread priority to below-normal so it only consumes CPU time when there are no foreground tasks demanding CPU.

    If you want your task to complete even when there are other foreground tasks active, then leave your task's priority at the normal level. Yes, this means that it will compete with other foreground tasks for CPU, but you just said that's what you want. If you want it to compete "but not too hard", you can sprinkle some Sleep(0) calls into your code to release your time slice before it naturally expires. If there are other foreground tasks, then you will let them run; if there aren't, then the Sleep will return immediately and your task will continue to run at full speed.
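
    A minimal sketch showing both knobs (MoreWorkToDo and DoNextChunk are placeholders for your own work loop; in practice you'd pick whichever option fits):

    DWORD CALLBACK BackgroundWorker(void *param)
    {
        // Option 1 (previous paragraph): below-normal priority, so this
        // thread gets the CPU only when nothing else wants it.
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);

        while (MoreWorkToDo()) {
            DoNextChunk();
            // Option 2: keep normal priority and instead yield the rest of
            // the time slice after each chunk of work.
            Sleep(0);
        }
        return 0;
    }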

    And cheerfully watch that CPU usage go all the way to 100% while your task is running. Just make sure it drops back to zero when your task is complete. You don't want to be a task which consumes 100% CPU even when there's nothing going on. That'd just be kicking puppies.

    [Raymond is currently away; this message was pre-recorded.]

    Clarification: Many people appear to be missing the point. So let's put it more simply: Suppose you have an algorithm that takes 5 CPU-seconds to complete. Should you use 100% CPU for 5 seconds or 50% CPU for 10 seconds? (Obviously, if you can refine your algorithm so it requires only 2 CPU-seconds, that's even better, but that's unrelated to the issue here.)

  • The Old New Thing

    WindowFromPoint, ChildWindowFromPoint, RealChildWindowFromPoint, when will it all end?

    • 21 Comments

    Oh wait, there's also ChildWindowFromPointEx.

    There are many ways of identifying the window that appears beneath a point. The documentation for each one describes how they work, but I figured I'd do a little compare/contrast to help you decide which one you want for your particular programming problem.

    The oldest functions are WindowFromPoint and ChildWindowFromPoint. The primary difference between them is that WindowFromPoint returns the deepest window beneath the point, whereas ChildWindowFromPoint returns the shallowest.

    What do I mean by deep and shallow?

    Suppose you have a top-level window P and a child window C. And suppose you ask one of the above functions, "What window is beneath this point?" when the point is squarely over window C. The WindowFromPoint function looks for the most heavily nested window that contains the point, which is window C. On the other hand, the ChildWindowFromPoint function looks for the least nested window that contains the point, which is window P, assuming you passed GetDesktopWindow as the starting point.

    That's the most important difference between the two functions, but there are others, primarily with how the functions treat hidden, disabled, and transparent windows. Some functions will pay attention to hidden, disabled, and/or transparent windows; others will skip them. Note that when a window is skipped, the entire window hierarchy starting from that window is skipped. For example, if you call a function that skips disabled windows, then all children of disabled windows will also be skipped (even if the children are enabled).

    Here we go in tabular form.

    Function                   Search    Hidden?    Disabled?   Transparent?¹
    WindowFromPoint            Deep      Skip       Skip        It's Complicated²
    ChildWindowFromPoint       Shallow   Include    Include     Include
    ChildWindowFromPointEx     Shallow   Optional   Optional    Optional
    RealChildWindowFromPoint   Shallow   Skip       Include     Include³

    The return values for the various ...FromPoint... functions are the same:

    • Return the handle of the found window, if a window was found.
    • Return the handle of the parent window if the point is inside the parent window but not inside any of the children. (This rule obviously does not apply to WindowFromPoint since there is no parent window passed into the function.)
    • Otherwise, return NULL.

    The entries for ChildWindowFromPointEx are marked Optional because you, the caller, get to specify whether you want them to be skipped or included based on the CWP_* flags that you pass in.

    ¹There is a lot hiding behind the word Transparent because there are multiple ways a window can be determined transparent. The ...ChildWindowFromPoint... functions define transparent as "has the WS_EX_TRANSPARENT extended window style."

    ²On the other hand, WindowFromPoint defines transparent as "returns HTTRANSPARENT in response to WM_NCHITTEST." Actually, that's still not true. If the window belongs to a thread different from the one calling WindowFromPoint, then WindowFromPoint will not send the message and will simply treat the window as opaque (i.e., not transparent).

    ³The RealChildWindowFromPoint function includes transparent windows in the search, but has a special case for group boxes: it skips over group boxes, unless the return value would have been the parent window, in which case it returns the group box after all.

    Why is RealChildWindowFromPoint so indecisive?

    The RealChildWindowFromPoint function was added as part of the changes to Windows to support accessibility. The intended audience for RealChildWindowFromPoint is accessibility tools which want to return a "reasonable" window beneath a specific point. Since group boxes usually enclose other controls, RealChildWindowFromPoint prefers to return one of the enclosed controls, but if the point belongs to the group box frame, then it'll return the group box.

    One place I see confusion over the various ...WindowFromPoint... functions is code which uses one of the functions, and then massages the result, unaware that there is already a function that returns the pre-massaged result for you. For example, I've seen code which calls WindowFromPoint followed by GetAncestor(GA_ROOT). This does a pointless down-and-up traversal of the window tree, searching for the deepest window that lies beneath the specified point, then walking back up the tree to convert it to a shallow window. This is the Rube Goldberg way of calling ChildWindowFromPointEx(GetDesktopWindow(), ...).
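
    For illustration, the two versions look something like this sketch (choose the CWP_* flags to match whatever hidden/disabled/transparent behavior you actually want from the table above):

    POINT pt;
    GetCursorPos(&pt);

    // The Rube Goldberg way: deep search, then climb back up to the top.
    HWND hwndRoundabout = GetAncestor(WindowFromPoint(pt), GA_ROOT);

    // The direct way: a shallow search starting at the desktop window.
    HWND hwndDirect = ChildWindowFromPointEx(GetDesktopWindow(), pt,
                                             CWP_SKIPINVISIBLE | CWP_SKIPDISABLED);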

    Next time, a look at the mysterious RealGetWindowClass function. What makes this function more real?

  • The Old New Thing

    There is no interface for preventing your notification icon from being hidden

    • 38 Comments

    Yes, it's another installment of I bet somebody got a really nice bonus for that feature. A customer had this question for the Windows 7 team:

    Our program creates a notification icon, and we found that on Windows 7 it is hidden. It appears properly on all previous versions of Windows. What is the API to make our icon visible?

    First of all, I'd like to congratulate you on writing the most awesome program in the history of the universe.

    Unfortunately, Windows 7 was not prepared for your awesomeness, because there is no way to prevent your notification icon from being hidden. That's because if there were, then every other program (the ones that aren't as awesome as you) would use it, thereby causing your awesome icon to be lost among all the non-awesome ones.

    I'm sorry. You will just have to find some other medium for self-actualization.

  • The Old New Thing

    What appears superficially to be a line is actually just a one-dimensional mob

    • 35 Comments

    In China, queueing is honored more in the breach than in the observance. If you see a line for something, you must understand that what you are seeing is not really a line. It is a one-dimensional mob. You must be prepared to defend your position in line fiercely, because any sign of weakness will be pounced upon, and the next thing you know, five people just cut in front of you.

    I first became aware of this characteristic of "Chinese queueing theory" while still at the airport. When the gate agents announced that the flight to Beijing had begun boarding, a one-dimensional mob quickly formed, and I naïvely joined the end. It wasn't long before my lack of attentiveness to the minuscule open space in front of me resulted in another person cutting in front. At that point, I realized that the Chinese implementation of queueing theory was already in effect even before we left the United States.

    As another example: After the plane pulls up to the gate after landing, the aisles quickly fill with people anxious to get off the plane. In the United States, you can rely upon the kindness of strangers to let you into the aisle so you can fetch your bags and join the queue. But in China, you must force your way into the aisle. Nobody is going to let you in.

    Colleagues of mine who have spent time in both China and the United States tell me that it's an adjustment they have to make whenever they travel between the two countries. For example, in the United States, it is understood that when you are waiting in line for the ATM, you allow the person at the ATM a few feet of "privacy space." On the other hand, in China, you cannot leave such an allowance, because that is a sign of weakness in the one-dimensional mob. You have to stand right behind the person to protect your place in line.

    Bonus airport observation: How ironic it is that your last meal in your home country often comes from a crappy airport cafeteria.

  • The Old New Thing

    2010 year-end link clearance

    • 30 Comments

    Another round of the semi-annual link clearance.

    And, as always, the obligatory plug for my column in TechNet Magazine:

    • Beware the Balloon.
    • Hiding in Plain Sight.
    • History—the Long Way Through. In their zeal to make this article meet its length limit, the editors cut what I consider to be the most important part of the article! Here's the penultimate paragraph in its full unedited version, with the important part underlined.
      But wait, there's still more. What if you want to access the real 64-bit system directory from a 32-bit process? File system redirection will take your attempt to access the C:\Windows\System32 directory and redirect it to the C:\Windows\SysWOW64 directory. Programmatically, you can use functions with unwieldy names like Wow64DisableWow64FsRedirection, but those disable redirection for all operations until re-enabled, which causes trouble if you're doing anything more complicated than opening a single file, because a complex operation may result in multiple files being accessed and possibly even worker threads being created. Instead of using a gross switch like disabling file system redirection, you can use the special C:\Windows\SysNative virtual directory. When a 32-bit process tries to access the C:\Windows\SysNative directory, the operations are redirected to the real C:\Windows\System32 directory. A local solution to a local problem.
    • Leftovers from Windows 3.0.
    • The Story of Restore.
    • The Tumultuous History of 'Up One Level'. The editors messed up the diagram in this article. The "1" is supposed to be an "open folder" icon, but due to the same error that results in that mysterious J, the Wingdings glyph turned into a plain "1". Here's what the diagram was supposed to look like. (Of course, if your browser is one that believes that Wingdings doesn't have a "1" glyph, then you'll just see a "1".)

      So for those of you looking for your Up One Level button, it's right there on the Address Bar. I've drawn a box around it so it's easier to see.

      [Diagram: the Address Bar showing Computer, OS (C:), Windows, Web, Wallpaper, with a box drawn around the Up One Level button.]

  • The Old New Thing

    What makes RealGetWindowClass so much more real than GetClassName?

    • 7 Comments

    There's GetClassName and then there's RealGetWindowClass. What makes RealGetWindowClass more real?

    Recall from last time that the Real... functions were added to support Windows accessibility. The goal with RealGetWindowClass is to help accessibility tools identify what kind of window it is working with, even if the application did a little disguising in the form of superclassing.

    If you ask RealGetWindowClass for the class name of a window, it digs through all the superclassing and returns the name of the base class (if the base class is one of the standard window manager classes). For example, if your application superclassed the button class, a call to GetClassName would return AwesomeButton, but a call to RealGetWindowClass would return button. Returning the underlying window class allows accessibility tools to know that the user is interacting with some type of button control (albeit a customized one), so that it can adjust the interaction to something appropriate for buttons. Without RealGetWindowClass, the accessibility tool would just see AwesomeButton, and it would probably shrug and say, "I have no idea what an AwesomeButton is."
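
    Here's a small sketch of the difference (hwnd is assumed to be a window of the hypothetical AwesomeButton class above):

    WCHAR className[256], realClass[256];

    GetClassNameW(hwnd, className, ARRAYSIZE(className));
    // className: "AwesomeButton" - whatever name the application registered.

    RealGetWindowClassW(hwnd, realClass, ARRAYSIZE(realClass));
    // realClass: "Button" - the standard class hiding under the superclass.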

    (I guess you could have the accessibility tool do a strstr for "button", but then it would be faked out by classes like ButtonBar or applications which superclass a button but call it something completely different like AwesomeRadio.)

    If you read the winuser.h header file, you can see a comment next to the Real­Get­Window­Class function:

    /*
     * This gets the name of the window TYPE, not class.  This allows us to
     * recognize ThunderButton32 et al.
     */
    

    What is ThunderButton32?

    Thunder was the code name for Visual Basic 1.0. Visual Basic superclassed all the standard Windows controls and called its superclassed version ThunderWhatever.

  • The Old New Thing

    Psychic debugging: When I copy a file to the clipboard and then paste it, I get an old version of the file

    • 30 Comments

    A customer reported the following strange problem:

    I tried to copy some text files from my computer to another computer on the network. After the copy completed, I looked at the network directory and found that while it did contain files with the same names as the ones I copied, they have completely wrong timestamps. Curious, I opened up the files and noticed that they don't even match the files I copied! Instead, they have yesterday's version of the files, not incorporating the changes that I made today. I still have both the source and destination folders open on my screen and can confirm that the files I copied really are the ones that I modified and not files from some backup directory.

    I tried copying them again, but still an outdated version of the files gets copied. Curiously, the problem does not occur if I use drag/drop to copy the files. It happens only if I use Ctrl+C and Ctrl+V. Any ideas?

    This was indeed quite puzzling. One possibility was that the customer was mistakenly copying out of a Previous Versions folder. Before we could develop some additional theories, the customer provided additional information.

    I've narrowed down the problem. I've found that this has something to do with a clipboard tool I've installed. Without the tool running, everything is fine. How is it that with the tool running, Explorer is copying files through some sort of time machine? Those old versions of the files no longer exist on my computer; where is Explorer getting them from?

    Other people started investigating additional avenues, taking I/O trace logs, that sort of thing. Curiously, the I/O trace shows that while Explorer opened both the source and destination files and issued plenty of WriteFile calls to the destination, it never issued a ReadFile request against the source. An investigation of Previous Versions shows that there are no previous versions of the file recorded in the file system. It's as if the contents were being created from nothing.

    While the others were off doing their investigations, my head shuddered and I was sent into a trance state. A hollow voice emanated from my throat as my psychic powers manifested themselves. Shortly thereafter, my eyes closed and I snapped back to reality, at which point I frantically typed up the following note while I still remembered what had happened:

    My psychic powers tell me that this clipboard program is "virtualizing" the clipboard contents (replacing whatever is on the clipboard with its own data) and then trying (and failing) to regurgitate the original contents when the Paste operation asks for the file on the clipboard.

    A closer investigation of the clipboard enhancement utility showed that one of its features was the ability to record old clipboard contents and replay them (similar to the Microsoft Office Clipboard). Hidden inside the I/O operations was a query for the last-access time.

    And that's when things fell into place.

    Starting in Windows Vista, last access time is no longer updated by default. The program apparently saw that the file was never accessed and assumed that that meant that it also had never been modified, so it regenerated the file contents from its internal cache. (The quick fix for the program would be to switch to checking the last modified time instead of the last access time.)
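
    In other words, the quick fix amounts to something like this sketch (path and cachedWriteTime stand in for the utility's own bookkeeping):

    WIN32_FILE_ATTRIBUTE_DATA info;
    if (GetFileAttributesExW(path, GetFileExInfoStandard, &info) &&
        CompareFileTime(&info.ftLastWriteTime, &cachedWriteTime) != 0)
    {
        // The file has changed since we cached it; reread it instead of
        // regurgitating the cached copy.
    }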

    Upon learning that my psychic powers once again were correct, I realized that my prize for being correct was actually a penalty: Now even more people will ask me to help debug their mysterious problems.

  • The Old New Thing

    It rather involved being on the other side of this airtight hatchway: Invalid parameters from one security level crashing code at the same security level

    • 40 Comments

    In the category of dubious security vulnerability, I submit the following (paraphrased) report:

    I have discovered that if you call the XYZ function (whose first parameter is supposed to be a pointer to an IUnknown), and instead of passing a valid COM object pointer, you pass a pointer to a random hunk of data, you can trigger an access violation in the XYZ function which is exploitable by putting specially-crafted data in that memory blob. An attacker can exploit the XYZ function for remote execution and compromise the system, provided an application uses the XYZ function and passes a pointer to untrusted data as the first parameter instead of a valid IUnknown pointer. Although we have not found an application which uses the XYZ function in this way, the function nevertheless contains the potential for exploit, and the bug should be fixed as soon as possible.

    The person included a sample program which went something like this (except more complicated):

    // We can control the behavior by tweaking the value
    // of the Exploit array.
    unsigned char Exploit[] = "\x01\x02\x03...";
    
    void main()
    {
       XYZ((IUnknown*)Exploit);
    }
    

    Well, yeah, but you're already on the other side of the airtight hatchway. Instead of building up a complicated blob of memory with exactly the right format, just write your own bad IUnknown:

    void Pwnz0r()
    {
      ... whatever you want ...
    }
    
    class Exploit : public IUnknown
    {
    public:
      STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
      { Pwnz0r(); return E_NOINTERFACE; }
      STDMETHODIMP_(ULONG) AddRef() { Pwnz0r(); return 2; }
      STDMETHODIMP_(ULONG) Release() { Pwnz0r(); return 1; }
    };
    
    void main()
    {
       Exploit exploit;
       XYZ(&exploit);
    }
    

    Wow, this new "exploit" is even portable to other architectures!

    Actually, now that you're on the other side of the airtight hatchway, you may as well take XYZ out of the picture since it's just slowing you down:

    void main()
    {
       Pwnz0r();
    }
    

    You're already running code. It's not surprising that you can run code.

    There's nothing subtle going on here. There is no elevation of privilege because the rogue activity happens in user-mode code, based on rogue code provided by an executable with trusted code execution privileges, at the same security level as the original executable.

    The people reporting the alleged vulnerability do say that they haven't yet found any program that calls the XYZ function with untrusted data, but even if they did, that would be a data handling bug in the application itself: Data crossed a trust boundary without proper validation. It's like saying "There is a security vulnerability in the DeleteFile function because it is possible for an application to pass an untrusted file name and thereby result in an attacker deleting any file of his choosing." Even if such a vulnerability existed, the flaw is in the application for not validating its input, not in DeleteFile for, um, deleting the file it was told to delete.

    The sad thing is that it took the security team five days to resolve this issue, because even though it looks like a slam dunk, the issue resolution process must be followed, just to be sure. Who knows, maybe there really is a bug in the XYZ function's use of the first parameter that would result in elevation of privilege. All supported versions of Windows need to be examined for the slim possibility that there's something behind this confused vulnerability report.

    But there isn't. It's just another dubious security vulnerability report.

    Exercise: Apply what you learned to this security vulnerability report. This is also paraphrased from an actual security report:

    There is a serious denial-of-service vulnerability in the function XYZ. This function takes a pointer to a buffer and a length. If the function is passed malformed parameters, it may encounter an access violation when it tries to read from an invalid buffer. Any application which calls this function with bad parameters will crash. Here is a sample program that illustrates the vulnerability:

    int main(int argc, char **argv)
    {
     // crash inside XYZ attempting to read past end of buffer
     XYZ("x", 9999999);
     return 0;
    }
    

    Credit for discovering this vulnerability goes to ABC Security Research Company. Copyright © 20xx ABC Security Research Company. All Rights Reserved.

  • The Old New Thing

    The OVERLAPPED associated with asynchronous I/O is passed by address, and you can take advantage of that

    • 26 Comments

    When you issue asynchronous I/O, the completion function or the I/O completion port receives, among other things, a pointer to the OVERLAPPED structure that the I/O was originally issued against. And that is your key to golden riches.

    If you need to associate information with the I/O operation, there's no obvious place to put it, so some people end up doing things like maintaining a master table which records all outstanding overlapped I/O as well as the additional information associated with that I/O. When each I/O completes, they look up the I/O in the master table to locate that additional information.

    But it's easier than that.

    Since the OVERLAPPED structure is passed by address, you can store your additional information alongside the OVERLAPPED structure:

    // in C
    struct OVERLAPPEDEX {
     OVERLAPPED o;
     CClient *AssociatedClient;
     CLIENTSTATE ClientState;
    };
    
    // or in C++
    struct OVERLAPPEDEX : OVERLAPPED {
     CClient *AssociatedClient;
     CLIENTSTATE ClientState;
    };
    

    When the I/O completes, you can use the CONTAINING_RECORD macro or just static_cast the LPOVERLAPPED to OVERLAPPEDEX* and bingo, there's your extra information right there. Of course, you have to know that the I/O that completed is one that was issued against an OVERLAPPEDEX structure instead of a plain OVERLAPPED structure, but there are ways of keeping track of that. If you're using a completion function, then only use an OVERLAPPEDEX-aware completion function when the OVERLAPPED structure is part of an OVERLAPPEDEX structure. If you're using an I/O completion port, then you can use the completion key or the OVERLAPPED.hEvent to distinguish OVERLAPPEDEX asynchronous I/O from boring OVERLAPPED I/O.
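
    For example, a completion function that knows it is only ever used for I/O issued against an OVERLAPPEDEX might look like this sketch (OnIoComplete is a hypothetical method on CClient):

    void CALLBACK IoCompleted(DWORD errorCode, DWORD bytesTransferred,
                              LPOVERLAPPED lpOverlapped)
    {
        // C++ version: OVERLAPPEDEX derives from OVERLAPPED.
        OVERLAPPEDEX *ox = static_cast<OVERLAPPEDEX*>(lpOverlapped);
        // C version:
        // OVERLAPPEDEX *ox = CONTAINING_RECORD(lpOverlapped, OVERLAPPEDEX, o);

        ox->AssociatedClient->OnIoComplete(ox->ClientState,
                                           errorCode, bytesTransferred);
    }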
