• The Old New Thing

    BeginBufferedPaint: It's not just for buffered painting any more

    • 10 Comments

    I covered the BeginBufferedPaint function in my 2008 PDC presentation, but one thing I didn't mention is that the buffered paint functions are very handy even if you have no intention of painting.

    Since the buffered paint functions maintain a cache (provided that you remembered to call Buffered­Paint­Init), you can use Begin­Buffered­Paint to get a temporary bitmap even if you have no intention of actually painting to the screen. You might want a bitmap to do some off-screen composition, or for some other temporary purpose, in which case you can ask Begin­Buffered­Paint to give you a bitmap, use the bitmap for whatever you like, and then pass fUpdateTarget = FALSE when you call End­Buffered­Paint to say "Ha ha, just kidding."

    One thing to be aware of is that the bitmap provided by Begin­Buffered­Paint is not guaranteed to be exactly the size you requested; it only promises that the bitmap will be at least the size you requested. Most of the time, your code won't care (there are just pixels out there that you aren't using), but if you use the Get­Buffered­Paint­Bits function to obtain direct access to the bits, don't forget to take the stride into account.
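    In code, taking the stride into account just means indexing rows by the row width the cache actually gave you rather than the width you asked for. A minimal sketch (the helper name is invented for illustration):

    ```cpp
    #include <cstddef>

    // Index a pixel in a 32bpp buffer whose rows are cxRow pixels wide,
    // even though you only asked for (and only use) cx of them.
    size_t PixelIndex(int cxRow, int x, int y)
    {
        return static_cast<size_t>(y) * cxRow + x;
    }
    ```

    If you index by the requested width instead of the stride, every row after the first ends up shifted.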

    Consider this artificial example of a program that uses Create­DIB­Section to create a temporary 32bpp bitmap for the purpose of updating a layered window. Start with the scratch program and make these changes:

    BOOL
    OnCreate(HWND hwnd, LPCREATESTRUCT lpcs)
    {
     BOOL fRc = FALSE;
     HDC hdcWin = GetDC(hwnd);
     if (hdcWin) {
      HDC hdcMem = CreateCompatibleDC(hdcWin);
      if (hdcMem) {
       const int cx = 200;
       const int cy = 200;
       RECT rc = { 0, 0, cx, cy };
       BITMAPINFO bmi = { 0 };
       bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
       bmi.bmiHeader.biWidth = cx;
       bmi.bmiHeader.biHeight = cy;
       bmi.bmiHeader.biPlanes = 1;
       bmi.bmiHeader.biBitCount = 32;
       bmi.bmiHeader.biCompression = BI_RGB;
       RGBQUAD *prgbBits;
       HBITMAP hbm = CreateDIBSection(hdcWin, &bmi,
                 DIB_RGB_COLORS, &reinterpret_cast<void*&>(prgbBits),
                                                            NULL, 0);
       if (hbm) {
        HBITMAP hbmPrev = SelectBitmap(hdcMem, hbm);
    
        // Draw a simple picture
        FillRect(hdcMem, &rc,
                         reinterpret_cast<HBRUSH>(COLOR_INFOBK + 1));
        rc.left = cx / 4;
        rc.right -= rc.left;
        rc.top = cy / 4;
        rc.bottom -= rc.top;
        FillRect(hdcMem, &rc,
                       reinterpret_cast<HBRUSH>(COLOR_INFOTEXT + 1));
    
        // Apply the alpha channel (and premultiply)
        for (int y = 0; y < cy; y++) {
         for (int x = 0; x < cx; x++) {
          RGBQUAD *prgb = &prgbBits[y * cx + x];
      BYTE bAlpha = static_cast<BYTE>(255 * x / cx);
          prgb->rgbRed = static_cast<BYTE>(prgb->rgbRed * bAlpha / 255);
          prgb->rgbBlue = static_cast<BYTE>(prgb->rgbBlue * bAlpha / 255);
          prgb->rgbGreen = static_cast<BYTE>(prgb->rgbGreen * bAlpha / 255);
          prgb->rgbReserved = bAlpha;
         }
        }
    
        // update the layered window
        POINT ptZero = { 0, 0 };
        SIZE siz = { cx, cy };
        BLENDFUNCTION bf =  { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
        fRc = UpdateLayeredWindow(hwnd, NULL, &ptZero, &siz, hdcMem,
                                  &ptZero, 0, &bf, ULW_ALPHA);
        SelectBitmap(hdcMem, hbmPrev);
        DeleteObject(hbm);
       }
       DeleteDC(hdcMem);
      }
      ReleaseDC(hwnd, hdcWin);
     }
     return fRc;
    }
    

    Pretty standard stuff. But let's convert this to use the buffered paint functions to take advantage of the buffered paint bitmap cache.

    BOOL
    OnCreate(HWND hwnd, LPCREATESTRUCT lpcs)
    {
     BOOL fRc = FALSE;
     HDC hdcWin = GetDC(hwnd);
     if (hdcWin) {
      HDC hdcMem;
      // HDC hdcMem = CreateCompatibleDC(hdcWin);
      // if (hdcMem) {
       const int cx = 200;
       const int cy = 200;
       RECT rc = { 0, 0, cx, cy };
       // BITMAPINFO bmi = { 0 };
       // bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
       // bmi.bmiHeader.biWidth = cx;
       // bmi.bmiHeader.biHeight = cy;
       // bmi.bmiHeader.biPlanes = 1;
       // bmi.bmiHeader.biBitCount = 32;
       // bmi.bmiHeader.biCompression = BI_RGB;
       RGBQUAD *prgbBits;
       BP_PAINTPARAMS params = { sizeof(params), BPPF_NOCLIP };
       HPAINTBUFFER hpb = BeginBufferedPaint(hdcWin, &rc,
                                  BPBF_TOPDOWNDIB, &params, &hdcMem);
       if (hpb) {
        int cxRow;
        if (SUCCEEDED(GetBufferedPaintBits(hpb, &prgbBits, &cxRow))) {
       // HBITMAP hbm = CreateDIBSection(hdcWin, &bmi,
       //        DIB_RGB_COLORS, &reinterpret_cast<void*&>(prgbBits),
       //                                                   NULL, 0);
       // if (hbm) {
        // HBITMAP hbmPrev = SelectBitmap(hdcMem, hbm);
    
        // Draw a simple picture
        FillRect(hdcMem, &rc,
                         reinterpret_cast<HBRUSH>(COLOR_INFOBK + 1));
        rc.left = cx / 4;
        rc.right -= rc.left;
        rc.top = cy / 4;
        rc.bottom -= rc.top;
        FillRect(hdcMem, &rc,
                       reinterpret_cast<HBRUSH>(COLOR_INFOTEXT + 1));
    
        // Apply the alpha channel (and premultiply)
        for (int y = 0; y < cy; y++) {
         for (int x = 0; x < cx; x++) {
          RGBQUAD *prgb = &prgbBits[y * cxRow + x];
      BYTE bAlpha = static_cast<BYTE>(255 * x / cx);
          prgb->rgbRed = static_cast<BYTE>(prgb->rgbRed * bAlpha / 255);
          prgb->rgbBlue = static_cast<BYTE>(prgb->rgbBlue * bAlpha / 255);
          prgb->rgbGreen = static_cast<BYTE>(prgb->rgbGreen * bAlpha / 255);
          prgb->rgbReserved = bAlpha;
         }
        }
    
        // update the layered window
        POINT ptZero = { 0, 0 };
        SIZE siz = { cx, cy };
        BLENDFUNCTION bf =  { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
        fRc = UpdateLayeredWindow(hwnd, NULL, &ptZero, &siz, hdcMem,
                                  &ptZero, 0, &bf, ULW_ALPHA);
        // SelectBitmap(hdcMem, hbmPrev);
        // DeleteObject(hbm);
       }
       EndBufferedPaint(hpb, FALSE);
       // DeleteDC(hdcMem);
      }
      ReleaseDC(hwnd, hdcWin);
     }
     return fRc;
    }
    
    // changes to WinMain
     if (SUCCEEDED(BufferedPaintInit())) {
     // if (SUCCEEDED(CoInitialize(NULL))) {/* In case we use COM */
      hwnd = CreateWindowEx(WS_EX_LAYERED,
      // hwnd = CreateWindow(
      ...
      BufferedPaintUnInit();
      // CoUninitialize();
      ...
    

    We're using the buffered paint API not for buffered painting but just as a convenient way to get a bitmap and a DC in one shot. It saves some typing (you don't have to create the bitmap and the DC and select the bitmap in and out), and when you return the paint buffer to the cache, some other window that calls Begin­Buffered­Paint may be able to re-use that bitmap.

    There are a few tricky parts here. First, if you're going to be accessing the bits directly, you need to call Get­Buffered­Paint­Bits and use the cxRow to determine the bitmap stride. Next, when we're done, we pass FALSE to End­Buffered­Paint to say, "Yeah, um, thanks for the bitmap, but don't Bit­Blt the results back into the DC we passed to Begin­Buffered­Paint. Sorry for the confusion."

    A less obvious trick is that we used BPPF_NOCLIP to get a full bitmap. By default, Begin­Buffered­Paint returns you a bitmap which is clipped to the DC you pass as the first parameter. This is an optimization to avoid allocating memory for pixels that can't be seen anyway when End­Buffered­Paint goes to copy the bits back to the original DC. We don't want this optimization, however, since we have no intention of blitting the results back to the original DC. The clip region of the original DC is irrelevant to us because we just want a temporary bitmap for some internal calculations.

    Anyway, there you have it, an example of using Begin­Buffered­Paint to obtain a temporary bitmap. It doesn't win much in this example (since we call it only once, at window creation time), but if you have code which creates a lot of DIB sections for temporary use, you can use this trick to take advantage of the buffered paint cache and reduce the overhead of bitmap creation and deletion.

    Pre-emptive snarky comment: "How dare you show us an alternative method that isn't available on Windows 2000!"

  • The Old New Thing

    Why is my program terminating with exit code 3?

    • 20 Comments

    There is no standard for process exit codes. You can pass anything you want to Exit­Process, and that's what Get­Exit­Code­Process will give back. The kernel does no interpretation of the value. If you want code 42 to mean "Something infinitely improbable has occurred" then more power to you.

    There is a convention, however, that an exit code of zero means success (though what constitutes "success" is left to the discretion of the author of the program) and a nonzero exit code means failure (again, with details left to the discretion of the programmer). Often, higher values for the exit code indicate more severe types of failure. The command processor ERROR­LEVEL keyword was designed with these conventions in mind.
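    That last point is easy to forget: IF ERRORLEVEL n in a batch file succeeds when the exit code is greater than or equal to n, not exactly equal to n, which is why the convention reserves higher values for more severe failures. A sketch of the test (the function name is invented):

    ```cpp
    // "IF ERRORLEVEL n" in a batch file is a greater-than-or-equal
    // test, not an equality test.
    bool IfErrorLevel(int exitCode, int n)
    {
        return exitCode >= n;
    }
    ```

    This is also why batch files traditionally check error levels from highest to lowest.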

    There are cases where your process will get in such a bad state that a component will take it upon itself to terminate the process. For example, if a process cannot locate the DLLs it imports from, or one of those DLLs fails to initialize, the loader will terminate the process and use the status code as the process exit code. I believe that when a program crashes due to an unhandled exception, the exception code is used as the exit code.

    A customer was seeing their program crash with an exit code of 3 and couldn't figure out where it was coming from. They never use that exit code in their program. Eventually, the source of the magic number 3 was identified: The C runtime abort function terminates the process with exit code 3.

  • The Old New Thing

    Watching the battle between Facebook and Facebook spammers

    • 18 Comments

    I am watching the continuing battle between Facebook and Facebook spammers with detached amusement. When I see a spam link posted to a friend's Facebook wall, I like to go and figure out how they got fooled. Internet Explorer's InPrivate Browsing comes in handy here, because I can switch to InPrivate mode before visiting the site, so that the site can't actually cause any harm to my Facebook account since I'm not logged in and it doesn't know how to log me in.

    The early versions were simply Web pages that hosted an embedded YouTube video, but they placed an invisible "Like" button over the playback controls, so that any attempt to play the video resulted in a Like being posted to your wall.

    Another early version of Facebook spam pages sent you to a page with an embedded YouTube video, but they also ran script that monitored your mouse position and positioned a 1×1 pixel Like button under it. That way, no matter where you clicked, you clicked on the Like button.

    A more recent variant is one that displayed a simple math problem and asked you to enter the answer. The excuse for this is that it is to "slow down robots", but really, that answer box is a disguised Facebook comment box. You can see the people who fell for this because their Facebook wall consists of a link to the page with the comment "7".

    My favorite one is a spam page that said, "In order to see the video, copy this text and paste it into your Address bar." The text was, of course, some script that injected code into the page so it could run around sending messages to all your Facebook friends. The kicker was that the script being injected was called owned.js. (The spam was so unsophisticated, it made you copy the text yourself! Not like this one which puts the attack string on your clipboard automatically.)

    I started to think, "Who could possibly fall for this?" And then I realized that the answer is "There will always be people who will fall for this." These are the people who would fall for the honor system virus.

    Update: On May 20, I saw a new variant. This one puts up a fake Youtube [sic] "security" dialog that says, "To comply with our Anti-SPAM™ regulations for a safe internet experience we are required to verify your identity" by solving a CAPTCHA. (This makes no sense.) The words in the CAPTCHA by an amazing coincidence happen to be a comment somebody might make on a hot video. Because the alleged CAPTCHA dialog is a disguised Facebook comment box. The result is that the victim posts a comment like "so awesome" to their own wall, thereby propagating the spam.

  • The Old New Thing

    How long do taskbar notification balloons appear on the screen?

    • 27 Comments

    We saw some time ago that taskbar notification balloons don't penalize you for being away from the computer. But how long does the balloon stay up when the user is there?

    Originally, the balloon appeared for whatever amount of time the application specified in the uTimeout member of the NOTIFYICONDATA structure, subject to a system-imposed minimum of 10 seconds and maximum of 60 seconds.

    In Windows XP, some animation was added to the balloon, adding 2 seconds of fade-in and fade-out animation to the display time.

    Starting in Windows Vista, applications are no longer allowed to specify how long they want the balloon to appear; the uTimeout member is ignored. Instead, the display time is the amount of time specified by the SPI_GETMESSAGEDURATION system parameter, with 1 second devoted to fade-in and 5 seconds devoted to fade-out, with a minimum of 3 seconds of full visibility. In other words, if you set the message duration to less than 1+3+5=9 seconds, the taskbar behaves as if you had set it to 9 seconds.
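    The clamping arithmetic can be sketched as follows (the function name is invented for illustration):

    ```cpp
    #include <algorithm>

    // 1 second of fade-in + at least 3 seconds fully visible + 5 seconds
    // of fade-out means the balloon is never on screen for less than
    // 9 seconds, no matter what the message duration is set to.
    int EffectiveBalloonDuration(int messageDurationSeconds)
    {
        const int fadeIn = 1, minVisible = 3, fadeOut = 5;
        return std::max(messageDurationSeconds, fadeIn + minVisible + fadeOut);
    }
    ```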

    The default message duration is 5 seconds, so in fact most systems are in the "shortest possible time" case. If you want to extend the time for which balloon notifications appear, you can use the SystemParametersInfo function to change it:

    BOOL SetMessageDuration(DWORD seconds, UINT flags)
    {
     return SystemParametersInfo(SPI_SETMESSAGEDURATION,
                                 0, IntToPtr(seconds), flags);
    }
    

    (You typically don't need to mess with this setting, because you can rescue a balloon from fading out by moving the mouse over it.)

    Note that an application can also set the NIF_REALTIME flag, which means "If I can't display the balloon right now, then just skip it."

  • The Old New Thing

    Why does Explorer show a thumbnail for my image that's different from the image?

    • 21 Comments

    A customer (via a customer liaison) reported that Explorer sometimes showed a thumbnail for an image file that didn't exactly match the image itself.

    I have an image that consists of a collage of other images. When I switch Explorer to Extra Large Icons mode, the thumbnail is a miniature representation of the image file. But in Large Icons and Medium Icons mode, the thumbnail image shows only one of the images in the collage. I've tried deleting the thumbnail cache, but that didn't help; Explorer still shows the wrong thumbnails for the smaller icon modes. What is wrong?

    The customer provided screenshots demonstrating the problem, but the customer did not provide the image files themselves that were exhibiting the problem. I therefore was reduced to using my psychic powers.

    My psychic powers tell me that your JPG file has the single-item image as the camera-provided thumbnail. The shell will use the camera-provided thumbnail if suitable.

    The customer liaison replied,

    The customer tells me that the problem began happening after they edited the images. Attached is one of the images that's demonstrating the problem.

    Some image types (most notably TIFF and JPEG) support the EXIF format for encoding image metadata. This metadata includes information such as the model of camera used to take the picture, the date the picture was taken, and various camera settings related to the photograph. But the one that's interesting today is the image thumbnail.

    When Explorer wants to display a thumbnail for an image, it first checks whether the image comes with a precalculated thumbnail. If so, and the thumbnail is at least as large as the thumbnail Explorer wants to show, then Explorer will use the image-provided thumbnail instead of creating its own from scratch. If the thumbnail embedded in the image is wrong, then when Explorer displays the image-provided thumbnail, the result will be incorrect. Explorer has no idea that the image is lying to it.

    Note that the decision whether to use the image-provided thumbnail is not based solely on the view. (In other words, the conclusion is not "Explorer uses the image-provided thumbnail for Large Icons and Medium Icons but ignores it for Extra Large Icons.") The decision is based on both the view and the size of the image-provided thumbnail. If the image-provided thumbnail is at least the size of the view, then Explorer will use it. For example, if your view is set to 64 × 64 thumbnails, then the image-provided thumbnail will be used if it is at least 64 × 64.
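    The decision procedure, then, boils down to a size comparison. A simplified sketch (the function name is invented, and Explorer's actual logic may consider other factors):

    ```cpp
    // Use the image-provided thumbnail only if it is at least as large
    // as the thumbnail size the current view wants to display.
    bool UseEmbeddedThumbnail(int thumbCx, int thumbCy, int viewCx, int viewCy)
    {
        return thumbCx >= viewCx && thumbCy >= viewCy;
    }
    ```

    This explains the customer's symptoms: the embedded thumbnail was big enough for the smaller icon views but not for Extra Large Icons, so only the largest view fell back to rendering the real image.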

    The Wikipedia page on EXIF points out that "Photo manipulation software sometimes fails to update the embedded information after an editing operation." It appears that some major image editing software packages fail to update the EXIF thumbnail when an image is edited, which can result in inadvertent information disclosure: If the image was cropped or otherwise altered to remove information, the information may still linger in the thumbnail. This Web site has a small gallery of examples.

  • The Old New Thing

    Multithreaded UI code may be just as hard as multithreaded non-UI code, but the consequences are different

    • 22 Comments

    Commenter Tim Smith claims that the problems with multithreaded UI code are not significantly more than plain multithreaded code. While that may be true on a theoretical level, the situations are quite different in practice.

    Regardless of whether your multithreaded code does UI or not, you have to deal with race conditions, synchronization, cache coherency, priority inversion, all that multithreaded stuff.

    The difference is that multithreaded problems with non-UI code are often rare, relying on race conditions and other timing issues. As a result, you can often get away with a multithreaded bug, because it may show up in practice only rarely, if ever. (On the other hand, when it does show up, it's often impossible to diagnose.)

    If you mess up multithreaded UI code, the most common effect is a hang. The nice thing about this is that it's easier to diagnose because everything has stopped and you can try to figure out who is waiting for what. On the other hand, the problems also occur much more frequently.

    So it's true that the problems are the same, but the way they manifest themselves is very different.

  • The Old New Thing

    If undecorated names are given in the DLL export table, why does link /dump /exports show me decorated names?

    • 11 Comments

    If you run the link /dump /exports command on a DLL which exports only undecorated names, you may find that in addition to showing those undecorated names, it also shows the fully-decorated names.

    We're building a DLL and for some functions, we have chosen to suppress the names from the export table by using the NONAME keyword. When we dump the exports, we still see the names. And the functions which we did want to export by name are showing up with their decorated names even though we list them in the DEF file with undecorated names. Where is the decorated name coming from? Is it being stored in the DLL after all?

            1        00004F1D [NONAME] _Function1@4
            2        000078EF [NONAME] _Function2@12
            3        00009063 [NONAME] _Function3@8
    

    The original decorated names are not stored in the DLL. The link /dump /exports command is sneaky and looks for a matching PDB file and, if it finds one, extracts the decorated names from there.

    (How did I know this? I didn't, but I traced each file accessed by the link /dump /exports command and observed that it went looking for the PDB.)

  • The Old New Thing

    Looking at the world through kernel-colored glasses

    • 14 Comments

    During a discussion of the proper way of cancelling I/O, the question was raised as to whether it was safe to free the I/O buffer, close the event handle, and free the OVERLAPPED structure immediately after the call to CancelIo. The response from the kernel developer was telling.

    That's fine. We write back to the buffer under a try/except, so if the memory is freed, we'll just ignore it. And we take a reference to the handle, so closing it does no harm.

    These may be the right answers from a kernel-mode point of view (where the focus is on ensuring that consistency in kernel mode is not compromised), but they are horrible answers from an application point of view: Kernel mode will write back to the buffer and the OVERLAPPED when the I/O completes, thereby corrupting user-mode memory if user-mode had re-used the memory for some other purpose. And if the handle in the OVERLAPPED structure is closed, then user mode has lost its only way of determining when it's safe to continue! You had to look beyond the literal answer to see what the consequences were for application correctness.

    (You can also spot the kernel-mode point of view in the clause "if the memory is freed." The developer is talking about freed from kernel mode's point of view, meaning that it has been freed back to the operating system and is no longer committed in the process address space. But memory that is logically freed from the application's point of view may not be freed back to the kernel. It's usually just freed back into the heap's free pool.)

    The correct answer is that you have to wait for the I/O to complete before you free the buffer, close the event handle, or free the OVERLAPPED structure.

    Don't fall into this trap. The kernel developer was looking at the world through kernel-colored glasses. But you need to look at the situation from the perspective of your customers. When the kernel developer wrote "That's fine", he meant "That's fine for me." Sucks to be you, though.

    It's like programming an autopilot to land an airplane, but sending it through aerobatics that kill all the passengers. If you ask the autopilot team, they would say that they accomplished their mission: Technically, the autopilot did land the airplane.

    Here's another example of kernel-colored glasses. And another.

    Epilogue: To be fair, after I pointed out the kernel-mode bias in the response, the kernel developer admitted, "You're right, sorry. I was too focused on the kernel-mode perspective and wasn't looking at the bigger picture."

  • The Old New Thing

    Why double-null-terminated strings instead of an array of pointers to strings?

    • 16 Comments

    I mentioned this in passing in my description of the format of double-null-terminated strings, but I think it deserves calling out.

    Double-null-terminated strings may be difficult to create and modify, but they are very easy to serialize: You just write out the bytes as a blob. This property is very convenient when you have to copy around a list of strings: Transferring the strings is a simple matter of transferring the memory block as-is. No conversion is necessary. This makes it easy to do things like wrap the memory inside another container that supports only flat blobs of memory.
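    Computing the size of the blob is a single pass over the list; here is a minimal sketch (the function name is invented):

    ```cpp
    #include <cstddef>

    // Total size, in characters, of a double-null-terminated string
    // list, including the final extra terminator.
    size_t MultiSzSize(const wchar_t* p)
    {
        const wchar_t* start = p;
        while (*p) {        // stop at the empty string marking the end
            while (*p) ++p; // skip one string
            ++p;            // and its terminator
        }
        return static_cast<size_t>(p - start) + 1;  // + the final terminator
    }
    ```

    With the size in hand, serializing the whole list really is just a matter of copying that many characters.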

    As it turns out, a flat blob of memory is convenient in many ways. You can copy it around with memcpy. (This is important when capturing values across security boundaries.) You can save it to a file or into the registry as-is. It marshals very easily. It becomes possible to store it in an IData­Object. It can be freed with a single call. And in the cases where you can't allocate any memory at all (e.g., you're filling a buffer provided by the caller), it's one of the few options available. This is also why self-relative security descriptors are so popular in Windows: Unlike absolute security descriptors, self-relative security descriptors can be passed around as binary blobs, which makes them easy to marshal, especially if you need to pass one from kernel mode to user mode.

    A single memory block with an array of integers containing offsets would also work, but as the commenter noted, it's even more cumbersome than double-null-terminated strings.

    Mind you, if you don't need to marshal the list of strings (because it never crosses a security boundary and never needs to be serialized), then an array of string pointers works just fine. If you look around Win32, you'll find that most cases where double-null terminated strings exist are for the most part either inherited from 16-bit Windows or are one of the cases where marshalling is necessary.

  • The Old New Thing

    Why is hybrid sleep off by default on laptops? (and how do I turn it on?)

    • 27 Comments

    Hybrid sleep is a type of sleep state that combines sleep and hibernate. When you put the computer into a hybrid sleep state, it writes out all its RAM to the hard drive (just like a hibernate), and then goes into a low power state that keeps RAM refreshed (just like a sleep). The idea is that you can resume the computer quickly from sleep, but if there is a power failure or some other catastrophe, you can still restore the computer from hibernation.

    A hybrid sleep can be converted to a hibernation by simply turning off the power. By comparison, a normal sleep requires resuming the computer to full power in order to write out the hibernation file. Back in the Windows XP days, I would sometimes see the computer in the next room spontaneously turn itself on: I would be startled at first, but then I would see on the screen that the system was hibernating, and I understood what was going on.

    Hybrid sleep is on by default for desktop systems but off by default on laptops. Why this choice?

    First of all, desktops are at higher risk of the power outage scenario wherein a loss of power (either due to a genuine power outage or simply unplugging the computer by mistake) causes all work in progress to be lost. Desktop computers typically don't have a backup battery, so a loss of power means instant loss of sleep state. By comparison, laptop computers have a battery which can bridge across power outages.

    Furthermore, laptops have a safeguard against battery drain: When battery power gets dangerously low, the laptop can perform an emergency hibernation.

    Laptop manufacturers also requested that hybrid sleep be off by default. They didn't want the hard drive to be active for a long time while the system is suspending, because when users suspend a laptop, it's often in the form of "Close the lid, pick up the laptop from the desk, throw it into a bag, head out." Performing large quantities of disk I/O at a moment when the computer is physically being jostled around increases the risk that one of those I/O's will go bad. This pattern doesn't exist for desktops: When you suspend a desktop computer, you just leave it there and let it do its thing.

    Of course, you can override this default easily from the Control Panel. Under Power Options, select Change plan settings, then Change advanced power settings, and wander over into the Sleep section of the configuration tree.

    If you're a command line sort of person, you can use this insanely geeky command line to enable hybrid sleep when running on AC power in Balanced mode:

    powercfg -setacvalueindex 381b4222-f694-41f0-9685-ff5bb260df2e
                              238c9fa8-0aad-41ed-83f4-97be242c8f20
                              94ac6d29-73ce-41a6-809f-6363ba21b47e 1
    

    (All one line. Take a deep breath.) [Update: Or you can use powercfg -setacvalueindex SCHEME_BALANCED SUB_SLEEP HYBRIDSLEEP 1, as pointed out by Random832. I missed this because the ability to substitute aliases is not mentioned in the -setacvalueindex documentation. You have to dig into the -aliases documentation to find it.]

    Okay, what do all these insane options mean?

    -setacvalueindex sets the behavior when running on AC power. To change the behavior when running on battery, use -setdcvalueindex instead. Okay, that was easy.

    The next part is a GUID, specifically, the GUID that represents the balanced power scheme. If you want to modify the setting for a different power scheme, then substitute that scheme's GUID.

    After the scheme GUID comes the subgroup GUID. Here, we give the GUID for the Sleep subgroup.

    Next we have the GUID for the Hybrid Sleep setting.

    Finally, we have the desired new value for the setting. As you might expect, 1 enables it and 0 disables it.

    And where did these magic GUIDs come from? Run the powercfg -aliases command to see all the GUIDs. You can also run powercfg -q to view all the settings and their current values in the current power scheme.

    Bonus reading:
