• The Old New Thing

    Wow, they really crammed a lot into those 410 transistors

    • 22 Comments

    A colleague of mine pointed out that in yesterday's Seattle Times, there was an article about Moore's Law. To illustrate the progress of technology, they included some highlights, including the following piece of trivia:

    The Core 2 Duo processor with 410 transistors made its debut in 2002.

    You can see the photo and caption in the online version of the article if you go to the slide show and look at photo number three.

    This is an impressive feat. Intel managed to cram a Core 2 Duo into only an eighth as many transistors as the 6502.

    On the other hand, it does help to explain why the chip has so few registers. There weren't any transistors left!

  • The Old New Thing

    How do I extract the path to Control Panel from this shortcut so I can launch it?

    • 13 Comments

    A customer explained that they had a program that used the IShell­Link::Get­Path method to extract the program that is the target of a shortcut. They found that this didn't work for certain shortcuts, namely, shortcuts whose targets are not physical file paths.

    The one that they were specifically having trouble with was the Control Panel shortcut. For example, if you open the classic Control Panel, then drag any of the Control Panel items to the desktop, this will create a shortcut to that Control Panel item. If you view the properties on that shortcut, the Target will be grayed out instead of showing a path.

    "We want to get the target path of the shortcut so that we can launch the application. How can we get the target path from IShell­Link::Get­Path? Is there a special Windows API to get the path?"

    They can't get the target path because these are shortcuts to virtual objects. There is no target path to begin with.

    But if you look past the question to their problem, you can see that they don't need to know the path in the first place. All they want to do is launch the target application. The way to do this is simply to pass the shortcut to the Shell­Execute function. You can take this simple program as inspiration. Pass "open" as the verb and the full path to the shortcut as the file.
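
    For example, here is a minimal sketch of that approach (the shortcut path below is made up for illustration, and error checking is omitted):

    #include <windows.h>
    #include <shellapi.h>

    int main()
    {
        // Hand the .lnk file itself to ShellExecute with the "open" verb.
        ShellExecuteW(nullptr, L"open",
                      L"C:\\Users\\Public\\Desktop\\Display.lnk",   // hypothetical shortcut
                      nullptr, nullptr, SW_SHOWNORMAL);
        return 0;
    }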

    As a bonus, your program will also respect the other settings in the shortcut, like the Start In folder, the shortcut key, the preferred window state (normal, maximized, etc.), and the custom application user model ID.

    And to answer the question (even though it isn't needed to solve the problem): Use the IShell­Link::Get­ID­List method to obtain the shortcut target regardless of whether it is a physical file or virtual namespace item.
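
    If you really do want the target, a rough sketch of that approach (shortcut path made up, error checking omitted): load the .lnk through IPersist­File, ask IShell­Link for the ID list, and, if you like, launch it with Shell­Execute­Ex and SEE_MASK_IDLIST.

    #include <windows.h>
    #include <shlobj.h>
    #include <shellapi.h>

    int main()
    {
        CoInitialize(nullptr);

        IShellLinkW *link = nullptr;
        CoCreateInstance(CLSID_ShellLink, nullptr, CLSCTX_INPROC_SERVER,
                         IID_PPV_ARGS(&link));

        IPersistFile *file = nullptr;
        link->QueryInterface(IID_PPV_ARGS(&file));
        file->Load(L"C:\\Users\\Public\\Desktop\\Display.lnk", STGM_READ);   // hypothetical

        PIDLIST_ABSOLUTE pidl = nullptr;
        if (SUCCEEDED(link->GetIDList(&pidl)))
        {
            // The ID list identifies the target even when there is no file system path.
            SHELLEXECUTEINFOW sei = { sizeof(sei) };
            sei.fMask = SEE_MASK_IDLIST;
            sei.lpIDList = pidl;
            sei.nShow = SW_SHOWNORMAL;
            ShellExecuteExW(&sei);
            CoTaskMemFree(pidl);
        }

        file->Release();
        link->Release();
        CoUninitialize();
        return 0;
    }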

  • The Old New Thing

    Why can't I have variadic COM methods?

    • 8 Comments

    COM methods cannot be variadic. Why not?

    Answer: Because the marshaler doesn't know when to stop.

    Suppose variadic COM methods were possible. And then you wrote this code:

    interface IVariadic
    {
     HRESULT Mystery([in] int code, ...);
    };
    
    IVariadic *variadic = something;
    uint32_t ipaddr;
    HRESULT hr = variadic->Mystery(9, 192, 168, 1, 1, &ipaddr);
    

    How would COM know how to marshal this function call? In other words, suppose that variadic is a pointer to a proxy that refers to an object in another process. The COM marshaler needs to take all the parameters to IVariadic::Mystery, package them up, send them to the other process, then unpack the parameters, and pass them to the implementation. And then when the implementation returns, it needs to take the return value and any output parameters, package them up, send them back to the originating process, where they are unpacked and applied to the original parameters.

    Consider, for example,

    interface IDyadic
    {
     HRESULT Enigma([in] int a, [out] int *b);
    };
    
    IDyadic *dyadic = something;
    int b;
    HRESULT hr = dyadic->Enigma(1, &b);
    

    If dyadic refers to an object in another process, the marshaler does this:

    • Allocate a block of memory containing the following information:
      • Information to identify the dyadic object in the other process,
      • the integer 1.
    • Transmit that block of memory to the other process.

    The other process receives the block of memory and does the following:

    • Use the information in the memory block to identify the dyadic object.
    • Extract the parameter 1 from the memory block.
    • Allocate a local integer variable, call it x.
    • Call dyadic->Enigma(1, &x). Let's say that the function stores 42 into x, and it returns E_PENDING.
    • Allocate a block of memory containing the following information:
      • The value E_PENDING (the HRESULT returned by dyadic->Enigma),
      • The integer 42 (the value that dyadic->Enigma stored in the local variable x).
    • Transmit that block of memory to the originating process.

    The originating process receives the block of memory and does the following:

    • Extracts the HRESULT E_PENDING.
    • Extracts the value 42.
    • Stores the value 42 into b.
    • Returns the value E_PENDING to the caller.

    Note that in order for the marshaler to do its job, it needs to know every parameter to the method, whether each parameter is an input parameter (which is sent from the originating process to the remote process) or an output parameter (which is sent from the remote process back to the originating process), and how to send that parameter. In our case, the parameter is just an integer, so sending it is just copying the bits, but in the more general case, the parameter could be a more complicated data structure.

    Now let's look at that variadic method again. How is the marshaler supposed to know what to do with the ...? It doesn't know how many parameters it needs to transfer. It doesn't know what types those parameters are. It doesn't know which ones are input parameters and which ones are output parameters.

    In order to know that, it would have to reverse-engineer the implementation of the IVariadic::Mystery function and figure out that the first parameter, the number 9, is a code that means that the method takes four 8-bit integers as input and outputs a 32-bit integer.

    This is a rather tall order for the client side of the marshaler, since it has to do its work without access to the other process. It would have to use its psychic powers to figure out how to package up the parameters, as well as how to unpack them afterward.

    Therefore, COM says, "Sorry, you can't do that."

    But what you can do is encode the parameters in a form that the marshaler understands. For example, you might use a counted array of VARIANTs or a SAFEARRAY. The COM folks already did the work to teach the marshaler how to, for example, decode the vt member of the VARIANT and understand that, "Oh, if the value is VT_I4, then the VARIANT contains a 32-bit signed integer."
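
    To make that concrete, here is a rough sketch of how the variadic call from earlier might be recast. The interface name, the packing code, and the helper pointer are invented for illustration; the point is only that SAFEARRAY(VARIANT) is something the standard marshaler already understands.

    interface INotVariadic
    {
     HRESULT Mystery([in] int code, [in] SAFEARRAY(VARIANT) args,
                     [out] unsigned long *ipaddr);
    };

    INotVariadic *notVariadic = something;
    LONG values[] = { 192, 168, 1, 1 };

    // Pack the formerly variadic arguments into a SAFEARRAY of VARIANTs.
    SAFEARRAY *args = SafeArrayCreateVector(VT_VARIANT, 0, 4);
    for (LONG i = 0; i < 4; i++)
    {
        VARIANT v;
        VariantInit(&v);
        v.vt = VT_I4;
        v.lVal = values[i];
        SafeArrayPutElement(args, &i, &v);   // copies the VARIANT into the array
    }

    unsigned long ipaddr;
    HRESULT hr = notVariadic->Mystery(9, args, &ipaddr);
    SafeArrayDestroy(args);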

    Bonus chatter: But wait, there is a MIDL attribute called [vararg]. You said that COM doesn't support variadic methods, but there is a MIDL keyword that says variadic right on the tin!

    Ah, but that [vararg] attribute is just sleight of hand. When you say [vararg], what you're saying is, "The last parameter of this method is a SAFEARRAY of VARIANTs. A scripting language can expose this method to scripts as variadic, but what it actually does is take all the variadic parameters, store them into a SAFEARRAY, and then pass the SAFEARRAY."

    In other words, it indicates that the last parameter of the method acts like the C# params keyword.
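
    For reference, a [vararg] declaration along those lines might look like this (the method name and dispatch id are made up; what matters is that the final parameter is declared as a SAFEARRAY of VARIANTs):

    [id(1), vararg]
    HRESULT Mystery([in] SAFEARRAY(VARIANT) args);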

  • The Old New Thing

    How did the scopes for the CryptProtectMemory function end up in a strange order?

    • 4 Comments

    A few weeks ago, I left an exercise: Propose a theory as to why the names and values of the scopes for the Crypt­Protect­Memory function are the way they are.

    I didn't know the answer when I posed the exercise, but I went back and dug into it.

    The Crypt­Protect­Memory function started out as an internal function back in Windows 2000, and when originally introduced, there were only two scopes: Within a process and cross-process. The Flags parameter therefore defined only a single bit, leaving the other bits reserved (must be zero). If the bottom bit was clear, then the memory was protected within a process; if the bottom bit was set, then the memory was protected across processes.

    Later, the team realized that they needed to add a third scope, the one that corresponds to CRYPT­PROTECT­MEMORY_SAME_LOGON. They didn't want to make a breaking change for existing callers, but they saw that they could retarget what used to be a Flags parameter as an Options parameter, and they added the new scope as a third option.

    The numeric values remained unchanged, which meant that the new function was backward-compatible with existing callers.
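
    Concretely, the scope values as they ended up in wincrypt.h keep that history visible, and a minimal call looks something like this (error handling omitted; the buffer contents are a placeholder):

    #include <windows.h>
    #include <wincrypt.h>
    #pragma comment(lib, "crypt32.lib")

    // Scope values from wincrypt.h:
    //   CRYPTPROTECTMEMORY_SAME_PROCESS  = 0x00  (original "bottom bit clear")
    //   CRYPTPROTECTMEMORY_CROSS_PROCESS = 0x01  (original "bottom bit set")
    //   CRYPTPROTECTMEMORY_SAME_LOGON    = 0x02  (the scope added later)

    int main()
    {
        // The buffer size must be a multiple of CRYPTPROTECTMEMORY_BLOCK_SIZE.
        BYTE secret[CRYPTPROTECTMEMORY_BLOCK_SIZE] = {};   // placeholder for sensitive data
        CryptProtectMemory(secret, sizeof(secret), CRYPTPROTECTMEMORY_SAME_PROCESS);
        // ... use the protected buffer, then decrypt it in place ...
        CryptUnprotectMemory(secret, sizeof(secret), CRYPTPROTECTMEMORY_SAME_PROCESS);
        return 0;
    }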

    Bonus chatter: Commenter sense is correct that SAME_LOGON can be used by a service while impersonating the client. However, it is not the case that the scope can be larger when impersonating a remote user: the memory block returned by the Crypt­Protect­Memory function can be decrypted only on the same machine that encrypted it, and only as long as the machine has not been rebooted.

  • The Old New Thing

    It rather involved being on the other side of this airtight hatchway: Invalid parameters from one security level crashing code at the same security level (yet again)

    • 23 Comments

    It's the bogus vulnerability that keeps on giving. This time a security researcher found a horrible security flaw in Sys­Alloc­String­Len:

    The Sys­Alloc­String­Len function is vulnerable to a denial-of-service attack. [Long description of reverse-engineering deleted.]

    The Sys­Alloc­String­Len function does not check the length parameter properly. If the provided length is larger than the actual length of the buffer, it may encounter an access violation when reading beyond the end of the buffer. Proof of concept:

    SysAllocStringLen(L"Example", 0xFFFFFF);
    

    Credit for this vulnerability should be given to XYZ Security Labs. Copyright © XYZ Security Labs. All rights reserved.

    As with other issues of this type, there is no elevation. The attack code and the code that crashes are on the same side of the airtight hatchway. If your goal is to make the process crash, then instead of passing invalid parameters to the Sys­Alloc­String­Len function, you can launch the denial-of-service attack much more easily:

    #include <windows.h>

    int __cdecl main(int, char**)
    {
        ExitProcess(0);
    }
    

    Congratulations, you just launched a denial-of-service attack against yourself.

    In order to trigger an access violation in the Sys­Alloc­String­Len function, you must already have had enough privilege to run code, which means that you already have enough privilege to terminate the application without needing the Sys­Alloc­String­Len function.

    Once again, we have a case of MS07-052: Code execution results in code execution.

    Bonus bogus vulnerability report:

    The Draw­Text function is vulnerable to a denial-of-service attack because it does not validate that the lpchText parameter is a valid pointer. If you pass NULL as the second parameter, the function crashes. We have found many functions in the system which are vulnerable to the same issue.

    ¹ Now, of course, if there were some way you could externally induce a program into passing invalid parameters to the Sys­Alloc­String­Len function, then you'd be onto something. But even then, the vulnerability would be in the program that is passing the invalid parameters, not in the Sys­Alloc­String­Len function itself.

  • The Old New Thing

    What was the starting point for the Panther Win32 kernel?

    • 21 Comments

    When I presented a list of cat-related code names from Windows 95, commenter dave wanted to know whether the Panther kernel was derived from the 32-bit DOS kernel or the Windows/386 kernel.

    Neither.

    Here's the table again, with some more columns of information:

    Component          Code Name   Based on                     Fate
    16-bit DOS kernel  Jaguar      MS-DOS 5                     Morphed into Windows 95 boot loader / compatibility layer
    32-bit DOS kernel  Cougar      Win386 kernel                Morphed into VMM32
    Win32 kernel       Panther     Windows NT kernel            Cancelled
    User interface     Stimpy      Windows 3.1 user interface   Became the Windows 95 user interface

    The original idea for the Jaguar and Cougar projects was to offer a 16-bit MS-DOS environment that could be "kicked up a notch" to a 32-bit protected-mode MS-DOS environment, with virtual memory and multiple virtual machines. They used the MS-DOS 5 and Win386 kernels as starting points. (Why wasn't Jaguar based on MS-DOS 6.0? For the same reason NASA didn't use the Space Shuttle to rescue the Apollo 13 astronauts.) This project as originally envisioned was cancelled, but the work was not lost. The projects took on new life as the Windows 95 boot loader / compatibility layer and as the Windows 95 virtual machine manager, respectively.

    The idea for the Panther project was to start with the existing Windows NT kernel and strip it down to run in 4MB of RAM. This project did not pan out, and it was cancelled outright. It was replaced with a Win32 kernel written from scratch with the 4MB limit in mind.

    The Stimpy project survived intact and became the Windows 95 user interface.

    I doubt the code name was the reason, but it's interesting that the ferocious cats did not carry out their original missions, but the dim-witted cat did.

  • The Old New Thing

    How to find the IP address of a hacker, according to CSI: Cyber

    • 50 Comments

    The episode of the television documentary CSI: Cyber which aired on CBS last Wednesday demonstrated an elite trick to obtaining a hacker's IP address: Extract it from the email header.

    Here's a screen shot from time code 14:35 that demonstrates the technique.

    <meta id="viewport" content="" name="viewport"></m
    <link href="y/images/favicon.ico" rel="shortcut ic
    <link href="y/styles.css?s=1382384360" type="text/
    <link href="y/mail.css?s=1382384360" type="text/cs
    <hidden: ip: 951.27.9.840 > < echo;off;>
    <!--if lte IE 8><link rel="stylesheet" type="text/
    <!--if lte IE 7><link rel="stylesheet" type="text/
    <link href="plugins/jqueryui/themes/larry/jquery-u
    <link href="plugins/jqueryui/themes/larry/ui.js?s=

    This technique is so awesome I had to share it.

  • The Old New Thing

    Why are there both TMP and TEMP environment variables, and which one is right?

    • 49 Comments

    If you snoop around your environment variables, you may notice that there are two variables that propose to specify the location of temporary files. There is one called TMP and another called TEMP. Why two? And if they disagree, then who's right?

    Rewind to 1973. The operating system common on microcomputers was CP/M. The CP/M operating system had no environment variables. That sounds like a strange place to start a discussion of environment variables, but it's actually important. Since it had no environment variables, there was consequently neither a TMP nor a TEMP environment variable. If you wanted to configure a program to specify where to put its temporary files, you needed to do some sort of program-specific configuration, like patching a byte in the executable to indicate the drive letter where temporary files should be stored.

    (My recollection is that most CP/M programs were configured via patching. At least that's how I configured them. I remember my WordStar manual coming with details about which bytes to patch to do what. There were also a few dozen bytes of patch space set aside for you to write your own subroutines, in case you needed to add custom support for your printer. I did this to add an "Is printer ready to accept another character?" function, which allowed for smoother background printing.)

    Move forward to 1981. The 8086 processor and the MS-DOS operating system arrived on the scene. The design of both the 8086 processor and the MS-DOS operating system were strongly inspired by CP/M, so much so that it was the primary design goal that it be possible to take your CP/M program written for the 8080 processor and machine-translate it into an MS-DOS program written for the 8086 processor. Mind you, the translator assumed that you didn't play any sneaky tricks like self-modifying code, jumping into the middle of an instruction, or using code as data, but if you played honest, the translator would convert your program.

    (The goal of allowing machine-translation of code written for the 8080 processor into code written for the 8086 processor helps to explain some of the quirks of the 8086 instruction set. For example, the H and L registers on the 8080 map to the BH and BL registers on the 8086, and on the 8080, the only register that you could use to access a computed address was HL. This is why of the four basic registers AX, BX, CX, and DX on the 8086, the only one that you can use to access memory is BX.)

    One of the things that MS-DOS added beyond compatibility with CP/M was environment variables. Since no existing CP/M programs used environment variables, none of the first batch of programs for MS-DOS used them either, since the first programs for MS-DOS were all ported from CP/M. Sure, you could set a TEMP or TMP environment variable, but nobody would pay attention to it.

    Over time, programs were written with MS-DOS as their primary target, and they started to realize that they could use environment variables as a way to store configuration data. In the ensuing chaos of the marketplace, two environment variables emerged as the front-runners for specifying where temporary files should go: TEMP and TMP.

    MS-DOS 2.0 introduced the ability to pipe the output of one program as the input of another. Since MS-DOS was a single-tasking operating system, this was simulated by redirecting the first program's output to a temporary file and running it to completion, then running the second program with its input redirected from that temporary file. Now all of a sudden, MS-DOS needed a location to create temporary files! For whatever reason, the authors of MS-DOS chose to use the TEMP variable to control where these temporary files were created.

    Mind you, the fact that COMMAND.COM chose to go with TEMP didn't affect the fact that other programs could use either TEMP or TMP, depending on the mood of their original author. Many programs tried to appease both sides of the conflict by checking for both, and it was up to the mood of the original author which one it checked first. For example, the old DISKCOPY and EDIT programs would look for TEMP before looking for TMP.

    Windows went through a similar exercise, but for whatever reason, the original authors of the Get­Temp­File­Name function chose to look for TMP before looking for TEMP.

    The result of all this is that the directory used for temporary files by any particular program is at the discretion of that program. Windows programs are likely to use the Get­Temp­File­Name function to create their temporary files, in which case they will prefer TMP.
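
    If you're curious which directory a program like that ends up with on your machine, here's a small sketch (desktop Windows assumed, error handling omitted): Get­Temp­Path consults TMP first, then TEMP, before falling back to other locations.

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        wchar_t dir[MAX_PATH + 1];
        wchar_t file[MAX_PATH];

        // GetTempPath prefers TMP over TEMP, matching GetTempFileName's heritage.
        GetTempPathW(ARRAYSIZE(dir), dir);

        // GetTempFileName creates a zero-byte file in that directory.
        GetTempFileNameW(dir, L"dem", 0, file);

        wprintf(L"Temporary directory: %s\n", dir);
        wprintf(L"Temporary file:      %s\n", file);

        DeleteFileW(file);   // clean up the file we just created
        return 0;
    }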

    When you go to the Environment Variables configuration dialog, you'll still see both variables there, TMP and TEMP, still duking it out for your attention. It's like Adidas versus Puma, geek version.

  • The Old New Thing

    Why did the original code for FIND.COM use lop as a label instead of loop?

    • 4 Comments

    A few years ago, I left you with an exercise: Given the code

            mov     dx,st_length            ;length of the string arg.
            dec     dx                      ;adjust for later use
            mov     di, line_buffer
    lop:
            inc     dx
            mov     si,offset st_buffer     ;pointer to beg. of string argument
    
    comp_next_char:
            lodsb
            cmp     al,byte ptr [di]
            jnz     no_match
    
            dec     dx
            jz      a_matchk                ; no chars left: a match!
            call    next_char               ; updates di
            jc      no_match                ; end of line reached
            jmp     comp_next_char          ; loop if chars left in arg.
    

    why is the loop label called lop instead of loop?

    The answer is that calling it loop would create ambiguity with the 8086 instruction loop.

    Now, you might say (if your name is Worf) that there is no ambiguity. "Every line consists of up to four things (all optional): a label, an instruction/pseudo-instruction, operands, and comments. The label is optionally followed by a colon. If there is no label, then the line must start with whitespace."

    If those were the rules, then there would indeed be no ambiguity.

    But those aren't the rules. Leading whitespace is not mandatory. If you are so inclined, you can choose to begin your instructions all in column zero.

    mov dx,st_length
    dec dx
    mov di, line_buffer
    lop:
    inc dx
    mov si,offset st_buffer
    comp_next_char:
    lodsb
    cmp al,byte ptr [di]
    jnz no_match
    dec dx
    jz a_matchk
    call next_char
    jc no_match
    jmp comp_next_char
    

    It's not recommended, but it's legal. (I have been known to do this when hard-coding breakpoints for debugging purposes. That way, a search for /^int 3/ will find all of my breakpoints.)

    Since you can put the opcode in column zero, a line like this would be ambiguous:

    loop ret
    

    This could be parsed as "Label this line loop and execute a ret instruction." Or it could be parsed as "This is an unlabeled line, consisting of a loop instruction that jumps to the label ret."

    Label    Opcode    Operand
    loop     ret
    – or –
             loop      ret

    Disallowing instruction names as labels or macros or equates is the simplest way out of this predicament. Besides, you probably shouldn't be doing it anyway. Imagine the havoc if you did

    or equ and
    
  • The Old New Thing

    When I set the "force X off" policy to Disabled, why doesn't it force X on?

    • 18 Comments

    A customer was using one of the many "force X off" policies, but instead of using it to force X off, they were trying to use it to force X on by setting the policy to Disabled. For example, there is a "Hide and disable all items on the desktop" policy. The customer was setting this policy to Disabled, expecting it to force all icons visible on the desktop and to remove the option on the desktop View menu to hide them.

    As we discussed some time ago, group policies are for modifying default behavior, and interpreting them requires a degree in philosophy.

    In particular, a policy which forces X off has three states:

    • Enabled: X is forced off.
    • Disabled: X is not forced off.
    • Not configured: No opinion. Let another group policy object decide.

    Disabling a policy means "Return to default behavior", and the default behavior in many cases is that the user can decide whether they want X or not by selecting the appropriate option. In philosophical terms, "Not forced off" is not the same as "Forced on."

    If you want to force X on, then you have to look for a policy that says "Force X on." (And if there isn't one, then forcing X on is not something currently supported by group policy.)
