Announcing a major MFC update plus TR1 support


As an update to Visual Studio 2008, we’re pleased to announce a major new release of the Microsoft Foundation Classes (MFC).  Using these components, developers will be able to create applications with the “look & feel” of Microsoft’s most popular applications – including Office, Internet Explorer and Visual Studio.  Some of the specific features include:

  • Office 2007 Ribbon Bar: Ribbon, Pearl, Quick Access Toolbar, Status Bar, etc.

  • Office 2003 and XP look: Office-style toolbars and menus, Outlook-style shortcut bar, print preview, live font picker, color picker, etc.

  • Internet Explorer look: Rebars and task panes.

  • Visual Studio look: sophisticated docking functionality, auto-hide windows, property grids, MDI tabs, tab groups, etc.

  • Vista theme support: Dynamically switch between themes! (See the sketch after this list.)

  • “On the fly” menu and toolbar customization: Users can customize the running application through live drag and drop of menu items and toolbar buttons.

  • Shell management classes: Use these classes to enumerate folders, drives and items, browse for folders, and more.

  • Plus many additional controls.
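
For a sense of how the dynamic theme switching might look in code, here is a hypothetical sketch. The header and class names used below (afxvisualmanageroffice2007.h, CMFCVisualManager, CMFCVisualManagerOffice2007) are illustrative and may differ in the final release:

    // Hypothetical sketch only: switch the running application to the Office 2007 look.
    #include <afxwin.h>
    #include <afxvisualmanageroffice2007.h>   // assumed header name

    static void SwitchToOffice2007Look()
    {
        // Install the Office 2007 visual manager; toolbars, menus and docking
        // panes rendered by the new classes pick up the look immediately.
        CMFCVisualManager::SetDefaultManager(RUNTIME_CLASS(CMFCVisualManagerOffice2007));

        // Optionally select one of the Office 2007 color schemes.
        CMFCVisualManagerOffice2007::SetStyle(CMFCVisualManagerOffice2007::Office2007_LunaBlue);

        // Repaint the main window so the new theme is visible right away.
        if (CWnd* pMainWnd = AfxGetMainWnd())
            pMainWnd->RedrawWindow(NULL, NULL,
                RDW_ALLCHILDREN | RDW_INVALIDATE | RDW_UPDATENOW | RDW_FRAME | RDW_ERASE);
    }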

 

In addition, we will be delivering TR1 support.  Portions of TR1 are scheduled for adoption in the upcoming C++0x standard as the first major addition to the ISO 2003 standard C++ library. Our implementation includes a number of important features such as smart pointers, regular expression parsing, new containers (tuple, array, unordered set, etc.), sophisticated random number generators, polymorphic function wrappers, type traits and more!  We are not currently shipping C99 compatibility or support for special math functions. 
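
To give a flavor of what this looks like in code, here is a minimal sketch. It assumes the TR1 components live in namespace std::tr1 inside the usual standard headers, as the TR1 specification describes; the exact header placement may differ in the shipping release:

    #include <array>
    #include <functional>
    #include <memory>
    #include <random>
    #include <regex>
    #include <tuple>
    #include <unordered_set>
    #include <iostream>
    #include <string>

    int main()
    {
        // Reference-counted smart pointer.
        std::tr1::shared_ptr<std::string> p(new std::string("hello, TR1"));
        std::cout << *p << '\n';

        // Fixed-size heterogeneous tuple and fixed-size array.
        std::tr1::tuple<int, double, std::string> t(1, 2.5, "three");
        std::cout << std::tr1::get<2>(t) << '\n';

        std::tr1::array<int, 3> a = { { 10, 20, 30 } };
        std::tr1::unordered_set<int> seen(a.begin(), a.end());
        std::cout << seen.size() << '\n';

        // Regular expressions.
        std::tr1::regex digits("[0-9]+");
        std::cout << std::boolalpha
                  << std::tr1::regex_match(std::string("12345"), digits) << '\n';

        // Random number engine plus distribution.
        std::tr1::mt19937 engine;
        std::tr1::uniform_int<int> die(1, 6);
        std::cout << die(engine) << '\n';

        // Polymorphic function wrapper.
        std::tr1::function<int(int, int)> add = std::plus<int>();
        std::cout << add(2, 3) << '\n';

        return 0;
    }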

While we’re announcing these today, please note they won’t be final until Q1CY08.  Since we know you want to get your hands on them, we’ll have a beta sometime near the first of the new year.  The components will be available to all Visual Studio 2008 Standard and above customers.   This is just the first step in our drive to improve the native development experience.  There’s a lot more that we’re working on, but we hope you enjoy this first milestone.

There’s a lot more to tell you about the MFC libraries so keep watching this blog for more information!  You should also check out Pat Brenner’s video on Channel 9 where he talks about the new libraries.  You can also read what Soma had to say at http://blogs.msdn.com/somasegar/archive/2007/11/09/visual-c-libraries-update.aspx.

Visual C++ Development Team

  • Stephan,

    Thanks for the response.

    Let me make a few further responses.

    >> However, like it or not, the

    >> undeniable fact is that these

    >> are all part of the standard.

    > To be clear: I personally want

    > two-stage name lookup and exception

    > specification support.  (I also want

    > export to be erased from human

    > memory - I don't always get what I

    > want.)

    Implementing the first two would be a *great* start.

    However, unless export is ultimately removed from a future C++ standard (a proposal that has already been formally rejected), it really needs to be implemented in due time. While it may be difficult, it has been almost a decade now; expecting an implementation within that time does not seem unreasonable. At the very least, put it on the roadmap and begin implementation.

    > Riddle me this: why doesn't GCC

    > support export?

    I do not know. Maybe it will someday. Would MS jump on the bandwagon then? Is the fact that it is in the standard not enough justification and reason to implement it?

    Now riddle me this: why do Comeau C++ and Borland Builder X support export? I would _guess_ just because it is in the standard and for that very reason should be implemented. The actual value (or lack thereof) of the feature is beside the point.

    Surely GCC is not the baseline against which MS designs VC++.

    > Are you seriously arguing that

    > export is more useful than TR1?

    Not at all. In the little bit that I have used export (due to the extremely few compilers that support it), it is perhaps not so useful. However, it is in the standard, and that is the only point of importance.

    I am sure that TR1 will be very useful. However, it is merely a library, nothing more. Some of TR1 has been available in Boost for a while now. Other containers and features I can, and have, designed myself when needed. TR1 will limit that need, which will save me time and resources.

    On the other hand, though, missing compiler features (such as export, but also including two-stage name lookup and exception specifications) as defined in the standard are not something that I can implement myself. I can only work around their absence and wait, years or maybe decades, for their implementation.

    My main argument is this: the 1998 standard should have been fully implemented before beginning on the 2003 update. And the 2003 standard should be fully implemented before beginning on the future C++0x.

  • [Craig]

    > it really needs to be implemented in due time.

    Does it?

    I don't see any particular difficulty in continuing to deny the existence of export forever and ever.

    > I do not know. Maybe it will someday.

    > Would MS jump on the bandwagon then?

    It'd certainly make a stronger case.

    > Is the fact that it is in the standard not enough justification and reason to implement it?

    Unfortunately, no. The people who ultimately make resource allocation decisions care much more about "return on investment" and "customer demand" (as they should) than paragraph-by-paragraph conformance to the Standard. export seems very costly for very little benefit, and there isn't massive customer demand for it (yes, you and several other people want export, but we don't have legions of customers beating down our door about it).

    I care about pure conformance much more, but in the end I completely agree with the non-implementation of export.  Only if all compiler bugs were fixed and all yummy C++0x features were implemented would I want to see export (and maybe not even then; implementing stuff breaks other stuff).

    > Now riddle me this: why do Comeau C++ and Borland Builder X support export?

    One of Comeau's major selling points is pure paragraph-by-paragraph conformance.  I'm not sure (I don't keep track of anything but VC, GCC, and Comeau), but I think Borland now uses the same EDG frontend that Comeau does, so they would get export mostly for free.

    > Surely GCC is not the baseline against which MS designs VC++.

    No, but it is obviously easier to argue that VC should do something when the clear weight of the C++ community is behind it.

    > My main argument is this: the 1998 standard should have been fully implemented before beginning on the 2003 update. And the

    > 2003 standard should be fully implemented before beginning on the future C++0x.

    First, 2003 completely supersedes 1998.  It is 1998 as it was meant to be (C++03 is shorthand for C++98+TC1, integrated into one document for your reading convenience).

    Second, imagine that there was one person at Microsoft responsible for deciding on what the frontend devs are going to do in the next version. (There's not actually just one person.) Now put yourself in this person's shoes, deciding what to do in VC9. You don't have an army of ninja frontend devs; instead, you have one (http://blogs.msdn.com/vcblog/archive/2006/06/30/visual-c-compiler-plans.aspx). The rest are working on unspecified post-VC9 awesomeness. JonCaves is our voting committee member (as you can see from http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2453.html), and knows the language and compiler inside and out - but there is only one of him, and all attempts to clone him have been unsuccessful. Now, where should dev time be spent - on a feature that is "perhaps not so useful", or on making two-step PCHs as documented at http://msdn2.microsoft.com/en-us/library/2yzw0wyd(VS.80).aspx actually work? (They apparently worked at one time, probably in VC7.1, but are profoundly broken in VC8 RTM and VC8 SP1).

    As it happens, that particular bug (one of my "favorites") was punted; it'll be broken in VC9 RTM too (although I will agitate for it to be fixed in VC10). There were even more pressing matters to attend to. For one great example, Jonathan fixed a lurking bug, noticed by an internal team, that sped up the compilation of large projects using lots of templates with PCHs by 10-20%.

    You can bet that "make the compiler faster" is one of the top customer demands for every version.

    Stephan T. Lavavej

    Visual C++ Libraries Developer

  • Stephan,

    >> Is the fact that it is in the

    >> standard not enough justification

    >> and reason to implement it?

    > Unfortunately, no.

    I think that this emphasizes the fundamental difference in our thinking. At a bare minimum, I am looking for an implementation of the standard. Anything above and beyond that is quite welcome, but anything less than that is unacceptable.

    > One of Comeau's major selling points

    > is pure paragraph-by-paragraph

    > conformance.

    Which is precisely why it has the respect it has earned. It is essentially the gold standard to which all other compilers should aspire.

    > Now, where should dev time be spent

    > - on a feature that is "perhaps not

    > so useful", or on making two-step

    > PCHs as documented at http://msdn2.microsoft.com/en-us/library/2yzw0wyd(VS.80).aspx

    > actually work?

    That's easy: export. Why? Not because it is or is not useful, but precisely because it is defined in the standard. Two-step PCHs? Not in the standard, so it's not necessary nor should it be a priority. *Once* the standard is implemented, going above and beyond it with two-step PCHs etc. may be nice.

    > You can bet that "make the compiler

    > faster" is one of the top customer

    > demands for every version.

    No matter how fast you make the compiler, if it cannot compile legal C++ code (such as export) at all, even slowly, then its value is not so great.

  • > No matter how fast you make the compiler, if it cannot compile legal C++ code (such as export) at all, even slowly, then its value is not so great.

    YES!

  • Just my 4 cents:

    1 - Having export in VC++ wouldn't allow me to use it, since I want my code to compile on GCC too. So, I don't care about export (currently).

    It is troublesome when GCC and VC++ implement different subsets of C++. Export is implemented in neither compiler, so it's not a big problem (currently) in my humble opinion.

    2 - Two-phase name lookup is important to me, for the same reason. I want code to compile on VC++ and GCC. If I develop on VC++ first, most likely I will get errors on GCC later due to 2-phase name lookup. (Or worse, different behavior - see the sketch after this list.)

    3 - Exception specifications are just another way to make my program (std::)terminate. I wouldn't use them even if they were available.

    4 - TR1 and C++0x are the way to go.

    5 - If you succeed in cloning JonCaves, please do implement export and everything else. :-)
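
    As a hedged illustration of point 2 (a made-up example, not taken from any particular bug report): a non-dependent call inside a template is bound at definition time under two-phase lookup, so a compiler that defers all lookup to instantiation time can silently pick a different overload.

        #include <iostream>

        void f(int)    { std::cout << "f(int)\n"; }

        template <typename T>
        void g(T)
        {
            f(3.14);   // non-dependent call: two-phase lookup binds it here, to f(int)
        }

        void f(double) { std::cout << "f(double)\n"; }

        int main()
        {
            g(0);      // GCC (two-phase lookup) prints "f(int)"; a compiler that performs
                       // all lookup at instantiation time can pick f(double) instead
            return 0;
        }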

  • > It is possible to be both secure and performant.

    I was really joking. You are absolutely right - not only is it possible, but in many cases required. But only with the checked iterators off! The difference in speed was just amazing (around 3x) for the processing we were doing.

    So my point is (and I saw a similar argument in another thread): with the default settings set to "secure", people will simply avoid using the Standard Library and fall back to much less secure C-style data structures and functions. That way not only the security, but also the productivity goes down.

  • [Ben]

    > I think that this emphasizes the fundamental difference in our thinking.

    I want a compiler (and other tools) to allow me to access the expressive power of the language. This is primarily achieved through conformance, which is why I value it so highly. If your compiler doesn't support (say) partial specialization, then you're limited in what you can say. Conformance *itself* buys nothing (except a check-box in compiler advertising) beyond expressive power.

    I look at export and I don't see it buying me anything. That's why I value it far less than other things that buy me expressive power (even when they're outside the domain of conformance).

    > That's easy: export. Why? Not because it is or is not useful, but precisely because it is defined in the standard.

    > Two-step PCHs? Not in the standard, so it's not necessary nor should it be a priority. *Once* the standard is

    > implemented, going above and beyond it with two-step PCHs etc. may be nice.

    The vast majority of VC customers think otherwise (for export vs. anything else, not the example of two-step PCHs in particular).

    [ikk]

    > If i develop on VC++ first, most likely i will get errors on GCC later due to 2-phase name lookup.

    > (Or worse, different behavior)

    As I understand it, you'd get compilation failure. If you have an example that compiles and produces different behavior, I'd like to see it.

    I've never run into a case where two-stage name lookup would have any effect. I often run into the case of unqualified name lookup reaching into dependent bases (VC nonconformantly does this by default, although it implements the conformant rule under /Za), which is related to two-stage name lookup but is not actually part of it.
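
    A minimal sketch of that dependent-base case (illustrative names only):

        template <typename T>
        struct Base
        {
            void helper() {}
        };

        template <typename T>
        struct Derived : Base<T>
        {
            void run()
            {
                // helper();       // VC accepts this by default, finding Base<T>::helper();
                                   // conformant compilers (GCC, or VC under /Za) reject it
                this->helper();    // portable spelling: the call is made dependent, so lookup
                                   // is deferred to instantiation and finds the base member
            }
        };

        int main()
        {
            Derived<int> d;
            d.run();
            return 0;
        }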

    [Nemanja Trifunovic]

    > The difference in speed was just amazing (around 3x) for the processing we were doing.

    Were you using Standard algorithms whenever possible (yes, even the underappreciated for_each())? Handwritten iterator loops incur a greater _SECURE_SCL penalty. We are *very* interested in cases where you are using STL algorithms and still incur significant penalties.

    Vaguely speaking, I generally see a 10% performance cost for _SECURE_SCL, rising to 2x+ only for very tight iterator loops (and you can always replace vector iterators with pointers).
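
    To make the comparison concrete, here is a rough sketch of the three styles in question, assuming the smaller penalty for algorithm calls mentioned above (the exact costs obviously depend on the code and the build settings):

        #include <algorithm>
        #include <numeric>
        #include <vector>

        long sum_iterators(const std::vector<int>& v)
        {
            long total = 0;
            for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
                total += *it;      // checked dereference on every iteration under _SECURE_SCL
            return total;
        }

        long sum_algorithm(const std::vector<int>& v)
        {
            // gives the library the chance to validate the range once up front
            // rather than paying a per-element check inside the loop
            return std::accumulate(v.begin(), v.end(), 0L);
        }

        long sum_pointers(const std::vector<int>& v)
        {
            long total = 0;
            if (!v.empty())
            {
                const int* first = &v[0];
                const int* last  = first + v.size();
                for (; first != last; ++first)   // plain pointers: checked iterators not involved
                    total += *first;
            }
            return total;
        }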

    > with the default settings set to "secure", people will simply avoid

    > using the Standard Library and fall back to much less secure C-style

    > data structures and functions. That way not only the security, but

    > also the productivity goes down.

    Yes, this is absolutely a valid concern, because C-style programming is so horrible.  However, with the default settings set to "super speed", no one would ever use the "secure" setting. So the choice of default is a difficult one, and I think that "on by default" is reasonable, if not perfect for everyone.

    I know that our messaging around this has been a mess (the macro-setting rules aren't described in MSDN, the "iterator checking"/"iterator debugging" terminology is somewhat confusing, and there's no exposure of _SECURE_SCL/_HAS_ITERATOR_DEBUGGING in the IDE's configuration settings if I recall correctly).

    There are several defaults that always need configuration, anyways - you need to pass /EHsc, you need to define NOMINMAX before including windows.h to get it to not stomp over the STL, etc. I consider those two to be much more aggravating than having to set _SECURE_SCL=0 for performance. I don't even do that at home, where I write CPU-bound data compression code for VC and GCC. Using the STL heavily, I see generally indistinguishable performance, with only one exception for a very tight iterator loop that I worked around by using pointers. (_HAS_ITERATOR_DEBUGGING produced a 1000x+ penalty there.) Of course every application is different, but I think _SECURE_SCL's performance effects are overly feared.
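
    For reference, a minimal sketch of the kind of per-project configuration being discussed (the particular values are a judgment call for each codebase):

        // On the compiler command line, something like:
        //   cl /EHsc /O2 /DNOMINMAX /D_SECURE_SCL=0 myapp.cpp
        // or, equivalently, before any standard or Windows headers are included
        // (and consistently across all translation units):
        #define NOMINMAX          // stop windows.h from defining min/max macros that fight the STL
        #define _SECURE_SCL 0     // opt out of checked iterators - measure before deciding

        #include <windows.h>
        #include <algorithm>

        int main()
        {
            int larger = (std::max)(1, 2);   // the parentheses also defeat any stray min/max macros
            return larger == 2 ? 0 : 1;
        }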

    Stephan T. Lavavej

    Visual C++ Libraries Developer

  • Not really a part of my discussion, but still:

    > If i develop on VC++ first, most likely i will get errors on GCC later due to 2-phase name lookup.

    > (Or worse, different behavior)

    > As I understand it, you'd get compilation failure. If you have an example that compiles and produces different behavior, I'd like to see it.

    I wrote an article on the two-phase name lookup, and one of the readers left this sample code:

    http://www.codeproject.com/cpp/TwoPhaseLookup.asp?msg=949325#xx949325xx

    Back to our discussion:

    > The difference in speed was just amazing (around 3x) for the processing we were doing.

    > Were you using Standard algorithms whenever possible (yes, even the underappreciated for_each())?

    No, I use them only when it makes my life easier. As Scott Meyers wrote here: http://www.ddj.com/cpp/184401446 "In the ongoing tussle between algorithm calls and hand-written loops, the bottom line on code clarity is that it all depends on what you need to do inside the loop. If you need to do something an algorithm already does, or if you need to do something very similar to what an algorithm does, the algorithm call is clearer. If you need a loop that does something fairly simple, but would require a confusing tangle of binders and adapters or would require a separate functor class if you were to use an algorithm, you’re probably better off just writing the loop."

    Mind you, I am the one who uses the algorithms the most - other developers mostly avoid them.

    > So the choice of default is a difficult one, and I think that "on by default" is reasonable, if not perfect for everyone.

    I disagree here :) C++ is meant to be used for high speed and easy access to system resources - putting "security" (and all these security settings make C++ only slightly "less insecure", and still far from "secure") first at the expense of speed violates one of the basic design principles for this language.

  • I agree with the opinion that C++ is meant to be used for high speed, so options to assist security at the expense of speed should default to being off.  Object-oriented assembly language, just like ordinary assembly language (e.g. C), should be used in the same kinds of situations where other kinds of assembly language would be used.  They should be coded by people who know how to write secure fast code themselves, they should be proofread (walked through) by people who know how to write (and break) secure fast code themselves, they should be tested by people who know how to write (and break) fast secure code themselves.

    Options to maximize security should be on by default in C# and its predecessors.  These are higher level languages for a reason.  Notice that the letter 'C' doesn't even appear in the names of its predecessor languages (Java, Visual Basi, Pasal, um, oops), well anyway, notice that they aren't assembly languages.

  • [Nemanja Trifunovic]

    > one of the readers left this sample code

    Thanks for the example.

    > As Scott Meyers wrote here

    That's good advice (like everything that he writes).

    > Mind you, I am the one who uses the algorithms the

    > most - other developers mostly avoid them.

    Note that there's an additional reason beyond clarity to use Standard algorithms whenever possible: they may be faster.  In the case of VC8+, they actually are faster, since we can lift out _SECURE_SCL/_HAS_ITERATOR_DEBUGGING checks and proceed with raw pointers (in the case of vector iterators, etc.).

    > I disagree here :)

    Fair enough; my point is that VC's default is not gratuitously trying to make your life harder.

    [Norman Diamond]

    > I agree with the opinion that C++ is meant to be

    > used for high speed, so options to assist security

    > at the expense of speed should default to being

    > off.

    Things like /GS (Buffer Security Check) are on by default, and this is absolutely the correct decision - /GS is extremely cheap and this check is definitely worthwhile.
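
    As an aside, here is a sketch of the class of bug /GS guards against (a deliberately broken function - do not ship code like this):

        #include <cstring>

        void copy_name(const char* src)
        {
            char buf[16];
            strcpy(buf, src);   // overruns buf for long inputs; with /GS the smashed stack
                                // cookie is detected when the function returns and the process
                                // is terminated instead of quietly executing corrupted state
        }

        int main()
        {
            copy_name("short and safe");   // fine; the dangerous case is attacker-controlled input
            return 0;
        }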

    _SECURE_SCL followed this example.  It seems that a lot of customers complain about its performance cost, and some of these concerns are legitimate (remember, there are also people who say "argh, don't slow me down" and disable /GS out of fear/superstition rather than for any good reason - separating fear from real concerns is not always easy).  We're looking at how to make _SECURE_SCL faster in future versions of VC.  I don't think you should expect us to make it off by default (security is a big concern around here), but we'll definitely be retaining the option to disable it.

    Good defaults are important, but a default that you don't like isn't the end of the world.

    > Object-oriented assembly language

    C++ is so much more than that.

    > Options to maximize security should be on by

    > default in C# and its predecessors.  These are

    > higher level languages for a reason.

    C# is not "higher level" than C++.  C++ is capable both of "down to the metal" coding as well as highly abstracted coding.

    Stephan T. Lavavej

    Visual C++ Libraries Developer

  • > Note that there's an additional reason beyond clarity to use Standard algorithms whenever possible: they may be faster.  In the case of VC8+, they actually are faster, since we can lift out _SECURE_SCL/_HAS_ITERATOR_DEBUGGING checks and proceed with raw pointers (in the case of vector iterators, etc.).

    Thanks! That's great information. But why did I find out about it by accidentally checking out the VC++ dev blog? How many developers know about the speed implications of using the standard algorithms with _SECURE_SCL on?

    As I said, too many people just assume that STL is "slow" and use new[]/delete[] instead, and it does not help anybody.

  • "Were you using Standard algorithms whenever possible (yes, even the underappreciated for_each())? Handwritten iterator loops incur a greater _SECURE_SCL penalty. We are *very* interested in cases where you are using STL algorithms and still incur significant penalties."

    Simple. The case that the most popular STL container was made for.

    Random access in a std::vector.

    True, if we're iterating through a vector, you can and should usually use the standard algorithms, but consider for a moment what the vector is. A resizable array. If we can't use it as an array (perform random access), it's not very useful at its primary role.

    In a perfect world, I'd agree with your decision. Sure, it makes sense to make code more secure by default, but it assumes that all C++ programmers are perfect and know both the language and the IDE.

    Neither of which is the case.

    Most C++ programmers suck. A huge proportion don't use, or aren't aware of, the standard algorithms. The containers are a bit more accepted, but there are still many who don't use them either, for fears that they're too slow, because they seem too complex, or because of the "Not Invented Here" syndrome.

    Most people are very skeptical of the STL to begin with.

    If their test code then shows std::vector to be 4 times slower than a plain array, what conclusion do you think they'll draw from that?

    Even if they only see it being 50% slower, or 20%, that's enough to impress beginners, and quite a few intermediate programmers (and aren't those the ones we really want to reach with any security improvement?). C++ developers often obsess over performance, whether or not it's justified in that particular case. And the last thing we need is another generation of programmers deciding that "the STL is too slow for real-world code".

    That's one half of the argument. C++ is a complex language, and far from every C++ developer is aware of half the features it offers. If they see their for loops on vectors being slow, they're going to ditch the vector rather than switching to for_each or a std algorithm, because they don't know the latter exists.

    The other half is that far from everyone knows their IDE. Not everyone knows that these macros exist in the first place. (You could at least make them configurable in the project settings.)

    Sure, you or I can just toggle the macros off (usually, anyway. It becomes a bit more awkward to do if I want to download and compile a third-party library such as Boost. I trust Boost to be reasonably robust, and some areas of it may be quite performance-critical in my code. So I have to figure out how to make their build tool insert this macro in every compilation unit, and hope that it doesn't break anything.)

    But you are seriously underestimating how many people don't do this, because they simply think "Hmm, the STL is ridiculously slow. All those "experts" who keep telling me to use STL must be stupid. I'll make my own resizeable array instead, how hard can it be? And I'll use char* instead of std::string, thank you very much".

    I see posts by people online every day, who say "I know I've been told to use the STL, but it's just too slow. I made my own array class, and everything runs smoothly now". Some find out afterwards that they can disable SECURE_SCL, some don't. Of those who do find out, some are by then too skeptical of the STL to give it a second chance. And some have simply invested too much time into their C-style code by now, that they don't really want to switch back to STL, with or without SECURE_SCL.

    People like you or I, who are aware of the SECURE_SCL option, and realize that STL code is preferable, make up maybe a few percent of the C++ developers out there. (And we're probably not the ones most liable to write security vulnerabilities in the first place)

    Most people have a hard time swallowing STL because it looks so complex that "it must be slow". The last thing you want to do is confirm that fear.

    "Good defaults are important, but a default that you don't like isn't the end of the world."

    And this is coming from a "security is important" person? Making people do the right thing by default isn't a big deal?

    Don't you think it is a serious problem if developers shy away from the STL completely? If anything *does* cause the end of the world, I'd say the odds are good that it'll be because of people hand-rolling their own std::vectors, with all the security issues it entails.

    No, I think good defaults are extremely important, for security.

    "However, with the default settings set to "super speed", no one would ever use the "secure" setting"

    Wouldn't they? Perhaps that's a sign that you're heading the wrong way with SECURE_SCL?

    Remember that you've just spent an impressive number of years focusing on managed C++ which no one really cared about. Does that demonstrate the deep understanding of your userbase that is required to proclaim that SECURE_SCL on by default provides a net increase in security? ;)

    When your actual users are standing on the other side, saying that from all they've seen, it doesn't?

    Anyway, I'm not asking for "super speed", I'm just asking for "good enough for people to consider it comparable to raw C code". A vector shouldn't be noticeably slower than a raw array, because then people won't use it.

    I agree with your /GS example, but that's different. If I'm irrationally scared of /GS, I just disable it. I don't start using char*'s. So enabling /GS by default can never have the same *negative* effect on security that _SECURE_SCL does.

    "I think _SECURE_SCL's performance effects are overly feared."

    Yes, precisely my point. Those of us who don't "overly fear it" can just disable it when we want to. No problem there. (Even if it gets a bit hairy with 3rd party libs, as I said)

    It's everyone who overly fears it (or rather, fears the performance characteristics they observe in STL code because of it) who is the issue.

  • "There are several defaults that always need configuration, anyways - you need to pass /EHsc, you need to define NOMINMAX before including windows.h to get it to not stomp over the STL, etc"

    Heh, that's another issue I've been wondering about. Why is windows.h so absolutely horrible? Ok, there's backwards compatibility, but that doesn't seem to answer it all.

    Why aren't functions defined something like

    void Foo() {
      #ifdef UNICODE
        FooW();
      #else
        FooA();
      #endif
    }

    instead of #define Foo FooW, and the whole #define hell you have going now? (Having a macro called something as common as CreateWindow is just evil)

    Why can't I #include parts of windows.h in a more fine-grained manner? Why do I get everything if I just want HRESULT? Why isn't it split into more, smaller files? (I know it is split into multiple headers internally, but MSDN always says to include windows.h instead)

    And finally, why not make a C++ wrapper available? One that doesn't use macros to solve problems that could be handled by function overloading, one that uses namespaces? (And which would compile with language extensions disabled)

    Why don't we have a windows++.h, or maybe windows.hpp?

    Or just a WindowsEx.h which does the most obvious fixes, such as not defining the min/max macros, and gets rid of the A/W #defines for all functions, but otherwise leaves the existing API intact?

    I can't see any technical reason why at least some of these shouldn't be viable, and some of them seem like they should be fairly straightforward to do. Or am I missing something totally obvious here?

  • To all the people who say VC must be 100% conformant before doing anything else:

    what exactly do you plan to use 'export' for that you can't do now??  Not all language elements are created equal; in fact no compiler had implemented 'export' at the time it was "standardized".  I say continue to let the VC team prioritize things in a sane manner dictated by customer needs as opposed to slavish devotion to adding flawed or marginally useful corners of the 'standard'.

  • Kris G,

    I have followed this thread with much interest. As a customer, the number one feature that I am looking for is a compliant compiler. The response in a nutshell: not important. Thus, I am being told that my needs are unimportant.

    "what exactly do you plan to use 'export' for that you cant do now??"

    Better code organization.
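
    To make "code organization" concrete, here is a rough sketch of the separation export was meant to allow (only a few compilers, such as Comeau and other EDG-based ones, accept this today):

        // twice.h - clients see only the declaration
        export template <typename T>
        T twice(const T& value);

        // twice.cpp - the definition lives in its own translation unit, so it does
        // not have to be pulled into (and recompiled by) every client that uses it
        export template <typename T>
        T twice(const T& value)
        {
            return value + value;
        }

        // client.cpp
        #include "twice.h"

        int main()
        {
            return twice(21);   // the definition is located at instantiation/link time
        }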

    Now let me ask you: What exactly do you need "for", "while", "do... while", and "if" for when they can all be replaced with a simple "goto"? Why do you need std::string when you can just use a char*? Why do you need classes when you can manage without them in C?

    These are rhetorical questions, so you need not answer.

    "in fact no compiler had implemented 'export' at the time it was "standardized".

    Oh, that was some nine years ago. Since then, there have been several implementations. Was nine years not enough time for Microsoft to implement it? If not, what is? 15 years? 20 years? Rather than even give a time frame, the team apparently considers the standard so unimportant that it is not even on the radar; one developer said he wants "export to be erased from human memory".

    Yes, I do not need export any more than I need for, if, std::string, etc. But they are nice things. They make my code easier to maintain. I am a customer and I want export. What do I have to do to make that a priority? Switch to Comeau? Microsoft has made it clear that being standard compliant is not a priority.
