IntelliSense, Part 2 (The Future)

Hi, Jim Springfield again.  This post covers our current work to fundamentally change how we implement Intellisense and code browsing for C++ in Visual Studio 10.  I previously covered the history of Intellisense and outlined many of the problems that we face.  See here for that posting and more detailed information.

As Visual C++ has evolved over the years there has been tension between getting code information quickly and getting it accurately.  We have moved from fast and not very accurate to sometimes fast and mostly accurate in Visual Studio 2008.  The main reason for the slowness has been the need to reparse .cpp files when a header is changed.  For large projects with some common headers, this can mean reparsing just about everything when a header is edited or a configuration or compile option is changed.  We are mostly accurate except that we only capture one parse of a header file even though it could be parsed differently depending on the .cpp that includes it (i.e. different #defines, compile options, etc).

For Visual Studio 10, which is the next release after Visual Studio 2008, we are going to do a lot of things differently.  For one, the NCB file is being eliminated.  The NCB file was very similar to a BSC file and the IDE needed to load the entire thing into memory in order to use it.  It was very hard to add new features to it (i.e. template support was bolted on) and some lookups required walking through a lot of information.  Instead of this, we will be using SQL Server Compact for our data store.  We did a lot of prototyping to make sure that it was the right choice and it exceeded our expectations.  Using Compact will allow us to easily make changes to our schema, change indexes if needed, and avoid loading the entire thing into memory.  We currently have this implemented and we are seeing increased performance and lower memory usage.

SQL Server Compact is an in-process version of SQL that uses a single file for the storage.  It was originally developed for Windows CE and is very small and efficient, while retaining the flexibility of SQL.

Also, there is a new parser for populating this store.  This parser will perform a file-based parse of the files in the solution in a way that is independent of any configuration or compile options and does not look inside included headers.  Because of this, a change to a header will not cause a reparse of all the files that include it, which avoids one of the fundamental problems today.  The parser is also designed to be extremely resilient: it will be able to handle ambiguous code and mismatched braces or parentheses, and it supports a “hint” file.  Due to the nature of C/C++ macros, and because we aren’t parsing into headers, there is a good bit of code that would otherwise be misunderstood.  The hint file will contain definitions for certain macros that fundamentally change the parsing, and therefore the understanding, of a file.  As shipped, the hint file will contain all known macros of this type from the Windows SDK, MFC, ATL, etc.  This can be extended or modified, and we are hoping to be able to identify potential macros in the source code.  Since we will be looking at every header, we want to be able to propose recommended additions to the hint file.
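For illustration, a hint-file excerpt might look like the following (the exact file format is an assumption based on this post; the entries shown approximate well-known MFC/ATL code-generating macros):

```cpp
// Hypothetical hint-file excerpt: each entry gives the file-based parser a
// simplified definition for a macro that would otherwise make the
// surrounding code unparseable without expanding the headers that define it.
#define BEGIN_MESSAGE_MAP(theClass, baseClass)
#define END_MESSAGE_MAP()
#define DECLARE_DYNAMIC(class_name)
#define STDMETHOD(method) HRESULT method
```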

However, this style of parsing means that we don’t get completely accurate information in all cases, and accuracy is especially desirable in the Intellisense scenarios of auto-complete, parameter info, and quick info.  To handle this, we will be doing a full parse of a translation unit (i.e. a .cpp file) when it is opened in the IDE.  This parse will be done in the fullest sense possible and will use all compile options and other configuration settings.  All headers will be included and parsed in the exact context in which they are used.  We believe we can do this initial parse very quickly for most translation units, with most not taking more than a second or two.  It should be comparable to how long it takes to actually compile that translation unit today, although since we won’t be doing code generation and optimization, it should be faster than that.  Additionally, this parse will be done in the background and won’t block use of the IDE.  As changes are made to the .cpp or included headers, we will track the edits and incrementally reparse only those bits that need to be reparsed.
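The incremental-reparse bookkeeping can be sketched like this (a toy model with invented names, not the product's implementation): edits are tracked as dirty character ranges and coalesced so each affected span is reparsed only once.

```cpp
// Toy sketch: track edited regions of a translation unit as half-open
// character ranges and merge overlapping ones before reparsing.
#include <algorithm>
#include <vector>

struct Range { int begin, end; };  // half-open [begin, end)

// Coalesce overlapping or touching dirty ranges into a minimal set.
std::vector<Range> CoalesceDirtyRanges(std::vector<Range> edits) {
    std::sort(edits.begin(), edits.end(),
              [](const Range& a, const Range& b) { return a.begin < b.begin; });
    std::vector<Range> merged;
    for (const Range& r : edits) {
        if (!merged.empty() && r.begin <= merged.back().end)
            merged.back().end = std::max(merged.back().end, r.end);  // extend
        else
            merged.push_back(r);  // disjoint: start a new dirty span
    }
    return merged;
}
```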

The database created from the file-based parse will be used for ClassView, CodeModel, NavBar, and other browsing based operations.  In the case of “Find All References”, the database will be used to identify possible candidates and the full parser will then be used to positively identify the actual references.
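The two-phase "Find All References" flow described above can be sketched as follows, with a hypothetical in-memory name index standing in for the database and a caller-supplied predicate standing in for the full parser (all names here are invented for illustration):

```cpp
// Sketch of two-phase reference finding: a cheap name index proposes
// candidate locations, then an accurate check confirms the real references.
#include <map>
#include <string>
#include <vector>

struct Location { std::string file; int line; };

// Phase 1: the store returns every textual occurrence of the identifier.
std::vector<Location> FindCandidates(
        const std::multimap<std::string, Location>& index,
        const std::string& name) {
    std::vector<Location> out;
    auto range = index.equal_range(name);
    for (auto it = range.first; it != range.second; ++it)
        out.push_back(it->second);
    return out;
}

// Phase 2: the full parser verifies each candidate in context (stubbed here
// as a predicate).
template <typename Verifier>
std::vector<Location> ConfirmReferences(const std::vector<Location>& candidates,
                                        Verifier actuallyRefersTo) {
    std::vector<Location> confirmed;
    for (const Location& loc : candidates)
        if (actuallyRefersTo(loc)) confirmed.push_back(loc);
    return confirmed;
}
```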

We have also been thinking about some longer term ideas that build on this.  This includes using a full SQL server to store information about source code, which multiple people can use.  It would allow you to lookup code that isn’t even on your machine.  For example, you could do a “goto definition” in your source and be taken to a file that isn’t even on your machine.  This could be integrated with TFS so that the store is automatically updated as code is checked in, potentially even allowing you to query stuff over time.  Another idea would be to populate a SQL database with information from a full build of your product.  This would include very detailed information (i.e. like a BSC file) but shared among everyone and including all parts of an application.  This would be very useful as you could identify all callers of a method when you are about to make a change to it.

What are your ideas if you had this type of information?  Let us know!

Jim Springfield

Visual C++ Architect

  • Jim,

    thank you for the great post (as usual).  I see massive potential with MS VC++ and i'm glad you are in the thick of it!


    I hear you on the lack of info on using sqlce with mfc.  I spent far too long figuring this out (and ended up using non-microsoft provided details on how to do it: sad but true).  i've complained to deaf MS ears so i'll say it again: Microsoft please provide more *NATIVE* MFC C++ sample code for new technologies.


    I agree with you 100%  well said!!


    I also agree that your proposed (cutting edge) features would be hugely beneficial.  I wish MS invested more in quality & useful VC++ features and less on .NET bloatware.

  • It would be nice to have such a level of intellisense.

    I too prefer to have "broken code" work with Intellisense, since many times, to get Intellisense in a new function you are writing, you have to complete the function's prototype and then wait a bunch of seconds...

    There are many other things I'd like to see in VC++ 10, but they are off topic here.

  • Thanks for all of the comments everyone.

    There are a few comments regarding using SQLCE from C++.  We are using the ATL OLEDB consumer templates and it has been fairly easy.  ATL and MFC are pretty integrated today, so it should be fairly straightforward to use them if you are an MFC user, although I don't know offhand what the MFC databinding story is with ATL OLEDB.

    Peter: The goal is not to have an opaque file, especially one that is difficult for us to maintain and change.  You can actually load the SQLCE SDF file into Visual Studio and explore it, and issue SQL queries against it.  I imagine there will be some interesting addons that exploit it.  One point I am driving hard here is that we need to make some things much more configurable.  Your suggestion about being able to specify a location for the store is a great one.  It would also be useful in the case of opening a project over a network share that you don't have write access to.

    Steve: I hear your point about how a header can be included multiple times into a single translation unit.  We will see it both ways when doing the full parse of a translation unit.  An interesting question, however, is if the editor has the header itself open, what context should be used when editing it?  My experience with these types of header, however, is that they are typically some kind of heavily macroized structured "data" that will expand into code and/or data when included.  I doubt that Intellisense is actually very useful in this case, but I would be open to hearing some examples.

    Bill: I also see the value in an out-of-proc mechanism and it is something we want to support.  However, we also have users in some organizations that are restricted from installing SQL on their own machines and insert performance is very good for SQLCE.  However, the code as written today can actually target full SQL as well as SQLCE.  We aren't sure what we will provide in VC10 around this, but we believe it could be very valuable.

    I believe Intellisense needs to work well in both code that compiles and code that doesn't.  Obviously, it can't read your mind about what the code is supposed to do, but we are trying to design our parsers to be very resilient to the types of errors that are likely to be seen on code that is being actively written such as mismatched braces/parens, etc.

    We do have plans to provide other features similar to what C# provides today that we hope will increase coding productivity.  This set isn't 100% firm yet, but we do hear you.

  • > what context should be used when editing it?

    How about a dropdown list in the title bar of all the insertion sites, constructed so I can see project, translation unit, and #include line and file.  Hmm, maybe that would have to be a popup, it's really a lot of info.  The selection would drive colorization, intellisense, and ctrl-F7.  Yeah!

    But when headers include other headers and you get an error deep in the nest, the IDE gives you very little help in navigation.  Something like a debugger call stack window would help a lot.  My typical work cycle is to double-click an error message, then return to the Output window and scroll back, noting the .CPP file name as I go, until I find the project name announcement line in the output.  Switch to the Solution Explorer, navigate to that project, open the "Source Files" filter, and scroll down to the right .CPP file.  Now, if I'm lucky, the #include is a direct one and I've got most of what I'm going to need.  If not, I have to use a combination of Zen and brute force to find the #include trail connecting the two files I have open.

    fdsa: When I'm writing new code I'd agree with your point of view.  Right now, however, I'm in a porting situation, and code I didn't write is being broken by changes I didn't make to code and meta-code buried deep in the header jungle.  For this kind of work I really need intellisense (mostly tooltips and browse-to-defn) that groks preprocessor semantics and can cope with multiple incompatible definitions in scope, missing definitions, and all that sort of brokenness.

  • > what context should be used when editing it?

    If there's a macro definition in the project properties (/D on the command line) then that should be respected in colouring the #if and #else blocks.  Also (and more importantly) if there's a #define in the file itself then that should be respected.

    But if the #define comes from a parent file and the situation isn't known in the included file itself, it's tough.  Maybe both the #if and #else parts should be coloured as usable code, but it's still tough.  A macro could expand to either of two possibilities, and which expansion should be recognized?  Tough question.  But the other cases were simpler and they should be handled properly.

  • I second Visual Assist X, it is way better, ms should buy it or make one like that, it is much faster and gives better suggestions

  • > It would be an *enormous* help if I could get a "preprocessed" view of my file without having to check out and modify the project file, recompile the translation unit (including finding it in the tree view if the error navigation took me straight to the header), browse to $(IntDir) and find the .asm file, search for the right spot, and then remember to undo my change to the project file before checking in.

    This is a very good idea.  I don't often have to delve into debugging the preprocessor, but when it needs to be done something like the suggestion above would be very handy.

  • Intellisense: big change in next visual studio

    I think, in fact, that lightweight SQL storage is the most efficient way.

    In a recent post, I was thinking about ADAM because of its superiority in terms of the couple storage/search&criterions.

    But OK, let's go with SQL.  It will be easier to write.

    great job guys.



  • - Developing in a codebase with a lot of boost headers and templates means my translation units often take 10-50 seconds to compile. Do you account for this when you run the full background parse?

    - Have you investigated using the information as a hint to the compiler? Faster builds would be most welcome.

    - Does the new intellisense still crash or hang the machine? Can we reboot intellisense when it breaks? We need reliability.

    - I am interested in third party add-ins for the new Intellisense. If it's open enough, I think this could be the best feature of VS10.

    Best of luck with development, I look forward to using it!

  • Jim,

    Your planned use of SQL Server Compact Edition with the OLE DB Consumer Templates for ATL is intriguing.  My hope is that several things flow from this effort (in addition to the higher performance Intellisense, of course).  First, maybe Microsoft will make available these interfaces (OLE DB) for Windows Mobile developers, recognizing that not everyone wants to use ADO.Net and C# for applications with database connections.  Second, it will be a useful learning experience if the MSDN magazine publishes code snippets in C++ that illustrate how you guys are using the consumer templates.  Third, hopefully the MSDN website will publish more C++ examples on how to use OLE DB.  All of this points to the fact that if the Visual C++ developers are using C++ to interface with SQL Server Compact edition, why wouldn't other developers want to do the same on the Windows Mobile platform?  Thoughts anyone??



  • "The parser is also designed to be extremely resilient and will be able to handle ambiguous code, mismatched braces or parentheses, and supports a 'hint' file.  Due to the nature of C/C++ macros and because we aren’t parsing into headers, there is good bit of code that would be misunderstood.  The hint file will contain definitions for certain macros that fundamentally change the parsing and therefore understanding of a file."

    Hmm...ever heard of an OO toolkit from about 10 years ago called 'Genitor'?  It used pretty much this exact approach to reverse-engineer C++ code.

  • One thing that would be nice to have "while you're at it" in the database: If you would provide for storage of historical / statistical / estimation(?) data on the build times for projects and translation units it could be used by the parallel build scheduler to do a much better job.

    As it is, the scheduler will frequently end up building a "large" project (or dependency chain) single-threaded at the end of the job.  If it had data on build history it could push the prerequisites for that project forward and get better overall build performance.  Statistics on individual translation units would be needed to do this for incremental builds.

    Making the "units" for this performance data somewhat insensitive to platform scale would help if it will be possible to share the database among team members: we don't all have the same vintage machines.
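    The scheduling idea above can be sketched with the classic longest-processing-time-first heuristic (a toy model; `ScheduleLongestFirst`, the durations, and the worker count are invented for illustration):

```cpp
// Toy sketch: given historical build durations, schedule independent
// projects longest-first across parallel workers (the LPT heuristic), so a
// "large" project is started early rather than discovered last.
#include <algorithm>
#include <functional>
#include <queue>
#include <vector>

// Returns the makespan (overall finish time) of greedy longest-first
// scheduling of the given job durations onto the given number of workers.
int ScheduleLongestFirst(std::vector<int> durations, int workers) {
    std::sort(durations.rbegin(), durations.rend());  // longest job first
    // Min-heap of each worker's current finish time.
    std::priority_queue<int, std::vector<int>, std::greater<int>> finish;
    for (int i = 0; i < workers; ++i) finish.push(0);
    for (int d : durations) {
        int t = finish.top();  // least-loaded worker
        finish.pop();
        finish.push(t + d);    // assign the job to it
    }
    int makespan = 0;
    while (!finish.empty()) { makespan = finish.top(); finish.pop(); }
    return makespan;
}
```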

  • Why do you guys keep the OLE DB wizard source closed anyway? It makes code gen useless for C++ devs and ADO.NET is just a hack plenty of companies refuse to use.

    I agree you have a tough time ahead with Intellisense and editors (which should be CLR like fuzzy work anyway), but look around and you'll find guys with great AST ideas for C++ (apart from EDG etc).

    My main worry is that you guys are underfunded in C++ land, all while we are seeing silly 300MB Silverlight and WPF apps that will never make any business sense, not in the next 10 years for many, many apps.

    If the CLR team could just get their act together, polish up the page faults on GDI+, and get it into hardware acceleration, you would have a fast enough environment to do all the editor work.

    But that's a whole different topic and project from the compiler: it should be better exposed for preprocessing/introspection/code-gen/optimisation/etc., and opened up so it can shine with integration that the managed crowd would never understand anyway (they are ignorant enough as is)...

    With IntelliSense improving, will the VC++ product continue to support generating Browse Information (.bsc files)?  If so, what will Browse Information do that IntelliSense won't?


