IntelliSense History, Part 1


Hello, this is Jim Springfield again.  I want to start explaining our plan to fundamentally change how IntelliSense and other code browsing features work for C/C++.  The recent GDR for VS2005 and the changes that went into VS2008 were significant, but they don’t really change how these features are implemented.  This post covers the history of these features and helps set the stage for explaining what we are trying to accomplish in VC10 (the next release after VS2008).

Much of this summary is taken from my own memory of the events and from installing all of these older versions of Visual C++ and experimenting with them in order to refresh my memory.

Microsoft's tools have captured information about a C or C++ program's structure for a very long time.  Preceding even Visual C++ 1.0, the compiler supported generating program information through .SBR and .BSC files.  (Note: the compiler in Visual C++ 1.0 was already version 8, so the command line tools had been around for a while already.)  An SBR file, which the compiler emits as it compiles, contains reference and definition information for a single translation unit.  These SBR files are combined in a later step by the BSCMAKE tool to generate a BSC file.  This file can then be used to look at many different aspects of a program: references, definitions, caller-callee graphs, macros, etc.

Since the inception of the Visual C++ product, we have been parsing C++ code and storing information about it in some form for the use of the IDE.  This parser has been separate from the command line compiler because many IDE features require code understanding, and requiring a build would be an onerous burden in these cases.  For instance, at many stages of editing, the code is simply not in a compilable state, so requiring a compile would not be workable.  The earliest IDE used CLW (Class Wizard) files to store this information.  These were structured as INI files, a format common in 16-bit Windows before the registry was developed.  They provided minimal information about where classes were located and some information about resources.  The CLW files were generated using a very simple parser, which didn't have to deal with templates or ANSI/Unicode issues.  Also, special locations in files were marked with comments that couldn't be edited.  It was effective at the time for supporting the minimal requirements of Class Wizard, but it didn't provide a lot of information about a program.

Visual C++ 4.0 saw the arrival of a new feature: ClassView, which displayed information about all classes, class members, and global variables.  The parser used for CLW files was not sufficient, so a new parser was written and the information was stored in a new file: the NCB file.  NCB was an abbreviation for "no compile browse".  It provided some of the information that building a BSC would provide, but not all.

Visual C++ 6.0 saw the introduction of a new parser (feacp) for generating NCB files.  Internally, it was called "YCB" for "yes compile browse", although it still generated NCB files.  It was called "yes compile browse" because a modified version of the actual compiler was used to parse the code and generate the NCB.  The C++ language had been growing larger with namespaces, templates, and exceptions, and maintaining multiple parsers was undesirable.  The CLW parser was still being used, however, to generate CLW files.  VC 6.0 also saw the introduction of the first IntelliSense features, such as autocomplete and parameter info.

The NCB file is very similar to a BSC file and is based on a multi-stream format developed for PDB files.  The contents of the NCB file are loaded into memory, changes are made in memory, and everything is persisted back to the NCB file at shutdown.  The data structures in memory and on disk are very hierarchical, and most lookups require walking through them.  An element is represented by a 32-bit handle, which uses 16 bits to specify the module (i.e. file) the element came from and 16 bits to represent the element within the file.  This limits the number of files to 64K and the number of elements within a file to 64K.  That may seem like a lot, but there are customers hitting these limits.  (Note: prior to Whidbey, there was a 16K limit on the number of files, as two bits were being used for some housekeeping.)

In Visual C++ .NET (i.e. 7.0), the CLW file and its associated parser were finally removed, and Class Wizard features were implemented using information from the NCB file.  In 7.1, 8.0 (Whidbey), and 9.0 (Orcas), not much changed.  Whidbey saw the biggest change, as we eliminated pre-built NCBs for libraries and the SDK, provided better support for macros, and allowed 64K files in an NCB.  There have been these incremental improvements, but the overall architecture has remained the same.

As the NCB was used for more and more features, it became a core piece of the IDE’s technology and if it didn’t function correctly, many IDE features would not work.  FEACP needed to deal with large, complex dependencies between files and potentially incorrect code.  When a common header file was changed in a project, all dependencies would be reparsed in order to generate correct information.

Note: FEACP would only parse the header file itself once in the context of one translation unit, but all dependent cpp files would be reparsed using information gathered during the one parse of the header.  The problems this causes are collectively called the “multi-mod” problem, because it occurs when a header is used by multiple modules.

For large projects, this reparsing could take a while.  Initially, this caused the IDE to freeze, as the parse would happen on the foreground UI thread.  This was addressed in later versions by doing the parsing on a background thread.  However, there were some scenarios where the foreground UI needed the results and had to block anyway.  Also, this frequent reparsing could use a lot of CPU and memory, consuming too many resources and still causing issues with the UI.  This was eventually tuned to some degree by running at lower priority and delaying reparsing until a perceived idle time.  Another improvement was to add three prioritized work queues, which allow more important work to get done first.  Other problems were due to corruption of the NCB file, or to errors in the compiler that would cause a parse to fail early in a file, resulting in no information being available from that file.  There have also been issues with concurrency and locking of the NCB data in memory.  Adding the ability to quickly find information based on a simple query is very difficult and requires code changes.  Extending the NCB format to add support for templates, C++/CLI, and other language features has also proven difficult.

All of these issues are exacerbated by larger, more complex projects.   The number of files that may need to be reparsed can become quite large and the frequency of reparsing can be high.  Also, “intermittent” failures are simply more likely to happen as the size of projects goes up.  All of these problems have been looked at over time and some fixes and incremental improvements have been made, but the fundamental issues remain.

Next time, I will cover our approach to tackling these problems in VC10, which we are working on right now.



  • It's bizarre, but for our app (300 DLLs including third-party libraries; an MFC, ATL, STL application), VC++ 98 actually provided better IntelliSense support and speed.  It had a large impact on the team when we finally switched to VC++ 2003.  Some team members switched to Visual Assist X; others gave up on IntelliSense.  One welcome change in VC++ 2003 was that browse information is often not necessary: when IntelliSense info is available, that's cool for "F12".

    Edit-and-continue is another area that's been rendered extremely slow and generally fails since VC++ 98. Some of us still use VC++ 98 when we can.

  • IMO, there were two poor decisions made:

    The first was to eliminate the prebuilt NCBs. Yes, this allowed Intellisense to be more accurate, but I would gladly give that up for a performance increase. <windows.h> is easily one of the most complex include files a project can include, and forcing it to be parsed every time caused a huge increase in database size and parsing time.

    The second was thinking that moving work to a background thread would fix the problems. For one thing, CPU priorities do zip for the disk. One of the problems I see daily is that VS2005 decides to rebuild the Intellisense database when it thinks the system is idle, but it hogs the disk so much that it seriously impacts system performance. Second, even low priority threads using a lot of CPU cause problems. Before the QFE, it was pretty bad when you edited a shared header and three copies of VS2005 all decided to parse at the same time, completely slamming the CPU for upwards of five minutes. (Haven't been able to try the QFE yet in that scenario due to conflicts.) Allowing Intellisense parsing to be manually suspended was a good move.

    There's no doubt that VC6's Intellisense was MUCH faster -- I don't remember it being a performance problem at all. That having been said, it is nice that F12 works without having to compile, and I do like having macros show up, so there's definitely been improvement in functionality.

    One suggestion, to help performance: Can we have a way to manually exclude files from Intellisense? Often I have large portions of my code base that I don't really need Intellisense for, such as code I'm not working on, or auto-generated files. I think I'd be fine if Win32 APIs were excluded much of the time. If I could exclude header and source files, then I could improve its performance just by allowing it to do less. I think there used to be some undocumented #pragmas for doing this, but they seem to be defunct in 2005.

  • I use VC2005 and I think it's great.  But sometimes, when I try to reach a class through the Class View (in Italian it's called "Visualizzatore classi"), IntelliSense rebuilds the tree and I can't reach the class until it's done.  If possible, I think it would be better to turn off the rebuild of the tree when it has the focus.

  • Agreed.  If you find out that it is not going to be possible to redo the feature from scratch by the VC10 timeframe, the one thing you could do is bring back the pre-built NCB files (at least as an option).

  • While I also appreciate the speed of pre-built NCB files, I think they constitute an obsolete approach.

    Case in point: right now I'm working with an MFC project in Orcas. If I type in

    ::CreateWindow

    IntelliSense will display: #define CreateWindow CreateWindowA

    If I switch my project's properties to Unicode and rebuild, then IntelliSense will instead show #define CreateWindow CreateWindowW

    Pretty neat and not easily obtained with prebuilt NCBs, I think.

    To give my 2 cents of ideas, isn't it possible to borrow some ideas from Explorer? I'm thinking of how, in later releases of Windows, when browsing a directory on a file server, the files are first displayed with a neutral icon and then later redrawn with their correct icons.

    So, instead of doing a full-blown compilation of, say, windows.h, wouldn't it be possible to introduce some kind of "skim" or "lite" compilation, just to list the symbols? Never mind function arguments and return types; those could be parsed at a later time, when the user types in that function call, etc. Sort of "just in time" IntelliSense compilation :-)

  • > If I type in

    > ::CreateWindow

    > IntelliSense will display:

    > #define CreateWindow CreateWindowA

    which tells us zip about the parameters that we have to type next.

    > If I switch my project's properties to Unicode and rebuild, then IntelliSense will instead show

    > #define CreateWindow CreateWindowW

    which tells us zip about the parameters that we have to type next.

    This is exactly one of the reasons why prebuilt NCBs are useful.  They can tell us the parameters that we have to type next.

    If designed properly, prebuilt NCBs can even tell us things like LPTSTR instead of whichever underlying type it maps to in the present environment.  For functions like WideCharToMultiByte or gethostname we want to be told when a parameter really has to be an LPSTR (never Unicode), an LPWSTR (never ANSI), or LPTSTR (has to match the present environment).  Do you really consider that to be too intelligent or sensible?

  • Oh here we go.  I experimented with converting a Visual Studio 2005 SP1 solution to Visual Studio 2008, and the very first failure is Intellisense.

    There was a problem reading metadata from '{AECA1A9A-4E97-4879-999A-2FC44788878F}' ('The specified file could not be found.'). IntelliSense may not function correctly until the solution is reloaded.

    I closed the Visual Studio 2008 window, and then in Explorer I double-clicked the newly converted .sln file.  Doesn't this force the solution to be reread?  But the very same Intellisense error was repeated.  Also every time I switch between Debug and Release, it repeats again.

  • After becoming frustrated with VC2005's IntelliSense, I got a copy of Visual Assist X, which was a vast improvement, in my opinion. That said, I recently switched to VC2008 (w/o VAX) and it seems a lot better. Its IntelliSense deals with my Boost.Test based projects (or similar apps whose structure is almost completely defined using macros) much better than either VC2005 or VAX ever did. You're definitely getting there guys - bring on VC10!

  • Unfortunately, according to my experience, 2008's intellisense for C++ is still as useless as 2005's or 2003's. Actually, all I expect now from VC10 is a switch to turn the built-in intellisense off without having to delete feacp.dll :)

    For now, we are just stuck with VAX... Though it is not infallible, at least it does the job.
