Engineering Windows 7

Welcome to our blog dedicated to the engineering of Microsoft Windows 7

Feedback and Engineering Windows 7

Just about every email we receive and every comment we get comes with feedback—something to change, something to do more of, something to do less of, and so on. As we’ve talked about in this blog, acting on each one in an affirmative manner is easier said than done. What we can say for certain is that we are listening to each and every comment, blog post, news story, MS Connect report, Send Feedback item, and of course all the data and telemetry.  This post kicks off the discussion of changes made to the product with an overview of the feedback process.  We'll get into specific changes shortly and we'll continue to return to the theme of changes in the Release Candidate (RC) over the next weeks.  Yesterday on the IE Blog, you saw that we'll be updating IE 8 on Windows 7, and there we also talked about the feedback process in general.

Feedback about Windows 7 of course starts before we've written any code, and by the time we've got running code thousands of people outside of Microsoft have provided input and influenced the feature set and design of Windows 7.  As we've seen, the input from even a small set of customers can often represent a wide variety of choices--often in alignment, but just as often in opposition.  As we're developing the features for Windows 7 we work closely with PC makers, enterprise customers, and all types of customers across small business, education, enthusiasts, product reviewers and industry "thought leaders", and so on.  We shape the overall "blueprint" of the release based on this wide variety of input.  As we have design prototypes or code running, we have much more targeted and specific feedback by using tools such as usability tests, concept tests, benchmark studies, and other techniques to validate the implementation of this blueprint. Our goal with this level of feedback is for it to be representative of the broad set of Windows customers, even if we don't have a 1:1 interaction with each and every customer.  Hopefully this post will offer some insights into this process overall--the tools and techniques, and the scope of feedback. 

In the first few weeks of the Windows 7 beta we had over one million people install and use Windows 7.  That's an astounding number for any beta test and while we know it has been fun for many folks, it has been a lot of work for us--but work that helps to raise the quality of Windows 7.  When you use the beta you are automatically enrolled in our Customer Experience Improvement Program (anonymous feedback and telemetry, which is voluntary and opt-in in the RTM release).  Just by using Windows 7 as a beta tester you are helping to improve the product--you are providing feedback that we are acting on in a systematic manner.  Here is a sense of the scale of feedback we are talking about:

  • During a peak week in January we were receiving one Send Feedback report every 15 seconds for an entire week, and to date we’ve received well over 500,000 of these reports.  That averages to over 500 reports for each and every developer to look through!  And we're only 6 weeks into using the Windows 7 beta, even though for many Windows 7 already seems like an old friend.  (A quick sketch of this arithmetic follows this list.)
  • To date, with the wide usage of the Windows 7 Beta we have received hundreds of Connect (the MSDN/TechNet enrolled beta customers) bug reports, and have fixes in the pipeline for a higher percentage of those reported bugs than in any previous Windows development cycle.
  • To date, we have fixes in the pipeline for nearly 2,000 bugs in Windows code (not in third party drivers or applications) that caused crashes or hangs.  While many Beta customers have said they are very happy with the quality of Windows 7, we are working to make it even better by making sure we are fixing the issues experienced by such broad and significant usage.
  • To date, we have recorded over 10,000,000 device installations and over 75% of these were able to use drivers provided in box (that is, no download necessary).  The remaining devices were almost all served by downloading drivers from Windows Update and by direct links to the manufacturer's web site.  We've recorded the usage of over 2.8M unique plug-and-play device identifiers.
  • On a personal note, I've received and answered almost 2,000 email messages from folks all around the world, just since this blog started in August.  I really appreciate the discussion we're having and am doing my best to keep up with all the mail.
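
As a back-of-the-envelope check, those numbers hang together; here is a small sketch of the arithmetic (the implied team size is approximate):

    # Rough arithmetic behind the Send Feedback numbers above.
    seconds_per_week = 7 * 24 * 60 * 60         # 604,800

    peak_week_reports = seconds_per_week / 15   # one report every 15 seconds
    print(round(peak_week_reports))             # ~40,320 reports in that week

    implied_devs = 500_000 / 500                # total reports / reports per developer
    print(round(implied_devs))                  # ~1,000 developers

    # Consistent with the "about 40 Send Feedback reports per developer
    # during that one week" figure mentioned later in this post.
    print(round(peak_week_reports / implied_devs))  # ~40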

We have a variety of tools we draw on to help inform the decision making process. A key element that we have focused on quite a bit in Windows 7 is the role of data in making decisions. Everything we do is a judgment call, as ultimately product development is about deciding what to get done from an infinite set of possibilities, but the role of data is essential and is something that has become far more routine and critical. It is important to be super clear—data is not a substitute for good judgment or an excuse to make a decision one way or another, but it most definitely informs the decision. This is especially true in an era where the data is not only a survey or focus group, but often includes a “sampling” of millions of people using Windows over the course of an extended time period.

A quick story from my years working on Office: many years ago, before the development of telemetry and the internet, deciding what features to put in a release of Office could best be described as a battle. The battle took place in conference rooms where people would basically debate until one or more parties gave up from fatigue (mental or otherwise)—essentially adrenaline-based product development. The last person standing, the one with the most endurance, or the one who pulled an all-nighter to write the code pretty much determined how features ended up or what features ended up in a product. Sort of like turning feature design over to a Survivor-like process. I’m sure many of you are familiar with this sort of process. The challenges with this approach are numerous, but inevitably features do not hold together well (in terms of scenarios or architecture), the product lacks coherency, and most importantly, unless you happen to have a good match between the “winner” and the target customers, features will often miss the mark.

In the early 1990’s we started instrumenting Word and learning about how people actually used the software (this was before the internet, so this was a special version of the product we solicited volunteers to run, and then we would collect the data via lots of floppies). We would compile data and learn about which features people used and how much people used them. We learned things such as how much more people used tables than we thought, but for things very different from tables. We learned that a very significant amount of the time the first suggestion in the spelling dictionary was the right correction (hence autocorrect). We learned that no one ever read the tip of the day (“Don’t run with scissors”). This data enabled us to make real decisions about what to fix, the impact of changes, and then, when looking at the goals (the resulting documents), what direction to take word processing.

Fast forward to the development of Windows 7 and we’re focused on using data to help inform decisions we make. This data takes many forms and helps in many ways. I know a lot of folks have questions about the data – is it representative, how does it help fix things people should be using but don’t, what about doing new things, and so on. Data is an important element of making decisions, but not a substitute for clear product goals, meaningful customer engagement, and working across the ecosystem to bring Windows 7 to customers.

Let’s talk a bit about “bugs”. Up front it is worth making sure we’re on the same page when we use the much overloaded term bug. For us a bug is any time the software does something that someone wasn’t expecting it to do. A bug can be a cosmetic issue, a consistency issue, a crash, a hang, a failure to succeed, a confusing user experience, a compatibility issue, a missing feature, or any one of dozens of different ways that the software can behave in a way that isn’t expected. A bug for us is not an emotional term, but just shorthand for an entry in our database representing feedback on the product. Bugs can be reported by a human or by the various forms of telemetry built into Windows 7. This broad definition allows us to track and catalog everything experienced in the product, and to do so in a uniform manner.
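
To give a feel for what “an entry in our database” covers, here is a minimal sketch of such a record; the field names and values are hypothetical illustrations, not our actual schema:

    from dataclasses import dataclass

    # Hypothetical shape of a single bug record (illustrative only).
    @dataclass
    class Bug:
        id: int
        source: str            # "Send Feedback", "Connect", "telemetry", ...
        kind: str              # "crash", "hang", "cosmetic", "compatibility", ...
        component: str         # e.g. "shell/taskbar"
        hits: int              # reports/telemetry events mapped to this bug
        in_windows_code: bool  # Windows code vs. third-party driver/app

    example = Bug(id=421337, source="telemetry", kind="crash",
                  component="shell/taskbar", hits=1200, in_windows_code=True)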

Briefly, it is worth considering a few types of data that help to inform decisions as some examples.

  • Customer Experience Improvement Program. The CEIP covers the full set of data collected on your PC that is provided to Microsoft in an anonymous, private, and opt-in manner. During the beta, as we state, this is defaulted on. In the retail product, of course, it is optional. During the course of the beta we are seeing data about usage of new features, where people are customizing the product, what commands are being used, and in general how Windows 7 is being used. You’ve seen us talk about some of this data from Windows Vista that informed the features of Windows 7, such as the display resolution being used or the number of accounts on a machine. There are many data points measured across the product. In fact, an important part of the development cycle is to make sure that new features are well instrumented to inform us of usage during beta and down the road.
  • Telemetry. While related to CEIP in the programmatic sense, we look at telemetry in a slightly different manner, and you’ve seen this at work in how we talk about system performance or about the diversity of devices, such as our discussion of high DPI support. Throughout the course of the beta we are able to see how boot time evolves or which devices are successfully installed or not. Important elements of telemetry that inform which bugs we fix are how frequently we are seeing a crash or a hang. We can identify software causing a higher level of issues, and the right team or ISV can know to work on the issue. The telemetry really helps us focus on the benefit of the change—fixing a bug that affects thousands of customers, a widely used device, or broadly used third-party software has a much bigger impact than fixing one that affects only a few people, a lower-volume device, or a less-used software product. With this data we can more precisely evaluate the benefit of changes; a small sketch of this weighting follows this list.
  • Scenario based tests. During the course of developing a feature we can take our designs and prototypes (code, paper, or bitmaps) and create a structured study of how customers would interpret and value a feature/scenario. For example, early in the planning of Windows 7 we created a full working prototype of the taskbar enhancements. With this prototype we can study different types of customers (skill levels, familiarity with different versions of Windows, competitive product customers, IT pro or end-user) and how they react to a well-defined series of “tasks”. This allows a much more detailed study of the feature, as one example. As with all tests, these are not a substitute for good judgment in a broader context, but a key element to inform decisions.
  • Benchmarking studies. As we transitioned to the pre-beta we started to have real code across the whole product, so we began validation of Windows 7 with real code in real-world scenarios. We call these studies benchmarking because often we are benchmarking the new product against a baseline of the previous version(s) of Windows. We might do a study where we see how long it takes to share a printer in the home, and then compare that time-to-complete and success rate against a Windows 7 test using HomeGroup. We might compare setting up a wireless network with and without WPA. We have many of these types of benchmarks and work to make sure that we understand both the progress we’ve made and where we might need to improve documentation, tutorials, or other forms of assistance.
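
To make the telemetry weighting mentioned above concrete, here is a minimal sketch of frequency-based triage; the signatures, counts, and field names are all invented for illustration and the real pipeline is far richer:

    # Minimal sketch: rank crash/hang "buckets" by how many users they affect,
    # and note who owns the fix (Windows code, a driver, or third-party software).
    crash_buckets = [
        {"signature": "exampleapp.exe!0x0042", "owner": "third-party ISV", "users": 52_000},
        {"signature": "examplecam.sys+0x10",   "owner": "driver (IHV)",    "users": 9_400},
        {"signature": "shellmod.dll!OnPaint",  "owner": "Windows",         "users": 310},
    ]

    for b in sorted(crash_buckets, key=lambda b: b["users"], reverse=True):
        print(f'{b["users"]:>7,} users  {b["owner"]:<15} {b["signature"]}')

    # The widely hit third-party crash is routed to the ISV, the driver issue
    # to the hardware maker, and the low-volume Windows bug is weighed against
    # everything else competing for the team's time.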

This type of feedback all represents structured feedback in that the data is collected based on a systematic study and usually has a hypothesis associated with it. We also have the unstructured feedback which represents the vast array of bug reports, comments, questions, and points of view expressed in blogs, newsgroups, and the Send Feedback button—these are unstructured because these are not collected in a systematic manner, but aggressively collected by any and all means. A special form of this input is the bug reporting done through the Connect program—the technical beta—which represents bug reports, feature suggestions, and comments from this set of participants.

The Windows 7 beta represents a new level of feedback in this regard in terms of the overall volume, as we talked about above. If you consider the size of the development team and the time it would take just to read the reports, you can imagine that merely digesting (categorizing, understanding, flagging) issues, let alone responding to them, is a massive undertaking (about 40 Send Feedback reports per developer during that one week, though as you can imagine they are not evenly distributed across teams).

The challenge of how to incorporate all the feedback at this stage in the cycle is significant. It is emotional for us at Microsoft and the source of both considerable pride and also some consternation. We often say “no matter what happens, someone always said it would.” By that we mean, on any given issue you can be assured that all sides will be represented by passionate and informed views of how to resolve it, often in direct opposition to each other, plus every view in the middle. That means for the vast majority of issues there is no right or wrong in an absolute sense, only a good decision within the context of a given situation. We see this quite a bit in the debates about how features should work—multiple solutions are proposed and debated in comments on a blog (people even run whole blogs about how things should work). But ultimately on the Windows development team we have to make a call, as we’re seeing that a lot of people are looking forward to us finishing Windows 7, which means we need to stop changing the product and ship it. We might not always make the right call, and we’ll admit when we don’t, even if we find changing the behavior is not possible.

Making these decisions is the job of program management (PM). PMs don’t lock themselves in their offices and issue opinions, but more realistically they gather all the facts, data, points of view, and work to synthesize the best approach for a given situation. Program management’s role is making sure all the voices are heard, including beta testers, development, testing, sales, marketing, design, customer support, other teams, ISVs, IHVs, and on and on. Their role is to synthesize and represent these points of view systematically.

There are many factors that go into understanding a given choice:

  • What is it supposed to do? At the highest level, the first question to ask is how something is supposed to work. Sometimes things are totally broken. We see this with many, many beta issues around crashes and hangs, for example. But there’s not a lot of debate over these, since if it crashes with any meaningful frequency (based on telemetry) it should be fixed. We know if it crashes for you then it is a “must fix”, but we are looking across the whole base of customers and understanding the frequency of a crash and also whether the code is in Windows, a driver from a hardware maker, or software from a third party—each of those has a different potential resolution path to consider. When it comes to user interaction there are two elements of “supposed to do”: first, the overall scenario goal, and second, the feedback from different people with different experiences (and opinions) of what it should do. As an example, when we talked about HomeGroup and the password/passphrase there was a bunch of feedback over how this should work (an area we will be tweaking based in part on this feedback). We of course have specifications and prototypes, but we also have a fluidity to our development process such that we do not have 100% fidelity before we have the product working (akin to architectural blueprints that leave tons of decisions to be made by the general contractor or decided while construction is taking place). There are also always areas in the beta where the feature is complete but we are already on a path to “polish” the experience.
  • How big is the benefit? So say we decide something is supposed to behave differently. Will it be twice as good? Will it be 5% better? Will anyone notice? This is always a great discussion point. Of course, people who advocate for a change are always convinced that the change will prevent the feature from being “brain dead”, or that “if you don’t change this then the feature is dead”. We see this a lot with areas around “discoverability”, for example—people want to put something front and center as a way of fixing something. We also see many suggestions along the lines of “make it configurable”. Both of these have benefits in the near term of course, but both also add complexity down the road in terms of configurations, legacy user interface, and so on. Often it is important to look at the benefit in a broader context, such as how frequently something will be executed by a given person or what percentage of customers will ultimately take advantage of the improvement (a small worked example of this arithmetic follows this list). It is not uncommon internally to see folks extrapolate instantly to “everyone does this”!
  • How big is the change? Early in the product cycle we are making lots of changes to the code—adding new code, rearchitecting, and moving things around a lot. We don’t do so willy nilly of course but the reality is that early in the cycle there is time for us to manage through the process of substantially changed code and the associated regressions that will happen. We write specifications and have clear views of features (scenario plans, prototypes, and so on) because we know that as the project progresses the cost of making big changes of course goes up. The cost increases because there is less time, but also because big change late in the cycle to a large system is not prudent engineering. So as we consider changes we also have to consider how big a change is in order to understand the impact across the system. Sometimes change can be big in terms of lines of code, and lots of code is always risky. But more often the change is not the number of lines, but the number of places the code is connected—so while the change sounds like a simple “if” statement it is often more complex than that. Over the years, many have talked about componentization and other systems engineering ways to reduce the impact of change and of course Windows is very much a layered system. The reality is that even in a well layered system, it is unlikely one can change things at the bottom and expect no assumptions of behavior to carry forth through subsequent upper layers. This “defensiveness” is an attitude we have consistently throughout our development process because of the responsibility we feel to maintain compatibility, stability, performance, and reliability.
  • How costly is the change relative to the benefit? Change means something is different. So any time we change something it means people need to react. Often we are deliberate in change, and we see this in user interface, driver models, and so on. When we are deliberate, people can prepare and we can provide tools to help with a transition. We’ve seen a lot of comments about new features that react to the cost of change. Many times this commentary is independent of the benefit and just focuses on the change itself. This type of dialog makes it clear that change itself is not always good. With many bug reports we hear “this has been in Windows for 3 versions and must be fixed in Windows 7”. Over many releases of Windows we have learned that behaviors in the system, particularly in APIs, message order and semantics, or interfaces, might not be ideal, but changing them introduces more complexity, incompatibilities, and problems for people than the benefit of the change. Some view these decisions as “holding us back”, but more often than not it would be a break from the past one day only to create a new past to break from the next. The existing behavior, whether it is an API or a user interface, defines a contract we have, and part of building a release is making sure we have a well understood cost/benefit view, knowing that as with any aspect of the system different people will have different views of this “equation”.
  • In the context of the whole release, how important is this issue? There is the reality that all decisions need to be made in the context of the broader goals of the release. Each release stands for a set of core scenarios and principles that define it. By definition that means in each release some things will change more than others, and some things might not change at all. Or said another way, some parts of the system will be actively worked on towards a set of goals while we keep other parts of the system more or less “stable” release over release. It means that things you might want to see changed might not change, just because that is an area of the product we’re not mucking with during Windows 7. As we’ve talked about, for Windows 7 we put a lot of work into various elements of system performance. Aside from the obvious scenario planning and measurement, we also took very seriously areas of the system that needed to change to move us forward. Likewise, areas of the system where the performance gain would not be significant enough to warrant change do not change that much. We carry this forward through the whole cycle as we receive data and telemetry.
  • How does the change impact security, reliability, performance, compatibility, localizability, accessibility, programmability, manageability, customizability, and so on? The list of “abilities” that it takes to deliver Windows is rather significant. Members of our development team receive ongoing training and information on delivering on all of these abilities so we do a great job across the product. In addition, for many of these abilities we have members of the team dedicated full time to delivering on them and making sure we do a good job across the product. Balancing any change or input against all of these abilities is itself a significant undertaking and an important part of the research. Often we see input that is very focused on one ability but goes counter to another—it is easy to make a change to provide customization, for example, but then this change must also be customizable for administrators, end-users, and PC makers. Such complexity is inherent in the very different scenarios for usage, deployment, and management of PCs. The biggest area where folks see us considering this type of impact is when it comes to changing behavior that “has been in the product forever”. Sometimes an arbitrary decision made a while back is best left as is in order to maintain the characteristics of the subsystem. We know that replacing one old choice with a new implementation just resets the clock on things that folks would like to see be different—because needs change, perspectives change, and people change.
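
To make the benefit question above concrete, here is a minimal sketch of the kind of back-of-the-envelope arithmetic involved; every number below is invented purely for illustration:

    # Hypothetical expected-benefit arithmetic for a proposed change.
    user_base     = 1_000_000_000  # people relying on Windows
    adoption_rate = 0.02           # fraction who would use the improvement
    uses_per_week = 3              # uses per adopting user per week
    seconds_saved = 2              # time saved per use

    weekly_benefit = user_base * adoption_rate * uses_per_week * seconds_saved
    print(f"{weekly_benefit / 3600:,.0f} user-hours saved per week")  # ~33,333

    # Weighed against: regression risk, test and documentation cost, and the
    # compatibility "contract" created by the existing behavior.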

These are just a few of the factors that go into considering a product change. As you can see, this is not something we take lightly, and a lot goes into each and every change. We consider all the inputs we have and all the data we can gather. In some ways it is easy to freeze up thinking about the decisions we must make to release Windows 7—if you think too hard about a decision you might start to worry about a billion people relying on something, and it gets very tricky. So we use data to keep ourselves objective and to keep the decision process informed and repeatable. We are always humbled by the responsibility we have.

While writing this post, I received a “bug report” email with the explicit statement “is Microsoft going to side step this issue despite the magnitude of the problem” along with the inevitable “Microsoft never listens to feedback”. Receiving mail like this is tough—we’re in the doghouse before we even start. The sender has decided that this report is symbolic of Microsoft’s inability or lack of desire to incorporate critical feedback and to fix must-fix bugs during development; that Microsoft is too focused on shipping to do the right thing. I feel stuck because the only answer being looked for is the fix, and anything less is a problem or further proof of our failure. And in the back of my mind is the reality that this is just one person with one issue whom I happen to be talking to in email. There are over a couple of million people using the beta, and if each one, or for that matter just one out of 10, has some unique change, bug fix, or must-do work item, we would have literally years of work just to make our way through that list. And if you think about the numbers and consider that we might easily get 1,000,000 submitted new “work items” for a product cycle, even if we do 100,000 of them it means we have 900,000 folks who feel we don’t listen, compared to the 100,000 folks who feel listened to. Perhaps that puts the challenge in context.

With this post we tried to look at some of the ways we think about the feedback we’re getting and how we evaluate it in the course of developing Windows 7. No area is more complex than balancing the needs (and desires) of such a large and diverse population—end-users, developers, IT professionals, hardware makers, PC manufacturers, silicon partners, software vendors, PC enthusiasts, sysadmins, and so on. A key reason we augment our approach with data and studies that deliberately select for representative groups of “users” is that it is important to avoid “tyranny of the majority” or “rule by the crowd”. In a sense, the lesson we learned from adrenaline-based development was the value of being systematic, representative, and as scientific as possible in the use of data.

The work of acting on feedback responsibly and managing the development of Windows through all phases of the process is something we are very sincere about. Internally we’ve talked a lot about being a learning organization and how we’re always learning how to do a better job, improve the work we do, and in the process make Windows even better. We take this approach as individuals and in how we view building Windows. We know we will continue to have tough choices to make, as everyone who builds products understands, and what you have is our commitment to continue to use all the tools available to make sure we are building the best Windows 7 we can build.

--Steven

Comments
  • IPv6 is, AFAIK @ least, driven by a DIFF. driver than tcpip.sys (iirc, it's "tcpip6.sys" & tcpip.sys is for IPv4) as far as drivers go @ least...

    Also, IP addresses in IPv6 are a LOT longer than the 4 section IP addresses IPv4 uses, so I wouldn't say it's that...

    Speculation's fine & dandy, but I'd like to know the REAL answer for:

    1.) Why 0 is no longer working as a blocking IP address in a HOSTS file in VISTA or Server 2008

    &

2.) Why has the PORT FILTERING gui front-end been removed from the configuration GUI in VISTA &/or Server 2008?

Again - doing so makes NO sense for either efficiency (in regards to #1 (hosts file)) or for LAYERED security (in regards to #2 (Port Filtering)).

Let's hear the REAL answer to this, from "the horse's mouth", the folks from Microsoft!

    Thanks...

    APK

  • SORRY TO HAVE TO TYPE IN CAPS BUT,

    PLEASE, PLEASE, PLEASE FIX WINDOWS 7 SO IT REMEMBERS WINDOW VIEWS AND WINDOWS POSITIONS ON THE DESKTOP. THIS HAS BEEN A NAGGING ISSUE SINCE WINDOWS 3.1 AND AGAIN IT IS PRESENT IN THE WINDOWS 7 BETA, BOTH X86 AND X64. IT'S SO EASILY SOLVED YET MICROSOFT HAS YET TO ADDRESS THIS COMMON ANNOYANCE.

    THERE ARE SO MANY SMALL ANNOYANCES THAT MICROSOFT HAS YET TO FIX. ANOTHER ONE IS WINDOWS MEDIA PLAYER NOT REMEMBERING YOUR VIEW SETTINGS TOO. WHEN STARTING WMP YOU HAVE TO MANUALLY ADJUST THE "FIT TO PLAYER ON RESIZE" EACH AND EVERY TIME. VERY ANNOYING!!

    ANOTHER IS WMP AUTOMATICALLY CREATING A STARTUP ENTRY FOR WMPSNCFG REGARDLESS OF YOUR SETTINGS IN THE WMP OPTIONS MENU. CURRENTLY THE ONLY WAY TO RESOLVE THIS ISSUE IS TO GO INTO THE REGISTRY. VERY ANNOYING!

There are those who might say oh my god you're nitpicking, but I'm not. These are basic functionality issues that have been present since Windows' inception. Sometimes it's the small things that make a product great. Thank you!

  • It seems that WMP12 is getting a bit of a bashing. I have to say that it is, for me, the weakest point of the beta, and about the only thing that makes me think about going back to Vista. The main problem is when using it to stream video. I have a Netgear EVA700 box connected via a wireless network. There are two computers in the house, one running Vista Service Pack 1, and the other running Windows 7. Playing DivX files on the Netgear works fine when they come from the Vista computer, but the same file, when played via the Windows 7 beta, has much poorer picture quality (diagonals are stepped, and panning is very jerky). Even worse, the video restarts about 10 minutes into the program. I have tried several files and the results are the same.

    I have tried copying the file to a USB stick, and it plays fine when the USB stick is inserted in the player. In other words, it's not the file (USB works fine) or the network (the Vista machine works fine); it can only be either Windows 7 or Media Player 12.

    I can understand a bug causing the restart, but the large difference in picture quality I cannot understand. I thought that all the video processing took place in the Netgear EVA700, and that all the computer does is send the file over the network; otherwise I do not understand how the EVA can play from a USB stick. Does anyone else have a similar experience?

  • "the reality is that early in the cycle there is time for us to manage through the process of substantially changed code and the associated regressions that will happen"

    This is precisely why there should have been two betas for what is considered a less major release than Windows Vista.

    "There over a couple of million people using the beta and if each one, or for that matter just one out of 10, have some unique change, bug fix, or must do work item we would have literally years of work just to make our way through that list."

    This challenge can be solved by implementing a Digg/Aero taskforce-like system of fixing "bugs" (sorry, Connect doesn't do the job because it's limited to tech beta participants). Doesn't Microsoft see its potential? "Rule by the crowd" is exactly what is needed to set things right in most of these cases.

    Lastly, all the product teams (right from Windows Media to WNDP) should blog and respond to feedback.

  • While we as users understand how incredibly complex and difficult it is for the Windows teams to make even a small change while minimizing its negative impact on users, as far as removing features is concerned, clearly the best approach IMO is to not remove them, even when they are not being replaced by something else. Take for example the classic Start menu, or the Software Explorer in Windows Defender. Having that additional code wouldn't have added to the product's complexity as much as its removal now affects millions of users who were used to it and wouldn't want it to change at any cost.

    Once you "give" a feature to users, you have a moral responsibility to NOT TAKE IT AWAY. Wherever features are removed, it is best to keep them as an alternative choice forever; follow the additive model as far as possible, not the destructive one which throws away things based on management "decisions". Microsoft does not even provide a rationale behind each and every feature removal; these lists are only compiled by users who used the features in their day-to-day activities and one fine day discover that they've been pulled and are helpless without them. If you really follow this additive model where no features (that make up these lists) are removed, Microsoft will see the very large number of users who are holding back merely because of removed features migrate to the newer operating system.

    As an aside, did Microsoft ever think about why, before Windows Vista, there was not such a huge list of removed features (www.wikipedia.org/wiki/Features_removed_from_Windows_Vista)? The features-removed list is itself a bug report for a beta, for things that are getting overlooked or removed without careful thought, but Microsoft is hardly doing anything to justify their removal, let alone add the features back as they were in previous versions of Windows. As a matter of fact, most of the comments on this post are regarding crippling/dumbing down/removals. Anyways, I'm despondent about ever seeing them back at this stage of the beta cycle, moreover since "telemetry drives Microsoft innovation nowadays."

  • The beta stage is more about verifying tasks and catching problems than about extensively changing the way particular software works.

  • http://tech.slashdot.org/comments.pl?sid=1143349&threshold=-1&commentsort=0&mode=thread&pid=27012231

    Take a read, Microsoft...

    APK

  • I found the (imo) rather flimsy reasoning behind WHY the PORT FILTERING gui controls were allegedly removed in Windows VISTA, Server 2008, & Windows 7, after consulting with Mr. Mitch Tulloch... here tis:

From Chapter 27 of the Vista Resource Kit, which explains the rationale for removing the TCP/IP Filtering UI:

    ----

    "Windows XP Service Pack 2 actually has three different firewalling (or network traffic filtering) technologies that you can separately configure, and which have zero

    interaction with each other:

    Windows Firewall that was first introduced in Service Pack 2

    TCP/IP Filtering, which is accessed from the Options tab of the Advanced

    TCP/IP Properties sheet for the network connection

    IPsec rules and filters, which you can create using the IPsec Security

    Policy Management MMC snap-in

    On top of this confusion, Windows Server 2003 Service Pack 1 had a fourth network traffic filtering technology that you could use: the Routing and Remote Access Service

    (RRAS), which supported basic firewall and packet filteringthe problem, of course, is that when more than one of these firewalls is configured on a computer, one firewall can block traffic that another allows"

    ----

Lame reasoning imo!

    I say this, because it is TRIVIAL to create exception rules in most any software (or hardware-based) firewall generally, & to match that in Port Filtering is quite simple also (even easier imo, provided you know what ports are involved, & that's what the IANA lists are for, after all).

    AND

    Once a malware gets inside? One of the FIRST things it does, is disable a software firewall... & with NO OTHER BARRIERS IN THE WAY, such as PORT FILTERING RULES?

    You get, what you get (infested systems galore online today).

    APK

    P.S.=> HOWEVER - Mr. Tulloch & I are currently in the process of searching for the reasoning behind the removal of 0 as a valid IP blocking address in a HOSTS file, but even HE was unaware of WHY this was done... but, with any luck? We're going to find out - &, I'll let you all know, here, since nobody else here has to date... that is, if the thread isn't dead by then... apk

  • Hi,

    I have installed the beta. Since I am blind and there is no way of installing any version of Windows without sighted help, I had a person telling me what was on the screen during the installation. Most things went OK, but since this person is a regular user with almost no knowledge of Windows, we missed the place where you choose the correct account. I have an account, but it seems to be of the most restricted type. I cannot see files on an attached flash card, where my screen reader installation is! I cannot start Narrator, the built-in, very limited screen reader; for some reason Narrator has never worked on the Windows versions I have used since 2000, when it was introduced. Before I can start using the beta I need to know a way to install my screen reader, and it would be best if it can be done without too much work in a GUI environment, since I will be using the same person for the screen reading.

    claus

  • Upon speaking with Mr. Mitch Tulloch & doing my own queries about it as noted above over @ /. (slashdot), a Mr. Harm Sorensen was good enough to point out that HOSTS files using 0 are indeed smaller & faster on disk as I noted, & thus smaller & faster to load into the local DNS cache (doing the math in theory alone shows anyone this much). BUT, once in memory? I had already heard tell that the load eliminates repeat entries, but NOT that the API does a conversion of 0 entries into 0.0.0.0 ... so, in RAM, it may not be a bloating, but on disk? It unquestionably is.

    Getting those answers, people, & they are still in favor of what I stated on BOTH points (HOSTS files not being able to use 0 anymore for a blocking IP address, AND Port Filtering being foolishly removed for very lame reasons which affect the practice of "layered security", by removing the ability to use it easily since the gui for it is gone).

    APK

    P.S.=> To Microsoft: I really hope you folks have a GOOD SOLID TECHNICAL reason for doing both of those things I note above, because thus far, I cannot see it & neither can others... the justifications are not good enough for such cripplings AND inefficiencies being implemented! apk

  • Additional findings that Mr. Harm Sorensen (noted in my last post above) and I made in regards to HOSTS file usage were rather astounding, & don't make a lot of sense (especially the result), read on:

    Mr. Sorensen espouses the use of a local DNS server to cache URL-to-IP address equations, which does work to do the job but may be vulnerable to attacks/exploits, for one thing (& more below, read on please)!

    E.G.-> Attacks & exploits such as those Mr. Dan Kaminsky found in BIND, &/or the DJBDNS (Dan J. Bernstein) holes shown a few days prior to this post, OR just plain old "DNS poisoning". And DNS server programs eat up cpu cycles, memory, & other forms of I/O as well, needlessly, for most people who do not really need one because of HOSTS file usage (for both speed and security online).

    What Mr. Sorensen found was that disabling the DNS Client causes more disk I/O than was present using the DNS Client service.

    (Which fails on HOSTS files larger than 1 MB or so; not exactly SURE what size begins to make the local DNS Client start to lag, but it is near that size)

    Turning off the DNS Client Service caused the system to constantly re-reference the HOSTS file from disk, which IS expected.

    To the tune of 1500ms to Open/Read/Close the HOSTS file in order to obtain a URL-to-IP address equation from it (whether for blocking or speeding up access to said site online).

    This seemed excessive & impractical to me as well, as it did to him; I am not one to argue with numbers, @ least not until they stop making sense, and they don't make sense given the results people get using custom HOSTS files for both more speed & more security online.

    I.E.-> Because of what Mr. Sorensen found, it made no sense to me that this should give people the speed boosts they see using custom HOSTS files!

    E.G.-> Speed boosts folks are seeing, such as those Mr. Oliver Day of SecurityFocus.com noted in his recent article "Resurrecting the Killfile" on 02/24/2009. His results? See quote:

    ----

    "The host file on my day-to-day laptop is now over 16,000 lines long. Accessing the Internet — particularly browsing the Web — is actually faster now." Oliver Day, SecurityFocus.com

    ----

    Which is also what I see here, as does anyone else using a HOSTS file for faster & more secure online internet experiences. I can provide such testimonials easily, for anyone curious in regards to this. Just ask.

    Anyhow/anyways:

    Here, I offset more of that disk-I/O-bound lag of access/re-access by placing my HOSTS file onto a TRUE SSD (not the flash-RAM-based type) called a CENATEK RocketDrive, via altering the DataBasePath parameter found here in the registry:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

    (Which is composed of RAM (2 GB PC-133 SDRAM), with access many orders of magnitude faster than std. HDDs have, since there are no read/write heads per se, as in a mechanical HDD. Speed of RAM vs. speed of disk, basically.)
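
    For anyone who'd rather script that change than edit the registry by hand, a minimal sketch (Python; needs administrator rights, & the new path here is just an example - point it at whatever faster volume you use):

        import winreg

        # Repoint the TCP/IP stack's database path (where the HOSTS file lives).
        # Default is %SystemRoot%\System32\drivers\etc; requires admin rights.
        KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                            winreg.KEY_SET_VALUE) as k:
            winreg.SetValueEx(k, "DataBasePath", 0, winreg.REG_EXPAND_SZ,
                              r"R:\etc")  # example: a RAM-disk/SSD volume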

    Other folks usually won't/don't have that "luxury", so they ought to be REALLY feeling the lag Mr. Sorensen found via analysis using Process Explorer &/or Process Monitor (hello, Dr. Mark Russinovich). But strangely, going just by the words of Mr. Oliver Day above?

    Well, apparently? They're not.

    Mr. Sorensen also found that once his HOSTS file was first called upon, that 1500 ms lag was cut roughly in half, to around 700 ms... & we could not figure it out, until he noted that the Operating System's native diskcache was caching the file content, which made COMPLETE SENSE to me also.

    This also makes one think that the DNS Client caching system for a local DNS cache may be unnecessary, because as is, it 'chokes' on a HOSTS file the size of mine, causing INCREDIBLY noticeable lag, but when the DNS Client is stopped? NO more lag.

    However, that SHOULD, theoretically, begin causing another form of lag: disk-bound access & re-access of the HOSTS file for resolution of URLs to their correct online IP addresses.

    Still, all the theory in the world does not seem to be holding up in regards to creating a slowdown, apparently, if one just reads Mr. Day's quote from above, because that is what I experience here, as do other users of HOSTS files (especially large ones).

    Either the local diskcache is doing a GREAT job, seemingly replacing the need for the DNS Client service, or we are still overlooking something (perhaps it is a matter of "the weight of each term" like in mathematics? The disk access end is not as great a detriment as thought, & is TOTALLY OFFSET by the OS diskcache caching the HOSTS file content)?

    I'd say it may be a good idea to EXTEND the size of the array/datastructure/buffer used to populate the DNS Client service so it does not lag on HOSTS files, or even do away w/ it altogether (because of the results Mr. Sorensen saw in his analysis).

    Either way, it seems HOSTS files are 'bucking the odds', & Windows has made them less efficient on disk by no longer allowing 0 as a blocking IP address (vs. the larger & slower-to-access 0.0.0.0, or, even worse still, the std. 127.0.0.1 loopback address).

    Thoughts?

    APK

    P.S.=> AND again, as regards "Port Filtering":

    Folks @ MS, do also consider putting the Port Filtering GUI back into the local area connection icon item for the sake of layered security as well, per the reasons noted above... apk

  • Regarding the information posted by Mr Kowalski above about hosts files, here follows a summary of my findings (on Windows XP SP3; other versions may differ):

    - If the DNS Client service is started, it reads the hosts file into its in-memory cache once at the start, and again if it detects a change in the file.

    - The time taken to read the hosts file exhibits exponential growth relative to the number of entries already read. I suspect this is due to the data structures and methods used to add data to the cache; certainly, when reading the file in, the DNS Client is CPU-bound rather than I/O-bound.

    (As an example on my computer: a 6,400 entry hosts file takes approximately 500 ms to read in fully, a 64,000 entry hosts file takes about 1.5 minutes, and extrapolating the data I found that a 640,000 entry hosts file would take over 3 hours to load into the cache.)

    - This makes large hosts files unsuitable for use with the DNS Client service. So one alternative - which Mr Kowalski uses - is to stop the DNS Client service. This causes the DNS API to read the entire hosts file upon every name resolution.

    - This takes a linear amount of time relative to the size of the file, and is constrained by I/O rather than CPU. This leads to the essence of Mr Kowalski's complaint: by disallowing integer IP addresses and thus requiring the longer dotted octet notation, it slows name resolution down for him as the hosts file is larger.
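
    For illustration, the same blocking entry in the three notations (the hostname is hypothetical):

        0         ads.example.com   # integer form: rejected on Vista and later
        0.0.0.0   ads.example.com   # dotted octet form: six extra bytes per line
        127.0.0.1 ads.example.com   # classic loopback form: larger still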

    Hope this helps clarify.

  • To continue from where Mr. Sorensen left off (great job Harm by the by):

    The only real slowdown I can HUMANLY perceive in using a LARGE HOSTS file here (& it takes a program to note it, one which I wrote this year to remove duplicated entries in HOSTS files & to convert the larger, & yes, slower 127.0.0.1 loopback adapter address)?

    (Which, afaik & iirc, does actually use SOME cpu + I/O to do its job vs. others like 0.0.0.0, or the plain-jane smaller & faster 0, in a HOSTS file to block out bad sites)

    Was loading mine into my program, @ first @ least, read on:

    I.E.-> I use this program & I populate my HOSTS file thus, daily & have been for nearly 12++ yrs. now to protect myself & others from known BAD websites &/or adbanners that house malicious code on them.

    (My sources are reputable, such as the HOSTS files @ wikipedia, & also BISS/BlueTack, mvps.org, & my own original list + stopbadware.org & Mr. Dancho Danchev's blog for ZDNet, to note just some).

    NOW - At first, because I do this "oldschool/primitive" method of profiling my code, by inserting a hi-res timer into the routine & 'ticking off' the amt. of time it takes to perform said routine?

    Well - I saw slower loads into my arrays (re-dimmable type, via using listboxes (yes, I know, NOT as efficient as a redimmable array, but it works))

    That alone told me that accessing a HOSTS file using 127.0.0.1 or 0.0.0.0 would be slower for the DNS cache system (or, really anything else parsing that file's contents) because of greater size.

    Again - Just common-sense really.

    HOWEVER:

    Running into the DNS Client lagging was like (for lack of a better analogy here) IRON MAN falling out of the sky when he went up as far as the SR-71 Blackbird did, & he yells:

    "We iced up Jarvis: DEPLOY FLAPS! Jarvis?!? Come ON, WE GOTTA BREAK THE ICE!"

    I did, but the funny part was that I have been disabling the local DNS Client service via services.msc for over a decade now & forgot all about it!

    (I noted it in this guide which has been showing people getting NO malware infestations by using it, for more than 1++ yrs. now, see here -> Thronka's reply, specifically: http://www.xtremepccentral.com/forums/showthread.php?s=0f176a41d58da62679173e0c55362014&t=28430&page=3 & there, I realized I had to amend it for that 'tiny detail' & did so)

    It was my "way around it", disabling the local DNS cache, as Mr. Sorensen noted above, & it works!

    (The funniest part is, despite all the CPU-bound or disk-bound numbers saying it's the 'wrong thing to do'? Folks like Mr. Oliver Day, per his quote above from -> http://www.securityfocus.com/columnists/491, go faster online, and many others do as well using custom HOSTS files like mine)

    People can undoubtedly go a LOT faster online this way (blocking adbanners & not using javascript aid this tremendously for speed, but also for security, by cutting out the root causes (again, imo, just common sense))

    ----

    NOW, on running a LOCAL DNS Server here:

    The ONLY problem I have with DNS servers is illustrated today on MS-Patch Tuesday (2-3 patches to MS' own DNS server) & that of the DJBDNS server showing holes 2 days ago, and that of BIND which Mr. Dan Kaminsky discovered & the entire internet was in an uproar about.

    ----

    Bottom-line? I don't trust them anymore... or, not as much & I do use one external to my systems!

    (No need for AD here, which has HEAVY dns dependencies; only a single system here now online, & no need to run yet another program I do not need consuming RAM, CPU, &/or disk I/O, or other forms of resources also)

    I do, however, still use the "best in the business" imo, in OpenDNS, who immediately worked & responded to Mr. Kaminsky's findings to secure themselves.

    However, if THEY don't have an address I need to reach cached, as Mr. Day later noted & I covered in my security guide from last year?

    I can still reach it via a "hardcode" I turned Mr. Sorensen onto by equating the right IP address to the right URL inside my HOSTS file!

    (That is in addition to blocking out bad sites &/or adbanners; this manifests itself in gaining yet more speed to these websites, by omitting the roundtrip URL-to-IP resolution that results, & is shown by a PING of said website, IF referencing a DNS server, which varies from 30 to N ms in return trip times)

    But, by also using a hardcode as I do (a model I do not distribute to others; they have to do that themselves)?

    0 ms response!

    (& most sites don't OFTEN change their IP address, & if they do? They let you know, WELL beforehand, when changing hosting providers)

    Anyhow, HOSTS files are IMMENSELY useful as another "layer of security" here, & even SpyBot "Search & Destroy" populates your local HOSTS file against such attacks (they are yet another reputable source I use also to populate mine, in addition to those above).

    APK

    P.S.=> As Mr. Sorensen did, I am clarifying my "POV" here, & using the results of others who have also found HOSTS files useful in this capacity for both speed & security online, layered security mind you, today...

    AND, folks @ MS:

    Please, do also consider reinstating the PORT FILTERING gui front-end in Windows' own local network connection advanced properties in VISTA/Server 2008/Windows 7.

    Your rationale above is flawed per the VISTA Resource Kit (which Mr. Mitch Tulloch of windowsnetworking.com nicely provided) - I say this because the fact remains that IPSec, software firewalls, AND port filters use diff. drivers & operate @ diff. layers of the IP stack in Windows, & if you take 1 down (which malwares often seek to do, disabling the software firewall for example)?

    The other 2 are in the way.

    You folks @ MS saying "we will remove 1 only" is contradicting your own statement, because you still would have 2 discrete & disparate methods in the way that will NOT "sync" automatically as to the ports you allow or disallow, & personally?

    I find creating IP security policies (IPSec) the most difficult of them ALL to work with, vs. software firewalls &/or Port Filtering (I use all 3 in addition to my LinkSys router & they all work, flawlessly & fast -> "HANDLES LIKE A DREAM!" IronMan/Tony Stark on his init. test flight of his armor from the great film last year).

    Sorry for the ramble, hope this all works out! apk

  • Well, here is another way to evidence these changes and improvements in Windows 7 64-bit, especially in the RC build:

    Absolute performance:

    Improve the boot of Windows 7 very, very well.

    Improve the previews of open applications in the Supertaskbar: when the mouse is passed over the preview of an open program, the preview does not follow the real application's operations in real time. Fix this bad bug; it seems to be a bug or lag/latency in Direct3D.

    In other words, eliminate all lag, latency, or slow performance of the Direct2D, DirectWrite, and Direct3D APIs, for fast desktop composition in all scenarios!

    Complete all the APIs of the graphics infrastructure (DirectWrite, Direct2D, UI Animations, Direct3D) for extreme performance, to end all lags of the Aero interface in Windows 7 64-bit!

    Complete all the APIs and modules supporting multicore and many-core machines, for extreme scalability in Windows 7 64-bit!

    And in the end I note for you another point: in Windows 7 64-bit, DirectX 11 for GPGPU and the APIs and modules for multicore are the imperative voice!

    Optimize the code and fix the bugs of all sections:

    Interface (no lag, fast, usable, all extreme).

    Fast speed on boot and with many, many applications open at the same time (see the multicore and many-core optimizations, with all the relevant APIs).

    Eliminate the bugs related to the Supertaskbar, with extreme Supertaskbar usability!

    Thank you!

  • Well, dear Windows 7 team, I write a few more words on improving the Windows 7 64-bit system in the RC build.

    If possible, make it so:

    GUI: Improve the speed of the Aero GUI (eliminate all lags and latency in Flip 3D and in the previews of open applications; make the previews update in real time), and improve the memory management of DWM!

    Improve DWM so that video memory occupation remains constant over time after all graphics applications are closed!

    General performance:

    Improve the performance of Windows 7 64-bit in these areas:

    Boot!

    GUI, as above!

    Multicore optimizations!

    And overall, make Windows 7 performance remain ever constant over time!

    Stability: Make Windows 7 64-bit greatly stable! The greatest stability in Windows history!

    Combine both performance and stability, and keep them constant over time!

    Small note: If possible, change the positions of the open windows in Flip 3D: not oblique positions but frontal positions, similar to Exposé, and give the user the ability to change the keyboard shortcut for Flip 3D (it would be a good idea to activate Flip 3D with only one keyboard shortcut rather than a combination of two).

    Also, keep Flip 3D active after the shortcut is released, and deactivate it by pressing the shortcut again.

    If it is possible!

    Thank you!
