One of the things that annoys me a little bit about the Microsoft revolution is the desire for proof-before-action. For example, if I want to change a setting to (increase spam filtering | reduce false positives), I have to go back and get historical data about whether the change will have the desired effect, what the projected effect is, what the trade-off is, and so forth. This is quite reasonable, actually; I think it's an important part of the test-and-roll-out process and should not be overlooked with new technologies. Usually, it takes quite a while to gather the required information and the changes don't go through as often as I'd like, but that's the price we're willing to pay to avoid making costly mistakes.
On the other hand, when it comes to certain parts of our filtering product, I believe that I have exceptionally good instincts. I'm like a Jedi. I can use the Force to foresee the effects of a change I want to make. Let me share a few examples.
In September 2004, I had only been on the job for two months. It was during this time that the Dan Rather scandal hit, wherein 60 Minutes reported a story on President Bush's military records based on documents that later turned out to be forged. Anyhow, there was a flurry of spam going around with the subject line "Dan Rather must be fired!" I wrote a spam rule based on that subject line, but in the comments of the rule I predicted it would expire (i.e., not get any more hits) within 10 days. I created the rule on Sept 16, 2004. The last time it was hit? Sept 25, 2004, nine days after I created it. Even after only processing spam for a couple of months, I had already acquired an intuitive feel for the nature of spam runs.
The second example I can think of is a flurry of pharmaspam with the subject line "The Ultimate Online Pharmaceutical." We were seeing lots and lots of these messages and we blocked them, but I kept seeing false positive reports. I suggested to my manager that we make the rule that blocked those messages so aggressive that we drop the message outright and not even deliver it to the users' spam quarantines, but he declined, saying that was too risky. Yet a week and a half later, upon directives from his manager, he made the rule so aggressive that it dropped the message and didn't deliver it to users' spam quarantines. In other words, we eventually did exactly what I had suggested ten days earlier.
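The quarantine-versus-drop distinction above can be pictured as a score threshold. This is a minimal hypothetical sketch — the names and numbers are my illustrative assumptions, not the actual product's configuration:

```python
# Hypothetical score thresholds; illustrative values, not the real product's.
QUARANTINE_THRESHOLD = 5.0   # at or above: message goes to spam quarantine
DROP_THRESHOLD = 9.0         # at or above: message is silently dropped

def action_for(spam_score: float) -> str:
    """Map a message's aggregate spam score to a delivery action."""
    if spam_score >= DROP_THRESHOLD:
        return "drop"        # never reaches the user at all
    if spam_score >= QUARANTINE_THRESHOLD:
        return "quarantine"  # delivered to the user's spam quarantine
    return "deliver"         # normal inbox delivery

# Making a rule "more aggressive" means it contributes a higher score,
# pushing matching messages from "quarantine" into "drop".
```

The risk my manager was worried about is visible here: once a message crosses the drop line, a false positive disappears without a trace, whereas a quarantined false positive can still be recovered by the user.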
A more recent example is the time another spam analyst tried to block an obscene word in the subject and made the rule quite aggressive. I suggested loosening the score because such a short word can have unintended matches, like "peacock", "poppycock" or "Stephen Leacock." Sure enough, a month later, I found a false positive with the subject line "I need your John Hancock on this." (I've changed what the rule matches to some fictional examples, but you get my point.) I didn't need to think of a specific example at the time I made the suggestion to back off on aggressiveness; I just knew (from experience, I guess) what makes a good spam rule and what doesn't.
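Using the post's own fictional examples, here's a minimal sketch of why a short blocked term is dangerous. I'm assuming a regex-style rule for illustration — the real rule engine's syntax may well differ:

```python
import re

# Naive substring rule: fires anywhere the term appears, even inside
# innocent words ("peacock", "poppycock", "Leacock", "Hancock").
naive_rule = re.compile("cock", re.IGNORECASE)

# Safer rule: \b word boundaries require the term to stand alone,
# so it no longer matches inside longer innocent words.
bounded_rule = re.compile(r"\bcock\b", re.IGNORECASE)

subjects = [
    "Check out my peacock photos",
    "I need your John Hancock on this",
]

for subject in subjects:
    print(subject,
          "| naive:", bool(naive_rule.search(subject)),
          "| bounded:", bool(bounded_rule.search(subject)))
```

The naive rule fires on both innocent subjects while the bounded one fires on neither, which is exactly the class of false positive the loosened score was meant to guard against.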
In Brett Steenbarger's blog and book on trader performance, he says that expert performers don't know how they made the transition from novice to professional. Thus, when they offer advice, it's something not particularly helpful, like "cut your losses short and let your profits run." This is good advice, but not very useful to the rest of us: we know we're supposed to do that, but the question is how. Anyhow, experts are good at what they do, and they have an intuitive feel for the market. Similarly, in some (but not all) regards, I have an intuitive feel for anti-spam measures.
Thus, it can be frustrating when I'm asked to go back and provide proof for some of the configuration changes I want to make. I know that nobody is going to simply believe that I somehow know what I want to do will be effective, but nonetheless I still need them to take my word for it. I can't always explain how I know it, but most of the time, when a change is implemented, it works. Delays in the fight against spam have a direct impact on user experience, and believe-you-me, doing all this extra research introduces delays. The anti-spam industry must react quickly in order to keep up with the spammers themselves, because the challenges evolve so quickly. It helps to have somebody on your side who has that feel for the world of spam, even if it covers only a small part of what the spammers are doing.
This post is a bit more of a rant, I suppose, but I needed to get it off my chest.
Some scientists say we are only using about 3% of our brain capacity consciously.
I'd say the remaining 97% is used by our subconscious, and it's way more effective. The only problem is, we only get the results in an intuitive way; we can't really explain them.
What you're describing is the zen of technology. Many of the duties of any system administrator fall into this category. Trying to explain or document how to do these tasks is extremely frustrating, because you just do them. I don't have to think about the steps it takes to do something; it just happens.
What you're describing seems like what is referred to as "System 1" in dual process theories (psychology): http://books.google.com/books?vid=ISBN0521796792&id=FfTVDY-zrCoC&pg=RA3-PA436&lpg=RA3-PA436&ots=__0IJmebcG&dq=heuristics+and+biases+%22system+1%22+%22system+2%22&sig=QpSvMLhvBDtHlu-s1ZF_xAcQYUQ
In my admittedly poor understanding, System 1 is the part of our brain that delivers intuitive judgements, taking advantage of heuristics to do so quickly. System 2 provides a more deliberate, reasoning approach.
System 2 processes can be transferred to System 1 if done often enough. There are probably other criteria that govern how fast a certain process can move into System 1. We're familiar with lots of examples; riding a bike, for instance, quickly becomes something you just "do without thinking".
I guess some people are just able to take certain things and easily move them into System 1. This "power" has different names -- Taoism calls it Wu Wei ("without action"): http://en.wikipedia.org/wiki/Wu_wei. (A good two thousand years before modern psychology could define it :)).
But it's hard to put down on paper for approval -- cue losing points on math exams at school for "not showing your work".
Then again, there are systemic flaws in our brains (hence 'Heuristics and Biases') (http://www.nku.edu/~garns/165/pptj_h.html has a few examples).
So perhaps management needs to define a limit (and revise it periodically) on what they let you do intuitively and where they need more justification...