When I was at the Software Engineering Institute, I contributed to ATAM (the Architecture Tradeoff Analysis Method). During my time at Microsoft, I've been working on a related method called the Lightweight Architecture Alternative Assessment Method (LAAAM).
 
ATAM is a scenario-driven method - it organizes quality attributes hierarchically in a utility tree, with scenarios at the leaves of the tree. A scenario operationalizes a quality attribute into a measurable expression incorporating context, stimulus and response. Rather than saying "my system needs to be scalable", we say "while operating in steady state (context), the add-to-cart transaction profile experiences a 10x increase in utilization (stimulus) and the system responds with no more than 20% degradation in response time for other transactions (response)." Importantly, ATAM (and LAAAM) also treats non-run-time quality attributes the same way (although we're not strict about that characterization): "after deployment (context), a new settlement rule is added (stimulus) with no developer involvement and is deployed with no system downtime (response)."
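To make the structure concrete, here's a minimal sketch (in Python - my choice for illustration, not part of either method) of a scenario captured as data; the `Scenario` class and its field names are just one way to express the context/stimulus/response breakdown:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """A quality attribute scenario: a measurable context/stimulus/response triple."""
    quality_attribute: str
    context: str      # the operating conditions under which the stimulus arrives
    stimulus: str     # the event or change the system must handle
    response: str     # the measurable outcome that counts as success

# The scalability example from above, operationalized:
scalability = Scenario(
    quality_attribute="scalability",
    context="operating in steady state",
    stimulus="add-to-cart transaction profile sees a 10x increase in utilization",
    response="no more than 20% degradation in response time for other transactions",
)
```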
 
The utility tree is the mechanism by which we organize and prioritize scenarios. In ATAM, architectural decisions are analyzed against highly prioritized scenarios to identify sensitivity points (architectural decisions that have a significant impact on quality attributes) and tradeoffs (sensitivity points that affect multiple quality attributes in competing ways - e.g. improving performance scenarios while degrading flexibility scenarios).
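A utility tree might look something like the sketch below - quality attributes at the branches, scenarios at the leaves. Tagging each leaf with an (importance, risk) pair is a common ATAM-style prioritization convention; the specific refinements and ratings here are illustrative, not prescribed:

```python
# Illustrative utility tree: quality attributes branch into refinements,
# with scenarios at the leaves. Each leaf carries an assumed priority pair -
# (business importance, architectural risk) - rated High/Medium/Low.
utility_tree = {
    "performance": {
        "load": [
            ("10x add-to-cart load with <20% degradation elsewhere", ("High", "High")),
        ],
    },
    "modifiability": {
        "business rules": [
            ("new settlement rule, no developer involvement, no downtime", ("High", "Medium")),
        ],
    },
}

def high_priority_scenarios(tree):
    """Yield the leaf scenarios rated High on both priority dimensions."""
    for attribute, refinements in tree.items():
        for refinement, leaves in refinements.items():
            for scenario, (importance, risk) in leaves:
                if importance == "High" and risk == "High":
                    yield attribute, refinement, scenario
```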
 
LAAAM takes the scenario-driven perspective of ATAM and derives a lightweight approach to assessing high-level architectural decisions. Where ATAM focuses on a highly rigorous, formal analysis of the impact (sensitivity and tradeoff implications) of any architectural decision, LAAAM considers "strategies" - higher-level architectural approaches. Strategies can take on many forms, but are frequently at the level of "implement using loosely-coupled communicating autonomous services" and "implement using a database-oriented application integration approach."
 
LAAAM produces an "assessment matrix": one dimension of the matrix (usually vertical) is the set of scenarios against which we assess; the other dimension (usually horizontal) is the set of strategies being assessed. Each cell of the matrix thus represents the assessment of a specific strategy in the context of a specific scenario. This assessment incorporates three dimensions: fit, development cost and operations cost. Fit describes the general viability of achieving the scenario using the strategy; fit also incorporates an assessment of risk, organizational impact (e.g. utilization of non-standard technology) and alignment with strategic direction in the enterprise. Development cost assesses how difficult it will be to implement the scenario using the strategy. Operations cost assesses the operational impact of the strategy in the context of the scenario. Each of these dimensions is scored on a five-point scale: high, moderate-high, moderate, low-moderate, low. Unfortunately, the meaning of "high" is reversed between fit and cost - we fix this when we turn the scores into numerics: high is a score of 2 and low is a score of 0 for fit, but high is a score of 0 and low is a score of 2 for development and operations cost.
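Here's a sketch of that numeric mapping. The endpoints (2 and 0) come straight from the description above; the evenly spaced intermediate values (1.5, 1.0, 0.5) are my assumption:

```python
# Five-point qualitative scale mapped to [0, 2]. Only the endpoints are
# specified above; the evenly spaced intermediate values are an assumption.
FIT_SCORE = {
    "high": 2.0,
    "moderate-high": 1.5,
    "moderate": 1.0,
    "low-moderate": 0.5,
    "low": 0.0,
}
# For the two cost dimensions, "high" is bad, so the scale is reversed.
COST_SCORE = {rating: 2.0 - score for rating, score in FIT_SCORE.items()}

def cell_score(fit, dev_cost, ops_cost):
    """Numeric score for one strategy/scenario cell of the assessment matrix."""
    return FIT_SCORE[fit] + COST_SCORE[dev_cost] + COST_SCORE[ops_cost]

# e.g. a strategy that fits well but is moderately costly to build and run:
print(cell_score("high", "moderate", "moderate"))  # 2.0 + 1.0 + 1.0 = 4.0
```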
 
Once we've assessed each scenario/strategy pair, we add up the scores for each strategy. LAAAM anticipates the need to weight both the assessment dimensions (fit, development cost, operations cost) and the scenarios in order to accommodate organizational priorities. I argue for keeping the weights equal unless there's a strong reason to do otherwise, and I favor weighting scenarios over weighting assessment dimensions (using the prioritization described above - more on this later).
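Continuing the sketch, the roll-up might look like this - `strategy_totals`, the strategy names and the cell scores are all hypothetical, and scenario weights default to 1.0 per the advice above:

```python
def strategy_totals(matrix, scenario_weights=None):
    """Sum each strategy's cell scores across scenarios, optionally weighted.

    matrix: {strategy: {scenario: cell score}}; weights default to 1.0 (equal).
    """
    scenario_weights = scenario_weights or {}
    return {
        strategy: sum(scenario_weights.get(scenario, 1.0) * score
                      for scenario, score in cells.items())
        for strategy, cells in matrix.items()
    }

# Hypothetical matrix of cell scores (each produced as in the previous snippet):
matrix = {
    "autonomous services": {
        "10x add-to-cart load": 3.5, "new settlement rule": 4.0,
    },
    "database-oriented integration": {
        "10x add-to-cart load": 3.0, "new settlement rule": 1.5,
    },
}
print(strategy_totals(matrix))  # higher totals indicate stronger overall candidates
```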
 
At the end of a LAAAM assessment, the result isn't a cut-and-dried "this alternative is obviously the best." Instead, LAAAM guides us to identify the real quality drivers, build consensus among the stakeholders on the relative importance of these drivers, explore the strengths and weaknesses of the architectural alternatives under consideration, and potentially exclude some clearly inadequate approaches.
 
I've applied the LAAAM approach with several Microsoft customers as part of my engagement as an Architect Advisor. I find it to be a great mechanism for bringing objectivity to the evaluation of architectural alternatives, even though I'm "the Microsoft guy."
 
I've been trying to get a paper on LAAAM out the door for a long time - I'll keep at it. In the meantime, check out MSF for CMMI® Process Improvement (MSF v4). LAAAM is the "Assess Alternatives" activity in the "Create Solution Architecture" workstream of the "Planning" phase. (Thanks to David Anderson.)
 
Many thanks to Tim Mallalieu for his contributions to LAAAM.