Serco Inc. v. United States, 81 Fed. Cl. 463 (2008)


OPINION

    ALLEGRA, Judge.

    This consolidated post-award bid protest case is before the court on the parties’ cross-motions for judgment on the administrative record. It involves a government-wide acquisition contract (GWAC)2 awarded by the General Services Administration (GSA) to provide technology products and services to the entire Federal government. Sixty-two offerors competed for a chance to perform task orders under this GWAC. In ranking the technical proposals of these offerors, GSA teams assigned adjectival ratings to various subfactors and then converted them into whole numbers (e.g., 3, 4, 5). Combining, averaging and weighting these figures, the agency ended up with technical scores that were carried out to three decimal points (e.g., 3.817)—and it made critical distinctions among the sixty-two offerors based upon the thousandths of a point. Based upon these technical scores, twenty-eight contractors were designated by the agency as “presumptive awardees.” GSA then purported to conduct price reasonableness and tradeoff analyses to take into account price—but, conspicuously, none of these comparisons resulted in any of the “presumptive awardees” being displaced by a lower-priced offeror. Indeed, GSA ultimately made awards to offerors whose prices were 59th, 60th and 61st out of the sixty-two offers—prices that the agency claims were “fair and reasonable” despite being twice as high as the lowest winning offer, as much as thirty percent higher than the independent government cost estimate, and more than two standard deviations to the mean of the evaluated prices for all the offerors.

    Eight unsuccessful offerors—Serco, Inc. (Serco); CGI Federal Inc. (CGI); STG, Inc. (STG); Artel, Inc. (Artel); Advanced Technology Systems Inc. (ATS); Apptis, Inc. (Apptis); Nortel Government Solutions, Inc. (Nortel); and The Centech Group (Centech)—vigorously assert that the process used to make the awards under this GWAC was arbitrary, capricious, and contrary to law. They contend that they were prejudiced by an array of errors—some broad and systemic, others heterogeneous. Not so, respond defendant and five of the awardees (here as defendant-intervenors), arguing that GSA acted reasonably, and well within its lawful discretion, in making distinctions that were admittedly fine, but, nonetheless, compelled by the breadth and complexity of this procurement.

    In the main, plaintiffs are correct. For the reasons that follow, the court concludes that GSA, in attaching talismanic significance to technical calculations that suffer from false precision, made distinctions that, in their own right, likely were arbitrary, capricious and contrary to law, but certainly became so when the agency failed adequately to account for price and to make appropriate tradeoff decisions. Those compounding errors prejudiced the plaintiffs and oblige this court to set aside the awards in question and order appropriate injunctive relief.

    I. BACKGROUND

    The administrative record in this case reveals the following:

    The so-called “Alliant” GWAC is to be administered by GSA pursuant to section 5112(e) of the Clinger-Cohen Act, 40 U.S.C. § 11302(e) (formerly 40 U.S.C. § 1412(e)).3 Alliant is designed to provide federal agencies with a broad range of information technology (IT) products and services, including computers, ancillary equipment, software, firmware and similar applications, network design, support services, and related resources such as telecommunication and security. Alliant contemplates the multiple-award of indefinite delivery, indefinite quantity (MA/IDIQ) contracts, with a ceiling of $50 billion, to be performed, on a task order basis, during a five-year base period and one, five-year option period.4 Under the Alliant Solicitation No. TQ2006MCB0001 (the Solicitation), individual task orders could range as high as $1 billion in value; successful offerors, however, are guaranteed a minimum take of only $2,500. Alliant offers a wide range of contract types, including fixed-price, cost reimbursement, labor-hour and time and material.

    A. The Solicitation

    GSA issued the Solicitation on September 29, 2006. The Solicitation advised that GSA “contemplate[d making] approximately 25 to 30 awards ... but reserves the right to place fewer or more awards, depending upon the quality of the proposals received.”5 Those receiving awards under the Solicitation are eligible to perform task orders under the contract. The Solicitation indicated that “[a]ward will be made to responsible Offerors whose proposals are determined to provide the ‘best value’ to the Government.”

    The Solicitation indicated that the procedure for evaluating proposals would begin with an “acceptability review,” to be graded on a “pass/fail” basis, focusing on whether a given offer contained all requested information in the appropriate formats. Acceptable proposals were then to be subject to a technical evaluation, a cost/price evaluation and, ultimately, a best value tradeoff. Regarding this last step, the Solicitation advised: “Consistent with FAR 15.101-1, the Government will conduct a ‘best value’ tradeoff, in which differences in non-price factors and evaluated price will be compared between the Offerors in order to determine which Offeror represents the best value to the Government.”6 This provision emphasized that technical factors would be “significantly more important than cost or price,” but added an important caveat, stating “the closer the technical scores of the various proposals are to one another, the more important cost or price considerations become in determining the overall best-value for the Government.”

    1. Proposal Requirements

    The Solicitation indicated that two technical evaluation factors—Past Performance and the Basic Contract Plan—“are approximately equal in importance to each other, and when combined, are significantly more important than Cost or Price.” The Solicitation encouraged offerors to include their “best terms from a technical and cost/price standpoint,” as GSA “intend[ed] to evaluate proposals and award contracts without discussions with Offerors, except for clarifications, as described in FAR 15.306(a).”

    a. Past Performance Proposal Requirements

    The Solicitation indicated that in evaluating past performance, the agency would “look ‘retrospectively’ and consider the Offeror’s history of success in delivering high quality service and solutions on contract efforts of similar scope and complexity to those anticipated under the Alliant Contract.” To facilitate this, the Solicitation required offerors to present information about contracts comparable to Alliant in two tables. Table 1 was to include contracts the offeror had performed in a “multiple-award environment,” such as GWACs and other forms of multiple award contracts, while Table 2 was to highlight the offeror’s efforts under unique, single-award contracts. Offerors could list up to twenty and fifty efforts in Tables 1 and 2, respectively, and could select up to three of the customers identified in Table 2 for the Government to contact as references. The Solicitation also permitted offerors to submit a narrative summary, not to exceed five pages, of the past performance efforts listed in the tables, including “whatever information [offerors] believe[d] may be relevant to the Government as it evaluates [the] proposal[s.]”

    The Solicitation advised offerors that GSA “intends to use reasonable efforts to check approximately ten (10) efforts (total) for each Offeror, selected from Tables 1 and 2, including the three (3) efforts which the Offeror has identified.” However, the agency reserved the “right to check more or fewer efforts, at its discretion,” adding that it could not “guarantee that it will contact any particular effort listed in Table 1 or Table 2, even the efforts specifically listed by the Offeror.”7 The Solicitation also stated that the agency might rely upon past performance information from other sources, including the interagency Past Performance Information Retrieval System (PPIRS).

    b. Basic Contract Plan Proposal Requirements

    In contrast to past performance, the purpose of the Alliant “Basic Contract Plan” (BCP) was to allow the agency to “look ‘prospectively’ and consider the Offeror’s level of commitment to the Alliant program and potential for high quality service and solutions.” The Solicitation indicated that in this plan, the offeror should explain “how it will continuously identify, mitigate, manage and control risks within its holistic approach for managing the comprehensive scope of the Alliant program.” Offerors were asked, inter alia, to “identify any gaps or weaknesses in past performance and specifically address them,” and to “convey [the] ability to insure successful performance of all aspects of the Basic Contract and Orders to include all 3 component areas.” The latter were defined as “Infrastructure, Application Services and IT Management Services.” The offerors were further instructed to “demonstrate a clear understanding of the management and performance requirements” of the Solicitation by providing a description of its management approach to three subfactors: resources, program management and corporate commitment.

    The Solicitation also generally required the submission of a subcontract plan; various certifications; evidence of a top secret facility clearance; information concerning any teaming arrangements; and documentation of the offeror’s achievement of small business subcontracting goals.

    c. Cost/Price Proposal Requirements

    Offerors were required to propose hourly rates for eighty labor categories, by contract year and location of performance, for the potential ten-year term. The labor categories described in the Solicitation ranged from non-technical jobs, such as administrative/clerical assistants and help-desk specialists, to highly technical jobs such as application systems analyst, computer scientist, data architect, disaster recovery specialist, hardware and software engineers and web designers. These categories were further subdivided in terms of knowledge and skill levels (e.g., entry-level, journeyman, senior, master). The Solicitation instructed offerors how to enter this rate information into a spreadsheet supplied by GSA. Those instructions explained, inter alia, that GSA would use the data in the spreadsheet to calculate loaded hourly rates for each applicable labor category, which, in turn, would be applied, for purposes of price analysis, to the independent Government estimates of hours for each category (which were already in the spreadsheet but protected from access by offerors). By this method, the agency intended to generate a total evaluated price for each bidder.
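
    To illustrate the price-evaluation mechanics just described, the following sketch computes a total evaluated price from loaded hourly rates and government hour estimates. The labor categories, rates, and hours are hypothetical, and the sketch collapses the per-year, per-location rate structure into a single rate per category; it is not the actual Alliant spreadsheet.

```python
# Illustrative sketch only: hypothetical categories, rates, and hour estimates.

# Offeror-proposed loaded hourly rates, by labor category (dollars per hour)
loaded_rates = {
    "Applications Systems Analyst (Journeyman)": 95.00,
    "Computer Scientist (Senior)": 140.00,
    "Help Desk Specialist (Entry)": 45.00,
}

# Independent Government estimates of hours for each category (pre-loaded in the
# GSA spreadsheet and protected from access by offerors)
estimated_hours = {
    "Applications Systems Analyst (Journeyman)": 2000,
    "Computer Scientist (Senior)": 1200,
    "Help Desk Specialist (Entry)": 3500,
}

# Total evaluated price: each proposed rate applied to the government's hour estimate
total_evaluated_price = sum(
    rate * estimated_hours[category] for category, rate in loaded_rates.items()
)
print(f"Total evaluated price: ${total_evaluated_price:,.2f}")
```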

    2. Evaluation Criteria

    The evaluation criteria to be applied to proposals were set forth in section M of the Solicitation. Those criteria can be described briefly as follows:

    a. Past Performance

    The Solicitation identified five subfactors within Past Performance, each of which were “approximately equal in importance to each other:” (i) quality of service; (ii) schedule; (iii) cost control; (iv) business relations; and (v) subcontract management/socioeconomic goals.

    Under the quality of service subfactor, GSA was to evaluate the offeror’s ability to provide high-quality technological services. The agency was to consider whether, in past contracts, performance risk factors had been identified, mitigated, and managed. Offerors that employed “innovative and unique quality assurance tools and methodologies to ensure efficient and effective design, development and implementation of quality solutions” were to be more highly rated. For the schedule subfactor, the agency evaluated an offeror’s “ability to meet all schedule goals related to completion of the contract, task orders, milestones, delivery schedules and administrative requirements.” With regard to cost control, the agency evaluated the offeror’s “ability to deliver a service at the agreed to price/cost to include their ability to effectively forecast, manage and control contract costs, as well as report and analyze variances.” Under the business relations subfactor, the offeror needed to demonstrate its ability to “integrate and coordinate all activity needed to execute the contract” and to be an “effective business partner,” including such things as the timeliness, completeness, and quality of problem identification; corrective action plans; quality of proposals, change orders, and task order requests; and the contractor’s history of cooperative behavior and customer satisfaction. Finally, the Solicitation indicated that the agency would evaluate the offeror’s “ability to select, trace and manage subcontractors” and whether they had “met their small business utilization goals in the past.”8

    b. Basic Contract Plan Criteria

    The BCP technical factor was broken into three subfactors: (i) resources—including internal performance capabilities and the ability to supplement core internal capabilities; (ii) program management, which focused, inter alia, on the offeror’s program management information system and quality control program; and (iii) corporate commitment, in which the offerors were to describe “how the Alliant program will be optimized through its business development, technological innovations, proposal management and contract administration efforts.” According to the Solicitation, these subfactors, as listed above, were “in descending order of importance when determining the overall rating for specific Technical Evaluation Factors.” Although offerors were not told exactly how these subfactors would be weighted for evaluation purposes, the SSP employed the following weighting: 50 percent for resources, 30 percent for program management, and 20 percent for corporate commitment. Additionally, as discussed below, for its evaluations, GSA further subdivided the three sub-factors into differing numbers of “elements” of equal weight, tracking the language of the Solicitation. As revealed by the Solicitation, the plans were also to be evaluated to determine whether they “sufficiently addresse[d] any gaps or weaknesses not addressed in past performance.”

    c. Discriminators

    The Solicitation described various “discriminators” that would be applied in the event that offerors were “deemed virtually equivalent” in terms of past performance. Thus, the Solicitation indicated that if there was such an equivalency with respect to past performance, offerors would be “evaluated more favorably” if they “demonstrated success in providing integrated IT solutions” involving infrastructure, application services, and management services or if they had “successful performance on efforts that include OCONUS work [work outside the continental United States], cost-type contracts, balanced with fixed price contracts, or include significant subcontract management.”

    By comparison, the Solicitation did not, in its discussion of the BCP evaluation, list specific “discriminators” to be used when offerors were deemed virtually equivalent. Instead, this part of the Solicitation cited qualifications that would lead to proposals being rated “more favorably” or “even more favorably.” For example, in evaluating internal resources, the Solicitation indicated that “[o]fferors that propose Program Managers with proven track records of managing programs similar to the Alliant in scope and magnitude will be evaluated more favorably.” Under this subfactor, it further stated—

    An Offeror that can demonstrate that its purchasing policies and practices are efficient and provide adequate protection of the Government’s interests by providing evidence from the cognizant Federal agency ACO [(administrative contracting officer)] of successful progress toward an approved purchasing system will be rated more favorably. An Offeror that provides documentation from the cognizant Federal agency ACO as evidence of an approved purchasing system pursuant to FAR 44.305 will be evaluated even more favorably.

    Following this pattern, under the program management subfactor, the Solicitation indicated that “[o]fferors providing comprehensive and thorough descriptions that demonstrate a program management information system and is capable of producing timely quality data products will be evaluated more favorably.” Likewise, offerors providing evidence from “a cognizant Federal agency ACO of successful progress toward an approved EVMS [Earned Value Management System] on an ongoing project will be evaluated more favorably,” while those offerors with “an EVMS that has been approved by a cognizant Federal agency will be evaluated even more favorably.” Certain features involving the offerors’ corporate commitment were likewise cited as being evaluated more favorably.

    d. Cost And Price

    The Solicitation confirmed that there would be a price analysis and might be a cost analysis. Price analysis was defined as “the process of examining and evaluating a proposed price without evaluating its separate cost elements and proposed profit,” while cost analysis was described as “the review and evaluation of the separate cost elements and profit in an Offeror’s proposals and the application of judgment to determine how well the proposed costs represent what the cost should be, assuming reasonable economy and efficiency.”

    The Solicitation indicated that the agency “anticipates that pricing for this acquisition will be based on adequate price competition and therefore does not require submission or certification of cost or pricing data.” Nonetheless, the Solicitation warned that offerors should submit supporting documentation for direct labor, labor escalation, and each indirect cost consistent with their organization’s cost accounting system and be prepared to allow the agency to examine internal books and records so as to “permit an adequate evaluation of the proposed price in accordance with FAR 15.403-3.” It further indicated that “[c]ost/price proposals will be evaluated using proposal analysis techniques consistent with FAR 15.404-1 to ensure that proposed direct and indirect rates for each labor category during the base period and the option period are fair, reasonable, and predictable for anticipated work under the Basic Contract.” In this regard, it gave examples of four techniques that might be employed to “ensure a fair and reasonable price,” to wit: (i) “Adequate Price Competition;” (ii) “[c]omparison of proposed Loaded Hourly Labor Rates received in response to the Solicitation;” (iii) “[c]omparison of previously proposed Loaded Hourly Labor Rates for the same or similar labor categories using independent Government Price estimates;” and (iv) “[o]verall evaluated price using independent Government estimates.”

    In terms of cost analysis, the Solicitation indicated that the agency might employ a variety of techniques “to ensure a fair and reasonable price,” among which were “[v]erification that the Offeror’s cost submissions are in accordance with the contract cost principles and procedures in FAR Part 31,” and “[v]erification from an Offeror’s cognizant audit agency that the Offeror’s cost controls and surveillance systems are adequate.”

    B. The Evaluation Process

    GSA received sixty-six proposals on or before the deadline for submissions, November 17, 2006. After the “acceptability review,” four offerors were eliminated. GSA took the following steps in evaluating the remaining sixty-two proposals, including those of each of the plaintiffs in this matter.

    1. The Basic Process

    Pursuant to the SSP, the source selection evaluation team (SSET) consisted of the source selection authority, Steven J. Kempf; an SSET chair, who was also the procuring contracting officer, Mary Catherine Beasley; team leaders; team members; and advisors. A cost/price evaluation team (CPET) and a technical evaluation team (TET) also were formed. The TET, in turn, consisted of a Past Performance evaluation team (PPET) and a Basic Contract Plan evaluation team (BCPET). The PPET consisted of three evaluators; the BCPET had four; and the CPET had three.

    The adjectival rating system used by the Alliant TET is set forth in the SSP. As defined by the plan, those ratings, as they applied to the BCPs, were as follows: (i) Exceptionally High Confidence [EH]—awarded if the BCP meets “all contractual requirements and exceeds many requirements to the Government’s benefit” and “essentially no reasonable doubt exists that the Offeror will successfully perform the required effort;” (ii) Significant Confidence [S]—awarded if the BCP meets “all contractual requirements and exceeds some requirements to the Government’s benefit” and essentially “[l]ittle doubt exists that the Offeror will successfully perform the required effort;” (iii) Acceptable Confidence [A]—awarded if the BCP meets “all contractual requirements” and “[s]ome doubt exists that the Offeror will successfully perform the required effort;” (iv) Marginal Confidence [M]—awarded if the BCP “marginally meets contractual requirements (minimal information)” and “[s]ubstantial doubt exists that the Offeror will successfully perform the required effort;” and (v) Little to No Confidence [L/N]—awarded if the BCP “does not meet many contractual requirements” and “[s]ignificant doubt exists that the Offeror will successfully perform the required effort.” (Emphasis in original). These definitions, in turn, formed the basis for a detailed set of technical evaluation standards, tailored specifically to each subfactor.

    The same set of adjectival ratings, albeit with slightly different individual descriptions, were used to evaluate past performance. For the past performance factor only, the Solicitation also listed a “neutral” evaluation, reserved for where “[p]erformance in this area is not applicable to effort assessed, indicating neither a favorable nor unfavorable past performance evaluation.”

    2. Past Performance Evaluation and Scoring

    As anticipated in the Solicitation, to facilitate the past performance evaluations, GSA retained a polling firm, Calyptus, to conduct telephone interviews of references identified in offerors’ proposals.9 Calyptus used a Government-prepared “call sheet” listing references to contact, and questionnaire “scripts” that were set forth in the SSP. Describing this interview process, the SSP stated—

    Contractor personnel will interview the point-of-contact referenced in the tables, reading the script ... with questions.... the contractor personnel will then transcribe the responses to the questions, and transmit a copy of the transcript to the point-of-contact for confirmation that the [transcript] was an accurate record of the conversation. As this task is not subjective, the PCO determined that it was acceptable to have contractor support personnel do the task.

    The scripts required the Calyptus personnel to ask open-ended questions, such as “[h]ow effectively did the Offeror conform to the contract requirements?” or “[t]o what extent did the Offeror successfully manage risk under the contract?” Although the Calyptus personnel were encouraged to ask follow-up questions, they were provided minimal guidance as to what to ask. Nor were they provided with any guidance as to the type of information the questions were intended to elicit or the significance that particularly-worded answers might have in assigning adjectival ratings to particular subfactors.10 Although, as noted, the SSP called for the transcripts of the interviews to be shared with the interviewees for verification, the record suggests that Calyptus fulfilled this step only as to one offeror. If, after several attempts, Calyptus could not supply nine transcripts for an offeror, the SSP authorized GSA to use information from the PPIRS, if available.

    The three members of the PPET reviewed the Calyptus transcripts and, using the information contained therein, assigned adjectival ratings to five subfactors for each proposal, recording those scores on rating sheets, together with their notes.11 At this point, the team met to discuss the evaluations, sometimes causing an evaluator to reconsider the score he had given for a subfactor. Once the evaluators’ final scores were complete, each team leader prepared a consensus evaluation of each offeror, containing both adjectival ratings for each of the subfactors and narrative comments discussing the offeror’s strengths, weaknesses, and other notable qualities. The composite adjectival ratings were then assigned a numeric score—5 for EH, 4 for S, 3 for A, 2 for M and 1 for L/N. These numbers were then added together and divided by 5, with the denominator corresponding to the number of subfactors, so as to produce an average enumerated as the “Past Performance (PP) Evaluation Consensus Score.”12
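
    The arithmetic of this step can be illustrated with a short sketch. The consensus adjectival ratings below are hypothetical; only the five subfactor names and the 5-to-1 numeric scale come from the record described above.

```python
# Illustrative sketch only: hypothetical consensus ratings for one offeror.
NUMERIC = {"EH": 5, "S": 4, "A": 3, "M": 2, "L/N": 1}

# Consensus adjectival rating for each of the five past performance subfactors
consensus_ratings = {
    "quality of service": "EH",
    "schedule": "S",
    "cost control": "S",
    "business relations": "EH",
    "subcontract management/socioeconomic goals": "A",
}

scores = [NUMERIC[rating] for rating in consensus_ratings.values()]
pp_consensus_score = sum(scores) / len(scores)   # divided by 5, the number of subfactors
print(round(pp_consensus_score, 3))              # e.g., 4.2
```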

    3. BCP Evaluation And Scoring

    To facilitate its work, the four-member BCPET subdivided the three subfactors to be employed in evaluating the plans—resources, program management and corporate commitment—into various “elements.” Each element had equal weight within its subfactor. The team used a total of 15 elements, apportioned among the three subfactors as follows:

    Resources subfactor:

    Recruiting/retention and secret/top secret clearances
    Resumes of key personnel
    Business systems
    Purchasing system
    Subcontracting management and supplemental core capabilities
    OCONUS capabilities and continuity of operations

    Program management subfactor:

    Management information system (MIS)
    Earned value management system (EVMS)
    Quality controls, surveillance methodologies

    Corporate commitment subfactor:

    Corporate structure, organization chart, and lines of authority
    Proposal management
    Risk mitigation
    Business process improvements
    Technical innovation
    Business development

    Each of the four members of the BCPET assigned an adjectival rating to each of the elements, again recording these scores on a rating sheet, together with notes. Unlike past performance, no consensus rating was made on the offeror’s BCP. Those individual ratings for each element—a total of sixty in all (four raters times fifteen elements)—were then converted into numerical scores using the same scale described above (e.g., 5 for EH, etc.). These numbers were then averaged to produce an overall score for each element (e.g., (5 + 4 + 4 + 4)/4 = 4.25). The overall numeric scores for each element were then averaged within each subfactor to produce an overall score for each of the three subfactors, that is, one score for resources, program management and corporate commitment, respectively. The three subfactor scores were then averaged in a weighted calculation—50 percent for resources, 30 percent for program management and 20 percent for corporate commitment—to produce a composite figure enumerated as the “BCP Average Evaluation Score.”13

    The Past Performance (PP) Evaluation Consensus Score and the BCP Average Evaluation Score were then averaged (added together and divided by two) to produce a figure enumerated as the “Weighted BCP-PP Average Score.” The latter score was used to rank the technical proposals of each offeror—from 1 to 62.
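
    A sketch of the roll-up just described may help fix ideas: individual element ratings are averaged across the four raters, element averages are averaged within each subfactor, the subfactor scores are combined with the 50/30/20 weights, and the result is averaged with the past performance score. The element ratings and resulting figures below are hypothetical; only the scale, the weights, and the averaging steps are taken from the opinion.

```python
# Illustrative sketch only: the element ratings below are hypothetical.
NUMERIC = {"EH": 5, "S": 4, "A": 3, "M": 2, "L/N": 1}

def element_score(ratings):
    """Average the four raters' numeric scores for one element, e.g., (5 + 4 + 4 + 4) / 4 = 4.25."""
    return sum(NUMERIC[r] for r in ratings) / len(ratings)

# Element ratings grouped by subfactor (abbreviated; the BCPET actually used 15 elements)
elements_by_subfactor = {
    "resources":            [["S", "S", "A", "S"], ["EH", "S", "S", "S"]],
    "program management":   [["A", "S", "A", "A"], ["S", "S", "S", "A"]],
    "corporate commitment": [["S", "A", "A", "A"]],
}
weights = {"resources": 0.50, "program management": 0.30, "corporate commitment": 0.20}

# Subfactor score = average of its element scores; BCP score = weighted average of subfactors
subfactor_scores = {
    name: sum(element_score(e) for e in elems) / len(elems)
    for name, elems in elements_by_subfactor.items()
}
bcp_average_score = sum(weights[name] * score for name, score in subfactor_scores.items())

# Final technical score used to rank the sixty-two offerors
pp_consensus_score = 4.2  # hypothetical, carried over from the sketch above
weighted_bcp_pp_average = (bcp_average_score + pp_consensus_score) / 2
print(round(weighted_bcp_pp_average, 3))
```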

    4. Cost And Price Analysis

    As described above and in the Solicitation, GSA used a spreadsheet to develop a total estimated price for each offeror, based upon estimated demand over a ten-year term. The agency developed its Alliant labor estimate from various sources of information, including, inter alia, usage rates experienced under existing GSA IT services contracts. The three-member CPET inquired into whether an offeror’s cost structure appeared reasonable, assuming reasonable efficiency, by examining offerors’ previously audited rates and verifying their cost submissions.

    The evaluation team presumed that because of the competitive nature of the procurement and the number of bidders, the prices received were fair and reasonable.14 Nonetheless, the team calculated a mean evaluated price, and standard deviations to the mean, in order to stratify prices statistically, as low or high. They also examined the percentage deviation of proposed prices from the Independent Government Cost Estimate (IGCE) (which, at approximately $41.4 million, was considerably higher than the mean of approximately $34.3 million of the sixty-two responsive bids). Despite finding that the prices of several potential awardees were more than two standard deviations to the mean, and considerably higher than the IGCE and the mean price, the team concluded that every offered price was “fair and reasonable.”
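
    The statistical stratification described above amounts to computing a mean, a standard deviation, and percentage deviations from the IGCE. The sketch below uses illustrative prices; only the approximate IGCE figure ($41.4 million) and the approach of measuring distance from the mean in standard deviations are drawn from the record.

```python
# Illustrative sketch only: the evaluated prices below are sample values, not the record.
from statistics import mean, pstdev

IGCE = 41_400_000  # independent Government cost estimate (approximately $41.4 million)

evaluated_prices = [21_100_000, 28_300_000, 33_000_000, 38_900_000, 53_100_000]

avg = mean(evaluated_prices)
sd = pstdev(evaluated_prices)

for price in evaluated_prices:
    std_devs_from_mean = (price - avg) / sd          # stratifies prices as low or high
    pct_above_igce = (price - IGCE) / IGCE * 100     # percentage deviation from the IGCE
    print(f"${price:>12,}  {std_devs_from_mean:+.2f} std. dev. from mean  "
          f"{pct_above_igce:+6.1f}% vs. IGCE")
```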

    C. Trade-Off Analyses and the Source Selection Decisions

    On May 25, 2007, Ms. Beasley, as the SSET chair, and the leaders of each evaluation team briefed the source selection authority (SSA), Mr. Kempf, on their results to date. At the meeting, the SSET indicated that there was a “natural break” between the 27th and 28th highest technically-ranked offerors (derived using the Weighted BCP-PP Average Score). For this presentation, the SSET prepared a number of slides in which it highlighted the strengths and weaknesses of the offerors that were “above” and “below” the “natural break.” There is no indication in this slide presentation that in describing the “natural break,” the SSET focused at all on price. It recommended that the twenty-seven offerors above the “natural break” be considered for award and that the remaining offerors be considered unsuccessful.

    Following that briefing, the SSET prepared a report setting forth the final scores and conclusions of the TET and CPET and sixty-two separate exhibits summarizing, inter alia, the trade-off analyses done for each responsive proposal and the SSET’s ultimate conclusion as to whether a particular offeror should receive an award.15 Pairings in the trade-off analyses were primarily made using two conventions that depended upon whether the evaluated offeror was within the presumptive award group. If the evaluated offeror was within the presumptive award group, it was compared to the three highest technically ranked offerors not in the presumptive award group that had a lower price than the evaluated offeror. If the evaluated offeror was not in the presumptive award group, it was compared to the three lowest scoring offerors within the presumptive award group that had a higher price than the evaluated offeror. In sum, each evaluated offeror was compared to at least three other offerors. Appendix B, a chart that is essentially taken directly from the record, lists the trade-off comparisons that purportedly were made by the SSET.
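
    These two pairing conventions can be expressed compactly. The sketch below is one plausible rendering of the process described above, with hypothetical data structures (each offeror represented by its technical rank and evaluated price); it is not the agency's actual selection logic.

```python
# Illustrative sketch only: a plausible rendering of the two pairing conventions.

def tradeoff_comparators(evaluated, all_offerors, presumptive_awardees):
    """Return up to three comparison offerors for the evaluated offeror."""
    if evaluated in presumptive_awardees:
        # Presumptive awardee: compare with the three highest technically ranked
        # offerors OUTSIDE the award group that proposed a LOWER price.
        pool = [o for o in all_offerors
                if o not in presumptive_awardees and o["price"] < evaluated["price"]]
        pool.sort(key=lambda o: o["tech_rank"])                 # best technical rank first
    else:
        # Unsuccessful offeror: compare with the three lowest scoring offerors
        # WITHIN the award group that proposed a HIGHER price.
        pool = [o for o in presumptive_awardees if o["price"] > evaluated["price"]]
        pool.sort(key=lambda o: o["tech_rank"], reverse=True)   # worst technical rank first
    return pool[:3]
```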

    The written tradeoff analyses generally followed one of three blueprints, each of which varied somewhat in terms of the level of documentation and analysis. All three forms contained certain baseline features: They compared the individual subfactor adjectival ratings of the evaluated offeror to the ratings of the three comparison offerors. Then, they compared the four offerors in terms of “Past Performance Discriminators” and the “BCP Discriminators,” the results of which were presented in charts and briefly summarized. Under the Past Performance Discriminators section, the agency described how one offeror had more or less of a particular type of work experience than another. The BCP discriminators, on the other hand, focused only on whether the offeror had an approved purchasing system, an approved EVMS, or whether the offeror was in the process of obtaining either one of those systems.

    At this point, the written analyses diverged. For some offerors, including five plaintiffs (ATS, NGS, STG, Serco and CGI), the SSET merely added to the basic features described above a boilerplate “Conclusion” that read as follows:

    The Source Selection Authority chose to not make an award to this Offeror. [X] was technically ranked [Y] out of 62 Offerors. The Government considered the technical merit of the [X] proposal and the proposed price. Though [X]’s proposed price was within the [Z] standard deviation of the mean overall Price of all the Offerors, the Government determined that [X]’s technical merit was not worth its lower price when compared with other Offerors of higher technical merit and higher prices.

    (Emphasis in original). Notably, the last sentence of this boilerplate was the only mention of price in these trade-off discussions.

    The tradeoff analyses for a second group of offerors, which includes three plaintiffs (Artel, Centech and Apptis), went a bit further. In addition to the analysis described above, these trade-off recommendations contained separate comparative discussions for the evaluated offeror vis-a-vis each of the three offerors selected for tradeoff. But, these additional comparisons did little more than parrot back the strengths and weaknesses of the proposals represented in the Solicitation’s evaluation criteria, thus highlighting the same characteristics that had led to the initial overall technical scores and related rankings. These comparisons again focused on price in only a conclusory fashion, typically stating either:

    [X]’s proposal price was [Y] more than [Z]’s price. Considering the much greater technical merit of the [X] proposal when compared with the inferior [Z] proposal, as described above, the Government determined that the [X] proposal was a better value to the Government.

    or

    [X]’s proposal price was [Y] more than [Z]’s price. Considering this difference in price over a ten-year contract period and the greater technical merit of the [X] proposal when compared with the [Z] proposal, as described above, the Government determined that the [X] proposal was a better value for the Government.

    These paragraphs were employed for offerors listed inside and outside the presumptive award group.

    For the three highest-priced offers in the presumptive award group—those of ManTech, SAIC and CSC—the agency conducted a more thorough analysis.16 The tradeoff analyses for these three offerors included all of the documentation discussed above—the charts, the prose, and the comparisons based on the evaluated criteria—but also articulated specific, additional advantages to the proposals and discussed other superior qualities in addition to those already accounted in the overall technical score. For example, as to ManTech, it indicated that “ManTech’s higher prices reflect the use of highly specialized, secure personnel, rigid test and evaluation methodologies, and for one experience, the use of two laboratories to ensure quality assurance methodologies and testing prior to client software development.” Commenting specifically on whether it made sense for the government to pay a premium for ManTech’s services, the analyses stated—

    Overall, although ManTech’s evaluated price exceeded two standard deviations on the high-end of Offerors, and was the second highest price of all Offerors, the Government has determined that their high rates were consistent with their technical ability and success in providing Integrated Command, Control, Communications and Intelligence (C4I) support to the Department of Defense, which requires proven experience with enterprise architectures, the latest cybersecurity automated tools, and a cadre of highly-cleared specialized technicians, engineers, and difficult-to-find enterprise architecture personnel who are the key to C4I project success.

    The analysis concluded that “[t]he Government determined that ManTech’s technical merit was worth its higher price when compared with other Offerors of less technical merit with lower prices.”

    In a memorandum dated July 26, 2007, the head of the SSET summarized the process used to rank the offerors and provided the results of that process, both with an eye toward their being presented to the SSA. Attached to this memorandum was a chart, the critical portions of which are reproduced below:

    [Chart: for each of the sixty-two offerors, the attachment set forth the Past Performance (PP) Evaluation Consensus Score, the Basic Contract Plan (BCP) Average Evaluation Score, the Weighted BCP-PP Average Score, the technical rank order (1 through 62), the total evaluated price, and the price rank order, with the presumptive awardees highlighted in green. Evaluated prices for many of the lower-ranked offerors are redacted.]

    Regarding this chart, the memorandum indicated that “[r]ow 30 [corresponding to the 28th-ranked offeror] represents the first natural break detected by the SSET that includes a minimum set of 25 contracts,” adding that “[t]he SSET recommends limiting awards to only those contractors highlighted in ‘green’ [through Alion].” [Green indicated by shading] The memorandum did not explain why the “natural break” had shifted a notch lower since the May 2007 meeting—from between the 27th and 28th ranked offerors to between the 28th and 29th ranked offerors.

    Around this time, the SSET began finalizing its source selection recommendations for each offeror, which included the trade-off analyses described above. Some of these recommendations are dated July 24, 2007, one is dated July 28, 2007, and the remainder (a majority) are dated July 30, 2007. These memoranda recommended that the same top 28 technically-ranked offerors be extended awards. However, one additional award was contemplated. Thus, the memorandum for EDS recommended that it receive an award because of the breadth of its experience, the fact that it had an approved purchasing system, and the fact that its price was “significantly lower” than those offerors who had received a similar technical ranking.

    In a SSD Memorandum dated July 31, 2007, Mr. Kempf concurred with the SSET’s recommendation that twenty-nine contracts be awarded to the top twenty-eight technically-ranked offerors, plus EDS. By way of a prelude, Mr. Kempf admitted that “[t]he Alliant tradeoff analysis posed challenges.” Summarizing the process that was used and his reliance on the tradeoff analyses prepared by the SSET, he indicated further—

    I found the evaluated proposals to be arrayed along a continuum reflective of their strengths and weaknesses, in technical quality and price. There is a fairly smooth continuum from best to least technically capable, and from lowest to highest in price. For me, the critical area was where the selection decision approaches the zone of 25-30. Within that zone, I saw a soft “break” point between the proposal ranked 28th overall in technical quality (that of Alion), and the next two proposals (Centech and Tybrin), as these latter two companies’ proposals had progressively less technical quality coupled with significantly higher prices. As explained below, I also found reason to include one additional closely-ranked company (EDS) that offered distinct benefits to the Government. Inclusion of EDS brings the total number of awardees to 29.
    With the tentative selection of this group of 29 awardees, my attention turned to the remaining proposals, starting with Centech, Tybrin, Stanley, and so on. As found in the individual company exhibits, prepared at my direction, my consideration took the form of a “trade-off” analysis. The predicate for this trade-off analysis is that the Government seeks an advantageous mix of technical quality and attractive prices. Once I had provisionally identified the 29 best value proposals for Alliant, I wanted to answer this question: Did any other proposal offer compelling reasons to displace one or more of the apparent awardees[?] As explained in the individual Offeror exhibits, I found the answer to be “no”—and I believe that the group of 29 awardees do represent the best value and most advantageous selections.

    Mr. Kempf then commented further on the evaluations of four offerors: Centech, Tybrin, EDS and Stanley. The breadth of these comments varied, with more attention paid to Centech and Stanley.17 In these comments, Mr. Kempf explained that EDS was receiving an award because “[b]y its willingness to deliver attractive prices, EDS has demonstrated a characteristic that I believe will pay large dividends during the Fair Opportunity process when Alliant contracts are accessed by customers.” Mr. Kempf finally indicated that, based on his review of the SSET reports, none of the other three offerors warranted an award.

    Based upon this decision, Mr. Kempf selected twenty-nine companies for award—again, the top twenty-eight ranked technical offers, plus EDS. By letters dated July 31, 2007, GSA informed the unsuccessful offerors that they had not been awarded a contract. Following debriefings, beginning in late August of 2007, each of the plaintiffs (except Serco) filed protests with the Government Accountability Office (GAO).

    D. Procedural History of this Case

    On September 26, 2007, Serco, Inc. (Serco) filed a complaint in this court challenging the award decisions and seeking a variety of injunctive relief. On September 27, 2007, the court granted motions to intervene filed on behalf of Electronic Data Systems (EDS), International Business Machines (IBM) and Indus Corporation (Indus). At a status conference held on September 28, 2007, counsel for the parties indicated that eight other unsuccessful offerors had filed protests of the award decisions. Beginning on October 18, 2007, when Stanley filed a complaint with this court and ending on November 15, 2007, when Centech filed its complaint, each of these other eight protestors eventually found their way to this court.18 In the meantime, an additional party—General Dynamics One Source LLC (General Dynamics)—was permitted to intervene in this case. Each of these new complaints were consolidated with that of Serco and a consolidated briefing schedule for the filing of motions for judgment on the administrative record and cross-motions was established. On December 21, 2007, Stanley filed a notice of voluntary dismissal of their complaint; subsequent filings revealed that the reason for the dismissal was that the GSA, upon further review, had awarded a contract to Stanley.19 Thereafter, Stanley became the fifth and final defendant-intervenor in this case. Oral argument on those cross-motions was conducted on February 4, 2008. That argument was broken into two segments—a morning session focusing on cross-cutting issues and an afternoon session dealing with issues more uniquely ascribed to the individual plaintiffs.

    II. DISCUSSION

    Before turning to plaintiffs’ claims, we begin with common ground.

    A. Standard of Review

    The Federal Circuit, in Bannum, Inc. v. United States, 404 F.3d 1346, 1355 (Fed.Cir.2005), instructed that courts must “distinguish ... [a] judgment on the administrative record from a summary judgment requiring the absence of a genuine issue of material fact.” Toward this end, Bannum teaches that two principles commonly associated with summary judgment motions—that the existence of a genuine issue of material fact precludes a grant of summary judgment and that inferences must be weighed in favor of the non-moving party—do not apply in deciding a motion for a judgment on the administrative record. Id. at 1356-57. The existence of a question of fact thus neither precludes the granting of a motion for judgment on the administrative record nor requires this court to conduct a full blown evidentiary proceeding. Id.; see also Intl. Outsourcing Servs., LLC v. United States, 69 Fed.Cl. 40, 45-46 (2005).20 Rather, such questions must be resolved by reference to the administrative record, as properly supplemented—in the words of the Federal Circuit, “as if [the Court of Federal Claims] were conducting a trial on [that] record.” Bannum, 404 F.3d at 1357; see also Int’l Outsourcing, 69 Fed.Cl. at 46; Carlisle v. United States, 66 Fed.Cl. 627, 631 (2005); Doe v. United States, 66 Fed.Cl. 165, 174-75 (2005), aff'd, 221 Fed.Appx. 994 (Fed.Cir.2007).21

    Bannum’s approach to deciding motions for judgment on the administrative record fits well with the limited nature of the review conducted in bid protests. In such cases, this court will enjoin defendant only where an agency’s actions were arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law. 5 U.S.C. § 706(2)(A) (2006); see also 28 U.S.C. § 1491(b)(4) (2006). By its very definition, this standard recognizes the possibility of a zone of acceptable results in a particular case and requires only that the final decision reached by an agency be the result of a process which “consider[s] the relevant factors” and is “within the bounds of reasoned decisionmaking.” Baltimore Gas & Elec. Co. v. Natural Res. Def. Council, Inc., 462 U.S. 87, 105, 103 S.Ct. 2246, 76 L.Ed.2d 437 (1983); see Software Testing Solutions, Inc. v. United States, 58 Fed.Cl. 533, 538 (2003); Gulf Group, Inc. v. United States, 56 Fed.Cl. 391, 396 n. 7 (2003). As the focus of this standard is more on the reasonableness of the agency’s result than on its correctness, the court, to apply this standard properly, must exercise restraint in examining information that was not available to the agency. Failing to do so risks converting arbitrary and capricious review into a subtle form of de novo review.22 At all events, this court will interfere with the government procurement process “only in extremely limited circumstances.” CACI, Inc.-Federal v. United States, 719 F.2d 1567, 1581 (Fed.Cir.1983) (quoting United States v. Grimberg, 702 F.2d 1362, 1372 (Fed.Cir. 1983)). Indeed, a “protester’s burden is particularly great in negotiated procurements because the contracting officer is entrusted with a relatively high degree of discretion, and greater still, where, as here, the procurement is a ‘best-value’ procurement.” Banknote Corp. of Am., Inc. v. United States, 56 Fed.Cl. 377, 380 (2003), aff'd, 365 F.3d 1345 (Fed.Cir.2004).23

    The aggrieved bidder must demonstrate that the challenged agency decision is either irrational or involved a clear violation of applicable statutes and regulations. Banknote Corp., 365 F.3d at 1351, aff'g, 56 Fed.Cl. 377, 380 (2003); see also ARINC Eng’g Servs. v. United States, 77 Fed.Cl. 196, 201 (2007).24 Moreover, “to prevail in a protest the protester must show not only a significant error in the procurement process, but also that the error prejudiced it.” Data Gen. Corp. v. Johnson, 78 F.3d 1556, 1562 (Fed.Cir.1996). To demonstrate prejudice, “the protestor must show ‘that there was a substantial chance it would have received the contract award but for that error.’ ” Alfa Laval Separation, Inc. v. United States, 175 F.3d 1365, 1367 (Fed.Cir.1999) (quoting Statistica, Inc. v. Christopher, 102 F.3d 1577, 1582 (Fed.Cir.1996)).25 Finally, because injunctive relief is relatively drastic in nature, a plaintiff must demonstrate that its right to such relief is clear. See Banknote Corp., 56 Fed.Cl. at 380-81; Seattle Sec. Servs., Inc., 45 Fed.Cl. at 566.

    B. Alleged Errors in the Evaluation Approach

    It is with this basic analytical framework in mind, that we now turn to the specific allegations of error here. Some of those allegations are systematic and cross-cutting, in that, if true, they impact all or virtually all the plaintiffs. Others sweep more narrowly, potentially impacting one or, at most, a few of plaintiffs. For a variety of reasons, the court will consider these claims chronologically, that is, in the order that they arose during the evaluation process. The court will focus first on claims that involve the integrity of the performance data initially generated by the agency. The court will then consider, in turn, alleged errors involving: (i) the reliability of the calculations that gave rise to the technical rankings that were used to identify the “presumptive awardees”; (ii) the agency’s consideration of price and price reasonableness (or failure to do so); and, ultimately, (iii) the propriety of the agency’s tradeoff analyses.

    1. Gathering of Past Performance Information—Unequal Treatment

    We begin by analyzing the survey process that GSA used to develop past performance information about the offerors. While, in reviewing this process, the court must “accord an agency’s technical evaluation great deference,” Hamilton Sundstrand Power Sys. Inc. v. United States, 75 Fed.Cl. 512, 516 (2007), it must, nonetheless, be sensitive to the agency’s fundamental duty to “[e]nsure that contractors receive impartial, fair and equitable treatment.” FAR § 1.602-2; see also Precision Images, LLC v. United States, 79 Fed.Cl. 598, 619 (2007); HWA, Inc. v. United States, 78 Fed.Cl. 685, 697 (2007). “This obligation necessarily extends to the equal and impartial evaluation of all proposals,” this court has stated, “for it is well-established ... that a ‘contracting agency must treat all offerors equally, evaluating proposals evenhandedly against common requirements and evaluation criteria.’ ” Hamilton Sundstrand Power Sys., 75 Fed.Cl. at 516 (quoting Banknote Corp. of Am., 56 Fed.Cl. at 383).26 “[U]neven treatment goes against the standard of equality and fair-play that is a necessary underpinning of the federal government’s procurement process and amounts to an abuse of the agency’s discretion.” PGBA, LLC v. United States, 60 Fed.Cl. 196, 207 (Fed.Cl.2004), aff'd, 389 F.3d 1219 (Fed.Cir.2004); see also Comprehensive Health Servs., Inc. v. United States, 70 Fed.Cl. 700, 721 (2006). Plaintiffs claim such an abuse occurred here.

    They assert that the process by which Calyptus obtained information from past performance references was fatally flawed, leading different offerors to be treated unequally in their past performance evaluations. Their concerns focus on claims that the Calyptus personnel were given inadequate guidance as to the type of past performance information that they should elicit from the references. In this regard, plaintiffs asseverate, inter alia, that the scripts provided to the Calyptus employees included questions that were insufficiently tailored to the evaluation criteria and thus were inadequately designed to produce information relevant to those criteria.27 They argue that those employees were ill-equipped to determine whether the answers they initially received were responsive and to ask the right follow-up questions if they guessed that the answers were not. Some plaintiffs assert that because those employees had no understanding of how their transcript entries corresponded to the adjectival rating system, they were insensitive to the significance that recording (or failing to record) particular phrases or words could have on GSA’s past performance evaluation. The result, these plaintiffs argue, was a process that captured past performance information in a haphazard, catch-as-catch-can fashion.

    As should be obvious, there is nothing inherently wrong with an agency using a survey, telephone interview, or questionnaire to elicit information regarding past performance. To the contrary, a review of agency regulations, see, e.g., 48 C.F.R. § 1815.304-70(d)(3) (1999), not to mention the decisional law, suggests that such surveying has been frequently employed without objection.28 Nonetheless, such surveys must satisfy FAR § 15.305(a)(2), which requires an agency to consider “[t]he currency and relevance of [past performance] information, source of the information, context of the data, and general trends in contractor’s performance.” Reflecting these requirements, cases examining the use of surveys have focused on whether they were reasonably designed to generate information of sufficient reliability to support the evaluation methodology to be used in assessing past performance. See Redcon, Inc., 2000 C.P.D. ¶ 188, 2000 WL 1690204, at *6 (2000); ENMAX Corp., 99-1 C.P.D. ¶ 102, 1999 WL 335687, at *5 (1999); Pacific Ship Repair and Fabrication, Inc., 98-2 C.P.D. ¶ 29, 1998 WL 412421, at *3 (1998). Those cases suggest that the presence vel non of the following indicia aids in assessing that reliability: (i) whether the questions are specific enough to elicit information responsive to evaluation criteria; (ii) whether the definitions of adjectival ratings were made available either to the surveyors or the references; (iii) whether the surveys are conducted by personnel otherwise knowledgeable regarding the procurement; and (iv) whether steps were taken to verify the accuracy of the survey responses.29 The “key question,” as the GAO has noted, is whether the survey is such that “the past performance information is accurately conveyed.” Redcon, 2000 WL 1690204, at *6.

    Here, the questions employed by Calyptus largely were not designed to produce answers responsive to the past performance evaluation standards. Those standards made critical distinctions based upon the degree to which the offeror exceeded or met contractual requirements; whether the offeror’s track record on a given factor was “outstanding,” “above-average,” “acceptable,” “marginal,” or “adverse;” and the degree of doubt the reference had as to the ability of the offeror to perform successfully under the Alliant contract (e.g., “no,” “little,” “some,” “substantial,” and “significant”). By comparison, the scripts were more generic, with ten of the thirteen listed questions asking only “[h]ow effectively” or “[t]o what extent” the offeror performed certain tasks. With little focus engendered by the questions themselves—and with the Calyptus personnel having little or no knowledge of the underlying adjectival rating scales and Alliant procurement—it should come as little surprise that the answers received (or at least those transcribed) often did not provide the sort of detail that would allow agency personnel to evaluate past performance rationally in accordance with the evaluation criteria. This was particularly true at the upper end of the rating scale, in which various cryptic forms of high praise had to be pigeonholed into either the EH or S confidence ratings. Thus, for example, in assessing quality of service—which, under the rating system, required an offeror to “exceed[] many requirements to the Government’s benefit” to garner an EH rating—the survey instead generically asked “[h]ow effectively did the Offeror manage your requirement?” To that question, more than one reference simply responded “very good,” “very well,” “very effectively,” or “most effectively.” It was left to the evaluators to extrapolate a rating out of these two-word descriptors.

    That arbitrariness thereby crept into the process is illustrated by the various anomalous ratings that were made. For example, on the cost control subfactor, CENTECH received uniformly positive comments, such as “5 (out of 5),” “excellent for cost control,” “managed costs exceptionally well,” “exceptional,” and “excellent control over cost,” and yet received an S rating. Likewise, STG’s references all positively described its quality of service as “very well,” “top quality,” “exceptional,” “phenomenal success” and “excellent”—yet it too received an S rating. Finally, on the schedule subfactor, CGI received comments of “very effective,” “very successful,” “excellent,” and “very well” and again received an S rating, even though two awardees (INDUS and ManTech) with very similar comments managed to receive an EH rating.30 And these examples of situations in which the evaluators had difficulty in matching the evaluation criteria with the underlying data are by no means isolated. Other errors, indeed, undoubtedly were caused by the script’s failure to ask questions regarding the nature of the contract that had been performed—what FAR § 15.305(a)(2)(i) would refer to as the “context of the data.” For example, in a number of instances, Calyptus asked references that had operated under firm, fixed-price contracts about a given offeror’s cost controls and passed along their answers to the evaluators without ever identifying the contracts as being firm and fixed-price. Ignorant of the nature of the contract, the evaluators proceeded to rate various offerors on this cost control information, assigning strengths (in the case of several awardees) and weaknesses (in the case of at least one protester) on this basis—even though the Solicitation prohibited GSA from considering performance under firm, fixed-price contracts in evaluating cost controls.31

    It is conceivable that these errors were the result of faulty individual ratings, rather than systematic problems with the survey questions themselves. But, one merely needs to compare the rating standards to the questions in the script to conclude that the latter was a major culprit here. It is also conceivable that unresponsive answers might have been provided even if the questions posed had been more precise. Perhaps tacitly recognizing this, cases evaluating the reasonableness of the use of past performance surveys have also considered whether the agency took other steps to ensure the reliability of the survey responses—for example, by employing survey personnel who knew the rating system or, at least, training the surveyors to identify unresponsive answers. On this count, however, there was a double failure here. First, while GSA encouraged the Calyptus employees to ask follow-up questions, it gave them little guidance as to when it was necessary to ask such questions and, if so, what to ask. Perhaps more important, given the nature of the results, no guidance was provided on how to obtain answers that would allow the evaluators to distinguish rationally between EH and S ratings.32 Second, the agency failed to enforce the one safeguard that it had included in the SSP to ensure that the answers listed in the transcripts actually reflected what the references had said (or perhaps intended). Thus, GSA did not compel its contractor to provide a copy of the completed transcript to each reference for corrections and additions. In fact, although the record here does not include the evaluation materials for all sixty-two offerors, it is significant that, among the twenty or so offerors covered by the record, there is evidence that this corroboration step occurred only with respect to one offeror—IBM—and that those transcripts were returned heavily edited.33 This demonstrates that GSA not only failed to take adequate steps to ensure that the past performance information it received was relevant to the evaluation factors, but also to take reasonable steps to ensure that the sometimes sketchy information it obtained was accurate.34

    In a last-ditch attempt to resurrect the surveys, defendant observes that we do not know what the evaluations would have yielded if more relevant questions had been asked and the transcripts had been supplied to the past performance references for correction. Perhaps, defendant speculates, plaintiffs would have fared no better or even worse. But, this dim view misses the point, for while the agency would have had considerable discretion in evaluating plaintiffs’ past performance had a proper process been used to develop performance information, it did not have the discretion to employ a survey system that did not generate verifiably accurate and relevant past performance information. Viewed in terms of prejudice, plaintiffs need not demonstrate what would have resulted had a rational evaluation process been employed. Indeed, for them to attempt to do so would require them to engage in substantial speculation—not only as to how GSA would draft a proper survey, but also as to how each of their many references would answer those surveys and how the GSA evaluators would convert those answers into adjectival ratings. To demonstrate prejudice, rather, plaintiffs must show that but for this error (and the others to be discussed below), they had a substantial chance of receiving an award. A review of the transcripts provides more than enough evidence from which to conclude that such a chance existed for each of the plaintiffs had the surveys here been conducted in a reasonable fashion. No more is required. See, e.g., Data Gen. Corp., 78 F.3d at 1562 (protestor need not show that “but for the alleged error the protestor would have been awarded the contract;” “such a rule would make it virtually impossible for a protestor ever to prevail”).35

    2. Calculating the Technical Rankings—False Statistical Precision

    In the next stage of the process, GSA took the initial observations that it had accumulated in terms of past performance (PP) and the basic contract plans (BCP) and developed a composite technical score. As the accompanying chart illustrates, the agency took slightly different paths in calculating the two numbers that were averaged to create that technical score, again enumerated as the “Weighted BCP/PP Average Score.” In calculating the past performance component of that score, GSA first developed a consensus adjectival rating for each subfactor (e.g., S), then converted those subfactors into whole numbers (e.g., 4) and averaged them into an average PP score (e.g., 4.50). Despite instructions to this effect in the SSP, GSA did not make use of consensus adjectival ratings in calculating the weighted BCP average score. Rather, it first assigned adjectival ratings to each of the elements that made up the three BCP subfactors (e.g., S). It then converted those ratings into whole numbers and developed an average number for each element (e.g., 4.25) and used those averages to calculate a value for each subfactor (e.g., 4.21), which were then weighted to calculate the weighted average BCP score (e.g., 3.125). The average PP score and the weighted average BCP score were then averaged, yet again, to produce the “Weighted BCP/PP Average Score,” expressed in numbers carried out to three decimals (e.g., 3.813).

    [[Image here: chart illustrating the calculation of the Weighted BCP/PP Average Score]]
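    To make the arithmetic concrete, the following sketch reproduces the general shape of the computation described above. It is offered only as an illustration: the adjectival-to-number scale, the subfactor weights, and the sample ratings are hypothetical, not values drawn from the record, but the sketch shows how whole-number inputs end up expressed as a composite score carried to three decimal places.

```python
# A minimal sketch, under assumed values, of the BCP/PP scoring arithmetic described
# in the opinion. The rating scale, weights, and sample ratings are hypothetical.

ADJECTIVAL_TO_NUMBER = {"EH": 5, "S": 4, "M": 3}   # assumed conversion of adjectival ratings


def average(values):
    return sum(values) / len(values)


# Past performance: a consensus adjectival rating per subfactor, converted and averaged.
pp_consensus_ratings = ["EH", "S", "S", "EH"]       # hypothetical consensus ratings
pp_average = average([ADJECTIVAL_TO_NUMBER[r] for r in pp_consensus_ratings])   # 4.5

# Basic contract plan: element-level ratings averaged into subfactor values, then weighted.
bcp_element_ratings = {                              # hypothetical element ratings
    "subfactor 1": ["S", "S", "EH", "S"],
    "subfactor 2": ["S", "M", "S"],
    "subfactor 3": ["M", "S", "M", "M"],
}
bcp_weights = {"subfactor 1": 0.5, "subfactor 2": 0.3, "subfactor 3": 0.2}   # assumed weights

bcp_subfactor_values = {
    name: average([ADJECTIVAL_TO_NUMBER[r] for r in ratings])
    for name, ratings in bcp_element_ratings.items()
}
weighted_bcp_average = sum(bcp_weights[name] * value
                           for name, value in bcp_subfactor_values.items())

# The two components are then averaged and reported to three decimal places.
weighted_bcp_pp_average = round((pp_average + weighted_bcp_average) / 2, 3)
print(pp_average, round(weighted_bcp_average, 3), weighted_bcp_pp_average)
```

    The three decimal places in the final figure come entirely from the averaging and weighting steps; they do not reflect any finer-grained observation than the whole-number adjectival inputs.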

    But were these numbers actually significant to three decimals, such that GSA could make rational distinctions based upon the thousandths of a point—as it did? There are strong indications that the answer to this question is no.

    Precision of thought is not always reflected in the number of digits found to the right of a decimal point—indeed, as with other constructs, there can be, to paraphrase Holmes, a “kind of precision that obscures.” Regarding statistical accuracy, a prominent treatise on numerical analysis explains that numerical computations may be impacted by “errors in the input data,” adding that “input data can be the result of measurements with a limited accuracy, or real numbers, which must be represented with a fixed number of digits.” Lars Elden & Linde Wittmeyer-Koch, Numerical Analysis: An Introduction 7 (1990). Numerical calculations may also be impacted by rounding errors—those that “arise when computations are performed using a fixed number of digits in the operands.” Id. Introducing accuracy and rounding errors into a calculation does not mean that the product thereof is meaningless; it does mean, though, that the error bound on that calculation must reflect the uncertainty of the inputs, thereby potentially reducing the number of digits in the output that are significant—e.g., those digits to the right of the decimal point. Id. at 12 (“When approximate values are used in computations then, of course, their errors will give rise to errors in the results.”); 16 McGraw-Hill Encyclopedia of Science & Technology 461 (9th ed. 2002) (“Computations cannot improve the precision of the measurement.”). Generally speaking, the result of a computation that involves multiple numbers should be reported at the same precision level as the least precise measure— to the digit precision of the “weakest link,” so to speak.36

    Under this “propagation of error” rule, it is inappropriate, for example, to treat the average of 4.6 ± .1 and 4.244 ± .001 as 4.422 ± .001; rather, that average is accurately presented only as 4.4 ± .1. See The Concise Encyclopedia of Mathematics, supra, at dll-12 (describing the error propagation rules as they apply to addition and division). To either include additional decimals in this average (e.g., 4.422) or to understate the error bound (e.g., ± .001 rather than ± .1) is to fall prey to what is called “false precision.” This false precision can act like a siren charming the unwary into making arbitrary comparisons based upon digits that are not meaningful. See Danowsky, supra, at 1; see also Michael J. Smithson, Statistics with Confidence: An Introduction for Psychologists 48-49 (2000) (“decisions based on [false precision] will be made with greater confidence than is warranted”). Those succumbing to this seduction render conclusions that appear valid, but, in fact, are not. Given these limitations, to continue our example, it would be inappropriate to conclude that a value of 4.385 is less than that of 4.4 ± .1 because the former is within the error bound of the latter—rather, both figures should be viewed as roughly equivalent. Accordingly, if one must distinguish between two items marked by the values in this example, that choice must be made on some other basis. And that need to look elsewhere is not avoided by simply extending the number of digits in the quantities to be compared because those additional digits, owing to the error bound, are not significant—any comparisons made primarily or exclusively based upon those digits would be invalid ab ovo.
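    A short numerical sketch, using only the illustrative bounds from the example above (not figures from the record), shows how the propagated uncertainty swamps the trailing digits of such an average:

```python
# Worst-case error propagation for the average of two approximate values. The inputs
# mirror the illustrative example above (4.6 +/- 0.1 and 4.244 +/- 0.001).

def average_with_bound(measurements):
    """Average the values; under worst-case propagation the error bounds also average."""
    values = [v for v, _ in measurements]
    bounds = [b for _, b in measurements]
    n = len(measurements)
    return sum(values) / n, sum(bounds) / n

avg, bound = average_with_bound([(4.6, 0.1), (4.244, 0.001)])
print(f"{avg:.3f} +/- {bound:.3f}")   # roughly 4.422 +/- 0.05
# With an uncertainty of roughly 0.05, the second and third decimal places of 4.422
# carry no reliable information; comparisons that turn on those digits are meaningless.
```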

    Here, of course, GSA made a host of fine-line distinctions using the average BCP/PP technical scores, determining, most prominently, which firms were among the “presumptive awardee” group based, ultimately, upon as few as 8 thousandths of a point. Yet, there are strong reasons to suspect that these comparisons were undermined by false precision, particularly since no error propagation analysis of the sort described above was performed. For one thing, there is no indication that the initial numerical data generated in evaluating the offerors’ proposals—data that, in the first instance, was captured in whole numbers (e.g., 3, 4, or 5)—had anywhere near the level of accuracy reflected in the final evaluation scores (e.g., 3.841). Indeed, in assigning adjectival scores to the various subfactors involved here, agency personnel effectively engaged in rounding (performing a computation with one digit in the operand)—they were, in other words, forced to choose one versus another adjectival rating even when a fraction (e.g., 3.8 rather than 4) would have been more accurate. Such imprecision and rounding is most evident in the evaluation of past performance, the accuracy of which depended, at the least, upon the following variables: (i) the accuracy associated with the scripts that were used by the contractor to query references; (ii) the accuracy of the contractor’s various employees in interpreting and transcribing the answers received from the references; and (iii) the accuracy and potential rounding errors that occurred when the narrative answers recorded by the contractor were converted to adjectival scores and then to numbers. That the results of these observations were, by virtue of repeated calculations, eventually expressed in figures that were carried to three decimals gives those figures no more reliability than they had ab initio. Again, while repeated division creates decimals that can add an aura of precision, it does not actually increase accuracy. Nor can defendant reasonably claim that systematic imprecision was not introduced into the process by the aforementioned variables—a review of the record, including the examples highlighted above, demonstrates otherwise.37 Yet, in so many ways, it appears that GSA donned blinders to the error propagation that the imprecision of its initial data ultimately injected into its final technical rankings—occluding uncertainties that, as we will see, had serious ramifications for the integrity of the award analysis here.

    In short, there is little doubt in the court’s mind that GSA should have reported fewer significant digits in its findings to better reflect the uncertainty in its technical rankings. Had it done so, many of the observations that were predicated upon the detailed BCP/PP averages, including the notion that there was a “natural break” somewhere along the rankings, would have vanished, requiring GSA to make its award decisions on some other, albeit perhaps less convenient, basis. Indeed, one of the grandest ironies in this case is the fact that the supposed “natural break point” here kept shifting—as identified by the SSET, it was between the 27th and 28th ranked offerors; as set by the SSA, it was between the 28th and 29th ranked offerors; and, ultimately, after Stanley was given an award, it settled between the 29th and 30th ranked offerors. This should have been a telltale sign that the break point here was not so “natural” and that more caution was warranted in mining information from these statistics. Defendant, for its part, has advanced no reason to convince the court that these statistics could be wielded in the fashion that they were—indeed, despite being offered a specific opportunity to do so, defendant has not provided a single case in which an agency was found properly to base important award decisions on technical rankings carried to the thousandths of a point. And with good reason. At the least, the obvious threat of false precision in the numbers employed here ought to have caused the agency to be more circumspect in relying on its multi-digit technical rankings, let alone in piling Pelion and Ossa upon those rankings by establishing the group of “presumptive awardees” and then, to top everything off, requiring a “compelling” reason to alter it. Other cases, indeed, suggest that agencies have been much more cautious in applying similar statistics, tending to view relatively close technical scores as being essentially equivalent.38
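    A simple illustration (with hypothetical scores, not the actual rankings) shows how respecting the uncertainty of the inputs erases the kind of fine-grained break points relied upon here:

```python
# Hypothetical three-decimal rankings in the neighborhood of a supposed "natural break."
scores = {"27th": 3.821, "28th": 3.813, "29th": 3.806, "30th": 3.798}

# If the underlying inputs support only about one reliable decimal place, the scores
# should be reported accordingly, and the distinctions among these offerors vanish.
rounded = {rank: round(score, 1) for rank, score in scores.items()}
print(rounded)   # {'27th': 3.8, '28th': 3.8, '29th': 3.8, '30th': 3.8}
```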

    More debatable is whether GSA’s reliance on these imprecise statistics is, in legal terms, arbitrary, capricious or otherwise contrary to law. To reach that conclusion, one must conclude that it is appropriate to apply basic principles of numerical analysis to government rankings that were initially based upon adjectival ratings. While convincing arguments can be made in this regard, see “Statistical Report Flaws: A Spotter’s Guide,” supra (“Ratings based on calculated results are subject to the same precision limits.... Pseudo-precision of calculated results is also especially questionable when the input numbers are symbolic rather than actual measurements.”), the court is faced both with a paucity of precedent on this point and the reality that decisions every day are made based on statistics that suffer some degree of imprecision. Moreover, while the court is relatively confident that error bounds precluded GSA from making decisions based upon the thousandths of a point, fully gauging the extent of the prejudice here seemingly would require the court to determine more precisely the error bounds on the final BCP/PP averages—a task better suited for an expert (or perhaps for the agency upon a remand).

    In the end, this court concludes that it need not decide whether vel non the false precision here, standing alone, is fatal to the awards made.39 That is so primarily because it is not so much the generation of these statistics, but the way in which they were wielded, that poses the most potential for concluding that the awards here were arbitrary, capricious and otherwise contrary to law.40 For the moment, it is sufficient to conclude that the existence of that imprecision greatly intensified the need for the agency to make reasoned decisions considering price and, relatedly, best values.41 Whether it did so is now the topic to which the court turns.

    3. Flaws in the Price and Price Reasonableness Analyses

    After developing the weighted BCP/PP average scores, GSA next purported to consider price. In a negotiated procurement, the government is not required to make the award to the firm offering the lowest price, unless the solicitation specifies that price will be determinative. Here, of course, that was not the case. But, as will be explained, price, nonetheless, had to be a significant factor in the evaluation here—and it was not.

    Long ago, Congress rejected the notion of giving contracting officers the authority to ignore price considerations in negotiated procurements. See Schoenbrod v. United States, 187 Ct.Cl. 627, 410 F.2d 400, 402-03 (1969); see also Paul v. United States, 371 U.S. 245, 252-53, 83 S.Ct. 426, 9 L.Ed.2d 292 (1963). Any final questions in this regard were put to rest by the Competition in Contracting Act of 1984, Pub.L. No. 98-369, 98 Stat. 1175, which unambiguously requires:

    In prescribing the evaluation factors to be included in each solicitation for competitive proposals, [the agency] ... shall include cost or price to the Federal Government as an evaluation factor that must be considered in the evaluation of proposals.

    41 U.S.C. § 253a(c)(1)(B). The legislative history of CICA indicates that this provision, as well as others designed to promote competition, were designed not only to allow the Federal government to obtain the “best products,” but to do so at the “best prices”—to avoid paying “$436 for an ordinary claw hammer ... where it can be bought for $7.” H.R. Rep. 98-1157, at 18 (1984); see also S. Rep. 98-50, at 32 (1983) (noting that “price” should have a “significant bearing on the selection for award”). “Congress intended for competition to affect the amount of money that the Government pays for goods and services,” two well-known commentators have stated, and “any competitive process that does not require firms to compete on the basis of the amount of money that they want, and, in which differences in the amount of money sought cannot affect the outcome of the competition, is not consistent with that intention.” Vernon J. Edwards and Ralph C. Nash, “Price as a ‘Significant’ Evaluation Factor: Has the GAO Misinterpreted CICA?,” 20 No. 8 Nash & Cibinic Rep. ¶ 40 (hereinafter “Price as a Significant Evaluation Factor”).

    Giving effect to the statute and its legislative history, the FAR ordains that “[p]rice or cost to the Government shall be evaluated in every source selection.” FAR § 15.304(c)(1); see also Northrop Grumman Info. Tech., Inc., 2005 C.P.D. ¶ 45, 2005 WL 735939, at *9 (2005). Following this lead, GAO has repeatedly held that price must be a “significant evaluation factor;” that it must be given “meaningful consideration.” See, e.g., MIL Corp., 2005 C.P.D. ¶ 29, 2004 WL 3190217, at *7 (2004); Eurest Support Servs., 2003 C.P.D. ¶ 139, 2001 WL 34118414, at *6 (2001); RTF/TCI/EAI Joint Venture, 98-2 C.P.D. ¶ 162, 1998 WL 911892, at *8 (1998); H.J. Group Verdures, Inc., 92-1 C.P.D. ¶ 203, 1992 WL 48487, at *3 (1992); see also Price as a Significant Evaluation Factor, supra (surveying cases). It follows, a fortiori, that price can neither be a nominal evaluation factor nor relegated to the role of being a mere consideration in determining whether a proposal is eligible for award. See Lockheed Missiles & Space Co., Inc. v. Bentsen, 4 F.3d 955, 959-60 (Fed.Cir.1993); MIL Corp., 2004 WL 3190217, at *7; Electronic Design, Inc., 98-2 C.P.D. ¶ 69, 1998 WL 600991, at *5-6 (1998). These are not minor distinctions—an evaluation that fails to give price its due consideration is inconsistent with CICA and cannot serve as a reasonable basis for an award. See MIL Corp., 2004 WL 3190217, at *7; Boeing, Sikorsky Aircraft Support, 97-2 C.P.D. ¶ 91, 1997 WL 611539, at *10 (1997).

    GWACs are not immune from these requirements. To the contrary, the Federal Acquisition Streamlining Act of 1994 (FASA), Pub.L. No. 103-355, 108 Stat. 3243, which codified existing authority for agencies to enter into task and delivery order contracts, did not carve out an exception for such contracts on the premise that price competition would occur for each task order. Thus, FASA’s legislative history made clear that all the CICA competition requirements are to apply to multiple award contracts, stating:

    In addition, the conference agreement would provide general authorization for the use of task and delivery order contracts to acquire goods and services other than advisory and assistance services. The conferees note that this provision is intended as a codification of existing authority to use such contractual vehicles. All otherwise applicable provisions of law would remain applicable to such acquisitions, except to the extent specifically provided in this section. For example, the requirements of [CICA], although they would be inapplicable to the issuance of individual orders under task and delivery order contracts, would continue to apply to the solicitation and award of the contracts themselves.

    H.R. Conf. Rep. 103-712, at 18. Relying on these reports, the GAO has concluded that “there is no exception to the requirement set forth in CICA that cost or price to the government be considered in selecting proposals for award because the selected awardees will be provided the opportunity to compete for task orders under the awarded contracts.” MIL Corp., 2004 WL 3190217, at *7.42

    But this begs the question—did GSA adequately consider price in making the award decisions here? To be sure, GSA conducted a variety of statistical reviews that appeared to consider price—for example, it compared the offerors’ prices to the mean price and indicated that it would look “favorably” upon those that were within one standard deviation thereof. Likewise, it compared the prices to the IGCE, and indicated that it would look “favorably” upon those that were below that figure. Yet, ultimately, it made awards to several offerors that did not meet one of these benchmarks and to three offerors that did not meet either—CSC, SAIC and ManTech. The latter trio, in fact, offered prices that were ranked 59th, 60th and 61st out of the 62 offerors, respectively. Now, defendant asserts that although it indicated that it would look “favorably” on offers that were below its benchmarks, it did not say what it would do if those thresholds were exceeded. But, one cannot reconcile this bit of ipse dixit with the legal requirement that price be a significant evaluation factor here—for price to be a significant factor, there must be ramifications if a price substantially exceeds the norms. Indeed, here the variances were staggering—the prices of CSC, SAIC and ManTech were more than two standard deviations beyond the mean, suggesting that they were truly outliers. In raw terms, they exceeded the IGCE by about $10 million (and the mean price by nearly $17 million) and were more than double the price of the lowest-priced awardee. And these seemingly high-priced offerors were not alone in receiving awards, as offerors with the 51st and 52nd highest prices—prices that also well exceeded the mean price—managed to make the award list, as well.
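    By way of illustration only, the following sketch shows the sort of benchmark screening the agency described, comparing each evaluated price to the mean, the standard deviation, and the IGCE. The prices and the estimate below are invented for the example, not taken from the record.

```python
# A minimal sketch of benchmark screening against the mean, standard deviation, and IGCE.
# All figures are hypothetical, chosen only to show how an outlier price would be flagged.
import statistics

igce = 30_000_000                       # hypothetical independent government cost estimate
prices = {                              # hypothetical evaluated prices
    "Offeror A": 21_000_000,
    "Offeror B": 26_500_000,
    "Offeror C": 29_000_000,
    "Offeror D": 33_000_000,
    "Offeror E": 47_000_000,            # a high outlier
}

mean_price = statistics.mean(prices.values())
std_dev = statistics.stdev(prices.values())

for offeror, price in prices.items():
    z = (price - mean_price) / std_dev
    flags = []
    if price > igce:
        flags.append("above IGCE")
    if z > 2:
        flags.append("more than two standard deviations above the mean")
    elif z > 1:
        flags.append("more than one standard deviation above the mean")
    label = "; ".join(flags) if flags else "within benchmarks"
    print(f"{offeror}: ${price:,} ({z:+.2f} sd) -- {label}")
```

    On the court's reasoning, an awardee flagged on every benchmark is precisely the case in which the agency needed to explain why the price was nonetheless fair and reasonable.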

    Despite strong circumstantial evidence that price did not play a significant role here, defendant steadfastly maintains that price was given all the weight it was due under the law and the Solicitation. Of course, it immediately undercuts those assurances by repeatedly discounting, in its briefs, the need for considering price at this stage of the procurement. Thus, it claims that this is so because there will be “continued price and quality competition among the awardees throughout the life of the program” in the award of task orders—thereby coming within a hair of making an argument that, as mentioned above, has been decisively rejected by the GAO.43 But, while defendant may have had a good recipe for considering price, it failed to follow through—and “the proof of the pudding is the eating.”44 Aside from the isolated award to EDS, one searches in vain for evidence that the agency not only performed calculations that involved price, but actually acted upon those results. Certainly, there is no indication that price played a role in establishing the “presumptive awardee” listing that the agency employed in making its best value tradeoff decisions—no member of that group was delisted or, insofar as the record indicates, even shifted up or down as a result of the agency’s supposed consideration of price. Nor, contrary to defendant’s claims, is there any indication that price truly played a role in determining the so-called “natural break point”—the supposedly fixed, yet thrice-shifted, line between the presumptive awardees and the otherwise unsuccessful offerors here.45 Accordingly, the court concludes that GSA gave price neither the weight it was entitled under the Solicitation nor that which it must be afforded under CICA and the FAR. See Kathpal Technologies, Inc., 2000 C.P.D. ¶ 6, 1999 WL 1295948, at *9 n. 13 (1999) (agency conclusion that all rates were essentially equal and “fair and reasonable,” despite the fact that some rates were nearly twice as high as others, “resulted in a source selection that so minimized the potential impact of price as to make it a nominal evaluation factor”).46

    Defendant then compounded this mistake by failing to conduct a proper price reasonableness analysis. Before turning to this point, it is best to define the legal parameters of such an analysis.

    Prior to 1996, 41 U.S.C. § 401(a) provided that purchases should be made at the “lowest reasonable cost.” That provision, however, was repealed in 1996 by the National Defense Authorization Act for Fiscal Year 1996, Pub.L. No. 104-106, § 4305, 110 Stat. 186, 665. See Ralph C. Nash & John Cibinic, “Price Reasonableness: A Much Misunderstood Term,” 17 No. 4 Nash & Cibinic Rep. ¶ 22 (2003). Under CICA, however, contracting officers are still required to use “data other than certified cost or pricing data to the extent necessary to determine the reasonableness of the price of [a] contract.” 41 U.S.C. § 254b(d)(1). This requirement is reflected in FAR § 15.402(a), which states that the contracting officer “must” purchase supplies and services at “fair and reasonable prices.” This leitmotif reappears in FAR § 15.404-1(a) which states that “[t]he objective of proposal analysis is to ensure that the final agreed-to price is fair and reasonable,” adding, in subparagraph (1) thereof, that “[t]he contracting officer is responsible for evaluating the reasonableness of the offered prices.” Paragraph (b)(2) of this same provision then states that “[t]he Government may use various price analysis techniques and procedures to ensure a fair and reasonable price.” That paragraph goes on to catalog seven examples of approved techniques. FAR § 15.404-1(b)(2).

    Even a cursory reading of these provisions thoroughly rebuts the notion, heavily advanced by several defendant-intervenors, that neither the FAR nor the Solicitation required GSA to preclude any offerors from receiving an award on the basis of its price reasonableness analysis. Per contra. Ultimately, the focus of such an analysis is not on whether the majority—or even a vast majority—of the prices offered are reasonable, but on whether a given awardee’s price is fair and reasonable.47 Only a favorable finding in the latter regard discharges the agency’s responsibilities under CICA and, here, the Solicitation.48 Defendant’s argument to the contrary rests on the proposition that price reasonableness was irrefutably demonstrated here under FAR § 15.404(b)(2)(i). That provision indicates that among the techniques available for ensuring a fair and reasonable price is a “comparison of proposed prices received in response to the solicitation,” adding that “[n]ormally, adequate price competition establishes price reasonableness (see 15.403-1(c)(1)).” But, while this provision certainly suggests that an agency may conclude that an awardee’s price is reasonable if it is consistent with the prices received from various competitors, it in no way suggests that the same conclusion should obtain if that price is inconsistent with those of competitors.49 After all, it is not competition for competition’s sake, but the favorable comparison of a given offeror’s price to those of the other contestants, that provides the necessary assurance that a given price is “fair and reasonable.” See Cibinic & Nash, supra, at 1313 (“The mere presence of competition is inadequate to assure that the prices proposed are fair and reasonable.”). To reach the latter finding without the comparison—or despite it—makes no sense.

    Nor does FAR § 15.404-1(b)(2)(i) indicate otherwise. Indeed, it conspicuously states that adequate price competition “normally” establishes price reasonableness, leaving the distinct impression that there are exceptions to this general rule. Providing a window into those limitations, FAR § 15.404(b)(2)(i) cross-references FAR § 15.403-1(c)(1), which, in turn, defines adequate price competition in the following terms—

    A price is based on adequate price competition if—

    (i) Two or more responsible offerors, competing independently, submit priced offers that satisfy the Government’s expressed requirements and if—
    (A) Award will be made to the offeror whose proposal represents the best value (see 2.101) where price is a substantial factor in source selection; and
    (B) There is no finding that the price of the otherwise successful offeror is unreasonable.

    FAR § 15.403-1(c)(1)(i); see Erinys Iraq Ltd. v. United States, 78 Fed.Cl. 518, 531 (2007). This provision again makes clear that, even where there is competition, the focus of the price reasonableness analysis still is on the price of an “otherwise successful offeror.” It follows that while the depth of an agency’s price reasonableness analysis and its ultimate findings on that count are both matters of discretion, see OMV Med, Inc., 99-1 C.P.D. ¶ 52, 1999 WL 140177, at *5-6 (1999), and while an agency may employ that discretion in using a general analysis of prices to confirm the reasonableness of the price of an otherwise successful offeror, no amount of discretion gives it a license to use a generalized analysis of prices as a proxy for making an individualized determination that a given price is actually “fair and reasonable.”50 More specifically, the latter requirement is not met by a finding that most of the prices offered are fair and reasonable, unless the offer of an otherwise successful offeror is among that select group. See FAR §§ 15.402(a), 15.404-1(a), 15.408-2; see also DFARS 215.403(e)(1)(A) (“the determination of adequate price competition must be made on a case-by-case basis. Even where price competition exists, in certain cases it may be appropriate to obtain additional information to assist in price analysis.”).

    Of course, here, GSA found that every evaluated price was “fair and reasonable.” Indeed, every price analysis in the record concluded with the following language, haec verba—“In relation to the IGCE for overall price and Mean Overall Price among all offerors (Government and Contractor Site combined over 10 years), the Offeror’s Mean Overall Price is in line with adequate price competition and is, therefore, considered fair and reasonable.” What is striking about that sentence is that it is omnipresent. It is found in exhibits in which the agency noted that the offeror’s evaluated price was “below” the IGCE and “within one standard deviation” to the mean and likewise in exhibits in which it was noted that the offeror’s evaluated price was “above” the IGCE and “outside two standard deviations” to the mean. That is to say, GSA wrote that the evaluated prices for offerors were “in line with adequate price competition,” even when its own statistics demonstrated that they were not—even when a price was an outlier that significantly departed from the pattern established by the wide majority of the offers and exceeded every statistical barometer of reasonableness employed. Such a conclusion—awkwardly mismatched with the analysis from which it supposedly springs—cannot be squared with the FAR. It can be obtained only from a price reasonableness analysis that was so unreasonable as to be arbitrary, capricious and contrary to law—one that, truth be told, was no price reasonableness analysis at all. See Multimax, Inc., 2006 C.P.D. ¶ 165, 2006 WL 3300346, at *8 (2006) (agency’s price analysis was unreasonable where “[t]here is no indication that the agency ever reviewed the results of the formula to assure that the prices at the extreme end of the ranges reflected reasonable pricing; rather, the agency mechanistically applied the formula and accepted the results without further analysis.”); Crawford Labs., 97-2 C.P.D. ¶ 63, 1997 WL 532508, at *4 (1997) (protest sustained where agency provided no rational basis to support its reasonableness determination).

    In sum, GSA has provided no rational basis for its determination of price reasonableness of contract awards at prices that, at least using the methods GSA employed, appear to be neither fair nor reasonable.51 Without meaningful support for these price reasonableness decisions, the court is compelled to conclude that the contracting officer failed to satisfy his obligation under FAR § 14.408-2 and the Solicitation to determine that the award prices were fair and reasonable. This failure, combined with the agency’s inadequate consideration of price, clearly prejudiced all the plaintiffs—even those with relatively higher prices—as it impacted not only the tradeoff decisions that were made by the agency, but led to awards being made to offerors who potentially were ineligible to receive those awards under the FAR. The agency’s inadequate treatment of price, therefore, constitutes yet another reason why the award decisions here must be set aside.

    4. Flaws in the Tradeoff Analysis

    Lastly, plaintiffs hotly contest GSA’s best-value, tradeoff decisions. To be sure, as noted at the outset, plaintiffs have a significant burden of showing error in that regard because a court must accord considerable deference to an agency’s best-value decision in trading off price with other factors. See Info. Tech. & Apps. Corp. v. United States, 51 Fed.Cl. 340, 343, 346 (2001), aff'd, 316 F.3d 1312 (Fed.Cir.2003); see also E.W. Bliss Co. v. United States, 77 F.3d at 449; Bean Stuyvesant, LLC v. United States, 48 Fed.Cl. 303, 320 (2000); Cubic Def. Sys., Inc. v. United States, 45 Fed.Cl. 450, 458 (1999). In the court’s view, they have met that burden.

    The best value process is described in FAR § 15.101-1 as follows:

    (a) A tradeoff process is appropriate when it may be in the best interest of the Government to consider award to other than the lowest priced offeror or other than the highest technically rated offeror.
    (b) When using a tradeoff process, the following apply:
    (1) All evaluation factors and significant subfactors that will affect contract award and their relative importance shall be clearly stated in the solicitation; and
    (2) The solicitation shall state whether all evaluation factors other than cost or price, when combined, are significantly more important than, approximately equal to, or significantly less important than cost or price.
    (c) This process permits tradeoffs among cost or price and non-cost factors and allows the Government to accept other than the lowest priced proposal. The perceived benefits of the higher priced proposal shall merit the additional cost, and the rationale for tradeoffs must be documented in the file in accordance with 15.406.

    The documentation requirement is reiterated in FAR § 15.308, which states that “[t]he source selection decision shall be documented, and the documentation shall include the rationale for any business judgments and tradeoffs made or relied on by the SSA, including benefits associated with additional costs.” See also FAR § 15.305(a). Even where, as here, a solicitation provides that technical criteria are more important than price, an agency must select a lower-priced, lower technically scored proposal if it reasonably decides that the premium associated with selecting the higher-rated proposal is unwarranted. See Magney Grande Distribution, Inc., 2001 C.P.D. ¶ 56, 2001 WL 287038, at *3 (2001); Oshkosh Truck Corp., 93-2 C.P.D. ¶ 115, 1993 WL 335049, at *6 (1993).

    In determining whether an agency has complied with these regulations, this court must determine whether the agency’s decisions are grounded in reason. TRW, Inc., 98 F.3d at 1327; Comprehensive Health Servs., Inc. v. United States, 70 Fed.Cl. at 721. Various concepts serve to add a skeletal framework to this inquiry.

    First, the regulation requires the agency to make a business judgment as to whether the higher price of an offer is worth the technical benefits its acceptance will afford. See, e.g., TRW, Inc., 98 F.3d at 1327; Dismas Charities, Inc., 61 Fed.Cl. at 203. Doing this, the decisional law demonstrates, obliges the agency to do more than simply parrot back the strengths and weaknesses of the competing proposals—rather, the agency must dig deeper and determine whether the relative strengths and weaknesses of the competing proposals are such that it is worth paying a higher price.52 Second, in performing the tradeoff analysis, the agency need neither assign an exact dollar value to the worth associated with the technical benefits of a contract nor otherwise quantify the non-cost factors. FAR § 15.308 (“the documentation need not quantify the tradeoffs that led to the decision”); Widnall v. B3H Corp., 75 F.3d 1577, 1580 (Fed.Cir.1996).53 But, this is not to say that the magnitude of the price differential between the two offers is irrelevant—logic suggests that as that magnitude increases, the relative benefits yielded by the higher-priced offer must also increase. See Beneco Enters., Inc., 2000 C.P.D. ¶ 69, 1999 WL 1713451, at *5 (1999). To conclude otherwise threatens to “minimize[] the potential impact of price” and, in particular, to make “a nominal technical advantage essentially determinative, irrespective of an overwhelming price premium.” Coastal Sci. and Eng’g, Inc., 89-2 C.P.D. ¶ 436, 1989 WL 237564, at *2 (1989); see also Lockheed Missiles & Space Co., 4 F.3d at 959-60. Finally—and many cases turn on this point—the agency is compelled by the FAR to document its reasons for choosing the higher-priced offer. Conclusory statements, devoid of any substantive content, have been held to fall short of this requirement, threatening to turn the tradeoff process into an empty exercise.54 Indeed, apart from the regulations, generalized statements that fail to reveal the agency’s tradeoff calculus deprive this court of any basis upon which to review the award decisions. See Johnson Controls World Servs., 2002 WL 1162912, at *6; Satellite Servs., Inc., 2001 C.P.D. ¶ 30, at *9-11; Si-Nor, Inc., 2000 C.P.D. ¶ 159, 1999 WL 33210196, at *3 (1999).55

    Measured by these standards, it is apparent that most of the relevant tradeoff decisions made in the source selection process here were materially flawed. The tradeoff analysis was deficient, first, because, in most instances, “it failed to indicate whether the government would receive benefits commensurate with the price premium it proposed to pay.” Lockheed Missiles, 4 F.3d at 959-60. While the SSA certainly found that a given technical proposal was higher ranked than another, he did not explain whether the relatively minor differences in technical scores evidenced a true technical superiority. Nor did he explain how or why, consistent with the terms of the Solicitation, the supposed added value of a given proposal was worth its extra costs—the core inquiry here. Thus, while there was discussion of the differences in technical merit, there was scant discussion of the significance of those differences in terms of contract performance or agency needs, and even less as to whether it was worth paying a particular premium to obtain that advantage. The decisional law suggests this was not enough to meet the requirements of the FAR.56 Simple math, indeed, reveals that the agency apparently was willing to pay a premium of as much as $3.6 million for a technical-ranking advantage of a mere one-tenth of a point—seemingly, a huge premium in a procurement in which the lowest-priced award went for $21,059,803.57 In contending that the analyses here, nonetheless, were adequate, defendant cites the tradeoff analysis that was performed for ManTech. And, indeed, that analysis discussed, in some detail, the importance of ManTech’s secure and specialized personnel, its automated tools and its laboratories, and other facilities to the success of specifically-identified missions of the Department of Defense. But, if anything, this comparison only serves to highlight the deficiencies in virtually all the other tradeoff decisions in the record, which conspicuously lack precisely the type of cost-benefit discussion found in the ManTech analysis.58

    Of course, it is conceivable that the SSA, in his own mind, made such cost/benefit comparisons, but merely failed to capture them on paper. But, that too would violate the FAR and its documentation requirements. To comply with the FAR, documentation must “include the rationale for any business judgments and tradeoffs made, including the benefits associated with additional costs.” Si-Nor, 1999 WL 33210196, at *3; see also Comprehensive Health Servs., 70 Fed.Cl. at 725; Park Tower Mgmt., Ltd. v. United States, 67 Fed.Cl. 548, 561 (2005). In the present case, in almost every relevant instance, the record includes little or no documentation, evidence or explanation of the benefits that the agency associated with the awardee’s supposedly superior technical ratings which would outweigh what, in many circumstances, was a significantly higher price. Si-Nor, 1999 WL 33210196, at *3 (summary conclusion that the awardee’s “outstanding past performance outweighs the lower price offered by” the protestor held to be inadequate). Absent a more detailed rationale, there is simply no way for this court to determine whether the agency, in fact, conducted a tradeoff analysis that adequately reflected price and was not arbitrary. And that, in itself, is fatal to the agency’s tradeoff analysis here.

    Defendant offers up several surrogates for measuring the reasonableness of the tradeoff analyses conducted by GSA. For example, it repeatedly emphasizes the sheer number of pages that the agency dedicated to that purpose—going so far as to include an appendix to its opening brief listing the specific pages that the agency spent on each offeror. But, while having very little space dedicated to tradeoff comparisons might be indicative of a problem, the converse is not true—the court cannot, like an indolent schoolmaster grading a term paper, assess the reasonableness of a tradeoff analysis by the weight of the paper involved. Indeed, an examination of the tradeoff exhibits here reveals that, but for a few select passages, they simply reiterate descriptions of the procurement and adjectival ranking and discriminator information found in the technical evaluation portion of the source selection documents. Again, defendant makes much of the sheer volume of this information. But, if all an agency need do, in order to select a higher-priced offer over a lower-priced one, is to recite that the former has a better technical ranking than the latter, then performing a tradeoff analysis is truly a waste of good paper. A plain reading of the FAR reveals that more is envisioned—the agency must document how the perceived benefits of the higher priced proposal merit paying the additional cost. The decisional law, as well as common sense, suggest that this entails more than what is represented by relisting the adjectival ratings that gave rise to the technical ranking and then flatly concluding, as often was done here, that the government has “determined that [X]’s technical merit was not worth its lower price when compared with other Offerors of higher technical merit and higher prices.” Indeed, such formulaic incantations particularly miss the mark where, as here, they are invoked whether the prices compared are close together or a gulf apart.

    Finally, some discussion is warranted not about the content of the comparisons, but rather about which firms were compared. Plaintiffs have mounted a multi-pronged assault on the basic formula used by GSA in establishing most of the tradeoff pairings. Recall that the formula had two parts, to wit: If the evaluated offeror was within the presumptive award group, it was compared to the three highest technically ranked offerors not in the presumptive award group that had a lower price than the evaluated offeror. If the evaluated offeror was not within the presumptive award group, it was compared to the three lowest scoring offerors within the presumptive award group that had a higher price than the evaluated offeror. As can readily be deduced, under this formula, each offeror was initially compared to a minimum of three offerors; in fact, at least on paper, certain offerors with high prices were compared to many others. Plaintiffs, for their part, claim that this formula again placed undue emphasis on the technical rankings and failed adequately to take into account price. They also complain about particular comparisons between them and other offerors that were or were not made. Some go so far as to argue that the agency was compelled to compare all the offerors to each other.
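    For readers who find the two-part formula easier to follow in procedural form, the following sketch restates it with hypothetical scores and prices; the offeror data and the presumptive group below are invented for illustration.

```python
# A minimal sketch of the two-part pairing formula described above. "score" stands in
# for the weighted BCP/PP average and "price" for the total evaluated price; all of the
# data below is hypothetical.

def tradeoff_pairings(offerors, presumptive, evaluated, n=3):
    """Return the offerors against which `evaluated` would be compared under the formula."""
    price = offerors[evaluated]["price"]
    others = {name: data for name, data in offerors.items() if name != evaluated}
    if evaluated in presumptive:
        # Compare to the highest technically ranked non-presumptive offerors with lower prices.
        pool = [(data["score"], name) for name, data in others.items()
                if name not in presumptive and data["price"] < price]
        pool.sort(reverse=True)
    else:
        # Compare to the lowest scoring presumptive awardees with higher prices.
        pool = [(data["score"], name) for name, data in others.items()
                if name in presumptive and data["price"] > price]
        pool.sort()
    return [name for _, name in pool[:n]]

offerors = {                                    # hypothetical scores and prices
    "A": {"score": 4.101, "price": 45_000_000},
    "B": {"score": 4.093, "price": 27_000_000},
    "C": {"score": 4.085, "price": 24_000_000},
    "D": {"score": 4.080, "price": 22_000_000},
    "E": {"score": 4.071, "price": 21_000_000},
}
presumptive = {"A", "B"}
print(tradeoff_pairings(offerors, presumptive, "A"))   # ['C', 'D', 'E']
print(tradeoff_pairings(offerors, presumptive, "C"))   # ['B', 'A']
```

    As the court explains below, any such mechanical pairing is only as sound as its inputs: reliable past performance data, statistically meaningful rankings, and genuine consideration of price.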

    To an extent, the arguments regarding the use of the formula have been overtaken by events. Thus, it makes little sense, at this point, to debate fine points regarding the merits of particular comparisons (e.g., IBM v. STG) when the court already has concluded that the past performance survey information developed by the agency was unreliable, that the multi-digit technical rankings suffered from some degree of false precision, and that the agency inadequately considered price. In fact, though, these overarching rulings have ramifications for the tradeoff pairings that might be made in the future. As a threshold matter, that is so because it simply is not true that the agency was compelled to compare all the offerors to each other—nothing in the FAR, nor any case construing it, remotely suggests this to be the case.59 Rather, like other aspects of the best value process, the selection of particular pairings is undoubtedly committed to agency discretion. In exercising that discretion, an agency not only can, but probably must, compare the weakest of the “presumptive awardees” with the strongest of the unsuccessful offerors. In determining how to apply this formula, however, the agency must employ past performance information that is reliable. It must recognize limitations in its statistics so as to better identify offerors with technical rankings that are essentially equivalent. And, it must take into account price—not only in deciding whether a given premium is appropriately paid for a given technical ranking differential, but also in preliminarily identifying which offerors present the weakest “best value” cases and thus the greatest need for comparisons. Indeed, care must be taken lest the agency, in determining a “presumptive” group of awardees and requiring “compelling” reasons for displacing them, so diminish price as to render its consideration insignificant.60

    In sum, the court finds that GSA’s best value determinations were also arbitrary, capricious and otherwise contrary to law.

    C. Redux

    So where does this leave us? In his book, Some Economic Factors in Modern Life, Sir Josiah Charles Stamp described the potential for over-reliance on statistics in what would become known as “Stamp’s Law of Statistics”:

    The individual source of statistics may easily be the weakest link. Harold Cox tells a story of his life as a young man ... He quoted some statistics to a Judge, an Englishman, and a very good fellow. [The Judge] said, Cox, when you are a bit older, you will not quote statistics with that assurance. The Government are very keen on amassing statistics—they collect them, add them, raise them to the nth power, take the cube root and prepare wonderful diagrams. But what you must never forget is that every one of those figures comes in the first instance from the village watchman, who just puts down what he damn pleases.

    Josiah Stamp, Some Economic Factors in Modern Life 258-59 (1929). The situation here, of course, is not as bad. Indeed, the court harbors no doubts that the agency here made a good faith effort to distinguish between the sixty-two offerors which responded to the Solicitation. And no party here has argued otherwise. Yet, on a variety of planes, the agency’s effort came up well short, resulting in award decisions that were arbitrary, capricious and otherwise contrary to law.

    Defendant intimates that the court should afford the agency more slack than usual, on account of the size of this procurement and the number of offerors to be evaluated. But, given the extraordinary breadth of discretion already afforded to agencies in government procurements, it is hard to fathom what form a still more relaxed rule of deference might take. Would such a rule permit the adoption of procedures that would allow the agency to rely on performance information that is unverified and unresponsive to its stated evaluation criteria? Not, it would seem, without a wholesale revision of the fairness principle embodied in CICA and the FAR—“a cornerstone of effective competition.” Cibinic & Nash, supra, at 899. Would such a rule allow the agency to treat demonstrably imprecise statistics as being precise? Not unless deference somehow magically makes insignificant digits significant. And would this heightened deference permit the agency to dispense with any reasonable consideration of price, leaving that question for a later day? Certainly not, again, without some substantial modification of CICA and FASA—and with Congress heading in the opposite direction in tending, in recent years, toward enhancing, rather than diminishing, the importance of price. But whatever the reach or meaning of the salvific rule defendant would have this court apply, one thing is certain—it has no foundation in the Solicitation, the FAR or the governing procurement statutes. Per contra. While an agency certainly may choose to pursue a GWAC pursuant to its mandate to “efficiently fulfill the Government’s requirements,” 10 U.S.C. § 2304(3) and 41 U.S.C. § 253(h), it may not obtain efficiencies in derogation of the FAR and other governing statutes. Nor, as should be obvious, does the raw size of a procurement afford an agency the license to engage in what otherwise would be arbitrary and capricious conduct.

    Lastly, a further word about prejudice. All the parties agree that the nature of the prejudice analysis is somewhat different in the context of a GWAC than in the ordinary bid protest case. Fundamentally, the question here is not simply whether, but for the errors associated with their respective case, a protester had a substantial chance of receiving a contract award instead of one of the current awardees, see, e.g., Alfa Laval Separation, 175 F.3d at 1367. Rather, the court must also ask whether, but for those errors, the same protester had a substantial chance of receiving a contract award in addition to the other awardees. These twin inquiries take on added dimensions in a case in which the correction of a systemic error might not only improve the protester’s evaluation, but diminish that of a current awardee, or even eliminate that awardee from further consideration altogether (as might be true under a proper price reasonableness analysis). With all of these varied dimensions, and since it is beyond peradventure here that the slightest shifting of a single adjectival rating could have a significant impact not only on the ranking of a given protester, but also on who they might be compared with in a tradeoff analysis, the court is left with the firm conviction that the combined impact of the errors encountered here clearly prejudiced each of the protesters.

    D. Injunctive Relief

    Having concluded that the instant procurement was legally flawed and that each plaintiff, to varying degrees, was thereby prejudiced, the court must determine whether plaintiffs have made three additional showings to warrant injunctive relief, to wit, that: (i) they will suffer immediate and irreparable injury; (ii) the public interest would be better served by the relief requested; and (iii) the balance of hardships on all the parties favors them. Idea Intern., Inc. v. United States, 74 Fed.Cl. 129, 137 (2006); Bannum, 60 Fed.Cl. at 730; Seattle Sec. Servs., 45 Fed.Cl. at 571. No one factor is dispositive to the court’s inquiry as “the weakness of the showing regarding one factor may be overborne by the strength of the others.” FMC Corp. v. United States, 3 F.3d 424, 427 (Fed.Cir.1993); see also Seattle Sec. Services, 45 Fed.Cl. at 571. In the instant case, the existence of irreparable injury to plaintiffs, the balancing of harms in favor of the plaintiffs, and the public interest all lead this court to grant injunctive relief to plaintiffs.

    1. Irreparable Injury

    When assessing irreparable injury, “[t]he relevant inquiry in weighing this factor is whether plaintiff has an adequate remedy in the absence of an injunction.” Magellan Corp. v. United States, 27 Fed.Cl. 446, 447 (1993). Plaintiffs argue that they will suffer irreparable harm if an injunction is not granted, because the only other available relief—the potential for recovery of bid preparation costs—would not compensate them for the loss of valuable business on the Alliant contract. This type of loss, deriving from a lost opportunity to compete on a level playing field for a contract, has been found sufficient to prove irreparable harm. See Impresa Construzioni Geom. Domenico Garufi v. United States, 52 Fed.Cl. 826, 828 (2002); United Int’l Investigative Servs., Inc. v. United States, 41 Fed.Cl. 312, 323 (1998) (“[T]he opportunity to compete for a contract and secure any resulting profit has been recognized to constitute significant harm.”); Magnavox Elec. Sys. Co. v. United States, 26 Cl.Ct. 1373, 1379 (1992) (same); Bean Dredging Corp. v. United States, 22 Cl.Ct. 519, 524 (1991) (bidder would be irreparably harmed because it “could recover only bid preparation costs, not lost profits, through an action at law”).61 Accordingly, plaintiffs have adequately demonstrated that they will suffer irreparable harm if injunctive relief is not provided.

    2. Balance of Hardships

    Under this factor, “the court must consider whether the balance of hardships leans in the plaintiffs’ favor,” requiring “a consideration of the harm to the government and to the intervening defendant.” Reilly’s Wholesale Produce v. United States, 73 Fed.Cl. 705, 715 (2006); see also Heritage of America, LLC v. United States, 77 Fed.Cl. 66, 79 (2007); PGBA, 57 Fed.Cl. at 663. Defendant and defendant-intervenors intimate that enjoining the performance of the contracts would delay implementation of a contract designed to provide state-of-the-art technology to the entire Federal government. But, this court has observed that “‘only in an exceptional case would [such delay] alone warrant a denial of injunctive relief, or the courts would never grant injunctive relief in bid protests.’” PGBA, 57 Fed.Cl. at 663 (quoting Ellsworth Assocs., Inc. v. United States, 45 Fed.Cl. 388, 399 (1999)); see also Reilly’s Wholesale, 73 Fed.Cl. at 716. This is not such an “exceptional case,” for a variety of reasons.

    First, defendant has not indicated that setting aside the Alliant awards would work an immediate hardship on agencies looking to fulfill their technology requirements. In fact, the record reveals that the existing technology contracts will not expire until at least December of this year—and, without information indicating otherwise, it is reasonable to assume that those contracts, like most government contracts, may be extended for a period of time should circumstances warrant. See, e.g., PGBA, 57 Fed.Cl. at 663. Second, the delay and administrative burdens associated with setting aside the current awards and requiring the agency to take curative actions are, of course, problems of defendant’s own making. Those ill effects, moreover, may be ameliorated by a properly-framed injunction that, as defendant has repeatedly urged, is narrowly tailored to the problems encountered. See, e.g., Reilly’s Wholesale, 73 Fed.Cl. at 716. In these circumstances, the balance of hardships tilts in the plaintiffs’ favor.

    3. Public Interest

    Plaintiffs also contend that the public interest will be served by granting the requested preliminary injunctive relief. "Clearly, the public interest in honest, open, and fair competition in the procurement process is compromised whenever an agency abuses its discretion in evaluating a contractor's bid." PGBA, 57 Fed.Cl. at 663; see also Rotech Healthcare, Inc. v. United States, 71 Fed.Cl. 393, 430 (2006); Cincom Sys., Inc. v. United States, 37 Fed.Cl. 266, 269 (1997); Magellan Corp., 27 Fed.Cl. at 448. In the present case, the public's interest likewise lies in preserving the integrity of the competitive process, particularly in the context of a massive procurement that will impact the public potentially for a decade.

    III. CONCLUSION

    In closing, the court must note that the presentations made by the many counsel involved in this case were extraordinarily help*503ful—both greatly simplifying and complicating the ultimate resolution of this case. The court further wishes to note that the cooperation of those same attorneys was essential to the efficient resolution of this matter.

    Based on the foregoing:

    1. Plaintiffs’ motions for judgment on the administrative record are GRANTED, in part, and DENIED, in part, and defendant’s cross motion for judgment on the administrative record is DENIED, in part, and GRANTED, in part.

    2. Defendant, acting by and through the General Services Administration, is hereby ENJOINED from performing or allowing others to perform on the contract(s) awarded pursuant to Alliant Solicitation No. TQ2006MCB0001. Said parties also must suspend any related activities that may result in additional obligations being incurred by the United States under the contract(s).

    3. Defendant, acting by and through the General Services Administration, is hereby ENJOINED from:

    a. Relying, in making future award decisions pursuant to the Alliant Solicitation, on the results of the survey conducted by Calyptus, unless defendant, consistent with this opinion, confirms the accuracy of those results and supplements them with information that is responsive to the past performance evaluation criteria specified in the Solicitation;
    b. Relying, in making future award decisions pursuant to the Alliant Solicitation, on any BCP/PP combined scores similar to those previously derived herein unless defendant, consistent with this opinion, confirms the accuracy of any such scores and, in particular, determines which digits of such scores are significant enough to be relied upon;
    c. Failing, in making future award decisions pursuant to the Alliant Solicitation, to consider price and price reasonableness to the extent required by applicable statutes, the FAR and the Solicitation, as interpreted by this opinion; and
    d. Failing, in making future award decisions pursuant to the Alliant Solicitation, to make and fully document tradeoff decisions to the extent required by the FAR and the Solicitation, as interpreted by this opinion.

    4. Nothing herein shall be deemed to prevent defendant and all or some of the plaintiffs from mutually agreeing to resolve this matter in such fashion as they deem appropriate.

    5. This opinion shall be published as issued after 3:00 P.M. on March 5, 2008, unless the parties identify protected and/or privileged materials subject to redaction prior to said time and date. Said materials shall be identified with specificity, both in terms of the language to be redacted and the reasons for that redaction (including appropriate citations to authority).

    IT IS SO ORDERED.

    Appendix A—Technical Acronyms

    A Acceptable Confidence

    ACO Administrative Contracting Officer

    BCP Basic Contract Plan

    BCPET Basic Contract Plan Evaluation Team

    CICA Competition in Contracting Act

    COOP Continuity of Operations

    CPET Cost/price Evaluation Team

    EH Exceptionally High Confidence

    EVMS Earned value management system

    FAR Federal Acquisition Regulation

    FASA Federal Acquisition Streamlining Act

    GWAC Government-wide Acquisition Contract

    IGCE Independent Government Cost Estimate

    IT Information Technology

    L/N Little to No Confidence

    *504M Marginal Confidence

    MA/IDIQ Multiple-award of indefinite delivery, indefinite quantity

    MIS Management Information System

    OCONUS Outside the continental United States

    PCO Procuring Contracting Officer

    PP Past Performance

    PPET Past Performance Evaluation Team

    PPIRS Past Performance Information Retrieval System

    S Significant Confidence

    SSA Source Selection Authority

    SSD Source Selection Decision

    SSDM Source Selection Decision Memorandum

    SSET Source Selection Evaluation Team

    SSP Source Selection Plan

    TET Technical Evaluation Team

    TOR Task Order Request

    Appendix B—Trade Off List by Offeror

    Exhibit C-1 L-3 Communications Titan Corporation (L-3)—tradeoff with Stanley, MacB, Apptis

    Exhibit C-2 Harris Corp., Government Communications Systems Division (Harris)—tradeoff with Stanley, MacB, Apptis

    Exhibit C-3 SI International, Inc. (SII)—tradeoff with SwRI, Smartronix, STG

    Exhibit C-4 System Research and Application Corporation (SRA)—tradeoff with Stanley, SwRI, Serco

    Exhibit C-5 Booz Allen Hamilton, Inc. (BAH)—tradeoff with Stanley, MacB, Apptis

    Exhibit C-6 General Dynamics One Source, LLC (GD)—tradeoff with Stanley, MacB, Apptis

    Exhibit C-7 BAE Systems, Inc. (BAE)—tradeoff with MacB, SwRI, Smartronix

    Exhibit C-8 Computer Sciences Corporation (CSC)—tradeoff with Centech, Tybrin, Stanley

    Exhibit C-9 I.T.S. Corporation (ITS)—tradeoff with Stanley, MacB, Apptis

    Exhibit C-10 Science Applications International Corporation (SAIC)—tradeoff with Centech, Tybrin, Stanley

    Exhibit C-11 Indus Corporation (Indus)—tradeoff with SwRI, Smartronix, STG

    Exhibit C-12 TASC, Inc. (TASC)—tradeoff with Stanley, MacB, Apptis

    Exhibit C-13 ManTech Advanced Systems International, Inc. (ManTech)—tradeoff with Centech, Tybrin, Stanley

    Exhibit C-14 QSS Group, Inc. (QSS)—tradeoff with STG, Lucent, Prosoft

    Exhibit C-15 Lockheed Martin Integrated Systems, Inc. (Lockheed)—tradeoff with SwRI, Smartronix, STG

    Exhibit C-16 Bearing Point, Inc. (Bearing Point)—tradeoff with Stanley, MacB, CGI

    Exhibit C-17 Raytheon Company (Raytheon)—tradeoff with Stanley, MacB, SwRI

    Exhibit C-18 Accenture National Security Services LLC (Accenture)—tradeoff with Stanley, MacB, SwRI

    Exhibit C-19 NCI Information Systems, Inc. (NCI)—tradeoff only with Lucent

    Exhibit C-20 Unisys Corporation (Unisys)—tradeoff with MacB, SwRI, Serco

    Exhibit C-21 Dynamic Research Corporation (DRC)—tradeoff with MacB, SwRI, Serco

    Exhibit C-22 International Business Machines Corporation (IBM)—tradeoff with Stanley, MacB, Apptis

    Exhibit C-23 MTC Technologies, Inc. (MTC)—tradeoff with SwRI, Smartronix, STG

    Exhibit C-24 CACI, Inc.—Federal (CACI)—tradeoff with SwRI, Smartronix, STG

    Exhibit C-25A RS Information Systems, Inc. (RSIS)—tradeoff with SwRI, STG, T3 Alliance

    Exhibit C-25B AT & T Government Solutions, Inc. (AT & T)—tradeoff with Stanley, MacB, SwRI

    *505Exhibit C-27 (Tetra Tech) Advanced Management Technology, Inc. (TtAMTI)—tradeoff with SwRI, Smartronix, STG

    Exhibit C-28 Alion Science & Technology Corporation (Alion)—tradeoff with MacB, SwRI, Serco

    Exhibit C-29 The Centech Group (Centech)—tradeoff with ManTech, SAIC, CSC Second

    Exhibit C-30 Tybrin Corporation (Tybrin)—tradeoff with ManTech, SAIC, CSC

    Exhibit C-31 Electronic Data Systems Corporation (EDS)—tradeoff with STG, Lucent, Prosoft

    Exhibit C-32 Stanley Associates, Inc. (Stanley)—tradeoff with AT & T, IBM, Accenture

    Exhibit C-33 Macaulay-Brown, Inc. (MacB)—tradeoff with Alion, AT & T, IBM

    Exhibit C-34 ARINC Engineering Services, Inc. (ARINC)—tradeoff with ManTech, SAIC, CSC

    Exhibit C-35 CGI Federal, Inc. (CGI)—tradeoff with Bearing Point, ManTech, SAIC

    Exhibit C-36 Apptis, Inc. (Apptis)—tradeoff with IBM, Bearing Point, ManTech

    Exhibit C-37 Southwest Research Institute (SwRI)—tradeoff with Alion, TtAMTI, AT & T

    Exhibit C-38 Serco, Inc. (Serco)—tradeoff with Alion, AT & T, IBM

    Exhibit C-39 CNS, Inc. (CNSI)—tradeoff with ManTech, SAIC, CSC

    Exhibit C-40 Perot Systems Government Services (PS)—tradeoff with ManTech, SAIC, CSC

    Exhibit C-41 Smartronix, Inc. (SMX)—tradeoff with Alion, TtAMTI, AT & T

    Exhibit C-42 Advanced Technology Systems, Inc. (ATS)—tradeoff with Alion, AT & T, IBM

    Exhibit C-43 Keane Federal Systems, Inc. (Keane)—tradeoff with Alion, AT & T, IBM

    Exhibit C-44 Analytical Services, Inc. (ASI)—tradeoff with IBM, Bearing Point, ManTech

    Exhibit C-45 STG, Inc.—tradeoff with EDS, Alion, TtAMTI

    Exhibit C-46 Nortel Government Solutions Inc. (NGS)—tradeoff with Alion, TtAMTI, AT & T

    Exhibit C-47 Federal Network Systems, LLC (FNS/Verizon)—tradeoff with Bearing Point, ManTech, SAIC

    Exhibit C-48 Alliant Solutions, LLC—tradeoff with Alion, TtAMTI, AT & T

    Exhibit C-49 American Systems Corporation—tradeoff with Alion, TtAMTI, AT & T

    Exhibit C-50A Artel, Inc.—tradeoff with Alion, TtAMTI, AT & T

    Exhibit C-50B T3 Alliance—tradeoff with Alion, TtAMTI, AT & T

    Exhibit C-52 Lucent Technologies, Inc. (Lucent)—tradeoff with EDS, Alion, TtAMTI

    Exhibit C-53 McNeil Technologies, Inc. (McNeil)—tradeoff with ManTech, SAIC, CSC

    Exhibit C-54 TKC Communications, LLC (TKCC)—no tradeoff as TKCC is highest priced proposal

    Exhibit C-55 Professional Software Engineering, Inc. (Prosoft)—tradeoff with EDS, Alion, TtAMTI

    Exhibit C-56 Trawick & Associates (Trawick)—tradeoff with EDS, Alion, TtAMTI

    Exhibit C-57 Engineering and Professional Services, Inc. (EPS)—tradeoff with EDS, Alion, TtAMTI

    Exhibit C-58 Abacus Technology Corporation (Abacus)—tradeoff with Alion, TtAMTI, AT & T

    Exhibit C-59 Honeywell Technology Solutions, Inc. (HTSI)—tradeoff with Alion, AT & T, IBM

    Exhibit C-60 Pearson Alliant—tradeoff with Alion, TtAMTI, AT & T

    Exhibit C-61 SYS Technologies, Inc. (SYS)—tradeoff with ManTech, SAIC, CSC

    Exhibit C-62 Communication Technologies, Inc. (COMtek)—tradeoff with IBM, Bearing Point, ManTech

    . A listing of the technical acronyms used in this opinion may be found in Appendix A.

    . See Knowledge Connections, Inc. v. United States, 79 Fed.Cl. 750, 754 (2007) (describing the procurement process under the Clinger-Cohen Act). According to the record, Alliant and a related GWAC for small businesses are the successors to two existing contracts: (i) Applications 'n Support of Widely-Diverse End User Requirements (ANSWER); and (ii) Millenia, due to expire in December 2008 and April 2009, respectively.

    . Originally, the Solicitation indicated that no protest under the Federal Acquisition Regulations (FAR) (48 C.F.R.) would be authorized in connection with the issuance or proposed issuance of a task order, except on the grounds that the order increased the scope, period, or maximum value of the Alliant contract. Moreover, another part of the Solicitation indicated that ”[f]ormal evaluation plans or scoring of quotes or offers” on task orders "are not required.” However, Congress recently enacted section 834(a) of the National Defense Authorization Act for Fiscal Year 2008, Pub.L. No. 110-181, 122 Stat. 3, which amended 10 U.S.C. § 2304(c) to provide, over a three-year test period, for GAO protests of task orders valued in excess of $10,000,000. That provision is effective 120 days after the statute’s enactment (January 28, 2008), id. at § 834(a)(2)(C), leading defendant and certain of the parties to conclude that it will apply to Alliant task orders.

    . As recounted in the Source Selection Decision (SSD), the number of expected awards "was based upon market research, responses to the [requests for information], outreach meetings, and discussions with potential customers.” The SSD further indicated that ”[t]his number was arrived at to assure adequate competition under the GWAC, meet the objectives of the procurement, and finally to provide the industry a reasonable basis upon which to bid and formulate both their pricing and basic contract plan." Under the Solicitation, GSA reserved the right, over the term of the contract, to terminate particular awardees (an "off ramp”) or to add additional awardees (an "on ramp”) through a new procurement, as the government’s needs dictate.

    . Similar language was contained in the Source Selection Plan (SSP).

    . Unbeknownst to the offerors, the SSP indicated that “the Government intends to check nine (9) efforts selected from Tables 1 and 2, including the three efforts which the Offeror has identified.” The SSP further indicated that "[i]n the event that an evaluator is unable to reach the point-of-contact for a targeted effort after three attempts (via phone or email), allowing 24 hours between each attempt (which is considered ‘reasonable’), the reference will be marked as ‘not available’ (N/A) and an alternate effort will be substituted.” The SSP provided detailed rules on how an alternate effort should be selected.

    . Consistent with the requirements of the FAR, the SSP indicated that—

    If an evaluator receives adverse past performance information from a point-of-contact during an interview, the evaluator will provide the adverse information to the PCO, who will decide whether to contact the Offeror and give the Offeror an opportunity to "rebut" the adverse evaluation. The process and decision will be consistent with FAR 15.306.
    Offerors will not have the opportunity to comment on adverse information captured in a past performance system, such as PPIRS, as that opportunity has already been provided at the time of the evaluation [sic] submission in the system.

    . In the Solicitation, offerors were notified that "the General Service Administration may use the services of non-government evaluators."

    . Apart from the questions themselves, the interviewers received only the following guidance—

    Please encourage the reference to be as specific as possible about how the Offeror performed a task or accomplished a milestone. Attempt to identify specific performance metrics that were met or exceeded. For example, the following narrative ... “the Offeror exceeded the performance metric which decreased turnaround time by 25% on average” is better than just stating that the Offeror did a "good” job. If the reference identified problems, be sure to ask what corrective actions were taken, if any, to address the problem. The key is assessing the Offeror’s performance risk, so encourage the responder to describe how something was accomplished or why a certain action took place. Again, please encourage your interviewee to provide you with objective, quantifiable, or specific statements in your transcript whenever possible.

    (Emphasis in original). The instructions ended with the following bolded statement—“Please email the transcript of the response to the point-of-contact you interviewed to verify the responses.”

    . If the evaluators came across adverse past performance information in the interviews, the established procedures required them to forward the information to the Contracting Officer for him to determine whether to provide the affected offeror with an opportunity to respond to the negative information.

    . To the extent that any subfactor received a neutral score, the denominator in this calculation shrank by one (e.g., if the proposal had four numeric scores and one neutral score, the average was produced by adding the numeric scores and dividing by 4).
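    Expressed as an equation, the mechanism this footnote describes works out as follows. The individual subfactor scores below are hypothetical and chosen only for illustration; the neutral subfactor drops out of both the numerator and the denominator:

    \[ \text{average} \;=\; \frac{4 + 3 + 5 + 4}{4} \;=\; 4.0 \]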

    . It appears that the process used to establish the BCP rating deviated from the SSP, which provided—

    After the evaluations are completed by each evaluator, the team leaders will caucus members to form a consensus evaluation for each Offeror, that is, a single adjectival rating for each Technical Factor for each Offeror. Each team will rank order Offerors for their technical factor. Individual evaluator comments will be consolidated into potential discussion questions. BCP team members will evaluate an Offeror's resources using the specific adjectival ratings in Annex D[] as discriminators.

    . The SSP defined “Adequate Price Competition" using the following bullets:

    • Two or more responsible Offerors, competing independently, submit priced offers that satisfy the Government’s expressed requirement;
    • Award will be made to the Offeror whose proposal represents the best value where price is a substantial factor in source selection; and
    • There is no finding that the price of the otherwise successful Offeror is unreasonable. Any finding that the price is unreasonable must be supported by a statement of the facts and approved at a level above the Contracting Officer.
    It further indicated that “overall Price analysis will consist of:”
    • Adequate Price Competition;
    • []In relation to the IGCE for each loaded hourly labor rate price and each Median hourly labor rate price among all offers ...; the Government will identify those rates that vary significantly;
    • []In relation to the IGCE for Overall Price and the Median Overall Price among all offerors (Government and Contractor Site combined over 10 years); the Government will identify Overall Pricing that varies significantly.
    The plan added ”[w]hen the IGCE and Median vary significantly to one another, the Median labor rates and overall Median Price will take precedence for adequate price competition.”

    . The record contains only the exhibits (Exhibits Cs) summarizing the analysis as to plaintiffs, the defendant-intervenors and certain other offerors.

    . ManTech and SAIC had prices well above the IGCE and above two standard deviations from the mean overall price among offerors, and CSC’s price was above the IGCE and fell just *475below the second standard deviation from the mean.

    . Mr. Kempf noted that while Centech's proposal ranked 29th in terms of technical quality, it was one of the highest-priced proposals, at a rank of 56th out of 62 offerors. Highlighting particular features of that offer, Mr. Kempf indicated that—

    Centech's lesser technical merit than other Offerors chosen for award was centered in its Basic Contract Plan. In some cases, Centech did not have the breadth of resources that other successful Offerors brought to the table. It did not provide information on the size of its workforce and it had a comparatively small number of personnel (161) with security clearances with no comprehensive plan to boost this number nor did it provide any detailed information on its plans for providing a staff with security clearances when needed. Though Centech indicated that it had a "significant" OCONUS presence for a company of its size, its proposal did not demonstrate an understanding or the ability to quickly staff an OCONUS task order request (TOR).

    Mr. Kempf's comments regarding Stanley were less kind. Thus, he stated that "[w]hen compared with other Offeror's, Stanley's proposal was, in general, undistinguished," adding that "[t]here was nothing remarkable in its area of corporate commitment and its lack of innovation was noted in several evaluation subfactors." In addition, after summarizing negative comments received from several of Stanley's past performance references, Mr. Kempf stated that "[t]here was noted risk, based on Stanley's track record, that Stanley may not have the innovative tools and methodologies to support Alliant customers when compared to other Offerors that employed innovative and very effective quality assurance techniques, including highly effective metrics and quality controls."

    . Several parties attribute the delay in filing their complaints to the fact that it took the GAO a number of weeks to dismiss all the Alliant protests pending before it. On October 1, 2007, GAO notified all the protesters of the filing of Serco’s case and invited them to submit comments regarding whether their respective protests should be dismissed under § 21.11(b) of the GAO's bid protest regulations. The latter provision states: "GAO will dismiss any case where the matter involved is the subject of litigation before, or has been decided on the merits by, a court of competent jurisdiction.” After GAO received those comments, several weeks passed before it dismissed all the protests—the latest such dismissal occurred on October 31, 2007 (Apptis withdrew its protest on November 8, 2007, while still awaiting a dismissal). Of course, nothing prevented any of the plaintiffs from voluntarily dismissing their GAO protests and immediately filing in this court—as some did.

    . In an affidavit filed with the court explaining this action, Mr. Kempf admitted that a mistake had been made in assigning an adjectival rating to Stanley’s purchasing system. Correcting that single mistake increased Stanley’s Weighted BCP/PP Average Score from 3.769 to 3.821 and caused GSA to take "corrective action” in making an award to Stanley. Explaining this decision, Mr. Kempf stated—

    After this adjustment, I see little difference between the price and technical merit of Stanley when compared to the next highest ranked offeror selected for award: Alion. As I discussed in my Source Selection Decision Memorandum (SSDM), the agency’s announced goal was to award 25 to 30 contracts to the best value proposals. The corrected presentation of data, above, reveals a natural break between Stanley and CENTECH rather than between Alion and CENTECH, as in the original award. CENTECH and Tybrin have less technical merit and a significantly higher price thus not a best value for the Government; each has documented weaknesses or only acceptable Basic Contract plans that do [not] warrant a best value award as discussed in Exhibits C-29 and C-30 of the SSDM.

    Mr. Kempf did not comment upon or renege any of the criticisms that he had previously leveled at Stanley, to wit, that its proposal was "undistinguished,” "demonstrated a lack of innovation,” and involved "noted risk” by the SSET. In terms of technical ranking, the new "natural break” described by Mr. Kempf was between Stanley at 3.821 and CENTECH at 3.813.

    . Bannum was based upon RCFC 56.1, which was recently abrogated and replaced by RCFC 52.1. However, the latter rule was designed to incorporate the decision in Bannum. See RCFC 52.1, Rules Committee Note (June 20, 2006); see also NVT Techs., Inc. v. United States, 73 Fed.Cl. 459, 462 n. 3 (2006); Bice v. United States, 72 Fed.Cl. 432, 441 (2006).

    . In Bannum, the Federal Circuit noted that, in Banknote Corp. of Am. Inc. v. United States, 365 F.3d 1345, 1352-53 (Fed.Cir.2004), it had erroneously conflated the standards under RCFC 56 and 56.1, albeit in dicta. In this regard, the Bannum court stated that—

    Although it never reached the factual question of prejudice, the Banknote II court added that it is the trial and appellate courts' task to "determine whether there are any genuine issues of material fact as to whether the agency decision lacked a rational basis or involved a prejudicial violation of applicable statutes or regulations.” This language equates a RCFC 56.1 judgment to a summary judgment under RCFC 56 and is unnecessary to the Banknote II holding. Because the court decided the issue by an interpretation of the solicitation, e.g., making a legal determination, the court in Banknote II did not need to consider whether the trial court overlooked a genuine dispute or improperly considered the facts of that case.

    Bannum, 404 F.3d at 1354. Prior decisions of this court have made the same error. See, e.g., JWK Int'l Corp. v. United States, 49 Fed.Cl. 371, 387 (2001), aff'd, 279 F.3d 985 (Fed.Cir.2002). Indeed, while various decisions of this court refer to a "motion for summary judgment on the administrative record,” see, e.g., ManTech Telecom. & Info. Sys. Corp. v. United States, 49 Fed.Cl. 57, 64-65 (2001), aff'd, 30 Fed.Appx. 995 (Fed.Cir.2002), there was no such thing under former RCFC 56.1 (nor is there under RCFC 52.1), as properly construed.

    . See Murakami v. United States, 46 Fed.Cl. 731, 734 (2000); Aero Corp., S.A. v. United States, 38 Fed.Cl. 408, 410-11 (1997); see also Florida Power & Light Co. v. Lorion, 470 U.S. 729, 743-44, 105 S.Ct. 1598, 84 L.Ed.2d 643 (1985) ("The task of the reviewing court is to apply the appropriate APA standard of review ... to the agency decision based on the record the agency presents to the reviewing court;” where a record in an arbitrary and capricious review is incomplete, "the proper course, except in rare circumstances, is to remand to the agency for additional investigation or explanation. The reviewing court is not generally empowered to conduct a de novo inquiry into the matter being reviewed and to reach its own conclusions based on such an inquiry.”); FCC v. ITT World Communications, Inc., 466 U.S. 463, 469, 104 S.Ct. 1936, 80 L.Ed.2d 480 (1984); Camp v. Pitts, 411 U.S. 138, 142, 93 S.Ct. 1241, 36 L.Ed.2d 106 (1973) (the focal point of arbitrary and capricious review "should be the administrative record already in existence, not some new record made initially in the reviewing court”). As this court has explained elsewhere, Esch v. Yeutter, 876 F.2d 976 (D.C.Cir.1989)—often cited as authority for "supplementing” the administrative record—-is heavi*481ly in tension with these Supreme Court precedents and relies upon authorities (principally a law review article) that do not support its liberal view of supplementation. See Murakami, 46 Fed.Cl. at 734-36; ARINC Eng’g Servs., LLC v. United States, 77 Fed.Cl. 196, 201 n. 5 (2007); cf. GraphicData, LLC v. United States, 37 Fed.Cl. 771, 780 (1997) (‘‘[A] judge confronted with a bid protest case should not view the administrative record as a[n] immutable boundary that defines the scope of the case.”).

    . As noted by the Federal Circuit, "[p]rocurement officials have substantial discretion to determine which proposal represents the best value for the government." E.W. Bliss Co. v. United States, 77 F.3d 445, 449 (Fed.Cir. 1996); see also Galen Med. Assocs., Inc. v. United States, 369 F.3d 1324, 1330 (Fed.Cir.2004); TRW, Inc. v. Unisys Corp., 98 F.3d 1325, 1327-28 (Fed.Cir. 1996); LaBarge Prods., Inc. v. West, 46 F.3d 1547, 1555 (Fed.Cir. 1995) (citing Burroughs Corp. v. United States, 223 Ct.Cl. 53, 617 F.2d 590, 597-98 (1980)); EP Prods., Inc. v. United States, 63 Fed.Cl. 220, 223 (2005), aff'd, 163 Fed.Appx. 892 (Fed.Cir.2006); JWK Int'l Corp. v. United States, 52 Fed.Cl. 650, 655 (2002), aff'd, 56 Fed.Appx. 474 (Fed.Cir.2003).

    . In Banknote Corp., the Federal Circuit expounded upon these principles, as follows:

    Under the APA standard as applied in ... [bid protest] cases, "a bid award may be set aside if either (1) the procurement official's decision lacked a rational basis; or (2) the procurement procedure involved a violation of regulation or procedure." [Impresa Construzioni Geom. Domenico Garufi v. United States, 238 F.3d 1324, 1332-33 (Fed.Cir.2001)]. When a challenge is brought on the first ground, the test is "whether the contracting agency provided a coherent and reasonable explanation of its exercise of discretion, and the disappointed bidder bears a 'heavy burden' of showing that the award decision had no rational basis." Id. at 1332-33 (citations omitted). "When a challenge is brought on the second ground, the disappointed bidder must show a clear and prejudicial violation of applicable statutes or regulations." Id. at 1333.

    Banknote Corp., 365 F.3d at 1351; see also Seattle Sec. Servs., Inc. v. United States, 45 Fed.Cl. 560, 566 (2000); Analytical & Research Tech., Inc. v. United States, 39 Fed.Cl. 34, 42 (1997).

    . A review of Federal Circuit cases indicates that this prejudice analysis actually comes in two varieties. The first is that described above— namely, the ultimate requirement that a protestor must show prejudice in order to merit relief. A second prejudice analysis focuses more preliminarily on standing. In this regard, the Federal Circuit has held that “because the question of prejudice goes directly to the question of standing, the prejudice issue must be reached before addressing the merits.” Info. Tech. & Applications Corp. v. United States, 316 F.3d 1312, 1319 (Fed.Cir.2003); see also Myers Investigative Sec. Servs., Inc. v. United States, 275 F.3d 1366, 1370 (Fed.Cir.2002); Overstreet Elec. Co., Inc. v. United States, 59 Fed.Cl. 99, 109 (2003). Cases construing this second variation on the prejudice inquiry have held that it requires merely a "viable allegation of agency wrong doing,” with " 'viability' here turning on the reasonableness of the likelihood of prevailing on the prospective bid taking the protestor’s allegations as true.” McKing Consulting Corp. v. United States, 78 Fed.Cl. 715, 721 (2007); see also 210 Earll, LLC v. United States, 77 Fed.Cl. 710, 719 (2006); Textron, Inc. v. United States, 74 Fed.Cl. 277, 285 (2006). Because of the serious nature and breadth of the numerous allegations of error here, it requires little effort to conclude that all the plaintiffs in this case meet this preliminary "standing” threshold—and defendant, to its credit, does not argue otherwise.

    . See also Seattle Sec. Servs., 45 Fed.Cl. at 569; Incident Catering Servs., LLC, 2005 CPD ¶ 193, 2005 WL 3193685, at *4 (2005); U.S. Prop. Mgmt. Serv. Corp., 98-1 CPD ¶ 88, 1998 WL 126845, at *4 (1998); Wind Gap Knitwear, Inc., 95-2 C.P.D. ¶ 124, 1995 WL 368421, at *2 (1995).

    . The scripts were neither attached to the Solicitation nor otherwise provided to the offerors prior to the award decisions. Accordingly, the complaints made about them are timely. Compare Halter Marine, Inc. v. United States, 56 Fed.Cl. 144, 169 (2003); SWR, Inc., 97-2 C.P.D. ¶ 34 (1997), 1997 WL 422231, at *3 (1997).

    . For cases discussing procurements in which surveys were used, see, e.g., Southern Foods, Inc. v. United States, 76 Fed.Cl. 769, 779 (2007); Day & Zimmermann Servs., a Div. of Day & Zimmermann, Inc. v. United States, 38 Fed.Cl. 591, 609 (1997); S3 LTD, 2001 C.P.D. ¶ 165, 2001 WL 1105271, at *5 (2001); Beneco Enters., Inc., 2000 C.P.D. ¶ 175, 1999 WL 33218776, at *2-3 (1999); Cont’l Serv. Co., 97-1 C.P.D. ¶ 9, 1996 WL 753889, at *2-3 (1996); see also Ralph C. Nash & John Cibinic, "Postscript IV: Past Performance Evaluations,” 16 No. 3 Nash & Cibinic Rep. ¶ 14 (2002) ("Data can be obtained from references by questionnaire or telephone interview.”). Indeed, the Office of Federal Procurement Policy (OFPP) has provided guidance on how such surveys should be conducted. See Office of Mgmt. & Budget, OFPP, Best Practices for Collecting and Using Past Performance Information (2000) ("If adequate documentation is not readily available ... then a brief survey with follow up calls, or phone interviews should be used to verify past performance.”); id. at Appendix I (providing a sample survey form).

    . See, e.g., Cooperativa Muratori Riuniti, 2005 C.P.D. ¶ 21, 2005 WL 277303, at *6 (2005) (survey that was not geared to the evaluation criteria held to be arbitrary and capricious); FC Construction Co., Inc., 2001 CPD ¶ 76, 2001 WL 370895, at *5 (2001) (telephone survey not arbitrary or capricious, where conducted by contract administrator who read references twenty-six questions, provided the references with the definitions for adjectival ratings, and allowed references to review completed questionnaires); Dismas Charities, Inc., 2003 C.P.D. ¶ 125, 2003 WL 21665202, at *4 (2003) (questionnaires were "materially flawed" and contrary to the FAR when the agency did not provide references with instructions on how to score offeror's past performance); ENMAX Corp., 99-1 C.P.D. ¶ 102, 1999 WL 335687, at *5 (survey was reasonable where it included seventeen questions that were designed to correspond with the three technical evaluation factors and required the reference to rate the offeror as "above average," "average," "below average," or "not observed"); see also Maint. Engrs. v. United States, 50 Fed.Cl. 399, 421 (2002); S3 LTD, 2001 WL 1105271, at *5.

    . In some instances, it appears that particular awardees benefitted from isolated comments that happened to coincide with the definitions in the adjectival ratings. For example, while INDUS comments were very similar to those of CGI (e.g., "very well,” "effective,” "very good”), one of its references indicated that it "always meet[s] or exceed[s]” its schedules. Similarly, while several of ManTech’s references indicated that it was "very effective,” "really good,” and "very good" in meeting schedules, one of its references reportedly said “[t]hey have pretty much exceeded everything that we have asked them.”

    . The distinction drawn in the Solicitation between firm, fixed-price contracts and cost-reimbursement contracts is logical and consistent with the FAR. Under the former type of contract, the contractor absorbs all cost risk, whereas under the latter, the government must reimburse the contractor's allowable costs. See 48 C.F.R. §§ 16.202-1, 16.301-1. Accordingly, while a contractor's ability to control costs is critically important to the government in a cost-reimbursement contract, it is of little moment in a firm, fixed-price contract.

    . To be sure, one paragraph of the instructions that came with the scripts urged the interviewers to "[a]ttempt to identify specific performance metrics that were met or exceeded." But, neither this paragraph nor anything else alerted the interviewers that the degree to which those metrics were exceeded was critical to whether the offerors would receive an EH ("exceeds many requirements") or an S rating ("exceed some requirements"). Moreover, while this same paragraph cautioned interviewers that specific answers were better than having the reference just state that the offeror did a "good" job, a review of the transcripts reveals dozens of such generic answers, suggesting that the agency did not enforce this instruction, either.

    . At oral argument, defendant speculated that the evidence documenting this confirmation simply did not find its way into the administrative record. The court has several problems with this contention. First, the instructions to Calyptus indicated that the completed transcripts were to be provided to the references via e-mail. The email to the IBM references is in the record, but no other such e-mails appear within the approximately 24,000 pages of the record—at least none that the court has found or defendant has cited. Second, the transcripts that were provided to the IBM references came back heavily marked, yet no other transcripts in the record contain any markings or indications of editing. Finally, defendant could have provided materials supporting its claim on numerous occasions, including after these arguments were raised by various plaintiffs—but did not. In these circumstances, the court is compelled to render its decision based upon the record actually before it—a record that does not support the claim that this corroboration step was performed.

    . To be sure, GSA's failure to follow the SSP in this regard is not an independent basis for setting aside the awards. See, e.g., ManTech Telecomm., 49 Fed.Cl. at 67; Quality Sys., Inc., 89-2 C.P.D. ¶ 197, 1989 WL 241122, at *9 (1989). Yet, its failure to comply with a step that it apparently believed was important to validate the survey results certainly sheds light on whether its conduct of the surveys was arbitrary and capricious.

    . The systemic nature of the errors encountered with respect to the past performance survey obviates the necessity for the court to consider most of the claims by individual plaintiffs that errors were made in converting the comments of their references into adjectival ratings. Some of these claims plainly are misplaced—for example, GSA did not act arbitrarily in assigning neutral ratings to offerors that lacked certain past performance experience. See Metcalf Constr. Co. v. United States, 53 Fed.Cl. 617 (2002). In addition, the court rejects Serco's argument that GSA failed to provide offerors with the opportunity to rebut "adverse" past performance information. The Solicitation stated that the evaluator was to "provide adverse information to the PCO [procuring contracting officer]," and that the PCO would "decide" whether the offeror should be given the "opportunity to rebut" the information. This language tracks FAR § 15.306(a)(2), which states that "[i]f award will be made without conducting discussions, offerors may be given the opportunity to clarify ... adverse past performance information to which the offeror has not previously had an opportunity to respond." While the regulation does not define what it means by "adverse" past performance information, the adjectival ratings here defined such information as that which resulted in a past performance rating of "L/N" or "1." None of the plaintiffs received such a rating. Even if they had, neither the FAR nor the Solicitation guaranteed that an offeror would have an opportunity to respond to such information—both talk only in permissive terms. See DynCorp Int'l, LLC v. United States, 76 Fed.Cl. 528, 540 (2007); see also JWK Int'l Corp., 52 Fed.Cl. at 661. And, certainly, no plaintiff has shown that GSA abused its discretion in failing to provide an opportunity to respond. Finally, as to this segment, the court has considered, but rejects, various assertions that GSA treated certain plaintiffs unequally in evaluating their BCPs.

    . See Joseph S. Danowsky, "Statistical Report Flaws: A Spotter’s Guide,” 18 No. 8 Acct. & Fin. *488Planning for L. Firms 1 (2005) ("Use of excess precision may cast doubt on the mathematical competence of a report preparer ... since calculations [cannot] generate more ‘significant digits’ than the input with the least accuracy.”); James Tanton, Encyclopedia of Mathematics 168 (2005) ("Generally, the result of a calculation should be presented as no more accurate than the least accurate initial measurement.”); The Concise Encyclopedia of Mathematics 610 (2d ed. 1989) ("If a calculation is carried out with approximate values, then, in general, the result will likewise be only approximately correct."). One deskbook describes this concept in the following terms—

    Measured numbers are not exact. They depend on our skill and care in making the measurements. Because all measured numbers are approximate, it is important to know how well the reported number represents the actual value. The comparison of a measurement to the actual or true value is defined as accuracy and is typically presented as a percent difference from the true value ... Measurements are only as accurate or precise as the device that measures them. In other words, if water levels are measured with a steel tape and chalk that is only marked in feet, and then the water level is re-measured using an electronic monitoring system that provides measurements that are marked in tenths of an inch, a more accurate measurement is made with the second tape than with the first. If measurements are made over a six-month period using both techniques and then measurements are averaged, the average value will only be as accurate as the measurements made with the first tape (the least precise measuring device). The number of digits and the decimal notation of this average are determined by the significant digits of each of the measurements.

    James W. Conrad, Jr., Env. Sci. Deskbook § 1:7 (2007) (emphasis in original). This passage concludes that "[s]ummary statistics and values can only have as many significant digits as the measurement used with the least number of significant digits." See also Philip N. Baldwin Jr., Statistics: Know-How Made Easy 30 (1999).
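    The significant-digits point these authorities make can be illustrated with a short worked example; the whole-number ratings below are hypothetical, chosen only to mirror the 3, 4, 5 scale of converted adjectival ratings discussed in this opinion:

    \[ \frac{4 + 5 + 4 + 3 + 4}{5} \;=\; 4.0 \]

    Because each input carries only one significant digit, the average is reliable to roughly that same precision; expressing it as 4.000, and then distinguishing offerors at the third decimal place, implies an exactness that the underlying ratings cannot support.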

    . As discussed above, the record demonstrates that GSA did not comply fully with portions of the SSP that had been designed to improve the accuracy of its preliminary review, such as requiring the contractor to provide the transcript of their interviews to the past performance references for verification.

    . See M-Cubed Info. Sys., Inc., 2000 CPD ¶ 74, 2000 WL 656188, at *6 (2000) (agency properly determined that past performance scores of 39.5 and 42.5, as well as technical scores of 88.00 and 87.70, were "technically equal”); The Parks Co., 92-2 C.P.D. ¶ 354, 1992 WL 346769, at *2-3 (1992) (agency properly determined that technical scores of 2,860 and 2,515 points out of the available 3,000 weighted points were "essentially equal technically.”); Correa Enters., Inc., 91-1 C.P.D. ¶ 249, 1991 WL 73075, at *2 (1991) (agency properly determined that technical scores of 91.75 and 90.50 points on a 100 point scale were "essentially technically equal”); DDL Omni Eng'g, 85-2 C.P.D. ¶ 684, 1985 WL 53700, at *5 (1985) (agency reasonably determined that proposals were "substantially equal” despite a 0.35 point difference on a scale of 10); see also Training and Info. Servs., Inc., 87-1 C.P.D. ¶ 266, 1987 WL 96915, at *3 (1987) (protest sustained—0.2 point differential (on a 100 point scale) between protester and awardee, after correction of errors, held to be "an almost insignificant point difference”).

    . The court rejects plaintiffs' claim that the BCP/PP averages effectively gave disproportionate weight to the BCP rating. As the court understands them, these claims essentially focus on the fact that there was much more dispersion under the BCP rating than under the past performance rating, so that when the two ratings were averaged, the BCP rating had more impact on where a given offeror would be ranked. But assuming for the sake of argument that the dispersion under the BCP rating was bona fide (which remains to be seen), there is nothing about the spread of those rankings that violated the Solicitation's promise that past performance and basic contract ratings would be "approximately equal in importance."

    . The sports-minded reasonably might ask how the situation here differs from the calculation of batting averages. After all, that calculation begins with inputs that are whole numbers, yet ends with averages that are regularly carried to the third decimal place and, if necessary to break a tie, to a fourth place (e.g., George Kell versus Ted Williams in 1949). But, there are two major differences between the present case and the calculations used to determine the batting title. The first is that, unlike this case, the inputs into the batting average equation are certain—e.g., two hits in four at bats—and not approximate measurements or other observations that incorporate some level of imprecision or rounding that necessitates the use of an error bound. Averaging such figures thus does not raise issues concerning error propagation. The second distinction involves the use of the statistics. In baseball, the decision being made based upon the batting averages is defined in terms of the average itself—the player with the highest average wins the batting title. Here, that insularity is lacking. The ultimate question here is not which firm has the highest average technical ranking, but rather which presents the best value to the government. It is the context of the decision, then, that ultimately suggests that caution should be employed in using approximate numbers. Indeed, no baseball purist, certainly not Bill James (the guru of baseball statistics), would point solely to a batting average, carried to whatever number of digits, to resolve who is the better hitter (e.g., Babe Ruth and "Big Dan" Brouthers both have career batting averages of .3421). See Bill James, The New Bill James Historical Baseball Abstract (2003).

    . Several offerors argue that GSA created a de facto "competitive range" when it designated the presumptive award group. While the FAR does not specifically define when such a range arises, it indicates that if such a range is established, the agency must notify excluded offerors and provide them with an opportunity for debriefing. FAR § 15.306(c)(3)-(4). Yet, even without a definition, it would appear that the sine qua non of a "competitive range" is that it excludes certain offerors from further consideration. See M.W. Kellogg Co. v. United States, 10 Cl.Ct. 17, 23 (1986). Such was not the case with the "presumptive awardee" group established by GSA.

    . See also C.W. Gov't Travel, Inc., 2005 C.P.D. ¶ 139, 2005 WL 1805945, at *4 (2005) ("[t]he statutory requirement that cost to the government be considered in the evaluation and selection of proposals for award is not satisfied by the promise that cost or price will be considered later, during the award of individual task orders”); S.J. Thomas Co., Inc., 99-2 C.P.D. ¶ 73 (1999), 1999 WL 961750, at *3 (1999) (same); SCIENTECH, Inc., 98-1 C.P.D. ¶ 33, 1998 WL 29236, at *5 (1998) (same); Ralph C. Nash & John Cibinic, "Using Best Value to Select GWACs Contractors; A Flawed Procurement,” 19 No. 7 Nash & Cibinic Rep. ¶ 35 (2005).

    . While it is neither for defendant nor this court to question why price is relevant here—it is enough that it is—one can conceive of at least two answers to this question. First, it is reasonable to assume that offerors that list reasonable prices under the GWAC will continue to do so when they compete for task orders. Indeed, one of the government's objectives, in conducting this GWAC, is to line up a diverse group of reasonably-priced offerors to bid on those orders. Second, failing to consider prices at this stage would raise the specter that offerors would provide basic contract plans that are economically infeasible (e.g., gold-plated)—plans that have wonderful options, to be sure, but only at prohibitive *493prices. Indeed, several of the plaintiffs here have argued that that is precisely what happened and that if they had known that GSA would give so little consideration to price, they would have provided more extensive plans.

    . Miguel de Cervantes Saavedra, The Ingenious Hidalgo Don Quixote of La Mancha, Part ii, Chap. xxiv (1615).

    . Defendant makes much of the fact that the SSA mentioned price in setting the "natural break point" between the 28th and 29th ranked offerors. But, it essentially ignores the fact that the SSET, in setting the same "natural break point" between the 27th and 28th ranked offerors, made no mention whatsoever of price. Indeed, if one looks more broadly at the rankings, it is hard to believe that the SSA gave serious consideration to price in setting this "natural break point." The last of his presumptive awardees, Alion, had a weighted BCP/PP Average of 3.829 and a price of $33,007,426, while the 33rd ranked offeror, MacB, had a BCP/PP Average of 3.754 and a price of $[]—its technical ranking was 1.9 percent lower than that of Alion, but its price was [] percent lower. Likewise, the 37th ranked offeror, SwRI, had a BCP/PP Average of 3.738 and a price of $[]—its technical ranking was 2.3 percent lower than that of Alion, but its price was [] percent lower. Indeed, had defendant properly accounted for the lack of precision in its technical rankings—leading it likely to conclude that the offerors were much more bunched together—it would have been obliged to give price an even greater role in the award decisions, as the Solicitation plainly advised that "the closer the technical scores of the various proposals are to one another the more important cost or price considerations become in determining the overall best-value for the Government." That did not occur.
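    By way of a rough arithmetic check of the technical-ranking comparison in this footnote (the redacted prices cannot be reproduced here, and the percentage is assumed to be measured relative to Alion's score), the MacB figure works out as follows:

    \[ \frac{3.829 - 3.754}{3.829} \;\approx\; 0.0196 \;\approx\; 1.9\% \]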

    . In the court’s view, the foregoing discussion fully disposes of the argument made by Centech that GSA gave price too much consideration.

    . In other contexts, an agency may, for example, cancel a solicitation if it receives unreasonable prices from all the offerors. See FAR § 14.404-1(c)(6); see also Nutech Laundry & Textiles, Inc., 2003 C.P.D. ¶ 34, 2003 WL 282208, at *3 (2003).

    . Consistent with this view, agencies in other procurements have used the results of a price reasonableness analysis to conclude that an award could not be made to a particular offeror. See, e.g., Crowley American Transp., Inc., 95-1 C.P.D. 277, 1995 WL 366985, at *3 (1995); see also Ralph C. Nash & John Cibinic, "Cost and Price Analysis: Understanding the Terms," 9 No. 1 Nash & Cibinic Rep. ¶ 5 (1995) ("The purpose of a price reasonableness analysis is to ascertain that the Government is not paying too high a price.").

    . Compare Moore's Cafeteria Servs. d/b/a MCS Management, 2007 C.P.D. ¶ 99, 2007 WL 1746378, at *4 (2007) (price was fair and reasonable where higher than protester but lower than the third technically acceptable offer); U.S. Dynamics Corp., 2007 C.P.D. ¶ 21, 2006 WL 4043686, at *3 (2006) (price reasonableness established based on "adequate price competition” where there was a 4 percent differential between the price of the offerors); Clearwater Instrumentation, Inc., 2001 C.P.D. ¶ 151, 2001 WL 1047078, at *4 (2001) (price reasonableness analysis was proper where prices proposed by three competitors "were relatively consistent and in line with each other”); ITT Fed. Sys. Int’l Corp., 2001 C.P.D. ¶ 45 (2001), 2001 WL 238559, at *9 (2001).

    . At oral argument, defendant asserted that the focus of the price reasonableness analysis should only be on the reasonableness of the profit factor built into a particular price. Under this approach, competition could be viewed as uniformly establishing price reasonableness based on the assumption that an offeror would not propose a price that included an uncompetitive profit margin. Defendant, however, could supply no authority in support of its cramped view of the role of price reasonableness. This is not surprising as its invention runs counter to the entire thrust of the FAR, which clearly is concerned that awards not be made to offers with unreasonable prices, whatever the cause. As once noted by the Court of Claims, “price does not become a reasonable one for the Government to pay simply because it is thought to yield the seller only a minimal profit.” Gibraltar Mfg. Co. v. United States, 212 Ct.Cl. 226, 546 F.2d 386, 390-91 (1976).

    . This is not to say that these prices could not be shown to be fair and reasonable under another method, perhaps one of the others listed in FAR § 15.404-1(b)(2), including an analysis of pricing information supplied by the offeror. Indeed, various plaintiffs argue that GSA erred in failing to conduct a cost realism analysis that focused on the various labor categories that were *496used in developing the offerors' evaluated prices. They note that the hourly rates for some of those categories, including those reflected in the high-priced offers, varied significantly. But, it is difficult, at this juncture, to address these concerns given the agency's scant consideration of price—it is hard, in other words, to assess whether the agency abused its discretion in failing to analyze the components of certain prices when the agency failed properly to identify prices that appeared not to be fair and reasonable. Accordingly, the court will not reach these issues at this time.

    . See Beautify Prof'l Servs. Corp., 2003 C.P.D. ¶ 178, 2003 WL 22339300, at *4 (2003) ("In a best value procurement, it is the function of the source selection authority to perform a price/ non-price factor tradeoff, that is, to determine whether one proposal’s superiority under the non-price factor is worth a higher price. This tradeoff process allows an agency to accept other than the lowest-priced proposal.”); Johnson Controls World Servs., Inc., 2002 C.P.D. ¶ 88, 2002 WL 1162912, at *4 (2002) (“The propriety of the cost/price-technical tradeoff decision turns not on the difference in the technical scores or ratings per se, but on whether the selection official’s judgment concerning the significance of the difference was reasonable and adequately justified in light of the RFP’s evaluation scheme.”); Opti-Lite Optical, 99-1 C.P.D. ¶ 61, 1999 WL 152145, at *3-4 (1999) ("While adjectival ratings and point scores are useful as guides to decision-making, they generally are not controlling, but rather, must be supported by documentation of the relative differences between the proposals, their strengths, weaknesses and risks, and the basis and reasons for the selection decision"); Cygnus Corp., 97-1 C.P.D. ¶63, 1997 WL 46987, at *11 (1997).

    . Indeed, issues of false precision akin to those discussed above can creep into the process if an agency mechanically relies on a purely mathematical price/technical tradeoff methodology. As Messrs. Cibinic and Nash once wrote, "[s]ome of these formulas have used a total point system by assigning points to the price, others have assigned dollar values to the non-price factors, and a third has divided the price by the points assigned to non-price factors to come up with a dollars per point figure." Cibinic & Nash, supra at 715. Decisions reviewing these formulae have found them unobjectionable provided that they were used only as evaluation techniques and not as the actual basis for making best value determinations. See, e.g., Med. Dev. Int'l, 99-1 C.P.D. ¶ 68, 1999 WL 194481, at *6-7 (1999); Opti-Lite Optical, 1999 WL 152145, at *5; Teltara, Inc., 98-2 C.P.D. ¶ 124, 1998 WL 841469, at *4 (1998); Moran Assocs., 91-2 C.P.D. ¶ 495, 1991 WL 296778 (1991); Storage Tech. Corp., 84-2 C.P.D. ¶ 190, 573 N.W.2d 625, 1998 WL 46528, at *3 (1998); Harrison Sys., Ltd., 84-1 C.P.D. ¶ 572, 1984 WL 43532, at *4-5 (1984).

    . See, e.g., Magellan Health Servs., 2007 C.P.D. ¶ 81, 2007 WL 1469049, at * 15 (2007); SOS Interpreting Ltd., 2005 C.P.D. ¶ 26, 2005 WL 357422, at *5-6 (2004); Blue Rock Structures, Inc., 2004 C.P.D. ¶ 63, 2004 WL 414581, at *3-4 (2004); Johnson Controls World Servs., 2002 WL 1162912, at *5; AIU N. Am., Inc., 2000 C.P.D. ¶ 39, 2000 WL 255431, at * 7 (2000).

    . It is, of course, axiomatic that, while the court may uphold a decision of less than ideal clarity, the court "may not supply a reasoned basis for the agency's action that the agency itself has not given.” Motor Vehicle Mfrs. Assn., 463 U.S. at 43, 103 S.Ct. 2856; see also Bowman Transp., Inc. v. Arkansas-Best Freight Sys., Inc., 419 U.S. 281, 285, 95 S.Ct. 438, 42 L.Ed.2d 447 (1974).

    . See Remington Arms Co., Inc., 2006 C.P.D. ¶ 32, 2006 WL 327974, at * 13 (2006) (“The propriety of such a price/technical tradeoff decision turns not on the difference in the technical scores or ratings per se, but on whether the selection official’s judgment concerning the significance of the difference was reasonable and adequately justified in light of the RFP's evaluation scheme.”); Blue Rock Structures, Inc., 2004 WL 414581, at *3 (requiring the tradeoff analysis to furnish an explanation as to why technical advantages warrant a price premium); General Offshore Corp., 92-2 C.P.D. ¶ 335, 1992 WL 79117, at *4 (1992) (“The determining element is not the difference in technical merit, per se, but the contracting agency's judgment concerning the significance of that difference.”); see also A & D Fire Protection, Inc., 2002 C.P.D. ¶ 74, 2002 WL 841331, at *3 (2002); Tecom, Inc., 94-2 C.P.D. ¶ 212, 1994 WL 683269, at *5 (1994); Oshkosh Truck Corp., 1993 WL 335049, at *6.

    . As noted above, in performing best value analyses, agencies have sometimes divided the offering price by the points assigned to non-price factors to come up with a dollars per point figure. See Cibinic & Nash, supra at 715. Such a calculation here, made using the Weighted BCP/PP Average scores, reveals wide disparities between offerors. At the bottom of the scale, there are awardees such as NCI and plaintiff STG, at $5,284,517 and $[] per point, respectively. On the other end of the spectrum, we find awardees, such as ManTech and SAIC, at $13,414,596 and $12,365,996 per point, respectively—essentially double the numbers for NCI and STG. These figures are informative not because the agency was required to conduct such statistical studies as part of its tradeoff analysis, but because the disparities demonstrate that GSA needed to conduct more careful tradeoff analyses, in whatever form they took.
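    A minimal worked illustration of the dollars-per-point calculation described in this footnote, using Alion's evaluated price ($33,007,426) and Weighted BCP/PP Average (3.829) as reported elsewhere in this opinion; Alion's own per-point figure is not stated in the record, so the result below is illustrative only:

    \[ \text{dollars per point} \;=\; \frac{\text{evaluated price}}{\text{Weighted BCP/PP Average}} \;=\; \frac{\$33{,}007{,}426}{3.829} \;\approx\; \$8.6 \text{ million per point} \]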

    . This is not to say that the tradeoff analyses for ManTech and for certain other high-priced offerors necessarily met the requirements of the FAR. While the agency might have provided an adequate explanation of the benefits that it felt were worth paying these higher prices, it remains that the agency did not properly determine whether those prices were fair and reasonable. Whether that means that the tradeoff analyses themselves were deficient or were based upon a false predicate, i.e., that the prices being analyzed were fair and reasonable, is a distinction without meaning at this point. It would seem that the agency first needs to determine whether a given price is fair and reasonable and then may consider whether it is worth paying any premium represented by that price.

    . The plaintiffs making this assertion have cited nothing in support thereof. Requiring the SSA to make comparisons even with no likelihood that a tradeoff would occur makes no sense and runs counter to the discretion afforded to the SSA in making tradeoff decisions.

    . Establishing the presumptive group and requiring a compelling reason for disturbing it not only placed further undue reliance on its technical rankings, but arguably straight-jacketed what should be a flexible approach in making best value determinations. See, e.g., Gen. Offshore Corp., 1992 WL 79117, at *4 ("[W]e consistently have stated that evaluation scores are merely guides for the selection official, who must use his judgment to determine what the technical difference between the competing proposals might mean to contract performance, and who must consider what it would cost to take advantage of it.").

    . Support also exists for the proposition that the denial of the right to have a bid fairly and lawfully considered constitutes irreparable harm. See Ellsworth Assocs., Inc. v. United States, 45 Fed.Cl. 388, 398-99 (1999) (citing cases); but see Minor Metals, Inc. v. United States, 38 Fed.Cl. 379, 381-82 (1997) (noting "economic harm without more, does not seem to rise to the level of irreparable injury”).