FARRELL, Senior Judge: A jury found appellant guilty of, among other things, first-degree felony murder (burglary) while armed, first-degree sexual abuse while armed, first-degree theft of a motor vehicle from a senior citizen, and related lesser included offenses. The principal issue on appeal is whether the trial judge erroneously admitted the expert opinion of an FBI forensic document examiner that a piece of handwriting left on the body of the murder victim had been written by appellant. Specifically, we must decide whether opinion evidence of this kind based on comparison of “known” and “questioned” handwritings, resulting in the opinion that the same individual wrote both documents, meets the test of “general acceptance of a particular scientific methodology,” Ibn-Tamas v. United States, 407 A.2d 626, 638 (D.C.1979); see Frye v. United States, 54 App.D.C. 46, 47, 293 F. 1013, 1014 (1923), required by this jurisdiction for the admission of forensic science evidence. Although appellant, joined by the Public Defender Service as amicus curiae, makes a spirited attack on the general acceptance of all such “pattern-matching” analysis in the light of a recent National Research Council Committee Report, we hold that forensic handwriting comparison and expert opinions based thereon satisfy the bedrock admissibility standard of Frye and Ibn-Tamas and may be put before a jury, where remaining issues of reliability may be argued, after cross-examination and any counter-expert testimony, as affecting the weight of the opinions.
Rejecting as well appellant’s remaining assignment of error, see part III., infra, we affirm the judgments of conviction except for those the parties agree must be vacated on remand under merger principles.
I.
The government’s proof allowed the jury to find beyond a reasonable doubt that appellant sexually assaulted and killed 78-year-old Martha Byrd in her home (next to his family’s home) within a day or two before September 4, 2004, when her body was found lying in her bed. Ms. Byrd had been strangled from the rear and made unconscious with a cloth ligature wrapped around her neck, and stabbed five times from the front in the torso. Semen found on her thighs and material taken from her vagina contained sperm matching appellant’s DNA profile by an overwhelming statistical probability. His left ring-finger print was found on the inside frame of a sliding glass door that had been forced open to allow entry to the home. The night before Ms. Byrd’s body was found, appellant had been seen “[s]peeding up and down the street” in her car, and his palm print was later lifted from the car interior. Pieces of black cloth found on the rear floorboard of the vehicle, in the opinion of a forensic fiber analyst, could have originated from the same textile garment as the ligature used to strangle the victim.
An envelope found on Ms. Byrd’s body contained a handwritten note that read: “You souldins [sic] have cheated on me.”
Federal Bureau of Investigation (FBI) document examiner Hector Maldonado later compared the handwriting on the envelope with 235 pages of appellant’s handwriting taken from his jail cell. Based on Maldonado’s observation of “an overwhelming amount of handwriting combinations ... in agreement with each other” and no significant differences, he opined that appellant was the author of the murder scene note. A conclusion of authorship, he explained, is not based on similarities between one, two, or three letters, but rather “on the combination of all the letters together and the words together, height relationships, baseline arrangements, spacing between words and letters, initial strokes, ending strokes, [and] where the letters meet.”1 His conclusion, applying these criteria, was that there were “significant combinations of handwriting characteristics ... in agreement between the questioned writing and the known writing.” Cross-examined about examples of individual letters or combinations from appellant’s known writings that appeared pictorially different from the murder scene note, Maldonado maintained that those represented variations in appellant’s own handwriting, not “significant difference[s].”2 Although significant differences will preclude a conclusion of authorship, intra-writer variations will not do so, in his opinion.

II.
The government’s case relied on multiple forms of forensic evidence comparison and resulting expert opinion testimony — DNA, fiber, fingerprint, and handwriting comparison (not to say medical examiner analysis) — but appellant and amicus challenge only the admissibility of Maldonado’s opinion that appellant wrote the note found on Ms. Byrd’s body. This challenge, however, particularly as made in the briefs of amicus (who chiefly represented appellant on this issue at oral argument), amounts to an attack on the “‘pattern-matching’ [forensic] disciplines in general” except for “nuclear DNA analysis.” Reply Br. for Amicus Curiae at 11-12. According to amicus, the recent NRC Report — which we discuss in part II. E., infra — has concluded that “none of the pattern-matching disciplines, including handwriting identification, satisf[y the basic] requirements” of science. Id. at 12 (emphasis by amicus). But the issue before this division is the admissibility of expert opinions derived from one such discipline, forensic handwriting identification, and we imply nothing about whether other pattern-matching disciplines meet the foundational test for admissibility in this jurisdiction.
We first summarize the standards governing the admission of evidence of this kind; then recite in detail the evidence received by the trial court (Judge Kravitz) at the pretrial hearing on the issue, and explain our agreement with his conclusion that such evidence is admissible; and lastly discuss why the NRC Report, not available to Judge Kravitz at the time, does not alter our conclusion of admissibility.
A.

“In the District of Columbia,” we explained recently,
before expert testimony about a new scientific principle [may] be admitted, the testing methodology must have become sufficiently established to have gained general acceptance in the particular field in which it belongs. The issue is consensus versus controversy over a particular technique, not its validity. Moreover, general acceptance does not require unanimous approval. Once a technique has gained such general acceptance, we will accept it as presumptively reliable and thus generally admissible into evidence. The party opposing the evidence, of course, may challenge the weight the jury ought to give it.
(Ricardo) Jones v. United States, 27 A.3d 1130, 1136 (D.C.2011) (alteration in original; citations and internal quotation marks omitted). These principles ultimately derive from Frye v. United States, supra. Notably, however, “Frye only applies to ‘a novel scientific test or a unique controversial methodology or technique.’” (Ricardo) Jones, 27 A.3d at 1137 (quoting Drevenak v. Abendschein, 773 A.2d 396, 418 (D.C.2001)). Thus, the question arises initially whether the Frye inquiry must be conducted as to handwriting comparison, when expert opinion testimony of this kind has been admitted in the courts of the District for over a century.3
In (Ricardo) Jones, supra, the court upheld the refusal of the trial court to conduct a Frye hearing before admitting an expert opinion based on firearms comparison; we stated that “[p]attern matching is not new, and courts in this jurisdiction have long been admitting firearms identifications based on this method.” (Ricardo) Jones, 27 A.3d at 1137; see also Spann v. State, 857 So.2d 845, 852 (Fla.2003) (“Courts will only utilize the Frye test in cases of new and novel scientific evidence,” and “[f]orensic handwriting identification is not a new or novel science”; “by [the] time [Frye was decided], forensic handwriting identification had already established itself as a tool commonly used in court”).

The government nonetheless concedes that, as this court has never decided whether handwriting identification meets Frye’s general acceptance standard, it is proper for us to do so here — especially given the trial court’s lengthy consideration of the issue — whether or not handwriting identification is a “novel” forensic science. See Br. for Appellee at 15 n. 15. We therefore proceed to the merits of appellant’s challenge, guided by three general principles. First, since the government asks us “to establish the law of the jurisdiction for future cases” involving admissibility of handwriting identification, we “review the trial court’s analysis de novo” rather than for abuse of discretion. (Nathaniel) Jones v. United States, 548 A.2d 35, 40 (D.C.1988).4
Second, as Judge Kravitz recognized, the relevant “community” for purposes of assessing Frye admissibility includes not just forensic scientists (including handwriting experts) but also others “whose scientific background and training are sufficient to allow them to comprehend and understand the process and form a judgment about it.” United States v. Porter, 618 A.2d 629, 634 (D.C.1992). Finally, the standard of proof for admissibility is a preponderance of the evidence, not more, in part because the party opposing evidence shown to have met that standard “may challenge the weight the jury ought to give it.” (Ricardo) Jones, 27 A.3d at 1136; see Porter, 618 A.2d at 633.

B.
The government’s main witness at the admissibility hearing was FBI supervisory document analyst Diana Harrison, a certified forensic document examiner. Besides her experience of ten years as an FBI document analyst, Harrison was a member of the Mid-Atlantic Association of Forensic Scientists, ASTM International (ASTM standing for the American Society for Testing and Materials), and the Scientific Working Group for Documents (SWGDOC), an organization of forensic document examiners from federal, state, and private document examination laboratories that develops standards for the document examination field.
Harrison testified that professional standards for forensic document examiners are published through ASTM International and subject to peer review.5
The eighteen current standards include such guides as “standard terminology relating to examination of questioned documents,” a “standard guide for examination of handwritten items,” and one setting forth the minimum training requirements for forensic document examination candidates. The specialty is one recognized by universities offering such courses through their forensic science programs, by professional organizations such as the American Academy of Forensic Sciences, and by multiple-discipline professional bodies such as the American Academy of Sciences, which has a questioned document section. Document examiners conduct peer review through these organizations as well as the American Society of Questioned Document Examiners.6

According to Harrison, document examiners operate on the principle that handwriting is unique, meaning that “no two people share the same writing characteristics,” although “some of the handwriting” of twins has been shown to be “very similar.” Further, since no one writes “with machine[-]like precision,” variations are seen within a person’s own handwriting even within the same document. But “given [a] sufficient quantity and quality of writing” to analyze, a trained forensic document examiner can discriminate between natural variations in a writer’s own handwriting (intra-writer variation) and significant differences denoting different writers (inter-writer variation), and so can determine whether the writer of a known writing also wrote a questioned document.

Harrison described the four-step process followed generally (and by FBI document examiners) in expert comparison of handwriting. The procedure, known as the ACE-V method (Analysis, Comparison, Evaluation, and Verification), begins with the examiner analyzing the known writing to decide whether it was “freely and naturally prepared” rather than a simulation or tracing of other writing. Using magnification, the examiner also establishes “a range of variation” on the known writer’s part, i.e., “deviations [from a person’s] repetitive handwriting characteristics ... expected in natural, normal writing.” Once the known and questioned writings are studied separately, the examiner “compar[es them] to determine if there are similarities present between the two writings, if there are differences present ... and ... if the [range of] variation that was observed in the one writing is also observed in the [other] writing.”7
In evaluating or “assess[ing] the characteristics ... observed in common or not between the two writings,” examiners consider both “pictorial” or “gross” features and more detailed ones, including “the beginning and ending strokes, ... how the writing sits on the [hypothetical] baseline or ruled line of writing,” “spacing [and height relationships] between words and letters,”8 the “number of [strokes] used to prepare a letter,”9 shading (“if you’re going upstroke on a letter, do you have heavier ... ink deposit ... than you would on the down-stroke”), and the side-slant of letters. All told, examiners look at both documents “to determine if there are characteristics present ... that would stray from what we call the copybook style of writing, the writing you learn[ed] when you learned writing in school,” for which Harrison gave this example:

[I]f you would write the word “the” usually in copybook, the T is shorter than the H and it all sits nice and evenly on the baseline. It’s evenly spaced. Straying from that, the T would be taller than the H. The E would slant down below the baseline. The T crossing wouldn’t be centered on the T. It would be heavy to the right or to the left and crossing through the H. These are significant characteristics.
Harrison conceded that there is no “standard for how many individualizing characteristics need to be present” in either the known or questioned writing for a conclusion of authorship to be reached, but she was clear that “[t]he identification of authorship is not based on one single characteristic”: “we have to have sufficient identifying characteristics in common [and] ... no significant differences and variation between the two writings.”10
The final phase of the process, “common practice” both for the FBI and in “the field” generally, is verification, whereby (in the FBI’s laboratory) “the handwriting is given ... to another examiner who goes through the same process to ... evaluate ... the conclusion of the [first] examiner,” though the repetition is not “blind” since the second examiner will have the work product of the first. Unlike other laboratories which permit a broader or narrower range of conclusions, FBI examiners may reach one of four conclusions: identification (a “definite determination” of authorship), elimination (a definite determination of no authorship), no conclusion, and a qualified (“may have” or “may not have”) opinion.

Harrison admitted that the analytic method she described left the decision whether two handwritings were done by the same person ultimately to the trained judgment of individual examiners.11
But she maintained that the method is generally accepted in the community of forensic scientists, and beyond, as grounded in objective, testable identification criteria yielding consistently accurate results when applied by competent examiners. To buttress this opinion, the government introduced published studies and testimony by scientists outside the practice of forensic document examination, who had tested the performance of expert examiners or experimented with features used in writing comparison to see if their procedures, and correspondingly accurate results, could be replicated.

The first of these was a series of studies done by Dr. Moshe Kam, a professor of electrical and computer engineering at Drexel University. Over a decade starting in 1994, Kam did five studies examining the error rates of professional document examiners as compared to lay persons. In the first study, 86 handwriting samples from twenty different writers were analyzed by seven document examiners and ten laymen; the document examiners made a total of four errors in identifying a document’s author, the laymen 247. Kam thus concluded that “professional document examiners ... are significantly better in performing writer identification than college-educated nonexperts” and that, based on this study, “the probability [is] less than 0.001” that “professionals and nonprofessionals are equally proficient at performing writer identification.”12
Kam later compared the accuracy rates of 105 professional document examiners, eight document examiner trainees, and 41 laymen in determining the authorship of a particular writing.13
The professional examiners incorrectly matched unknown writings with the wrong writer 6.5 percent of the time, while the lay persons’ corresponding error rate was 38.8 percent.14

Kam published a fourth study in 2001, comparing the error rates of professional document examiners and laymen who examined six known signatures generated by the same person and compared them with six unknown signatures to determine if they could declare a match. Once again, Kam found that professional document examiners had “much smaller” error rates than laymen. Non-genuine signatures were erroneously identified as genuine 0.49 percent of the time by document examiners, compared to 6.47 percent of the time by laymen, while authentic signatures were erroneously declared to be non-genuine 7.05 percent of the time by the document examiners and 26.1 percent of the time by laymen.15
Beyond such performance studies,16 the government presented the testimony of, and recent studies authored by, Dr. Sargur N. Srihari, a professor of computer science and engineering at the State University of New York, and director of the Center of Excellence for Document Analysis and Recognition. Testifying as an expert in the field of pattern recognition and computational forensics related to handwriting recognition, he explained the method and results of studies he had conducted to examine whether “each individual has consistent handwriting that is distinct from the handwriting of another individual.”17 Srihari collected three handwritten samples of the same letter prepared by each of 1500 people representative of the U.S. population in gender, age and ethnicity. The samples were all scanned into a computer (yielding a digitized image), and the computer was programmed to extract certain individual handwriting features from each, ranging from the “global level of the document” to the paragraph, word, and character levels. Whereas forensic document examiners have compiled some twenty-one discriminating elements of handwriting,18 Dr. Srihari was able to program the computer to recognize just “a small subset of the features that document examiners would use” to compare a known writing with a questioned one. Some of the collected samples were used to teach the computer (“using a machine-learning approach”) to recognize an individual writer’s characteristics; others were used to determine if the computer, by comparison with other samples, could recognize and accurately identify those individual writers. “Based on a few macro-features ... and micro-features at the character level from a few characters,” the computer was able to identify individual writers with 96 to 98 percent accuracy.19 Dr. Srihari concluded that, “[t]aking an approach that the results are statistically inferable over the entire population of the U.S., we were able to validate handwriting individuality with a 95% confidence.” He anticipated that, once programmers can teach the computer to recognize additional “finer features” of handwriting now discernible only by “the human analyst,” the computer’s accuracy level could be expected to reach 100 percent.20 On cross-examination, Dr. Srihari conceded that in the current “state of ... computer program[ming]” and given that his studies “do [show] an error rate,” computer scientists could not say that “a person’s handwriting is absolutely unique.”

The defense’s lone witness at the Frye hearing was Mark Denbeaux, an evidence professor at Seton Hall University Law School, whom the trial judge let testify as “an expert on the general acceptance of forensic document examination in the relevant scientific fields,” while commenting that this was “not dissimilar from qualifying [Denbeaux] as an expert on a legal question.”21
Professor Denbeaux’s expertise had been acquired from (1) reading forensic document publications, (2) interviewing document examiners and studying their reports, (3) visiting forensic document laboratories, and (4) reading transcripts of document examiners’ testimony.22 He “testif[ies] generally” and, “as ... some courts have said, as a critic” who is “committed to the proposition that there is no expertise” involved when forensic document examiners make handwriting identifications.

In 1989, Professor Denbeaux had coauthored a law review article urging courts to “cease to admit” expert testimony on handwriting comparison, primarily because proficiency tests conducted by the Forensic Science Foundation (FSF) in the 1970s and 1980s showed that document examiners were wrong at least 43 percent of the time.23
But in a footnote the article acknowledged that the FSF’s Proficiency Advisory Committee had since “disavow[ed] these tests,” deeming them “not representative of the level of performance of any of the fields being tested.” In any event, Denbeaux admitted that since 1989 more testing had been done on handwriting identification and that he had been “persuaded by some of it,” noting that, according to the Kam studies, document examiners avoid false positives more frequently than laymen.

Admitting that the principle that “[n]o two people write alike” is generally accepted by forensic document examiners, Professor Denbeaux had “never seen a study that proves that,” and he believed there should be empirical testing to quantify handwriting characteristics before conclusions of a handwriting match should be allowed in evidence. Dr. Srihari’s studies, in his view, “certainly [had] not accepted” the proposition that handwriting comparison methodology “can be used to reach an absolute conclusion” of authorship, since “there’s at least a five percent error rate his computer can’t explain.” Despite Srihari’s experiments and the consensus of the forensic science community, Denbeaux disputed that handwriting comparison is accepted as a legitimate science, because, among other things, it has no quantifiable measures for defining “significant” handwriting characteristics and distinguishing intra- from inter-writer differences.
C.
Carefully evaluating the testimony and studies presented to him, Judge Kravitz first found “uncontradicted” FBI examiner Harrison’s testimony that the forensic document examiner community accepts two points: that no two people write exactly alike, and that a document examiner can determine if the writer of a known writing also wrote a questioned writing given sufficient samples for comparison. Further, the judge found the FBI’s laboratory method for making comparisons to be “generally accepted in the relevant scientific community” because it “follows the steps recommended by ASTM [International], ... a voluntary standards development organization with [30,000] members which publishes standards for materials, products, systems, and services in many technological, engineering and testing fields including standards of forensic document analysis.”
Further, the judge reasoned, the methodology had been tested and shown to be sound in two ways by scientists outside the field of forensic science. Dr. Kam’s studies (particularly II and V) showed that document examiners using the method “are skilled in such a way that they can identify matches more accurately than lay persons are able to, with a lower rate of false positives.” And Dr. Srihari’s tests using computerized equivalents of features employed by examiners to make comparisons confirmed Srihari’s belief that “the methodology ... is capable of comparing writings and drawing conclusions of a match.”
By contrast, the defense had “failed to present any evidence of a genuine and public dispute within the relevant scientific community” about handwriting identification methodology. Professor Denbeaux did “not identif[y] any handwriting scientists, ... forensic or non-forensic, who have publicly disputed the validity of the FBI laboratory’s methodology.” Denbeaux and other non-scientists cited by the defense were “not part of the relevant scientific community for the purposes of ... Frye analysis”; they “do not have any scientific background or training in any of the relevant fields, ... they are not specialists in pattern recognition, ... not analysts of motor skills, [and] ... not scholars from any other scientific field that could lend its expertise to the evaluation of handwriting.”

The judge thus concluded that the FBI’s methodology for handwriting identification “meets a baseline standard of reliability” and general acceptance in the relevant field, even though it “leaves much room for subjective analysis,” “lacks standards and guidelines for determining significance,” and suffers from an inability of the FBI to “mak[e its] error rate [ ]known.”24
But the “doubts about the ability of forensic document examiners to make reliable conclusions of absolute authorship,” in the judge’s view, stemmed from “shortcomings [that could] be exposed on cross-examination of the [g]overnment’s expert witness” at trial and went “to the weight of the testimony of [the witness], not its admissibility.”

D.
On the basis of the record before the trial judge, we agree that handwriting comparison and identification as practiced by FBI examiners passes the Frye test for admissibility. “[S]cientists significant either in number or experience [must] public[ly] oppose a new technique or method as unreliable” before that “technique or method does not pass muster under Frye.” United States v. Jenkins, 887 A.2d 1013, 1022 (D.C.2005) (internal brackets omitted). In opposition to the combined testimony and studies of Harrison, Kam and Srihari, appellant furnished the trial judge with virtually no testimony or conclusions by scientists “public[ly] opposing ... as unreliable” the FBI’s method for determining authorship of handwriting, and certainly none approaching a “consensus,” id., against either the guiding principle among document examiners that no two people write exactly alike25 or the technique for making handwriting comparisons described by Harrison.
FBI document examiners, as Harrison testified, are trained according to and employ national standards recommended by ASTM International, a body of forensic scientists, academics, and lawyers who vote on the adoption and revision of professional standards for numerous disciplines, including handwriting analysis. The FBI laboratory is accredited and its analysts, like forensic document examiners generally, undergo peer review through organizations including the American Academy of Science, which has a questioned document section.
FBI examiners follow the general four-step (ACE-V) procedure used in the forensic science community, and at each step look for multiple handwriting characteristics that conform to standards recognized by ASTM International and published in recognized questioned document texts. In accordance with scientific procedures generally, the examiners (as Judge Kravitz found) use “microscopic observations and other technical procedures to enhance their analysis”; they “are required to support their conclusions with documentation as required by the FBI’s standard operating procedures”; and they verify each analysis (though not “blindly”) through repetition of the process by a second examiner.

Thus, the FBI’s methodology for handwriting comparison is well-established and accepted in the forensic science community generally. Moreover, evidence showed that in recent years the method’s adherence to objective, replicable standards and its capacity to reach accurate conclusions of identification have been tested outside the forensic science community in two ways. First, Dr. Kam’s controlled studies beginning in 1994 have shown that trained document examiners consistently have lower error rates, by a wide margin, than lay persons attempting handwriter identification. Dr. Srihari’s computer experiments, in turn, have shown that multiple features of handwriting regularly used by examiners can be converted to quantitative measurements26 and employed by computers to make highly accurate handwriting comparisons and identification.
While the purpose of those computer simulations, as Srihari admitted, was not to prove the accuracy of handwriting identification by human examiners, they nonetheless support the judgment of the forensic science community that handwriting analysis (in the trial judge’s words) “meets a baseline standard of reliability.”
All told, then, the government’s evidence at the hearing, rebutted only by the testimony of a non-scientist, Professor Denbeaux,27 demonstrated by a preponderance of the evidence that handwriting comparison leading to conclusions of (or against) identification rests on a methodology “sufficiently established to have gained general acceptance in the particular field in which it belongs.” (Ricardo) Jones, 27 A.3d at 1136.

E.
It remains for us to consider, however, appellant’s and amicus’s argument that the 2009 NRC Report,28 published after Judge Kravitz’s ruling, reveals a fundamental re-evaluation by the science community of forensic pattern-matching disciplines such as handwriting analysis, and requires a similar revision of the courts’ traditional liberality in admitting such expert opinion.
At oral argument amicus (who, as stated earlier, carried the laboring oar for appellant on this issue) clarified its position to be that in the wake of the NRC Report, a handwriting expert such as Hector Maldonado should continue to be able to describe the procedure he followed and the “significant” similarities and differences he observed in the writings he compared — testimony amicus concedes can provide assistance to jurors in a matter beyond their ken. But what the Report repudiates, in its view, is permitting a conclusion of identification or match by any forensic specialist (DNA experts excluded) based on pattern-matching techniques too unscientific in their present state to allow such identification. Even as thus qualified, however, we reject appellant’s argument that the NRC Report represents a scientific consensus as to handwriting identification materially different from that established at the evidentiary hearing.

The NRC Report was the end-product of a comprehensive, congressionally-commissioned study of the forensic sciences by a Committee of the National Academy of Sciences, which Congress instructed (among other things) to “assess the present and future resource needs of the forensic science community,” “make recommendations for maximizing the use of forensic technologies and techniques to solve crimes,” and “disseminate best practices and guidelines concerning the collection and analysis of forensic evidence to help ensure quality and consistency in [its] use.” NRC Report at 1-2. The Committee was made up of “members of the forensic science community, members of the legal community, and a diverse group of scientists.” Id. at 2. Its Report numbers 286 pages, excluding appendices. Notably, however, of these many pages only five concern “Questioned Document Examination,” and just four paragraphs discuss handwriting comparison en route to the following “Summary Assessment” (minus footnotes):
The scientific basis for handwriting comparisons needs to be strengthened. Recent studies have increased our understanding of the individuality and consistency of handwriting[,] and computer studies ... suggest that there may be a scientific basis for handwriting comparison, at least in the absence of intentional obfuscation or forgery. Although there has been only limited research to quantify the reliability and replicability of the practices used by trained document examiners, the committee agrees that there may be some value in handwriting analysis.
Id. at 166-67 (footnotes omitted). That assessment, while hardly an unqualified endorsement of “a scientific basis for handwriting comparison,” just as clearly does not spell “public[ ] opposi[tion] by the science community,” Jenkins, supra, to the reliability and hence admissibility of expert handwriting identification.
Appellant and amicus do not appear to argue otherwise. Instead, they argue that the Report taken as a whole amounts to a critique, and repudiation, of the supposed science underlying all forensic analysis based on pattern-matching, except for DNA. See Reply Br. for Amicus at 12 (the Report “concluded that none of the pattern-matching disciplines, including handwriting identification, satisfied [the basic requirements of science]”). They rely especially on the Report’s statement in Summary that, “[w]ith the exception of nuclear DNA analysis, ... no forensic method [of ‘matching’] has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source.” NRC Report at 7.
In our view, however, it exaggerates the measured conclusions and recommendations of the Report to read them as a rejection of the scientific basis for all pattern-matching analysis, including handwriting identification.

The Report is much more nuanced than that. It ranges over a wide variety of forensic science disciplines and identifies weaknesses (and some strengths) of varying degrees in each. Thus, while pointing to the “simple reality ... that the interpretation of forensic evidence is not always based on scientific studies to determine its validity,” it finds “important variations [in terms of validity] among the disciplines relying on expert interpretation [of observed patterns].” Id. at 7-8. At one end of the spectrum (almost by itself) is DNA analysis, but “[t]he goal is not to hold other disciplines to DNA’s high standards,” since “it is unlikely that most other current forensic methods will ever produce evidence as discriminating as DNA.” Id. at 101. Closer to the other end (and discussed under the heading “Questionable or Questioned Science”) may be disciplines such as toolmark or bitemark identification, which “have never been exposed to stringent scientific inquiry” and thus “have yet to establish either the validity of their approach or the accuracy of their conclusions.” Id. at 42, 53. Yet, in virtually no instance — and certainly not as to handwriting analysis, which ultimately is all that concerns us here — does the Report imply that evidence of forensic expert identifications should be excluded from judicial proceedings until the particular methodology has been better validated.29
Rather, the Report states more modestly that “we must limit the risk of having the reliability of certain forensic methodologies judicially certified before the techniques have been properly studied and their accuracy verified,” and that, accordingly, “the least that courts should insist upon from any forensic discipline is certainty that practitioners in the field adhere to enforceable standards, ensuring that any and all scientific testimony or evidence admitted is not only relevant, but reliable.” Id. at 12, 101 (emphasis added). This encapsulates, we believe, the showing the trial judge required of the government in this case.

Appellant argues, however, that the Report takes direct aim at a key aspect of the methodology the FBI employs in handwriting analysis, as described by Harrison. Discussing friction ridge analysis (exemplified by fingerprint comparison), the Report comments on the ACE-V or four-step process of analysis, comparison, evaluation, and verification that document examiners typically follow:
ACE-V provides a broadly stated framework for conducting friction ridge analyses. However, this framework is not specific enough to qualify as a validated method for this type of analysis. ACE-V does not guard against bias; is too broad to ensure repeatability and transparency; and does not guarantee that two analysts following it will obtain the same results. For these reasons, merely following the steps of ACE-V does not imply that one is proceeding in a scientific manner producing reliable results.
Id. at 142. This criticism is unanswerable, we think, if the methodology in question is no more concrete in practice than a four-step sequence. Even to a lay observer, a technique defined only as saying, “first we analyze, then we compare, etc.,” can scarcely lay claim to scientific reliability — to yielding consistently accurate and confirmable results. But, as the trial judge recognized, the FBI’s method of analyzing handwriting goes beyond those four sequential steps: Harrison’s testimony, supported by the studies of Srihari, showed that at each of the four ACE-V steps document examiners descend to the specific by using multiple standard (and published) handwriting characteristics to reach conclusions of or against identification. Although this still leaves considerable room for the examiner’s subjective judgment as to significance, that is very different from saying that the process employed is no more than a skeletal ACE-V set of steps.
In sum, the NRC Report, while it finds “a tremendous need for the forensic science community to improve” generally and identifies flaws in the methodology of “a number of forensic science disciplines,” expressly “avoid[s] being too prescriptive in its recommendations,”30 and as to handwriting comparison in particular it states nothing to imply that identification of authorship by trained examiners in the field is based on no “reliable methodology for making the inquiry.” Ibn-Tamas, 407 A.2d at 638; NRC Report at 12, 14, 18. The Report thus does not supply the scientific consensus opposing forensic handwriting identification that appellant seeks. That is all we need decide here; future challenges under Frye to other forensic pattern-matching disciplines may, and may be expected to, rely on the NRC Report as part of the relevant expression of scientific opinion.31

F.
We therefore uphold Judge Kravitz’s ruling on admissibility. As in all such cases, however, it is important — and is reflected in the preponderance of the evidence standard — that appellant was not denied a second opportunity to challenge FBI examiner Maldonado’s expert opinion, this time before the jury. Rejecting the view of those “overly pessimistic about the capabilities of the jury and of the adversary system generally,” the Supreme Court has reminded us that “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.” Daubert, supra note 4, 509 U.S. at 596, 113 S.Ct. 2786. As the trial judge said in concluding his exemplary analysis here:
I fully expect the defense to conduct a thorough cross-examination that will expose any and all inadequacies and points of unreliability of the ACE-V method as a general matter, as well as the ... inadequacies and points of ... unreliability in the application of that method in this case.
In consultation with its own experts the defense has fully investigated the points of weakness of the FBI laboratory’s approach to handwriting analysis, including its underlying theoretical premises, its general methodology and the methodology as applied in this case. If defense counsel exposes those weaknesses to the jury with the same thoroughness and clarity with which it exposed them at the Frye hearing, then in my view there is no reason for any concern that the jury in this case will give undue weight to the FBI document examiner’s testimony.
The vigorous challenge appellant in fact made to Maldonado’s testimony before the jury bears this out.
III.
Following his arrest a week after the murder, appellant was interviewed by the police and eventually confessed to stealing Martha Byrd’s car, strangling and stabbing her, and writing the murder scene note. On his motion to suppress the statement, and after a lengthy evidentiary hearing, the trial judge found that during the videotaped interview, at a point before appellant had said anything inculpatory, he signaled his desire to remain silent and the police then failed to “scrupulously honor” his assertion of the right. See Michigan v. Mosley, 423 U.S. 96, 103, 96 S.Ct. 321, 46 L.Ed.2d 313 (1975). The judge therefore barred the use of the statement in the government’s case-in-chief, but ruled that it would be admissible to impeach appellant if he testified, because it had been voluntarily made. See United States v. Turner, 761 A.2d 845, 853 (D.C.2000). The judge made detailed findings of why, even though the police had applied some psychological pressure tending to weaken appellant’s volition, in the end he had freely chosen to inculpate himself.32
Because appellant did not testify at trial, the statement was not used against him.

He nonetheless argues that the statement was coerced, and that its permitted use for impeachment had the effect of keeping him off the stand when he otherwise would have testified. The government counters that he waived the issue of voluntariness by not testifying, relying on Luce v. United States, 469 U.S. 38, 41-42, 105 S.Ct. 460, 83 L.Ed.2d 443 (1984), and Bailey v. United States, 699 A.2d 392, 400-02 (D.C.1997). But we do not consider that ground for decision, for two reasons. First, the trial judge’s lengthy findings on voluntariness (made after a thorough hearing and viewing the videotaped confession) are not clearly erroneous and support our independent legal conclusion that appellant confessed of his own free will. Turner, 761 A.2d at 853-55; see Colorado v. Connelly, 479 U.S. 157, 163-64, 107 S.Ct. 515, 93 L.Ed.2d 473 (1986). Second, even if we were less certain of that conclusion, it is inconceivable to us how appellant was harmed by the (hypothetical) admission of the confession to impeach, when he offers no hint of how his testimony might have caused the jury to find reasonable doubt in the face of a DNA match, multiple other forms of forensic identification evidence, and circumstantial proof almost compelling a finding that he broke into the victim’s house, sexually assaulted her, and strangled her to death. See Chapman v. California, 386 U.S. 18, 87 S.Ct. 824, 17 L.Ed.2d 705 (1967).

IV.
Accordingly, the judgments of conviction are affirmed, except that on remand the trial court must vacate those convictions which the parties agree merged into others.
So ordered.
1. To illustrate, Maldonado explained that for the word “You,” both the note and appellant’s known writings showed (1) that the uppercase letter “Y” was made with three strokes; (2) similar space separated the letters “ou” from the vertical staff of the “Y”; (3) the letters “o” and “u” were connected at the middle of their formations; and (4) the finishing stroke of the “u” went to the right, forming a “hook” over the baseline. Maldonado showed the jury several other examples of similarities between the known and questioned writings.
2. By contrast, he stated, if a questioned writing’s lowercase “d” is comprised of an “eyelet” made counterclockwise, and the known writings contain only a lowercase “d” made with a clockwise motion, “that is considered a significant difference.”
3. See, e.g., Keyser v. Pickrell, 4 App.D.C. 198, 208 (1894) (“Notwithstanding the frequent severe criticism of [handwriting] expert testimony, the tendency in modern practice has constantly been towards the extension of the field”; “[t]he objections urged ... may all be classified as affecting the credibility and weight of the evidence rather than its competency, and as such may be left to the consideration of the jury.”); see also Boyer v. United States, 40 A.2d 247, 250 (D.C.1944), rev’d on other grounds, 80 U.S.App.D.C. 202, 150 F.2d 595 (1945) (upholding trial court’s rejection of proposed jury instruction terming handwriting identification testimony as “‘so weak and decrepit as scarcely to deserve a place in our system of jurisprudence,’” noting that “we do not feel that such language is justified today in view of the advance made in the scientific study of handwriting”).
4. Neither party asks us to depart from the Frye test (a course necessitating en banc action in any event) in favor of Daubert v. Merrell Dow Pharm., 509 U.S. 579, 113 S.Ct. 2786, 125 L.Ed.2d 469 (1993), and its progeny construing Fed.R.Evid. 702. Under those decisions, the trial court’s gatekeeper ruling on admissibility of expert opinion evidence is reviewed for abuse of discretion. See Kumho Tire Co., Ltd. v. Carmichael, 526 U.S. 137, 152, 119 S.Ct. 1167, 143 L.Ed.2d 238 (1999).

5. ASTM has 30,000 members, and within the membership are several committees, including a committee on forensic sciences and a subcommittee on forensic document examination. ASTM members, who include not just document examiners but also other forensic scientists, academics, and lawyers, vote on the adoption or revision of professional standards.
6. Professional journals in which forensic document examiners publish include Forensic Science Communication, the Journal of Forensic Sciences, the Canadian Society of Forensic Science Journal, and the Journal of the American Society of Questioned Document Examiners.
7. If the variations observed in one writing are not seen in the other, Harrison said, this “may render an inconclusive ... or less than definitive result.”

8. “[A]re two letters in a word sitting right on top of each other or are they spaced evenly apart or is there more space between these two letters in the word than [in] the rest of the letters in the word?”

9. “[F]or example, ... you could have a handwritten N that has one stroke going straight down, then almost a U stroke” — “a two-stroke N” — in contrast to a “three-stroke N” with “strokes ... down, diagonal, and up.”

10. “[I]t’s the combination of characteristics. You could have a five-page handwritten letter that you wouldn’t be able to identify, if it’s all copybook style and it doesn’t stray from the system....”

11. “Q.... [s]o it’s totally up to the document examiner to determine whether or not ... the [observed] characteristic is an identifying characteristic and ... whether there’s a sufficient number of those characteristics to call it a match, right?
A. Correct.”
12. Moshe Kam et al., Proficiency of Professional Document Examiners in Writer Identification, 39 J. Forensic Sci. 5 (1994) (Kam I).
13. Moshe Kam et al., Writer Identification by Professional Document Examiners, 42 J. Forensic Sci. 778 (1997) (Kam II).
14. In response to criticism of Kam II, a third study was conducted based on varying the rates of compensation for participating laymen. See Moshe Kam et al., Effects of Monetary Incentives on Performance of Nonprofessionals in Document-Examination Proficiency Tests, 43 J. Forensic Sci. 1000 (1998) (Kam III). Kam III found no statistically different results in the laymen’s error rates when different monetary penalty or reward incentives were used.

15. Moshe Kam et al., Signature Authentication by Forensic Document Examiners, 46 J. Forensic Sci. 884 (2001) (Kam IV). Finally, in “Kam V,” Writer Identification Using Hand-Printed and Non-Hand-Printed Questioned Documents, 48 J. Forensic Sci. 1391 (2003), Kam reevaluated the results from Kam II to focus on error rates for hand-printed (versus cursive) samples and found that, for hand-printed writings, document examiners had a false positive error rate of 9.3 percent compared to a 40.45 percent error rate for laymen.
16. The government further cited an Australian study using “a signature comparison task” which found that forensic document examiners had a substantially lower rate of false positive identifications (3.4 percent) than laymen (19.3 percent). J. Sita et al., Forensic Handwriting Examiners’ Expertise for Signature Comparison, 47 J. Forensic Sci. 1117 (2002).
17. Sargur N. Srihari et al., Individuality of Handwriting, 47 J. Forensic Sci. 1 (2002). In Dr. Srihari’s view at the time the study was published, the hypothesis of uniqueness had “not been subjected to rigorous scrutiny with the accompanying experimentation, testing, and peer review.”
18. For this Srihari cited Roy Huber and A.M. Headrick, Handwriting Identification: Facts and Fundamentals (1999). Dr. Srihari’s study listed these as:
arrangement; class of allograph; connections; design of allographs (alphabets) and their construction; dimensions (vertical and horizontal); slant or slope; spacings, intraword and interword; abbreviations; baseline alignment; initial and terminal strokes; punctuation (presence, style, and location); embellishments; legibility or writing quality; line continuity; line quality; pen control; writing movement (arched, angular, interminable); natural variations or consistency; persistency; lateral expansion; and word proportions.
19. The computer was evaluated for accuracy in performing two tasks with the writing samples: (1) identifying a writer from among a possible set of writers, which it did with 98 percent accuracy (the “identification” method); and (2) determining whether two documents were written by the same writer, which the computer did with 96 percent accuracy (the “verification” method). For both models, the more handwriting that was compared, the higher was the rate of resulting accuracy in making a match.
20. Dr. Srihari later conducted another computer study examining handwriting of twins and non-twins. See Sargur N. Srihari et al., On the Discriminability of the Handwriting of Twins, 53 J. Forensic Sci. 1 (2008). In this study, writing samples were obtained from 206 pairs of fraternal or identical twins as well as from non-twins. Using the verification method — in which two writings are compared to determine if they are from the same writer — the computer had an identification accuracy rate of 87 percent when comparing twins’ handwriting, and an accuracy rate of 96 percent for non-twins. The computer’s combined error rate was nine percent. Three professional document examiner trainees and nine laymen also participated in the study. The trainee-examiners outperformed the computer, with an average error rate of less than five percent. The laymen’s average error rate, 16.5 percent, was higher than that of the computer or the trainee-examiners.
21. When the judge asked Denbeaux on voir dire, “Why should I not view you as just some lawyer who knows about all this stuff,” Denbeaux replied: “If people know enough about some stuff they’re experts.... [O]ne of my premises is that you can become an expert by training, skill, experience, knowledge or education.”
22. Denbeaux had no formal training (and had taken no classes) in handwriting comparison, had no experience in research methodology, was not a statistician, and had not trained in computer science; and he was not a member of ASTM International, which accepts some attorneys as members and which he described as a “highly respected” organization.
23. See D. Michael Risinger, Mark P. Denbeaux & Michael J. Saks, Exorcism of Ignorance as a Proxy for Rational Knowledge: The Lessons of Handwriting Identification “Expertise,” 137 U. Pa. L.Rev. 731 (1989).
24. Harrison had testified that there is no known error rate for the methodology because “it’s very difficult to separate the methodology error rate” from the “error rate of the particular examiner.”
25. As Judge Kravitz found, even a survey of professional document examiners relied on by Professor Denbeaux at the hearing revealed that they strongly accept the principle that no two people write exactly alike.
26. Srihari’s 2002 study defined the programmable “features” as “quantitative measurements that can be obtained from a handwriting sample in order to obtain a meaningful characterization of the writing style.”
27. Appellant claims the trial judge erred by excluding Denbeaux, an evidence professor, from the relevant scientific community for Frye purposes. Judge Kravitz recognized that that community “is certainly not limited to forensic scientists,” but believed it was “also pretty clear that it doesn’t extend as far as law professors or other people who have simply studied an issue but don’t have the relevant scientific background and training” in the field of expertise. Whether the judge excluded Denbeaux from those able to contribute to the Frye discussion or instead — as we think more likely the case, since he qualified Denbeaux as an expert and heard his testimony at length — gave his opinions greatly reduced weight, is of no consequence to the conclusion we reach after independently reviewing the evidence.
28. National Research Council, Committee on Identifying the Needs of the Forensic Science Community, Strengthening Forensic Science in the United States: A Path Forward (2009) (NRC Report).
29. Amicus finds it unremarkable that a critique mainly by scientists of the supposed science of pattern-matching expresses no opinion on legal admissibility. But the Report includes a lengthy discussion of the legal standards of admissibility under Frye and especially Daubert, pointedly criticizing some post-Daubert decisions of federal appellate courts for lax treatment of forensic science admissibility. Thus, the Report’s agnosticism, at worst, on the admissibility of handwriting evidence in court is significant, in our view. Cf. United States v. Rose, 672 F.Supp.2d 723, 725 (D.Md.2009) (pointing out that “the Report itself did not conclude that fingerprint evidence was unreliable such as to render it inadmissible under Fed. R. Ev. 702”).
30. Among those recommendations was congressional funding to create an independent federal entity, the National Institute of Forensic Science, which in turn “should competitively fund peer-reviewed research into” (among other things) “the scientific bases demonstrating the validity of forensic methods.” Id. at 22-23.
31. In (Ricardo) Jones, supra, we held that the NRC Report, although “not properly before us,” provided no reason for rejecting the admissibility of firearms identification evidence. See 27 A.3d at 1137 & n. 7. Like this case, Jones dealt with the admissibility of one class of forensic science evidence.
32. The judge found, among other things, that “at no time was there even any hint of physical coercion” nor yelling or “rais[ing] their voices” by the police; they gave appellant sodas and allowed him to smoke; he received “several breaks” to use the bathroom and was allowed to meet with his grandmother; “despite his relative youth [appellant] had extensive experience with the criminal and juvenile justice systems,” and “clearly understood his [initially given] Miranda rights”; and during his “back and forth” discussions with the police he “never appeared to be intimidated by the police” but rather seemed “fully in control of his emotions and ... [to be] making rational decisions.”
Document Info
Docket Number: No. 08-CF-1361
Citation Numbers: 37 A.3d 213, 2012 D.C. App. LEXIS 22, 2012 WL 399997
Judges: Farrell, Thompson, Washington
Filed Date: 2/9/2012
Precedential Status: Precedential