Science Applications International Corp. v. United States (2021)


In the United States Court of Federal Claims
SCIENCE APPLICATIONS
INTERNATIONAL CORP.,
Plaintiff,
v.
THE UNITED STATES,
Defendant,
and
MICROSOFT CORPORATION,
Intervenor-Defendant,
and
L3 TECHNOLOGIES, INC.,
Third-Party Defendant.

No. 17-cv-825
Filed: August 6, 2021
Stephen R. Smith, Cooley LLP, Washington, D.C., for Plaintiff. With him on the brief are DeAnna
D. Allen, Erin M. Estevez, Stephen C. Crenshaw, and James P. Hughes, Cooley LLP, Washington,
D.C.; Douglas P. Lobel, Cooley LLP, Reston, Virginia; and Goutam Patnaik and Gwendolyn
Tawresey, Pepper Hamilton LLP, Washington, D.C.
Alex Hanna, United States Department of Justice, Civil Division, Washington, D.C., for
    Defendant. With him on the briefs are Joseph H. Hunt, Assistant Attorney General, Civil
    Division, Gary Hausken, Director, Commercial Litigation Branch, Civil Division, and Scott
    Bolden, United States Department of Justice, Civil Division, Washington, D.C.
    Thomas Lee Halkowski, Fish and Richardson P.C., Washington, D.C., for Intervenor-Defendant.
    With him on briefs are Ahmed J. Davis and Kenton W. Freeman, Jr., Fish and Richardson P.C.,
    Washington, D.C.; and Proshanto Mukherji, Fish and Richardson P.C., Boston, MA.
    William Carl Bergmann, Baker & Hostetler, LLP, Washington, D.C., for Third-Party Defendant.
    With him on briefs are Michael Anderson and Cassandra Simmons, Baker & Hostetler, LLP,
    Washington, D.C.
    MEMORANDUM AND ORDER
This patent case involves complex subject matter and important, cutting-edge technology, and
relates to several high-value contracts with the United States totaling billions of dollars. See, e.g.,
July 26, 2021 Hearing Transcript (ECF No. 191) at 8:7-17 (referencing $21.9 billion contract
    between the Government and Microsoft). 1 In 2017, Plaintiff Science Applications International
    Corp. (SAIC), filed its complaint alleging “that the United States infringed the SERVAL Patents
    by entering into contracts with Plaintiff's competitors for the manufacture and subsequent use of
    night vision goggle weapon systems with specialized heads up displays that allegedly use
Plaintiff's patented technology.” Sci. Applications Int'l Corp. v. United States, 148 Fed. Cl. 268,
269 (2020); Complaint (ECF No. 1) (Compl.) at ¶¶ 2, 37.
    The patents at issue are U.S. Patent Nos. 7,787,012 (the ’012 patent), 8,817,103 (the ’103
    patent), 9,229,230 (the ’230 patent), and 9,618,752 (the ’752 patent) (collectively, the Soldier
    Enhanced Rapid Engagement and Vision in Ambient Lighting or SERVAL 2 patents). See Compl.
    ¶¶ 1, 3. 3 Collectively, these patents include 91 total claims. See ’012 patent at 9:62-12:3; ’103
1 See also SAIC’s June 1, 2021 Letter: Motion to Compel Government to Produce Documents
    under Court of Fed. Cl. R. 26 and 34 (Public Version) (ECF No. 185) at 2 (referencing a March
    25, 2021 agreement between the Government and Microsoft under which the Government agreed
    to pay $21.9 billion for IVAS HUD devices); Plaintiff Science Applications International Corp.’s
    Claim Construction Brief (ECF No. 90) at 5-6 (referencing “contract awards to BAE Systems and
    DRS Technologies for not to exceed values of $444.8 million and $367 million, respectively,” “a
    May 16, 2018 contract award to L3 Technologies, Inc. worth up to nearly $400 million, and a
    November 20, 2018 other transaction agreement with Microsoft worth up to nearly $500
    million.”); Defendant’s Objections and Second Supplemental Responses to Plaintiff’s Second Set
    of Interrogatories (7-9) (ECF No. 179-2) (SEALED) at 17-29.
2 A serval is “a sub-Saharan African cat that is known to have the best kill rate in the wild cat
    family, including when hunting at night.” Pl.’s Technology Tutorial Presentation at 29.
3 The patents are available as attachments to SAIC’s Opening Claim Construction Brief.
    Specifically, the ’012 patent is Exhibit A (ECF No. 90-4); the ’103 patent is Exhibit B (ECF No.
    90-5); the ’230 patent is Exhibit C (ECF No. 90-6); and the ’752 patent is Exhibit D (ECF No.
    90-7).
    patent at 10:6-59; ’230 patent at 24:24-30:42; ’752 patent at 25:1-28:57. The parties seek
construction on six terms found throughout each of the four asserted patents. This claim
    construction Memorandum and Order construes the disputed terms.
    BACKGROUND
    I. Technology Overview
Night vision goggles (NVGs) passively amplify minuscule amounts of ambient light, such
    as starlight, and enable a soldier to see obscured targets in the dark. ’012 patent at 1:34-36.
    However, the process of acquiring and striking a target while using these goggles can be
    cumbersome. Id. at 1:32-33. When a soldier located a target, the soldier was forced to either (1)
    flip the goggles out of the way and reacquire the target with the sight on his weapon or (2) engage
    a laser illuminator on his weapon that followed the weapon’s line of sight and thus indicated where
    the bullet would strike. Id. at 1:36-48. Both options came with their own drawbacks. If the soldier
    employed the first option, the soldier would lose valuable seconds removing his goggles and be
    forced to acquire the target using the weapon’s narrower field of vision, which may be virtually
    impossible with a distant or moving target. Id. at 1:39-43. If the soldier employed the second
    option, the illuminator may have the unintended effect of giving away the soldier’s position
    because the laser illuminator may be just as visible to an enemy as it is to the soldier. Id. at
    1:44-1:57.
    To alleviate these issues associated with acquiring targets using NVGs, U.S. military
    planners set out to develop technology that would combine images from a weapon sight and the
    NVG’s visual field. See ’012 patent at 1:59-2:27. According to the SERVAL patents, the prior
    art combined images from the field of view and the weapon sight but did not “register” the images.
    Id. at 1:65-2:3. Instead, the patents assert that, under the prior art, the combined images were often
    “distinctly offset” with two distinct images of the same target appearing in different places. Id. at
    2:13-2:15. This offset could confuse and disorient the soldier because the soldier could have
    difficulty determining the location of his weapon sight in relation to his night vision goggle field
    of view. Id. at 2:15-2:18.
    To resolve these issues found in the prior art, Retired Brigadier General John Scales led a
    team of SAIC engineers in developing the technologies claimed in the four asserted patents, which
    form two interrelated patent families. See Compl. ¶ 1; Sci. Applications Int’l Corp., 135 Fed. Cl.
    at 664. Each patent family is described below in turn.
    1. First Patent Family (’012 and ’103 Patents)
    The First Patent Family consists of the ’012 and ’103 patents, which share a common
    specification. 4 The ’012 patent is entitled “System and Method for Video Image Registration in a
    Heads Up Display.” See generally ’012 patent. The ’012 patent was issued on August 31, 2010
    from an application filed on December 2, 2004 that did not claim priority to any earlier-filed
    application. Id. On its face, the ’012 patent identifies two inventors, John Richard Scales and
    Mark David Hose, and an assignee, Science Applications International Corporation of San Diego,
    California. Id. The ’012 patent issued with nineteen total claims. See ’012 patent at 9:62-12:3.
    Method claims 1 and 17 are the only independent claims. Id. at 9:63-12:3. Claim 1 of the ’012
    patent recites:
    1. A method of registering video images with an underlying visual field comprising
    the steps of:
    (1) determining a source orientation of a video source providing a video
    feed containing data for a series of video images representing portions of a
    visual field;
4 Because the ’012 and ’103 patents share the same specification, this Memorandum and Order
    cites to only the ’012 patent when referring to the specification for the First Patent Family.
    (2) determining a display orientation of a transparent display overlaying the
    visual field, wherein the video source and the transparent display are
    independently movable about multiple axes; and
    (3) displaying the video images in positions on the transparent display that
    overlay portions of the visual field represented by the displayed video
    images, wherein boundaries of the displayed video images are in
    registration with boundaries of portions of the visual field represented by
    the displayed video images.
    Id. at 9:63-10:10.
    Claim 17 of the ’012 patent recites:
    17. A method of registering video images with an underlying visual field
    comprising the steps of:
    (1) determining a source orientation of a video source of a video feed;
    (2) determining a display orientation of a transparent display overlaying
    the visual field;
    (3) displaying a portion of the video feed in the transparent display;
    (4) registering the portion of the video feed with the underlying visual
    field; and
    (5) repositioning the portion of the video feed within the transparent display
    when the video source or transparent display moves.
Id. at 10:48-60. Claims 2-16 and 18-19 of the ’012 patent depend ultimately on claim 1. Id. at
9:63-12:3.
    Like the ’012 patent, the ’103 patent is also entitled “System and Method for Video Image
    Registration in a Heads Up Display.” See generally ’103 patent. The ’103 patent was issued on
    August 26, 2014 from a divisional application filed on July 26, 2010 that claims priority to the
    application filed on December 2, 2004, which issued as the ’012 patent. Id. The ’103 patent shares
    the same specification with the ’012 patent. Id. On its face, the ’103 patent identifies the same
    inventors and assignee as the ’012 patent. Id. The ’103 patent issued with twelve total claims;
    system claim 1, reproduced below, is the only independent claim. Id. at 10:7-59.
    Claim 1 of the ’103 patent recites:
    1. A system comprising:
    a video camera adapted to provide, in a video feed, data for
    a series of video images representing portions of a visual field;
    a first orientation sensor adapted to detect an orientation of
    the video camera;
    a heads up display (HUD) adapted for viewing of the visual
    field by a user of the system wherein the HUD comprises
    a transparent display, and wherein the HUD and the
    video camera are independently movable about multiple
    axes;
    a second orientation sensor adapted to detect an orientation
    of the HUD; and
    a computer adapted to receive sensor data from the first
    and second orientation sensors, to receive the video feed
    from the video camera, and to display the video images,
    on the transpar[e]nt display and based on the received
    sensor data, in positions that overlay portions of the
    visual field represented by the displayed video images
    wherein boundaries of the displayed video images are in
    registration with boundaries of portions of the visual
    field represented by the displayed video images,
    and wherein the computer is adapted to
    determine a source orientation of the video camera, and
    determine a display orientation of the transparent display.
    Id. at 10:7-31.
    SAIC’s asserted claim 1 of the ’012 patent and asserted claim 1 of the ’103 patent recite
nearly identical steps, though the former is a method claim and the latter is a system claim. The
First Patent Family summarizes its purported invention as “a method for aligning video images with an
    underlying visual field” by performing various steps. ’012 patent at 2:31-37. Those steps are
    “determining a source orientation of a video source, determining a display orientation of a
    transparent display overlaying the visual field, and displaying video images in the transparent
    display,” where the position of the video “images is based on the source orientation and the display
    orientation.” Id. at 2:33-37. In other words, “[a] video camera is coupled with a heads up display,
    and a computer positions images from the video camera on the heads up display based on the
    relative orientations of the camera and the display.” Id. at Abstract. “The video image, which
    may, for example, come from a weapon sight, is aligned within the heads up display. . . .” Id.
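The orientation-based placement that the First Patent Family describes can be sketched in Python. This is an illustration only, not the patented implementation; the function name, the (pitch, yaw) representation, the pixels-per-degree scale, and the display dimensions are all hypothetical assumptions chosen for the example.

```python
# Illustrative sketch (hypothetical, not drawn from the patents or the record):
# deriving a display position for a video image from the relative orientations
# of a weapon-mounted video source and a head-mounted transparent display.

def place_video_image(source_orientation, display_orientation,
                      pixels_per_degree=10.0, display_center=(320, 240)):
    """Return an (x, y) pixel position for the video image on the display.

    Each orientation is (pitch, yaw) in degrees, as an orientation sensor
    might report it.
    """
    src_pitch, src_yaw = source_orientation
    dsp_pitch, dsp_yaw = display_orientation

    # Angular offset between the two lines of sight, in degrees.
    delta_yaw = src_yaw - dsp_yaw        # horizontal offset
    delta_pitch = src_pitch - dsp_pitch  # vertical offset

    # Convert angular offsets to pixel offsets from the display center.
    x = display_center[0] + delta_yaw * pixels_per_degree
    y = display_center[1] - delta_pitch * pixels_per_degree  # screen y grows downward
    return (x, y)

# When the video source points exactly where the display points, the
# image sits at the display center.
print(place_video_image((0.0, 0.0), (0.0, 0.0)))  # (320.0, 240.0)
```

Because placement depends only on the two orientations, the image repositions automatically as either the source or the display moves, which mirrors the repositioning step recited in claim 17.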
    Figure 5, reproduced below, which appears on the front of the ’012 and ’103 patents, illustrates
    the described configuration. ’012 patent at Fig. 5.
    The ’012 and ’103 patents admit that prior art methods and systems, including prior art
    night vision goggles (such as “Sensor Technology Systems’ Model 2733 Low Profile Night Vision
    Goggle”), already “have the ability to port a video feed into a beam combiner, overlaying a video
    image from a video source mounted in the weapon sight onto the center of the visual field of the
goggle.” ’012 patent at 1:65-2:3. Figure 1 of the ’012 and ’103 patents illustrates an example of
    the combined image generated by the prior art.
    In Figure 1, the ’012 and ’103 patents describe the problem with the prior art solution as
    “the video feed 102 remains stationary in the center of the visual field 101, obscuring content in
    the center of the visual field . . . .” ’012 patent at 2:10-13. According to the patents, the combined
    images in Figure 1,
    depict the same subjects, a group of soldiers accompanying an armored personnel
    carrier (APC). However, the video feed 102 remains stationary in the center of the
    visual field 101, obscuring content in the center of the visual field, in this case the
    APC and a soldier. The two images are distinctly offset, with the two soldiers to
    the right of the APC being repeated in both images. This offset, with two distinct
    images of the same target appearing in different places in the field of view could
    confuse the soldier, causing a delay in engagement or a miss.
    ’012 patent at 2:9-18.
    Under the First Patent Family, images from the weapon sight’s video source and night
    vision goggles are “registered” using the relative orientations of the weapon’s sight and the night
    vision goggles. Id. at Abstract, 2:4-7. Figure 4 illustrates the combined image using the invention.
    In Figure 4, the visual field 400 is “the view through a soldier’s night vision goggles or
    other (clear) goggles [that] is enhanced with the addition of a portion of the weapon sight video
    feed 401 through the use of a heads up display (HUD).” ’012 patent at 3:57-61. “[T]he video feed
    401 has been positioned over the portion of the visual field 400 based on the direction the video
    source is pointed.” Id. at 3:64-66. “As the weapon moves, the video feed 401 is dynamically
    positioned within the visual field 400.” Id. at 3:67-4:1. A side-by-side comparison of Figures 1
    and 4 illustrates the advancement over the prior art.
    The ’012 and ’103 patents provide a single high-level flowchart, in Figure 8, which
    “demonstrates an illustrative embodiment of a method for registering a video image with an
    underlying visual field.” ’012 patent at 6:25-27.
    In Figure 8, the relative orientation of the display and video source is used to determine the
    placement (e.g., X and Y position and rotation) of a video frame containing image data on the
    display. Id. at 6:25-27, 6:38-7:12. Specifically, Figure 8 explains that the invention registers the
    image by: (1) calculating pitch and yaw delta values (804), (2) determining frame location (805),
    (3) calculating the roll delta value (806), (4) determining frame rotation (807), and (5) cropping,
    resizing, and enhancing the frame (808). Next, “the processed video frame . . . may be displayed
in a heads up display, as in step 809.” Id. at 7:16-17. As step 810 illustrates, “if another frame of
video is set to be received (i.e., the display is still on), then the process repeats for each new frame,
returning to step 802.” Id. at 7:30-33.
    2. Second Patent Family (’230 and ’752 Patents)
    The Second Patent Family includes two patents that also share a common specification. 5
    The ’230 patent is entitled “System and Method for Video Image Registration and/or Providing
    Supplemental Data in a Heads Up Display.” See generally ’230 patent. The ’230 patent was
    issued on January 5, 2016 from an application filed on February 28, 2007 that does not claim
    priority to any earlier-filed application. Id. On its face, the ’230 patent identifies two inventors,
    John Richard Scales and Michael Harris Rodgers, and an assignee, Science Applications
    International Corporation of McLean, VA. Id. The ’230 patent issued with forty-two total claims;
    claims 1, 15, and 29 are the only independent claims. Id. at 24:24-30:42.
Claim 1 of the ’230 patent recites:
    1. A system, comprising:
    a first video source configured to generate images repre-
    senting portions of an external environment;
    a second video source, movable independent of the first
    video source, configured to generate images represent-
    ing portions of the external environment;
    a video display; and
    a controller coupled to the first and second video sources
    and to the display, wherein the controller is configured to
    (a) receive video images from the first video source and
    from the second video source,
    (b) receive motion data indicative of motion of the first
    and second video sources,
    (c) identify, based on the received motion data, a part of
    a first video source image that potentially represents a
    portion of the external environment represented in a
    part of a second video source image;
    (d) evaluate, based on a comparison of data from the first
    and second video source images, the identification
    performed in operation (c); and
5 Because the ’230 and ’752 patents share the same specification, this Memorandum and Order
    cites to only the ’230 patent when referring to the specification for the Second Patent Family.
    (e) display at least a portion of the first video source
    image and at least a portion of the second video source
    image such that the second video source image por-
    tion overlays a corresponding region of the first video
    source image portion, wherein the corresponding
    region represents a portion of the external environ-
    ment represented in the second video source portion.
    Id. at 24:25-51.
    Claims 2-14 of the ’230 patent depend ultimately on claim 1. Id. at 24:51-26:26.
    Similarly, claim 15 of the ’230 patent mirrors claim 1, except that it is a method claim
    rather than a system claim, and it recites:
    15. A method, comprising:
    (a) receiving video images from a first video source and
    from a second video source representing portions of an
    external environment;
    (b) receiving motion data indicative of motion of the first
    and second video sources;
    (c) identifying, based on the received motion data, a part of
    a first video source image that potentially represents a
    portion of the external environment represented in a part
    of a second video source image;
    (d) evaluating, based on a comparison of data from the first
    and second video source images, the identification per-
    formed in step (c); and
    (e) displaying at least a portion of the first video source
    image and at least a portion of the second video source
    image such that the second video source image portion
    overlays a corresponding region of the first video source
    image portion, wherein the corresponding region repre-
    sents a portion of the external environment represented
    in the second video source portion.
    Id. at 26:27-47.
    Claims 16-28 of the ’230 patent depend ultimately on claim 15. Id. at 26:48-28:15.
    Claim 29 of the ’230 patent recites the same steps as claims 1 and 15, except it involves a
    “non-transitory machine-readable medium having machine-executable instructions for performing
    a method.” It recites in full:
    29. A non-transitory machine-readable medium having
    machine-executable instructions for performing a method,
    comprising:
    (a) receiving video images from a first video source
    and from a second video source representing portions of an
    external environment;
    (b) receiving motion data indicative of motion of the first
    and second video sources;
    (c) identifying, based on the received motion data, a part of
    a first video source image that potentially represents a
    portion of the external environment represented in a part
    of a second video source image;
    (d) evaluating, based on a comparison of data from the first
    and second video source images, the identification per-
    formed in step (c); and
    (e) displaying at least a portion of the first video source
    image and at least a portion of the second video source
    image such that the second video source image portion
    overlays a corresponding region of the first video source
    image portion, wherein the corresponding region repre-
    sents a portion of the external environment represented
    in the second video source portion.
    Id. at 28:17-38.
    Claims 30-42 of the ’230 patent depend ultimately on claim 29. Id. at 28:39-30:42.
    Like the ’230 patent, the ’752 patent is also entitled “System and Method for Video Image
    Registration and/or Providing Supplemental Data in a Heads Up Display.” See generally ’752
    patent. The ’752 patent was issued on April 11, 2017 from a continuation application filed on
    November 24, 2015 that claims priority to the application filed on February 28, 2007, which issued
    as the ’230 patent. Id. On its face, the ’752 patent identifies the same inventors and assignee as
    the ’230 patent. Id. The ’752 patent issued with eighteen total claims; claims 1, 7, and 13, all
    reproduced below, are the only independent claims. Id. at 25:2-28:58. Claim 1 of the ’752 patent
    is a system claim and recites in full:
    1. A system, comprising:
    a first video source configured to generate video data
of images representing portions of an external environment within a field of view of the first video source;
    a second video source configured to generate second
    video data of images representing portions of the
    external environment within a field of view of the
    second video source;
    a video display; and
    a controller in communication with the first and second
    video sources and the display and configured to per-
    form operations that include
    receiving the first video data, the second video data,
    first motion data corresponding to the first video
    source, and second motion data corresponding to the
    second video source,
    identifying, based on the received first motion data and
    the received second motion data, a region of a first
    image generable from the first video data for com-
    parison with a region of a second image generable
    from the second video data,
    comparing data corresponding to the identified region
    of the first image and data corresponding to the
    region of the second image,
    selecting, based on the comparing, a part of the first
    image and a part of the second image that represent
    the same portion of the external environment,
    displaying at least a portion of the first image and the
    selected part of the second image such that the
    selected part of the second image replaces the
    selected part of the first image and is in registration
    with regions of the first image surrounding the
    selected part of the first image.
    Id. at 25:2-34. Claims 2-6 of the ’752 patent depend ultimately on claim 1. Id. at 25:35-26:19.
    Claim 7 of the ’752 patent is a method claim and recites:
    7. A method comprising:
    receiving first video data of images representing portions
    of an external environment within a field of view of a
    first video source;
    receiving second video data of images representing por-
    tions of the external environment within a field of view
    of a second video source;
    receiving first motion data corresponding to the first video
    source and second motion data corresponding to the
    second video source;
    identifying, based on the received first motion data and
    the received second motion data, a region of a first
    image generable from the first video data for compar-
    ison with a region of a second image generable from the
    second video data;
    comparing data corresponding to the identified region of
    the first image and data corresponding to the region of
    the second image;
    selecting, based on the comparing, a part of the first image
    and a part of the second image that represent a same
    portion of the external environment; and
    displaying at least a portion of the first image and the
    selected part of the second image such that the selected
    part of the second image replaces the selected part of
    the first image and is in registration with regions of the
    first image surrounding the selected part of the first
    image.
    Id. at 26:19-26:45. Claims 8-12 of the ’752 patent depend ultimately on claim 7. Id. at
    26:48-27:30.
    Claim 13 of the ’752 patent involves a “non-transitory machine-readable medium
    having machine executable instructions for performing a method” and recites in full:
    13. A non-transitory machine-readable medium having
    machine executable instructions for performing a method
    comprising:
    receiving first video data of images representing portions
    of an external environment within a field of view of a
    first video source;
    receiving second video data of images representing por-
    tions of the external environment within a field of view
    of a second video source;
    receiving first motion data corresponding to the first video
    source and second motion data corresponding to the
    second video source;
    identifying, based on the received first motion data
    and the received second motion data, a region of a first
    image generable from the first video data for compari-
    son with a region of a second image generable from the
    second video data;
    comparing data corresponding to the identified region of
    the first image and data corresponding to the region of
    the second image;
    selecting, based on the comparing, a part of the first image
    and a part of the second image that represent a same
    portion of the external environment; and
    displaying at least a portion of the first image and the
    selected part of the second image such that the selected
    part of the second image replaces the selected part of
    the first image and is in registration with regions of the
    first image surrounding the selected part of the first
    image.
    Id. at 27:31-59.
The ’230 and ’752 patents summarize their purported invention as follows: “a computer receives
    images from two video sources[, where] [e]ach of those two video sources is movable independent
    of the other and generates images that represent a portion of an external environment within its
    field of view.” ’230 patent at 1:58-62. “Sensors coupled to the two video sources provide data to
    the computer that indicates the spatial orientations of those sources.” Id. at 1:64-66. “Using the
    sensor data, the computer determines a location for placing a video image (or a portion thereof)
    from a second of the sources (e.g., a rifle-mounted source) in the video image from a first of the
    sources (e.g., a goggles-mounted source).” Id. at 1:66-2:3. After a location is determined from
    the sensor data, “the two images are displayed such that the second source image (or a portion of
    that image) overlays a corresponding portion of the first source image.” Id. at 2:11-14.
    The Second Patent Family incorporates by reference the ’012 patent. Id. at 1:19-22. The
    ’230 and ’752 patents admit that prior art methods and systems include the ’012 patent. Id. at
    1:17-34. An important difference between the ’012 patent family and the later ’230 patent family
    is the manner in which the images from the weapon sight (first video source) and the night vision
    goggles (second video source) are aligned. Markman Hearing Tr. (ECF No. 159) at 23:20-25:7.
    The ’012 patent family claims a system in which the images from two different sources are aligned
    using only orientation data. See, e.g., ’012 patent, claims 1, 17; ’103 patent, claim 1. By the time
    the ’230 patent was filed three years later, however, the named inventors had realized that use of
    orientation data alone may pose problems. ’230 patent at 1:35-43 (identifying disadvantages of a
system using only sensor data to match images). “For example, many low-cost [inertial
    measurement unit (IMU)] sensors experience bias drift over time” that “can result in relative
    orientation errors of several degrees per hour.” ’230 patent at 1:38-41. These errors require the
    user to periodically recalibrate the IMU sensors, and thus “can disrupt system operation.” Id. at
    1:42-44. Thus, the later filed ’230 patent describes an improved, two-step alignment method. The
    ’230 patent family first uses data from motion sensors to help align images from two different
    sources. Id. at 2:6-17. The ’230 patent then performs a second step of comparing the content of
    the images themselves to evaluate whether the alignment is correct and to adjust the alignment as
    necessary. Id. at 2:6-17, Abstract (“The sensor-based location is checked (and possibly adjusted)
    based on a comparison of the images.”). The purported invention of the ’230 and ’752 patents
    apparently minimizes the need for such manually initiated recalibration. Id. at 1:44-45, 2:14-16.
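The two-step alignment described above can be sketched in Python. This is an illustration only, not the claimed implementation: the function name is hypothetical, and the "comparison of data" is reduced to a toy sum-of-absolute-differences search over small offsets around the sensor-based estimate.

```python
# Illustrative sketch (hypothetical) of the ’230 family's two-step alignment:
# a sensor-based placement followed by a content-based check and adjustment.

def align(first_image, second_image, sensor_estimate, search=2):
    """Return the placement (row, col) of second_image inside first_image.

    Step 1: start from sensor_estimate, derived from motion data.
    Step 2: test nearby placements and keep the one whose pixels best match,
    correcting for sensor error without manual recalibration.
    """
    h = len(second_image)
    w = len(second_image[0])

    def mismatch(r0, c0):
        # Sum of absolute pixel differences over the overlaid region.
        return sum(
            abs(first_image[r0 + r][c0 + c] - second_image[r][c])
            for r in range(h) for c in range(w)
        )

    best = sensor_estimate
    best_score = mismatch(*sensor_estimate)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r0 = sensor_estimate[0] + dr
            c0 = sensor_estimate[1] + dc
            if 0 <= r0 <= len(first_image) - h and 0 <= c0 <= len(first_image[0]) - w:
                score = mismatch(r0, c0)
                if score < best_score:
                    best, best_score = (r0, c0), score
    return best
```

The sensor estimate does most of the work by narrowing the search to a small neighborhood; the image comparison then catches the residual error that accumulating sensor drift would otherwise introduce.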
    Figure 4 of the ’230 patent is an example of a displayed image resulting from this two-step
    alignment process.
To achieve the image in Figure 4, a computer receives video images from the weapon- and
goggles-mounted sources and inertial data from the sensors. Id. at 6:54-60. The computer
calculates a location for an image from the weapon-mounted source within an image from the
goggles-mounted source using the inertial sensor data. Id. at 7:19-22. The sensor-based location
    is checked (and possibly adjusted) based on a comparison of the images. Id. at Abstract. “Figure
    4 shows an example of a user display 70 provided by the goggles 11.” Id. at 6:54-55. “Located
    within the goggles[’] [field of view] (and thus in [the] goggles image 82) are numerous trees and
    bushes . . . as well as soldiers 71 (partially behind a tree in the lower left) and 72 (partially covered
    by foliage in the upper right).” Id. at 6:62-67. “The [heads up display] portion of [the] user display
    70 is shown as a rectangular region 73 in the center portion of the goggles[’] [field of view].” Id.
    at 7:3-5. “[O]verlaid on [the heads up display] 73 is a weapon view 74 corresponding to (and
    generated from) the scope image.” Id. at 7:6-8. “[T]he location and rotation of [the] weapon view
    74 within [the] user display 70 is determined by [the] computer 30 based on output from [the]
    sensors 13 and 18 and based on [a] comparison of the scope image with the goggles image.” Id.
    at 7:19-22. “As [the] rifle 19 is moved, scope images (or portions thereof) are dynamically
    positioned within [the] user display 70 so as to indicate where [the] scope 17 (and thus [the] rifle
    19) is pointing.” Id. at 7:22-25.
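For illustration only, the two-step placement described above (a sensor-based location computed from inertial data, then checked and possibly adjusted by comparing image content) might be sketched as follows. The function names, the linear degrees-to-pixels mapping, and the sum-of-squared-differences comparison are assumptions made for the sketch, not the patents' disclosed implementation.

```python
import numpy as np

def sensor_based_offset(scope_yaw_deg, scope_pitch_deg, goggles_yaw_deg,
                        goggles_pitch_deg, pixels_per_degree):
    """Step 1: place the scope image within the goggles image using only
    inertial sensor output (relative yaw/pitch between rifle and goggles)."""
    dx = (scope_yaw_deg - goggles_yaw_deg) * pixels_per_degree
    dy = (goggles_pitch_deg - scope_pitch_deg) * pixels_per_degree
    return int(round(dx)), int(round(dy))

def refine_by_comparison(goggles_img, scope_img, est_xy, search=3):
    """Step 2: check (and possibly adjust) the sensor-based location by
    comparing image content near the estimate.  A sum-of-squared-differences
    search over a small window stands in for whatever comparison the patents
    actually contemplate."""
    h, w = scope_img.shape
    best, best_xy = None, est_xy
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            x, y = est_xy[0] + dx, est_xy[1] + dy
            if x < 0 or y < 0 or y + h > goggles_img.shape[0] or x + w > goggles_img.shape[1]:
                continue
            patch = goggles_img[y:y + h, x:x + w]
            score = np.sum((patch.astype(float) - scope_img.astype(float)) ** 2)
            if best is None or score < best:
                best, best_xy = score, (x, y)
    return best_xy
```

The second step searches a small neighborhood around the sensor-based estimate and keeps the location whose image content best matches the scope image, mirroring the Abstract's "checked (and possibly adjusted)" language.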
    The ’230 and ’752 patents provide a high-level flowchart in Figures 5A-5B, demonstrating
    the Second Patent Family’s advancement. Id. at 2:48-49. Figures 5A-5B, reproduced below, “are
    a flow chart explaining the operation of [the] system 10.” Id. at 7:46-47. As shown below, initial
    calibration occurs at step 103 and recalibration, if necessary, occurs at step 117 “thereby correcting
    for bias drift and helping to maintain proper registration of the scope image within the goggles
    image.” Id. at 7:61-64, 10:4-15.
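For illustration only, the calibrate-then-recalibrate loop of steps 103 and 117 might be modeled as maintaining a running sensor-bias estimate that is re-zeroed whenever the image comparison reveals too large a disagreement. The one-dimensional drift model and the threshold value are hypothetical.

```python
def track_with_recalibration(sensor_readings, image_offsets, threshold=2.0):
    """Illustrative loop: an initial calibration (step 103) zeroes the sensor
    bias; on each frame, if the image-comparison offset disagrees with the
    sensor-based placement by more than `threshold`, the bias is re-estimated
    (step 117), correcting for bias drift.  Returns corrected placements."""
    bias = sensor_readings[0]            # initial calibration (step 103)
    corrected = []
    for sensor, image in zip(sensor_readings, image_offsets):
        placement = sensor - bias        # sensor-based placement
        error = image - placement        # disagreement found by comparison
        if abs(error) > threshold:       # recalibrate (step 117)
            bias = sensor - image
            placement = image
        corrected.append(placement)
    return corrected
```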
    II. Procedural History
Familiarity with the background of this litigation is presumed. See, e.g., Sci. Applications
Int’l Corp. v. United States, No. 17-CV-825, 2021 WL 1568815, at *1 (Fed. Cl. Apr. 21, 2021)
(denying Plaintiff’s motion to strike Defendant’s indefiniteness contentions but awarding fees for
untimely assertion of indefiniteness contentions); Sci. Applications Int’l Corp. v. United States,
148 Fed. Cl. 268 (2020) (permitting Government’s Rule 14(b) notice); Sci. Applications Int’l Corp.
v. United States, 135 Fed. Cl. 661, 662 (2018) (denying Government’s motion to dismiss). For
ease of reference, the Court summarizes the litigation as follows.
    On June 19, 2017, SAIC filed suit seeking compensation for the Government’s alleged
    infringement of the asserted patents. See generally Compl. This suit was based on the United
    States’ contract awards to BAE Systems and DRS Technologies for the development of technology
    which included the implementation of a Rapid Target Acquisition (RTA) feature relevant to
    Plaintiff's infringement claims. Compl. ¶¶ 3, 38.
    On November 20, 2018, Microsoft entered into a contract with the United States to develop
    an Integrated Visual Augmentation System (IVAS), which Microsoft alleges includes
    implementation of an RTA feature relevant to Plaintiff's infringement claims. See Microsoft
    Motion to Intervene (ECF No. 59) at 1. Subsequently, Microsoft moved to intervene in this action
    “for the limited purpose of protecting its interests regarding the United States’ defense that
    products incorporating [RTA] do not infringe the patents asserted in this matter by [Plaintiff].”
    Microsoft Motion to Intervene (ECF No. 59) at 1. On May 6, 2019, the Court granted Defendant-
    Intervenor Microsoft Corporation's unopposed motion to intervene in this action. See Order
    Granting Intervention (ECF No. 60).
    On May 30, 2019, nearly two years after the inception of this suit, the Army entered into
    two separate other transaction agreements (OTAs) with L3 and Harris, each of which had a night
    vision technology division, to develop a prototype for an Enhanced Night Vision Goggle-
    Binocular (ENVG-B). Sci. Applications Int'l Corp., 148 Fed. Cl. at 269-70. 6
6
“L3 and Harris merged on June 29, 2019, approximately one month after the L3/Harris contracts
were executed, creating a new entity known as L3Harris Technologies, Inc. . . . L3’s legacy night
vision division remained with the merged entity, and is now known as the Integrated Vision
Solutions Sector of L3 Technologies, Inc., which is a subsidiary of L3Harris Technologies, Inc.
. . . Former Harris Corporation’s legacy night vision technology division, which ultimately
received one of the Army’s two separate May 30, 2019 OTAs, was spun-off (for regulatory
reasons) and purchased by Elbit Systems of America, LLC (‘Elbit’), which is the U.S. subsidiary
of Elbit Systems, Ltd.” Sci. Applications Int’l Corp., 148 Fed. Cl. at 270 n.1 (internal citation and
quotation omitted).
On July 18, 2019, the period for claim construction discovery closed, and the parties jointly
filed deposition transcripts of their respective claim construction experts. See Science
Applications International Corporation’s and the United States’ Joint Submission of Claim
Construction Experts’ Deposition Testimony (ECF No. 79). On October 15, 2019, claim
construction briefs were completed. See Defendant-Intervenor Microsoft Corporation’s Opening
Claim Construction Brief (ECF No. 87) (Microsoft’s Opening Cl. Constr. Br.); Defendant Opening
Claim Construction Brief (ECF No. 89) (Government’s Opening Cl. Constr. Br.); Plaintiff Science
Applications International Corp.’s Claim Construction Brief (ECF No. 90) (Pl.’s Opening Cl.
Constr. Br.); Defendant-Intervenor Microsoft Corporation’s Responsive Claim Construction Brief
(ECF No. 95) (Microsoft’s Responsive Cl. Constr. Br.); Plaintiff Science Applications
International Corp.’s Responsive Claim Construction Brief (ECF No. 96) (Pl.’s Responsive Cl.
Constr. Br.); Defendant United States’ Responsive Claim Construction Brief (ECF No. 97)
(Government’s Responsive Cl. Constr. Br.).
After the parties completed claim construction briefing, Microsoft Corporation moved to
stay this action pending a decision from the United States Patent Trial and Appeal Board (PTAB)
concerning whether to institute inter partes review (IPR) of the patents-in-suit. See Microsoft’s
Motion to Stay Proceedings (ECF No. 92). On November 26, 2019, the prior judge overseeing
this case, the Honorable Mary Ellen Coster Williams, granted a partial stay pending a decision by
the PTAB on whether to institute inter partes review of the patents-in-suit. See Order for Partial
Stay (ECF No. 102). On January 27, 2020, the PTAB issued a decision denying institution on
each of Microsoft’s five petitions for IPR. Microsoft Corp. v. Sci. Applications Int’l Corp., No.
IPR2019-01311, 2020 WL 572706, at *1 (P.T.A.B. Jan. 27, 2020).
    “In November 2019, during the pendency of the stay, Defendant's counsel first learned
    from the Army of the May 2019 L3/Harris contracts, and that the prototypes required by the
    contracts may implicate the accused technology in this action.” Sci. Applications Int'l Corp., 148
    Fed. Cl. at 270 (internal citations omitted). “On February 20, 2020, the Court lifted the stay, and
    six days later, on February 26, 2020, the Government notified SAIC that the Government intended
    to move to notify L3 and Harris pursuant to Rule 14(b).” Id.
    On February 27, 2020, the Clerk of Court transferred this case to the undersigned judge.
    See Order Reassigning Case (ECF No. 112). On March 10, 2020, the Government moved for
    leave to notice L3 Technologies, Inc. and Harris Corporation under Rule 14(b) to potentially
    appear as third parties to this lawsuit; the Court granted the motion on May 12, 2020. See Sci.
    Applications Int’l Corp., 148 Fed. Cl. at 273. That same day, this Court held a status conference
    during which it established a schedule for the proceedings in this case through claim construction.
    See May 12, 2020 Scheduling Order (ECF No. 121).
    After the status conference, L3 intervened in this case. See June 17, 2020 Order (ECF No.
    128); L3’s Answer (ECF No. 131). 7 Subsequently, the Court permitted L3 to file a claim
    construction brief and permitted SAIC to respond. See September 4, 2020 Scheduling Order (ECF
    No. 141). In its opening claim construction brief, L3 proposed additional disputed terms and a
    new term construction. See L3’s Opening Claim Construction Brief (ECF No. 148) (L3’s Opening
    Cl. Constr. Br.).
7
Elbit Systems of America, LLC declined to intervene in this action. (ECF No. 135).
    On November 20, 2020, SAIC filed its responsive supplemental claim construction brief
    addressing L3’s arguments. See Plaintiff Science Applications International Corp.’s Responsive
    Claim Construction Brief to Third-party Defendant L3 Technolgies, [sic] Inc. (ECF No. 151) (Pl.’s
    Responsive Cl. Constr. Br. to L3) at 15-20.
At the conclusion of briefing, the parties had requested construction of the following ten
(10) disputed terms or groups of terms:
    1. “video camera;” “video source”
    2. “external environment”
    3. “video images” / “video source image” / “video data of images”
    4. “orientation”
    5. “motion data”
    6. “based on a comparison”
    7. “transparent display” / “HUD comprising a transparent display”
    8. “overlay terms”
9. “registration” / “registering terms”; “Wherein the boundaries are in
registration/registering the portion of the video feed with the underlying visual field”
    10. “twist rotation”
    See November 30, 2020 Joint Status Report (ECF No. 153) at 2-3.
    On December 1, 2020, the Court held a pre-Markman Hearing status conference during
    which the Court encouraged the parties to narrow their disputes where possible. Pre-Markman
Hearing Tr. at 21:1-23:25. Accordingly, on December 10, 2020, the parties limited the terms in
    dispute to the following:
    1. “video images” / “video source image” / “video data of images”
    2. “transparent display” / “HUD comprising a transparent display”
    3. “overlay terms”
    4. “based on a comparison”
    5. “motion data”
    6. “registration” / “registering terms.”
    See December 10, 2020 Joint Status Report (ECF No. 157) at 2.
    The Court held a Markman Hearing on December 15, 2020, which included the parties’
    technology tutorials and arguments on (1) “video images” / “video source image” / “video data of
    images,” (2) “transparent display” / “HUD comprising a transparent display,” (3) “motion data,”
    (4) “registration” / “registering terms.” See generally Markman Hearing Tr. The parties agreed
    to rely on their briefs for the “overlay” and “based on a comparison” terms. See January 8, 2021
    Joint Status Report (ECF No. 161).
    APPLICABLE LEGAL STANDARDS
    I. Claim Construction
“It is a bedrock principle of patent law that the claims of a patent define the invention to
which the patentee is entitled the right to exclude.” Phillips v. AWH Corp., 415 F.3d 1303, 1312
(Fed. Cir. 2005) (en banc) (internal quotations and citations omitted). Claim construction is the
process of determining the meaning and scope of patent claims. Markman v. Westview
Instruments, Inc., 52 F.3d 967, 976 (Fed. Cir. 1995) (en banc), aff’d, 517 U.S. 370 (1996). “[T]he
words of a claim are generally given their ordinary and customary meaning,” which is “the
meaning that the term would have to a person of ordinary skill in the art in question at the time of
the invention, i.e., as of the effective filing date of the patent application.” Phillips, 415 F.3d at
1312-13 (internal quotations and citations omitted). 8
The analysis of any disputed claim terms begins with the intrinsic evidence of record, as
“intrinsic evidence is the most significant source of the legally operative meaning of disputed claim
language.” Vitronics Corp. v. Conceptronic, Inc., 90 F.3d 1576, 1582 (Fed. Cir. 1996). “[T]he
claims themselves provide substantial guidance as to the meaning of particular claim terms.”
Phillips, 415 F.3d at 1314 (internal quotation and citations omitted).
“[T]he person of ordinary skill in the art is deemed to read the claim term not only in the
context of the particular claim in which the disputed term appears, but in the context of the entire
patent, including the specification.” Id. at 1313. “[T]he specification ‘is always highly relevant
to the claim construction analysis. Usually, it is dispositive; it is the single best guide to the
meaning of a disputed term.’” Id. at 1315 (quoting Vitronics, 90 F.3d at 1582).
8
The Government submits that the level of skill of a person of ordinary skill in the art for the First
and Second Patent Family is a Bachelor of Science degree in Computer Science and 3 years of
experience in programming video/graphics applications and computer vision, or alternatively, a
Master of Science degree in Computer Science and 1 year of experience in programming
video/graphics applications and computer vision. Neumann Decl. (ECF No. 66) ¶ 48. L3 agrees
with the Government. L3’s Opening Cl. Constr. Br. at 2 n.1.
“SAIC submits that a person having ordinary skill in the art for the Asserted Patents possesses
either: (1) a Bachelor’s degree in computer science, computer engineering, electrical engineering,
or systems engineering and at least 2 years of experience working with sensor (e.g., camera and
orientation sensor) technology; or (2) a Master’s degree in one of these disciplines and at least 1
year of work experience with the aforementioned sensor technology.” Pl.’s Responsive Cl. Constr.
Br. at 2 (citing Ex. L, Declaration of Dr. Gregory F. Welch ¶ 34). SAIC notes that its definition
“differs from the Government’s (which requires a degree in Computer Science, excludes persons
with engineering degrees, and focuses on experience in video graphics and computer vision rather
than on the camera and orientation sensor technology underlying the inventions).” Id. (internal
citations omitted).
At the Markman Hearing, the parties agreed that the dispute regarding a person of ordinary skill
in the art (POSITA) did not need to be decided to resolve the parties’ disputes over claim
construction. Markman Hearing Tr. at 60:16-61:7; see also Pl.’s Responsive Cl. Constr. Br. at 2.
Because the parties have indicated that their dispute over the level of ordinary skill in the art does
not affect this Court’s construction of the disputed terms, this Court declines to articulate the level
of a POSITA at this stage in the proceedings. In re Fought, 941 F.3d 1175, 1179 (Fed. Cir. 2019)
(“Unless the patentee places the level of ordinary skill in the art in dispute and explains with
particularity how the dispute would alter the outcome, neither the Board nor the examiner need
articulate the level of ordinary skill in the art.”).
Notwithstanding the importance of a specification, limitations in the specification must not
be read into the claims absent lexicography or disclaimer/disavowal. Hill-Rom Services, Inc. v.
Stryker Corp., 755 F.3d 1367, 1371 (Fed. Cir. 2014). The United States Court of Appeals for the
Federal Circuit (Federal Circuit) has “expressly rejected the contention that if a patent describes
only a single embodiment, the claims of the patent must be construed as being limited to that
embodiment.” Phillips, 415 F.3d at 1323 (internal citations omitted). Conversely, “an
interpretation [which excludes a preferred embodiment] is rarely, if ever, correct and would require
highly persuasive evidentiary support.” Vitronics, 90 F.3d at 1583.
The prosecution history of a patent is also relevant intrinsic evidence. Markman, 52 F.3d
at 980. Although “the prosecution history represents an ongoing negotiation between the PTO and
the applicant, rather than the final product of that negotiation” and for this reason “often lacks the
clarity of the specification,” the prosecution history can nonetheless “often inform the meaning of
the claim language by demonstrating how the inventor understood the invention and whether the
inventor limited the invention in the course of prosecution, making the claim scope narrower than
it would otherwise be.” Phillips, 415 F.3d at 1317 (citations omitted). “[A] patentee’s statements
during prosecution, whether relied on by the examiner or not, are relevant to claim interpretation.”
Microsoft Corp. v. Multi-Tech Sys., Inc., 357 F.3d 1340, 1350 (Fed. Cir. 2004).
“Although [the Federal Circuit has] emphasized the importance of intrinsic evidence in
claim construction, [it has] also authorized district courts to rely on extrinsic evidence, which
‘consists of all evidence external to the patent and prosecution history, including expert and
inventor testimony, dictionaries, and learned treatises.’” Phillips, 415 F.3d at 1317 (citing
Markman, 52 F.3d at 980). While sometimes helpful, extrinsic evidence is “less significant than
the intrinsic record in determining the legally operative meaning of claim language.” Id. at 1317
(quoting C.R. Bard, Inc. v. U.S. Surgical Corp., 388 F.3d 858, 862 (Fed. Cir. 2004)).
II. Definiteness
The Patent Act of 1952—which preceded the America Invents Act and which, the parties
agree, is applicable to the patents at issue in this case—requires that “a specification ‘conclude
with one or more claims particularly pointing out and distinctly claiming the subject matter which
the applicant regards as his invention.’” Nautilus, Inc. v. Biosig Instruments, Inc., 572 U.S. 898,
901 (2014) (quoting 35 U.S.C. § 112, ¶ 2 (2006 ed.) (emphasis omitted)). 9 The question of whether
a claim is sufficiently definite under 35 U.S.C. § 112, ¶ 2, is closely related to the claim
construction task because the decision depends on the ability of the court to interpret a claim. “If
a claim is indefinite, the claim, by definition, cannot be construed.” Enzo Biochem, Inc. v. Applera
Corp., 599 F.3d 1325, 1332 (Fed. Cir. 2010). A claim fails to satisfy this statutory requirement
and is thus invalid for indefiniteness if its language, when read in light of the specification and the
prosecution history, “fail[s] to inform, with reasonable certainty, those skilled in the art about the
scope of the invention.” Nautilus, 572 U.S. at 901. However, “the certainty which the law requires
in patents is not greater than is reasonable, having regard to their subject-matter.” Id. at 910 (citing
Minerals Separation, Ltd. v. Hyde, 242 U.S. 261, 270 (1916)). The definiteness standard must
allow for “some modicum of uncertainty” to provide incentives for innovation but must also
require “clear notice of what is claimed, thereby apprising the public of what is still open to them.”
Id. at 909 (internal quotation marks and citations omitted). It also serves as a “meaningful . . .
check” against “foster[ing] [an] innovation-discouraging ‘zone of uncertainty.’” Id. at 910-11
(quoting United Carbon Co. v. Binney & Smith Co., 317 U.S. 228, 236 (1942)).
9
Paragraph 2 of 35 U.S.C. § 112 was replaced with newly designated § 112(b) when § 4(c) of the
America Invents Act (AIA), Pub. L. No. 112–29, took effect on September 16, 2012. Because the
applications resulting in the patents at issue in this case were filed before that date, this Court will
refer to the pre-AIA version of § 112. Markman Hearing Tr. at 98:9-15.
Issued patents are presumed valid, and indefiniteness must be proven by clear and
convincing evidence. See Teva Pharm. USA, Inc. v. Sandoz, Inc., 789 F.3d 1335, 1345 (Fed. Cir.
2015); Microsoft Corp. v. i4i Ltd. P’ship, 564 U.S. 91, 95 (2011); see also 35 U.S.C. § 282(a)
(“The burden of establishing invalidity of a patent or any claim thereof shall rest on the party
asserting such invalidity.”).
    DISCUSSION
    I. “VIDEO IMAGES” / “VIDEO SOURCE IMAGE” / “VIDEO DATA OF IMAGES”
    SAIC 10                 Government               Microsoft                 L3
    “Plain and ordinary     digital or analog        digital or analog         digital or analog
    meaning (e.g.,          video frames             video frames              video frames
    electronic images and
    data)”
    Court’s Construction: “digital, analog, or nonstandard video frames”
    Microsoft argues that the patents’ specifications equate “video images” to “video frames,”
    and accordingly urges the Court to construe “video image” as “digital or analog video frames.”
    See Defendant-Intervenor Microsoft Corporation’s Opening Claim Construction Brief
    (Microsoft’s Opening Br.) (ECF No. 87) at 3-4; Defendant-Intervenor Microsoft Corporation’s
    Responsive Claim Construction Brief (Microsoft’s Responsive Br.) (ECF 95) at 1-2; Markman
    Hearing Tr. at 61:8-23. This proposed construction is, according to Microsoft, the plain and
    ordinary meaning of the terms as dictated by the patent specification. See Microsoft’s Opening
    Br. at 3; Microsoft’s Responsive Br. at 1-2. To support its proposed construction, Microsoft
    highlights that the First Patent Family uses “frame” or “video frame” interchangeably with “video
    10
    In its briefs, SAIC argued that the terms “‘video images’/‘video source image’ mean
    [Images][An image] generated from visual information (e.g., visible light, ambient light,
    thermal/IR data, etc.) captured from a video source, depicting an external area or region in the
    video source’s field of view.” Pl. Opening Cl. Constr. Br. at 10 (brackets in original). SAIC’s
    construction changed during the Markman Hearing; counsel for SAIC argued instead that the better
construction was the term’s plain and ordinary meaning. Markman Hearing Tr. at 51:20-52:18.
    “If there's a construction -- which we don't even think there should be -- but if there is, it should
    be, just very simply, electronic images or electronic representations.” Markman Hearing Tr. at
    55:13-16. Plaintiff also stated that it would not object to defining “video images” as “digital or
    analog or nonstandard images and data.” Markman Hearing Tr. at 88:2-12.
    image.” See Microsoft’s Opening Br. at 4; Markman Hearing Tr. at 65:3-66:9. More concisely
    stated, Microsoft maintains that “frames are literally what defines what video is” regardless of
    “[w]hether it’s analog or digital, [video] consists of a series of frames.” Markman Hearing Tr. at
    65:22-24.
    The Government and L3 agreed with and adopted Microsoft’s proposed construction. See
    Government Opening Cl. Constr. Br. at 42-43; Government’s Responsive Cl. Constr. Br. at 1; L3’s
    Opening Cl. Constr. Br. at 17-18. At the Markman Hearing, L3 added that the Defendants’
    proposed construction is further supported by the Second Patent Family’s use of the term “i.e.,”
    when discussing the two terms which, according to L3, indicates that the inventors equated the
    terms “image” with “frame.” See Markman Hearing Tr. at 78:4-79:13.
    SAIC contended for the first time during the Markman Hearing that the “video images”
    claim term should be construed as its plain and ordinary meaning, which SAIC states is “just
    electronic images or data.”      Markman Hearing Tr. at 51:20-53:5-19, 76:22-77:12.             This
    construction differed from the construction SAIC offered in its briefs, which proposed construing
    the video image terms as “[Images][An image] generated from visual information (e.g., visible
    light, ambient light, thermal/IR data, etc.) captured from a video source, depicting an external area
    or region in the video source’s field of view.” See Pl.’s Claim Constr. Br. at 10; Pl.’s Responsive
    Claim Constr. Br. at 2-3 (brackets in original); Pl.’s Responsive Cl. Constr. Br. to L3 at 12. During
    the Markman Hearing, SAIC’s counsel explained this shift in proposed construction by stating that
    both SAIC’s original construction and their current construction are intended to combat a format
    limitation for video images. Markman Hearing Tr. at 52:10-55:20.
At the Markman Hearing, SAIC and Microsoft each made an important concession. First, SAIC
    acknowledged that its proposed construction does not include optical images. Markman Hearing
    Tr. at 58:11-25. Second, Microsoft indicated that it would be satisfied with construing images as
    digital, analog, or nonstandard video frames to alleviate any of Plaintiff’s concerns related to
    limiting video images to “digital or analog formats.” See Markman Hearing Tr. at 72:17-22. Thus,
    at this stage in the proceedings, the primary difference between the parties’ proposed constructions
    is Defendants’ use of “frames,” which Plaintiff argues improperly limits the scope of the term
    “video images.” Markman Hearing Tr. at 58:4-10.
    The Court agrees with Defendants’ construction. The terms “video images” / “video source
    image” / “video data of images” appear in claims 1, 2, 6, 8, 9, 12, 14, 17, and 19 of the ’012 patent;
    claims 1, 4, and 9 of the ’103 patent; claims 1, 3, 4, 5, 7, 9, 10, 13, 15, 17, 18, 19, 21, 23, 24, 27,
    29, 31, 32, 33, 35, 37, 38, and 41 of the ’230 Patent; and claims 1, 2, 7, 8, 13, and 14 of the ’752
    patent. The parties appear to agree that the terms “video images” / “video source image” / “video
    data of images” should be construed consistently throughout both patent families. Pl.’s Opening
    Cl. Constr. Br. at 10, 13; Microsoft’s Opening Br. at 3; Government’s Opening Cl. Constr. Br. at
    42-43; L3’s Opening Cl. Constr. Br. at 17-18.
    While the patents do not define the “video image” terms explicitly, the specifications do
    so implicitly through consistent use of comparative language to discuss image data from the first
and second video sources. See Irdeto Access, Inc. v. EchoStar Satellite Corp., 383 F.3d 1295,
1300 (Fed. Cir. 2004) (“Even when guidance is not provided in explicit definitional format, ‘the
specification may define claim terms by implication such that the meaning may be found in or
ascertained by a reading of the patent documents.’” (quoting Bell Atl. Network Servs., Inc. v. Covad
Communications Group, Inc., 262 F.3d 1258, 1268 (Fed. Cir. 2001)); Vitronics, 90 F.3d at 1582
(“The specification acts as a dictionary when it expressly defines terms used in the claims or when
it defines terms by implication.”).
    Throughout both patent families, the phrase “video images” is consistently equated with
still video frames. The specification’s description of Figure 8 of the ’012 patent is illustrative.
Figure 8 of the ’012 patent “demonstrates . . . a method for registering a video image with an
    underlying visual field.” ’012 patent at 6:25-27. The specification explains in detail how this
    image is received, processed, and displayed as a video frame. Id. at 6:25-33. The process for
    registration begins “at step 802, a video frame is received for processing. The frame may be
    processed digitally, and if it is received in analog form may first need to be converted to a digital
    format for processing.” Id. at 6:33-37. At the processing step, “the location of the processed frame
    within a heads up display is determined, as in step 805.” Id. at 6:50‐51. Before displaying in the
    HUD, “the processed frame can be rotated for presentation within the heads up display, as in step
    807.” Id. at 6:61-62. Additionally, after “the location and rotation of the processed frame within
    the display are determined, the frame may be cropped, discarding unneeded pixels, as in step 808.”
    Id. at 6:65‐67. Next, the processed video frame “may be displayed in a heads up display, as in step
    809.” Id. at 7:16‐17. As step 810 illustrates, “if another frame of video is set to be received (i.e.,
    the display is still on), then the process repeats for each new frame, returning to step 802.” Id. at
    7:30‐33.
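For illustration only, the per-frame loop that the specification walks through (receive at step 802, locate at step 805, rotate at step 807, crop at step 808, display at step 809, and repeat at step 810) might be sketched as below; each helper function stands in for the corresponding step and is an assumption of the sketch, not disclosed code.

```python
def process_video(frames, locate, rotate, crop, display):
    """Sketch of the Figure 8 loop: each received frame is located within
    the heads-up display, rotated for presentation, cropped of unneeded
    pixels, and shown; the loop repeats for each new frame while the
    display remains on (step 810)."""
    for frame in frames:                 # step 802: receive a video frame
        location = locate(frame)        # step 805: locate frame in the HUD
        frame = rotate(frame, location)  # step 807: rotate for presentation
        frame = crop(frame, location)    # step 808: discard unneeded pixels
        display(frame, location)         # step 809: show in heads-up display
```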
It is evident that the language seamlessly transitions from “video image” to “frame.” In other
    words, Figure 8 makes clear that the thing being generated, processed, and displayed is referenced
    synonymously in the specification as being both images and video frames. For instance, in
    explaining step 808, the specification states:
    The frame may be resized in order to map the video information onto the pixels that
    will ultimately be used in a heads up display. This step may be necessary if the
    video images produced by a video source are larger than needed for display. For
    example, if a video image initially has a field of view of 8 degrees horizontal and 6
    degrees vertical, it may be cropped down to 4 degrees horizontal and 3 degrees
    vertical, retaining the same center point. In this fashion, only a quarter of the image
    is retained, but it constitutes the most relevant part of the image. Alternatively, the
    video frame may need to be magnified or compressed in order to adjust for
    differences in magnification between the visual field and the native video frame.
    In addition, the frame may be enhanced by adding a border around the frame so as
    to further distinguish it from the visual field for an observer.
    Id. at 6:67-7:15.
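The quoted example's arithmetic checks out: cropping an 8-degree-by-6-degree field of view down to 4 by 3 degrees about the same center point halves each dimension, retaining one quarter of the pixels. A minimal center-crop sketch (the list-of-rows image representation is an assumption made for illustration) confirms the step:

```python
def center_crop(image, keep_h_deg, keep_v_deg, full_h_deg, full_v_deg):
    """Crop a frame to a smaller field of view about the same center point,
    as in step 808's example (8x6 degrees cropped down to 4x3 degrees)."""
    rows, cols = len(image), len(image[0])
    keep_rows = round(rows * keep_v_deg / full_v_deg)
    keep_cols = round(cols * keep_h_deg / full_h_deg)
    r0 = (rows - keep_rows) // 2   # same center point is retained
    c0 = (cols - keep_cols) // 2
    return [row[c0:c0 + keep_cols] for row in image[r0:r0 + keep_rows]]
```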
    In discussing resizing, the language of the specification demonstrates that the inventors
    understood that images were equivalent to frames. The specification for the First Patent Family
    begins with the general statement that “the frame” may need to be resized. Id. at 6:67-7:2. In the
    very next sentence, the specification states that “video images” may need to be resized if the video
    images are too large. Id. at 7:2-4. The specification then switches back to using the term “video
    frames” which it states may need to be resized if too small. Id. at 7:9-12. The specification makes
    a general statement that “the frame may be resized” then proceeds to discuss specifics of making
    a “video image” smaller through cropping and making a “video frame” larger through magnifying.
    Id. at 6:65-7:15. Plaintiff has not proffered any explanation for this interchangeable use. With no
    apparent explanation to the contrary, the plain implication of this interchangeable use is that the
    inventors view “video images” and “video frames” as the same object. See Edwards Lifesciences
LLC v. Cook Inc., 582 F.3d 1322, 1329 (Fed. Cir. 2009) (finding that the interchangeable use of
    the words “graft” and “intraluminal graft” is “akin to a definition equating the two”).
    The Second Patent Family, which shares a common inventor with the First Patent Family,
    is no different and thus provides further support for Defendants’ construction. For instance, in
    describing Figures 6A through 6F of the ’230 patent and explaining issues pertaining to parallax,
    the ’230 Patent repeatedly uses the terms “video image” / “image” and “video frame” / “frame”
    interchangeably. ’230 patent at 8:58-9:47. Figures 6A through 6F “illustrate positioning of one
    video image within another based on data from inertial measurement units.” Id. at 2:50-53. For
    example, Figure 6A shows the weapon and goggles pointing in the same direction, resulting in the
    image from the weapon’s scope being placed directly in the middle of the image generated from
    the goggles, which is showcased in Figure 6B. Id. at Figs. 6A-B, 8:66-9:2. In Figure 6C, the rifle
is yawed 8 degrees to the left of the goggles, which results in the scope’s image being placed
    negative 8 degrees from the centerline of the HUD display, as seen in Figure 6D. Id. at Figs. 6C-
    D, 9:3-24. Similarly, in Figure 6E, the rifle has pitched upward 6 degrees, resulting in the rifle’s
    image being shifted upward in relation to the center point generated by the goggles’ image, as
illustrated in Figure 6F. Id. at Figs. 6E-F, 9:25-30. In all the descriptions of these figures, the
Second Patent Family uses the words “image” or “view”; however, when the patent family
    discusses parallax, the language seamlessly switches to the term “frame.” See, e.g., ’230 patent at
    8:58-9:47. For instance, the ’230 patent states that when “processing a video frame from [the]
    scope 17 [of the rifle], the location where the frame is placed may be slightly off, and a displayed
frame of video will not be aligned as perfectly as possible.” Id. at 9:36-39 (emphasis added).
Throughout this discussion, the ’230 patent uses the terms “frame” and “image” consistently to
refer to the thing being processed and displayed in the HUD. See, e.g., ’230 patent at 8:23-
    26, 8:58-9:47.
    Perhaps the best example demonstrating that the patents use “video images” / “image” and
“video frames” / “frames” interchangeably is contained in Figure 5A. Figures 5A and 5B
comprise a high-level flow chart explaining one possible operation by which video images are
registered. Id. at Fig. 5A, 7:46-8:57. In describing the step in Figure 5A where “video” is received
    from “goggles and scope,” the specification states that, “[i]n block 107, a computer receives data
for a video frame (i.e., a scope image) from scope 17.” Id. at 8:23-26. In addition to
interchangeable use of words, patents may also define terms by implication by using the phrase
“i.e.” See Interval Licensing LLC v. AOL, Inc., 766 F.3d 1364, 1373 (Fed. Cir. 2014) (noting that
a phrase preceded by “i.e.” may indicate the inventor’s intent to cast it as a definition);
Edwards Lifesciences LLC, 582 F.3d at 1334 (finding the use of “i.e.” in the
specification “signals an intent to define the word to which it refers . . .”); but see Dealertrack,
Inc. v. Huber, 674 F.3d 1315, 1326 (Fed. Cir. 2012) (finding “i.e.” not to be definitional where
such a reading would exclude a preferred embodiment from the claim's scope or where a
“contextual analysis” of the patent indicates that “i.e.” is used in an exemplary rather than
definitional way).
Here, the term “i.e.” was used to equate “image” and “frame.” The use of “i.e.” in Figure 5A
is not merely incidental. The patents’ interchangeable use of “video frames” / “frames” and
“video images” / “images,” coupled with the use of “i.e.,” clearly indicates that “video frame” and
    “scope image” are intended to be used synonymously throughout the patents. See OpenTV, Inc. v.
Apple, Inc., 2015 WL 3544845, at *11 (N.D. Cal. June 5, 2015) (finding that the specification’s
    repeated and consistent usage of the term “redraw” as meaning “call low level graphics routine”
    in conjunction with the specification’s use of “i.e.” clearly indicated that the patent intended to
    define “redraw” as “call low level graphics routine”).
Notwithstanding the evidence contained within the specifications of the First and Second
Patent Families, SAIC argues that the claimed “video images” are not equivalent to “video
frames.” Pl.’s Opening Cl. Constr. Br. at 10-14; Pl.’s Responsive Cl. Constr. Br. at 2-5; Pl.’s Supp.
    Cl. Constr. Br. (ECF No. 149) at 12. First, SAIC argues that Defendants’ proposed construction
    would destroy the claims’ distinction between partial frame display and complete frame display.
    Pl.’s Opening Cl. Constr. Br. at 12-13. Second, SAIC argues that its proposed construction is
    supported by the prosecution history. Pl.’s Opening Cl. Constr. Br. at 11-14.
    Each of these contentions is without merit. First, SAIC’s concern that Defendants’
    construction would destroy the claims’ distinction between partial frame display and complete
    frame display is misplaced. By their plain language, the claims fall into two broad categories: (1)
    claims that state part of the video image may be displayed; (2) claims that state the whole image
    must be displayed. Notably, the claims that permit the display of part of the image say so
    expressly. Compare ’230 patent, claim 15 (“A method comprising . . . (e) displaying at least a
    portion of the first video source image and at least a portion of the second video source
    image . . .”) (emphasis added) and ’752 patent, claim 7 (“A method comprising . . . displaying at
    least a portion of the first image and the selected part of the second image . . .”) with ’012 patent,
claim 1 (“A method of registering video images with an underlying visual field comprising the
    steps of: . . . (3) displaying the video images . . . on the transparent display . . .”) and ’103 patent,
    claim 1 (“A system comprising: . . . a computer adapted . . . to display the video images, on the
    transparant [sic] display . . .”). Defendants’ proposed construction respects this distinction and
    gives meaning to this claim language. Construing “video images” to mean “digital, analog, or
    nonstandard video frames” permits partial frame display, and also allows for full frame display
    when the claims require full frame display.
    By contrast, SAIC’s proposed use of the term “electronic data” does not comport with the
    claim language or the specifications. First, claim construction “must give meaning to all the words
in [the] claims.” Funai Elec. Co. v. Daewoo Elecs. Corp., 616 F.3d 1357, 1372 (Fed. Cir. 2010)
(internal quotation marks omitted) (quoting Exxon Chemical Patents, Inc. v. Lubrizol Corp., 64
F.3d 1553, 1557 (Fed. Cir. 1995)). Construing “video images” to encompass all types of electronic
    data would destroy the distinction between partial and whole display by importing the ability to
    represent only part of the frame into the definition of the term “video image.” If this were the case,
    every claim would allow for partial frame display, regardless of its actual language. SAIC’s
construction would render the claims’ “at least a portion” language meaningless. Moreover, claim
1 of each of the ’012 and ’103 patents recites that the “boundaries of the displayed video images are
in registration with boundaries of portions of the visual field represented by the displayed video
images.” ’012 patent at 10:8-10; ’103 patent at 10:26-28. The notion of boundaries within the
claims supports Defendants’ proposed construction because frames are inherently bounded,
whereas electronic data is not. Therefore, Defendants’ construction is more consistent with the claim
language. See Trustees of Columbia Univ. v. Symantec, 811 F.3d 1359, 1366 (Fed. Cir. 2016)
    (“[T]he construction that stays true to the claim language and most naturally aligns with the
    patent’s description of the invention will be, in the end, the correct construction.” (internal
    quotation and citation omitted)).
Second, SAIC’s construction would exclude “analog” images from the term video image
because nothing in the patent indicates that analog images are “electronic data.” For instance, the
    patents state that video feeds may be delivered in numerous formats, including standard formats,
    such as analog formats like NTSC or PAL, digital formats like MPEG, or any non-standard format.
    ’012 patent at 4:43-46; ’103 patent at 4:48-51; ’230 patent at 4:6-9; ’752 patent at 4:20-23. Frames
    received in an analog format may require conversion into a digital format for processing. ’012
    patent at 6:35-37; ’103 patent at 6:41-43. This reference to conversion into a digital format
    indicates that the image being received may not always be “electronic.” As SAIC’s construction
    would exclude analog uses because it fails to recognize the conversion of analog data to a digital
format, it would be improper to adopt its proposed construction. See Vitronics, 90 F.3d at 1583
    (“[A]n interpretation [which excludes a preferred embodiment] is rarely, if ever, correct . . . .”
    (citations omitted)).
    Finally, the prosecution history cited by SAIC does not support SAIC’s proposed
    construction. During the prosecution of the ’230 patent, examiners originally rejected claim 1 as
    unpatentable over the prior art, specifically Azuma’s LCD display. Pl.’s Opening Cl. Constr. Br.
    Ex. E, ’230 Patent File History, May 30, 2012 Appeal Br. (ECF No. 90-8) at 4. SAIC appealed
    the examiner’s rejection arguing that Azuma does not teach the disputed limitation in claim 1 of
    the ’230 patent; rather, SAIC argued that the ’230 patent teaches that the “first video source [is]
    configured to generate images representing portions of an external environment.” Id. at 7. To
    support its appeal, SAIC noted, inter alia, that Azuma focused on optical images while claim 1 of
    the ’230 patent is related to an image from a camera. Id. at 8-9. On appeal, the Patent Trial and
    Appeal Board (PTAB) agreed with SAIC, reasoning that:
    Although Azuma’s optical display may generate optical images, the disputed
    limitation of a video source, when read in light of the Specification, must generate
    video images, e.g., electrical signals or data. Therefore, it is insufficient that
    Azuma’s display generates optical images; this fact, of itself, does not transform
    Azuma’s display into a video source as required by claim 1. Furthermore, the
    Examiner has erred by relying on Azuma’s single optical display to teach both the
    first video source and video display of claim 1.
    Pl.’s Opening Cl. Constr. Br. Ex. F, ’230 Patent File History, PTAB Decision on Appeal (ECF No.
    90-9) at 5.
    There is nothing cited in the ’230 patent prosecution history that supports SAIC’s assertion
    that Defendants’ frame limitation is inappropriate. Instead, the cited prosecution history appears
to be more focused on distinguishing optical images from video images; it says nothing concerning
Defendants’ proposed “frame” construction.
    SAIC’s reliance on the ’012 patent file history is also without merit. During the prosecution
of the ’012 patent, the patent examiner originally rejected SAIC’s claims based on a prior art night
vision goggle operational summary from Sensor Technology Systems (STS). Pl.’s Opening
    Cl. Constr. Br. Ex. G, ’012 Patent File History, July 24, 2009 (’012 Patent File History) (ECF No.
    90-10) at 9-11. In response, SAIC discussed two combined images, provided below, generated by
    the STS reference, which are not in registration:
    Id. SAIC explained to the examiner that the above-depicted video images were not in registration
    in part because “boundaries of the thermal [e.g., scope] image are substantially offset from (and
    not in registration with) the boundaries of the visual field portion.” Id. at 11. These file history
    remarks relate to the topic of registration, not video images, and nothing in these remarks addresses
    the patent specification’s use of the terms “video image” and “video frame” interchangeably.
    Accordingly, this Court construes “video images” as “digital, analog, or nonstandard video
    frames.”
    II. TRANSPARENT DISPLAY
SAIC: a display that has the effect of being transparent or translucent, allowing simultaneous
viewing of the underlying visual field and other images or information
Government: optical see-through display, or a display that allows some light to pass through it
(i.e., is see-through) when powered off
Microsoft: N/A
L3 (n.11): a display that allows light from the visual field to pass through it
    Court’s Construction: “a display that has the effect of being transparent or translucent,
    allowing simultaneous viewing of the underlying visual field and other images or information”
    SAIC argues that in the context of the patents, the term “transparent display” should be
    construed as “a display that has the effect of being transparent or translucent, allowing
    simultaneous viewing of the underlying visual field and other images or information.” Pl.’s
    Opening Cl. Constr. Br. at 34; Pl.’s Responsive Cl. Constr. Br. at 12-13; Pl.’s Supp. Cl. Constr.
    Br. at 7; Pl.’s Responsive Cl. Constr. Br. to L3 at 1. SAIC argues that its construction is supported
    by the specification which discloses “transparent display” in non-limiting terms and also includes
    examples of both optical see-through and video see-through/functionally see-through. Pl.’s
    Opening Cl. Constr. Br. at 36-37; Pl.’s Responsive Cl. Constr. Br. at 13-14; Pl.’s Supp. Cl. Constr.
Br. at 7-8; Pl.’s Responsive Cl. Constr. to L3 at 2. Thus, according to SAIC, a person of ordinary
skill in the art would understand the term transparent display to include video-generated displays
that have a transparent effect. Pl.’s Supp. Cl. Constr. Br. at 14-15.
11
L3 offers a slightly different construction for the ’103 and ’012 patents. For the ’012 patent, L3
proposes construing “transparent display” as “a display that allows light from the visual field to
pass through it.” L3 Opening Cl. Constr. Br. at 9. For the ’103 patent, L3 proposes construing
“transparent display” as “a heads up display adapted for viewing of the visual field by a user of
the system wherein the HUD comprises a display that allows light from the visual field to pass
through it.” Id. The difference in construction seems to account for claim 1 of the ’012 patent
being a method claim and claim 1 of the ’103 patent being a system claim. However, L3’s
construction of “transparent display” is essentially a display that allows visual light to pass through
it. For ease of reference, this Court refers only to L3’s proposed ’012 patent claim construction.
    The Government and L3 argue for a narrower interpretation of transparent display. The
Government interprets transparent display to mean an “optical see-through display, or a display
that allows some light to pass through it (i.e., is see-through) when powered off.” Government’s
    Opening Cl. Constr. Br. at 20, 27-28; Government’s Responsive Cl. Constr. Br. at 16, 20.
    Similarly, L3 interprets transparent display to mean “a display that allows light from the visual
    field to pass through it.” L3’s Opening Cl. Constr. Br. at 9; L3 Technologies, Inc.’s Responsive
    Claim Construction Brief (ECF No. 152) (L3’s Responsive Cl. Constr. Br.) at 1. Though different,
    L3’s construction agrees with the Government’s construction in principle. The gravamen of both
    L3’s and the Government’s constructions is that transparent display is limited to an “optical see-
    through display.” See Markman Hearing Tr. at 168:17-21 (L3), 188:22-190:22 (Government).
    The parties’ dispute therefore boils down to whether “transparent display” is limited to so-
    called “optical see-through displays” or whether the term encompasses so-called “video
    see-through displays.” This Court agrees with SAIC’s construction that “transparent display”
    encompasses “video see-through displays” because Defendants’ proposals would exclude
disclosed embodiments. See Vitronics, 90 F.3d at 1583-84. The term “transparent display”
    appears in the First Patent Family claims only. The term appears in claims 1, 10, 12, and 17-18 of
    the ’012 patent, and claims 1 and 9 of the ’103 patent. Nothing in the claim language limits
    transparent display to optical see-through. Claim 1, for example, does not contain any express
    limitation requiring the transparent display to operate only where light passes from the visual field
    through the display. The dependent claims likewise do not include any such requirement.
    With no evidence restricting transparent displays to optical see-through displays in the
    claims themselves, both L3 and the Government rely heavily on Figure 6 of the ’012 patent to
    support their constructions. L3’s Opening Cl. Constr. Br. at 10-11; Government’s Opening Cl.
    Constr. Br. at 22-23; L3’s Responsive Cl. Constr. Br. at 5-7. Figure 6 is a block diagram depicting
    the heads-up display (HUD) and a video assembly, both of which capture a view of the visual field.
    ’012 patent at Fig. 6. The HUD and video assembly are connected by sensors to a computer, and
the computer is then connected to a beam combiner. Id. Finally, the figure depicts an arrow
directly from the beam combiner to the viewer’s eye. Id. The parties agree that this particular
    embodiment is optical see-through. See Government’s Opening Cl. Constr. Br. at 26; L3’s
    Opening Cl. Constr. Br. at 5, 10-11; Markman Hearing Tr. 198:19-25 (SAIC). Defendants contend
    that the invention is limited to this embodiment. They state that the description of Figure 6 limits
    the term transparent display to an optical see-through display. Government’s Opening Cl. Constr.
    Br. at 24-26; L3’s Responsive Cl. Constr. Br. at 5.
    However, this Court cannot limit a transparent display to a single embodiment. Verizon
Servs. Corp. v. Vonage Holdings Corp., 503 F.3d 1295, 1305 (Fed. Cir. 2007). Moreover, “a claim
    interpretation that excludes a preferred embodiment from the scope of the claim is rarely, if ever,
correct.” MBO Labs., Inc. v. Becton, Dickinson, & Co., 474 F.3d 1323, 1333 (Fed. Cir. 2007)
(internal quotation and citation omitted); EPOS Techs. Ltd. v. Pegasus Techs. Ltd., 766 F.3d
1338, 1347 (Fed. Cir. 2014) (holding that a claim construction is incorrect when it “reads out” preferred
    embodiments).
    Importantly, the ’012 patent expressly discloses two embodiments that use a video
    see-through display. First, the ’012 patent expressly discloses an embodiment in which
    the image produced in the visual field of a display is captured by a second video
    camera. This second video feed or goggle image, along with the video images from
    the video source, are both fed into a computer for initial digital processing. . . .
    [T]he video image may be displayed in the heads up display alone, or the heads up
    display may be filled with the resultant combination of video image and goggle
    image.
    ’012 patent at 9:36-52.
    At oral argument, L3 argued that this embodiment does not mean that the HUD can be
    functionally see-through because the specification discloses that the HUD will be filled with a
single image. Markman Hearing Tr. at 179:4-181:19. This argument ignores the specification’s
statement that the single, resultant image is created by combining the video image and the goggle
    image. ’012 patent at 9:49-51. There is no reason why such an embodiment would arbitrarily
    exclude a video see-through display when the two-video system provides capability for a video
    see-through display. Indeed, if this embodiment required that the video image for a goggle image
    overlay an optically transparent display of that same goggle image, as L3 suggests, then such an
    embodiment would require that three images be aligned instead of two—which is something that
    the invention does not contemplate.
    Second, the ’012 patent also states that,
    [i]n other alternative embodiments, the heads up display need not be connected to
    the viewer, as through a pair of night vision goggles. . . . The current state of the
    art uses two screens, one for navigation and one for aiming the weapon. A robot
    operator uses one screen to drive the robot and acquire targets, then refers to an
    adjacent screen to aim and fire the weapon. Registering the weapon video image
    to the navigation screen in a manner similar to an infantryman garners similar
    advantages for the robot operator.
    ’012 patent at 5:23-25, 5:33-39. In such an embodiment, a robot is remote from its operator, who
uses a navigation screen to control the robot. Id. The Government’s and L3’s proposals would
    exclude any variation of such an embodiment because light does not pass directly from the robot’s
    visual field to the remote operator’s display.
    Considering the specification’s disclosures, a POSITA (under either party’s proposed
    standard) would understand that the inventions in the ’012 and ’103 patents use “transparent
    display” as encompassing “video see-through” displays. Such an interpretation would also be
    consistent with the inventions’ intended purpose of displaying video images in registration with
    an underlying visual field. See ’012 patent at Abstract, 1:6-9; 2:32-38; claims 1, 17.
    Notwithstanding the evidence in the specification, the Government and L3 also argue that
    “during prosecution of the application issuing as the ’103 patent, for purposes of allowance, the
    applicant authorized the following narrowing amendment to application claim 1, replacing (among
    other things) ‘in the HUD’ with ‘on the transparent display[.]’” Government’s Opening Cl. Constr.
    Br. at 24; L3’s Opening Cl. Constr. Br. at 9 (both quoting Examiner’s Amendment with Notice of
Allowability (ECF No. 63-3) at 127-29). According to Defendants, this construction is supported
by the ’103 patent’s prosecution history. Government’s Opening Cl. Constr. Br. at 24-25; L3’s
Responsive Cl. Constr. Br. at 8-10. Defendants argue that by making this amendment, SAIC ceded
    from the claim scope a video see-through display. Government’s Opening Cl. Constr. Br. at 24-
    25; see also L3’s Opening Cl. Constr. Br. at 9-10.
    However, Defendants fail to articulate how the amendment results in a clear disavowal.
    Nor could they. The law “requires that the alleged disavowing actions or statements made during
prosecution be both clear and unmistakable.” Aylus Networks, Inc. v. Apple Inc., 856 F.3d 1353,
1359 (Fed. Cir. 2017) (internal citations omitted). Here, no evidence exists on the record showing
    why this amendment was made. While statements, including amendments, made during patent
prosecution can amount to prosecution disclaimer, the disavowal must not be ambiguous. See Avid
Tech., Inc. v. Harmonic, Inc., 812 F.3d 1040, 1045 (Fed. Cir. 2016) (“Where the alleged disavowal
is ambiguous, or even ‘amenable to multiple reasonable interpretations,’ [the Federal Circuit has]
declined to find prosecution disclaimer.” (quoting Cordis Corp. v. Medtronic AVE, Inc., 339 F.3d
1352, 1359 (Fed. Cir. 2003)) (citing Omega Engineering, Inc. v. Raytek Corp., 334 F.3d 1314,
1323-24 (Fed. Cir. 2003) (finding multiple reasonable interpretations of remarks made by the
inventor in patent prosecution so disavowal did not occur))). Replacing “in the HUD” with “on the
    transparent display” alone says nothing about the scope of the term “transparent display.” Simply
    put, the mere existence of an amendment does not demonstrate disavowal.
Indeed, a review of the prosecution history of the First Patent Family as a whole
indicates that there was no clear and intentional disavowal of video see-through displays. Other
    excerpts from the prosecution history of the ’012 patent indicate that SAIC neither excluded a
    system that presented both the weapon sight image and the goggle image as video nor required
    light from the goggles’ visual field to pass through the display. During prosecution, the examiner
    rejected claim 1 of the ’012 patent application as obvious over the combination of (1) an
    operational summary for a pair of night vision goggles made by Sensor Technology Systems
(STS) and (2) U.S. Patent No. 7,277,118 (Foote). Pl.’s Supp. Cl. Constr. Br., Ex. T, ’012 FH,
    09.04.2009 Final Rejection (ECF No. 149-3) at 3-4. To support his rejection, the examiner alleged
    that the combination of STS and Foote taught a multi-camera system providing video images. Id.
    at 4. This rejection indicates that the examiner viewed claim 1 of the ’012 patent as encompassing
    multiple cameras. SAIC overcame this rejection, not by arguing that claim 1 employed only a
    single camera, but by arguing, inter alia, that “Foote’s predication upon fixed cameras is contrary
    to (and fundamentally incompatible with) the recitation in claim 1 of the video source and the
    transparent display being independently movable about multiple axes.” Pl.’s Supp. Cl. Constr. Br.
    Ex. U, ’012 FH, 12.03.2009 Arguments (ECF No. 149-4) at 8-9; see also Pl.’s Supp. Cl. Constr.
    Br. Ex. V, 12.17.2009 Pre-Appeal Brief Request for Review (ECF No. 149-5) at 3. SAIC’s
    argument thus differentiated claim 1 from the examiner’s asserted combination of STS and Foote
because the transparent display was independently movable from the video source; it did not
disclaim using a video system as the transparent display. Accordingly, the Court finds that the
    prosecution history does not evidence a clear disavowal of a video see-through transparent display.
Defendants’ reliance on extrinsic evidence is also unavailing. Because the intrinsic record
    clearly contemplates “transparent displays” beyond optical see-through displays, this Court need
not reference extrinsic evidence. See Summit 6, LLC v. Samsung Elecs. Co., Ltd., 802 F.3d 1283,
1290 (Fed. Cir. 2015) (“Although courts are permitted to consider extrinsic evidence, like expert
    testimony, dictionaries, and treatises, such evidence is generally of less significance than the
    intrinsic record.” (citing Phillips, 415 F.3d at 1317)). Furthermore, it is imperative that this Court
    put far less weight, if any, on extrinsic evidence to the extent it contradicts the intrinsic record.
See Immunex Corp. v. Sanofi-Aventis U.S. LLC, 977 F.3d 1212, 1221-22 (Fed. Cir. 2020) (citing
Helmsderfer v. Bobrick Washroom Equip., Inc., 527 F.3d 1379, 1382 (Fed. Cir. 2008)) (holding
    that it would be incorrect “to allow the extrinsic evidence . . . to trump the persuasive intrinsic
    evidence”).
    Here, the Government and L3 cite to several dictionary definitions, testimony from Dr.
    Ronald Azuma, and prior art to support their constructions that would limit the transparent display
    to optical see-through displays. See Government’s Opening Cl. Constr. Br. at 28; L3’s Opening
    Cl. Constr. Br. at 13, 24; L3’s Responsive Cl. Constr. Br. at 10-12. With respect to dictionaries,
    the Defendants rely on the following definitions:
    (1) The Government uses Merriam-Webster’s Collegiate Dictionary (11th ed.),
    which defines “transparent,” in relevant parts, as “1a(1): having the property of
    transmitting light without appreciable scattering so that bodies lying beyond are
    seen clearly” or “1b: fine or sheer enough to be seen through,” “b: easily detected
    or seen through.” Government’s Opening Cl. Constr. Br. at 28 (citing SAIC’s
    Motion to Substitute Exhibits Ex. A, Merriam-Webster’s Collegiate Dictionary
    (11th ed. 2004) at 1330 (ECF No. 69-1) at 3).
    (2) L3 uses Merriam Webster’s Collegiate Dictionary (10th ed.), which defines
    “transparent” as “having the property of transmitting light without appreciable
    scattering so that bodies lying beyond are seen clearly.” L3’s Opening Cl. Constr.
    Br. Ex. 1 (Merriam-Webster’s Collegiate Dictionary (10th ed. 2000) at 1251)
    (ECF No. 148-1) at 5.
    (3) L3 also uses Wiley Electrical and Electronics Dictionary, which defines
    “transparent,” in relevant part, as “1. A body, material or medium which freely
    passes radiant energy, such as light, or sound.” L3’s Opening Cl. Constr. Br. Ex.
    6 (Wiley Electrical and Electronics Dictionary (2004) at 801-02) (ECF No. 148-
    1) at 26.
    The Government and L3 also rely on the testimony of Dr. Azuma and on prior art to
    distinguish between optical see-through and video see-through heads up displays.               See
    Government’s Opening Cl. Constr. Br. at 26-27 (citing Joint Claim Construction Statement Exhibit
    12, Chung, J.C., et al., “Exploring Virtual Worlds with Head-Mounted Displays,” appeared in
    Non-Holographic True 3-Dimensional Display Technologies, SPIE Proceedings, Vol. 1083, Los
Angeles, CA, January 15-20, 1989 (ECF No. 64-12) at 5) (differentiating between “optical see-
through HMD” and “opaque head-mounted display”); Pl.’s Responsive Cl. Constr. Br. Ex. S,
    Ronald Azuma Deposition Transcript (ECF No. 96-3) (Azuma Dep.) at 160:21-161:2 (“Q.
    . . . Would you consider a video see-through as it's used here in your dissertation -- excuse me.
    Video see-through HMD, would you consider that to be a transparent display? A. I think we had
    that discussion already. Generally speaking, no, because if the power was cut, then the user would
    effectively be blinded.”).
    The dictionary definitions, Dr. Azuma’s testimony, and the prior art referenced by
    Defendants are contrary to the intrinsic record and therefore fail to support Defendants’ proposed
    construction.   Even if these dictionaries, expert testimony, and prior art did establish that
    video-see-through displays are generally not considered “transparent,” it would be of no moment;
    under Phillips, the plain meaning is not the meaning of the term in the abstract but is rather the
    plain meaning as understood by a POSITA after reading the patents.               415 F.3d at 1313
    (“Importantly, the person of ordinary skill in the art is deemed to read the claim term not only in
    the context of the particular claim in which the disputed term appears, but in the context of the
entire patent, including the specification.”). When a term as used in a patent conflicts with the
    ordinary or conventional usage of that term, the patent’s use of the term must prevail. For instance,
in Honeywell Int'l, Inc. v. Universal Avionics Sys. Corp., 493 F.3d 1358, 1361 (Fed. Cir. 2007),
    the Federal Circuit was tasked with interpreting the term “heading” in the context of aircraft
navigation. There, the parties agreed that, in the context of aircraft navigation, “‘[h]eading’
ordinarily refers to the direction in which an object is pointing.” Id. However, the Federal Circuit
found that, based on a patent figure, “the patentees used the term ‘heading’ in a manner different
from its ordinary meaning.” Id. Relying on Phillips, the Federal Circuit held that “[w]hen a
patentee defines a claim term, the patentee's definition governs, even if it is contrary to the
conventional meaning of the term.” Id. (citing Phillips, 415 F.3d at 1321). Here, the patents
    disclose embodiments that include both optical see-through and video see-through. See ’012
    patent at 5:44-6:2, 7:16-19; ’103 patent at 5:49-6:7, 7:22-25. To the extent the cited dictionary
    definitions, prior art references, and expert testimony contradict these disclosed embodiments, this
    Court cannot rely on such definitions, references, and testimony, and declines to do so. See
Immunex Corp. v. Sanofi-Aventis U.S. LLC, 977 F.3d 1212, 1221-22 (Fed. Cir. 2020) (citing
Helmsderfer v. Bobrick Washroom Equip., Inc., 527 F.3d 1379, 1382 (Fed. Cir. 2008)) (holding
    that it would be incorrect “to allow the extrinsic evidence . . . to trump the persuasive intrinsic
    evidence”).
    For these reasons, this Court construes “transparent display” as “a display that has the effect
    of being transparent or translucent, allowing simultaneous viewing of the underlying visual field
    and other images or information.”
    III. OVERLAY
SAIC: Plain and ordinary meaning, e.g., Overlaying (plain and ordinary meaning): positioned
over or upon. Overlay/overlays (plain and ordinary meaning): are over or upon/is over or upon.
Government: N/A
Microsoft: N/A
L3: to place on top or in front of (e.g., superimpose)
Court’s Construction: Plain and ordinary meaning, e.g., Overlaying: positioned over or upon. /
Overlays/overlay: are over or upon/is over or upon.
    SAIC’s and L3’s proposed constructions of the terms “overlay,” “overlays,” and
    “overlaying” are remarkably similar, but are not identical. SAIC contends that the “overlay” terms
    are unambiguous because such terms are used in accordance with their plain and ordinary meaning
    in the patents—“positioned over or upon” or “are over or upon/is over or upon.” Pl.’s Supp. Cl.
    Constr. Br. at 16. SAIC further argues that L3’s proposed construction “to place on top or in front
    of (e.g., superimpose)” as a synonym for “overlay” adds nothing of consequence to the term’s
meaning and would erroneously read limitations into the patent claims. Id. at 18 (citing
ActiveVideo Networks, Inc. v. Verizon Comm’ns, Inc., 694 F.3d 1312, 1324-25 (Fed. Cir. 2012));
    Pl.’s Responsive Cl. Constr. Br. to L3 at 7. Essentially, SAIC urges the Court to adopt its proposed
    plain and ordinary meaning or construction of the term “overlay” because it “encompasses the full
    scope of the disclosed embodiments consistent with the inventions’ purpose;” whereas, L3’s
    proposed construction would inappropriately limit the scope of the patent claims at issue. Pl.’s
Supp. Cl. Constr. Br. at 18; Pl.’s Responsive Cl. Constr. Br. to L3 at 7-8.
    L3 argues that “overlay” should be construed to mean “on top or in front of (e.g.,
    superimposed).” L3’s Responsive Cl. Constr. Br. at 14. L3’s proposed construction imports a
    directional component into the definition of “overlay” that SAIC’s construction does not require.
    See L3’s Opening Cl. Constr. Br. at 5-8. L3 asserts this definition comports with how “overlaying”
    is described in the patents. L3’s Responsive Cl. Constr. Br. at 14-15.
    The parties appear to agree—and the Court concurs—the concept of “superimposing” is
    contained within the parties’ respective claim constructions, which L3 and SAIC each contend
    encompass the “plain meaning” of the “overlay” terms. See, e.g., L3’s Opening Cl. Constr. Br. at
    8-9; L3’s Responsive Cl. Constr. Br. at 14-16; Pl.’s Cl. Constr. Presentation at 88 (“SAIC offered
    to construe this term as ‘plain and ordinary meaning (e.g., superimpose)’”); L3’s Cl. Constr.
    Presentation at 33 (noting SAIC’s proposed stipulation encompasses “superimpose”). The parties’
    dispute appears instead to center around the importation of directional requirements into the
    “overlay.”
    The Court agrees with SAIC that overlay is not limited to placing one image “on top of”
    or “in front of” another. The overlay terms appear in claims 1 and 17 of the ’012 patent; claim 1
    of the ’103 patent; and claims 1, 15, and 29 of the ’230 patent. Claim 1 of the ’012 patent recites
a method of displaying video images overlaying portions of a visual field. ’012 patent at
9:63-10:10. The ’012 patent states:
    1. A method of registering video images with an underlying visual field comprising
    the steps of:
    (1) determining a source orientation of a video source providing a video feed
    containing data for a series of video images representing portions of a visual
    field;
    (2) determining a display orientation of a transparent display overlaying the
    visual field, wherein the video source and the transparent display are
    independently movable about multiple axes; and
    (3) displaying the video images in positions on the transparent display that
    overlay portions of the visual field represented by the displayed video images,
    wherein boundaries of the displayed video images are in registration with
    boundaries of portions of the visual field represented by the displayed video
    images.
    
    Id.
    In claim 1 of the ’012 patent, “overlaying” is used to distinguish the transparent display
    from the “underlying visual field.” 12 As used in claim 1, the terms “overlaying” and “underlying”
    describe the positions of the transparent display and the visual field relative to one another.
    However, nothing in claim 1 limits “overlaying” through an objective perspective outside both the
    transparent display and the visual field such that the overlaying image must always be viewed as
    “on top of” or “in front of” another image.
    In the Second Patent Family, only the ’230 patent uses the term overlay. 13 The ’230 patent
12 Claim 1 of the ’103 patent uses the term “overlay” consistent with the rest of the First Patent
Family. See ’103 patent at 10:20-29.
13 The ’752 patent uses the term “replace” rather than “overlay.” However, at oral argument on
the Government’s motion to dismiss, SAIC’s counsel noted that the terms serve a similar function.
Jan. 3, 2018 Oral Argument Tr. (ECF No. 18) at 68:4-17 (“[The ’752 patent] talks about replacing
an image with a portion of another – replace a portion of an image with a portion of another image.
That’s different. That’s different language. It’s a different process. It may get you to the same
end, but you asked about the how, the how matters. Those are different hows, Your Honor.”).
uses “overlay” to describe how two video source images are displayed, rather than to distinguish
between the video feed and visual field as in the First Patent Family. Claim 1 of the ’230 patent
is illustrative and recites in relevant part:
(e) display at least a portion of the first video source image and at least a portion
of the second video source image such that the second video source image
portion overlays a corresponding region of the first video source image portion,
wherein the corresponding region represents a portion of the external
environment represented in the second video source portion.
’230 patent at 24:45-51.
Like the claims in the ’012, ’103, and ’230 patents, the First Patent Family’s specification,
which is incorporated by reference in the ’230 patent’s specification, occasionally describes an
image as “in front of the visual field” when describing certain embodiments. ’012 patent at 5:57-67;
7:21-24. However, nothing in the specifications indicates that “overlay” is necessarily limited to
placing an image in front of or on top of another. Therefore, the importation of such limitations
from the specification is impermissible. See Phillips, 415 F.3d at 1323.
Moreover, L3’s construction would impermissibly exclude other embodiments. See MBO
Labs., 474 F.3d at 1333 (“A claim interpretation that excludes a preferred embodiment from the
scope of the claim is rarely, if ever, correct.” (internal quotation marks and citation omitted)). For
instance, the ’012 patent specification references alternative embodiments where “the heads up
display may be filled with the resultant combination of the video image and the goggle image.”
’012 patent at 9:37-51. L3’s proposed limitation would exclude this embodiment because placing
one image in front of another requires that there be two distinct images that, when viewed from a
specific (though unidentified) perspective, are directionally related such that one is “on top or in
front of” the other. L3’s Responsive Cl. Constr. Br. at 16. Specifically, L3’s limiting definition
    would exclude such an embodiment from the scope of the patent claim because the HUD is “filled
    with” a single resultant image rather than one image positioned “on top or in front of” another.
    See ’012 patent at 9:49-51.
    Nor is L3’s reliance on dictionary definitions persuasive. As noted, because the intrinsic
record is clear that “overlay” is not limited to “in front of” or “on top of,” this Court need not
    reference extrinsic evidence. See Phillips, 415 F.3d at 1317-18; Summit 6, LLC, 802 F.3d at 1290.
    However, to the extent extrinsic evidence is at all relevant, it supports a broader construction than
    L3 proposes.
    To support its proposed construction, L3 relies on the following dictionary definitions of
    the term “overlay.” L3’s Opening Cl. Constr. Br. at 8-9.
    (1) Merriam Webster’s Collegiate Dictionary defines “overlay” as “to lay or spread
    over or across.”     L3’s Opening Cl. Constr. Br. Ex. 1 (Merriam-Webster’s
    Collegiate Dictionary (10th ed. 2000) at 827) at 4.
    (2) New Penguin Dictionary of Computing (2001) defines “overlay” in the context
    of graphics as “superimpos[ing] one image over another.” L3’s Opening Cl.
    Constr. Br. Ex. 2 (New Penguin Dictionary of Computing (2001) at 352) (ECF
    No. 148-1) at 9.
    (3) The New Oxford American Dictionary defines “overlay” as “lie on top of: a third
    screen which will overlay the others.” L3’s Opening Cl. Constr. Br. Ex. 3 (The
    New Oxford American Dictionary (2nd ed. 2005) at 1213) (ECF No. 148-1) at
    13.
    (4) Newton’s Telecom Dictionary defines “overlay” as “The ability to superimpose
    computer graphics over a live or recorded video signal and store the resulting
    video image on videotape. It is often used to add titles to videotape.” L3’s
    Opening Cl. Constr. Br. Ex. 5 (Newton’s Telecom Dictionary (16th ed. 2000) at
    622) (ECF No. 148-1) at 21.
    In Phillips, the Federal Circuit explained that dictionaries may be useful in determining the
    meaning of terms. 415 F.3d at 1318. Despite this allowable use of dictionary definitions, the
    Federal Circuit has warned that courts should not rely on dictionary definitions that are “entirely
    divorced from the context of the written description.” Id. at 1321. The Federal Circuit thus
    concluded that:
    [J]udges are free to consult dictionaries and technical treatises “at any time in order
    to better understand the underlying technology and may also rely on dictionary
    definitions when construing claim terms, so long as the dictionary definition does
    not contradict any definition found in or ascertained by a reading of the patent
    documents.”
Id. at 1322-23 (quoting Vitronics, 90 F.3d at 1585 n.6). Here, the New Oxford American
    Dictionary, which appears to be a general use dictionary, defines “overlay” as meaning to “lie on
    top of: a third screen which will overlay the others.” L3’s Opening Cl. Constr. Br. Ex. 3 at 13. To
    be sure, this definition supports L3’s construction (and is encompassed by SAIC’s construction)
    and is consistent with some embodiments contained within the patents. ’012 patent at 5:57-67;
7:21-24. However, the other dictionaries to which L3 cites do not support limiting the term
    “overlay” as L3 suggests. Indeed, Newton’s Telecom Dictionary and New Penguin Dictionary of
    Computing, which are technical in nature, appear to undermine L3’s proposed claim limitations.
    Newton’s Telecom Dictionary defines “overlay” as “[t]he ability to superimpose computer
    graphics over a live or recorded video signal and store the resulting video image on videotape.”
    L3 Opening Cl. Constr. Br. Ex. 5 at 21. Similarly, New Penguin Dictionary of Computing defines
    “overlay” in the context of graphics as “superimpos[ing] one image over another.” L3’s Opening
    Cl. Constr. Br. Ex. 2 at 9. These definitions, consistent with the intrinsic evidence, do not require
    that an image is placed only “in front of” or “on top of” another. Newton’s Telecom Dictionary’s
    reference to a “resulting video image” is also consistent with the ’012 patent specification’s
    reference to a preferred embodiment which combines images from two video feeds into one
    resultant image using digital processing. See ’012 patent at 9:37-51. Accordingly, SAIC’s
    proposed construction is consistent with both the intrinsic and extrinsic evidence cited by L3.
    For these reasons, this Court construes the “overlay” terms in accordance with their plain
meaning as, “e.g., Overlaying: positioned over or upon.” / “Overlay/overlays: are over or upon/is
over or upon.”
    IV. “BASED ON A COMPARISON OF DATA FROM THE FIRST AND SECOND
    VIDEO SOURCE IMAGES”
SAIC: Plain and ordinary meaning
Government: N/A
Microsoft: N/A
L3: based on [a] the comparison of image data (e.g., content and contrast) from the first and
second video source images
Court’s Construction: “based on a comparison of image data (e.g., content and contrast) from
the first and second video source images”
    In its most recent filing, L3 offers the construction “based on [a] the comparison of image
    data (e.g., content and contrast) from the first and second video source images” for the “based on
    a comparison of data from the first and second video source images” term. L3’s Responsive Cl.
    Constr. Br. at 17 (brackets and strikethrough in original). 14 L3 argues that the context of the claims
    and specification demonstrate that the “data” being compared is “image data (e.g., content and
    contrast)” and not “motion data” or “orientation data.” L3’s Opening Cl. Constr. Br. at 14-17. In
    other words, L3’s construction seeks to clarify “the actual content of the images – [sic] whether it
    be their greyscale, contrast, PSR, or some similar measure—is what must be compared, not data
    unrelated to the content of the images, such as the separately claimed motion/orientation data.”
    L3’s Responsive Cl. Constr. Br. at 18.
    SAIC argues that L3’s proposed construction would unduly restrict the claims to less than
their full scope. Pl.’s Responsive Cl. Constr. Br. to L3 at 8. Specifically, SAIC contends that,
    [w]hile content and contrast are two types of data used when comparing data from
    a first and second sources of images as part of image registration, the express claim
    language—i.e., data from the first and second video source images—is
    straightforward and L3’s proposal to rewrite it should be rejected.
    Pl.’s Supp. Cl. Constr. Br. at 21-22 (emphasis in original).
    This Court agrees with L3 that the “data” in the phrase “based on a comparison of data”
refers to “image data (e.g., content and contrast).” The language of the claims, read in the context
of the entire patent, is of primary importance. See Phillips, 415 F.3d at 1312 (citing Merrill v.
Yeomans, 94 U.S. 568, 570 (1876)). Here, the language of the claims, read in light of the
specification, is clear. The ’230 patent claims a process of registering images from two different
    video sources through two different comparisons—the first being based on a comparison of motion
14 Initially, L3 proposed to construe “based on a comparison of data from the first and second video
    source images” as “based on the comparison of image data (e.g., content and contrast) from the
    first and second video source images.” L3’s Opening Cl. Construction Br. at 14. However,
    Plaintiff took issue with L3’s initial construction language of “the comparison” as opposed to “a
    comparison” because it would allegedly introduce “an otherwise non-existent antecedent basis
    problem” to the claim language. Pl.’s Responsive Cl. Constr. Br. to L3 at 11. L3’s revised, current
    construction uses “a” instead of “the” to avoid an antecedent basis issue. L3’s Responsive Cl.
    Constr. Br. at 18.
    data from the video source and the second being based on a comparison of the content of the
images themselves. See, e.g., ’230 patent at Abstract.
    The phrase “based on a comparison of data from the first and second video source images”
is contained in step (d) of claims 1, 15, and 29 of the ’230 patent. The distinction between
    motion data and image data is illustrated by claim 1 of the ’230 patent. In claim 1, the patentee
    outlines the invention as “[a] system, comprising: a first video source configured to generate
    images representing portions of an external environment [and] a second video source, movable
    independent of the first video source configured to generate images representing portions of the
    external environment . . . .” ’230 patent at 24:25-30. These video sources are connected to a
    controller and video display “wherein the controller is configured to (a) receive video images from
    the first video source and from the second video source” then to “(b) receive motion data indicative
    of motion of the first and second video sources.” Id. at 24:33-37. Once the controller receives
    motion and image data from the first and second video sources, it then “(c) identif[ies], based on
    the received motion data, a part of a first video source image that potentially represents a portion
    of the external environment represented in a part of a second video source image . . . .” Id. at
    24:38-41. Crucially, the system then “(d) evaluate[s], based on a comparison of data from the first
    and second video source images, the identification performed in operation (c).” Id. at 24:42-44.
    Finally, the system,
    display[s] at least a portion of the first video source image and at least a portion of
    the second video source image such that the second video source image portion
    overlays a corresponding region of the first video source image portion, wherein
    the corresponding region represents a portion of the external environment
    represented in the second video source portion.
    Id. at 24:45-51.
    The structure of claim 1 and step (d)’s reference to “data from the first and second video
    source image” demonstrates that the data used to perform this evaluation is the data received from
    “video images” in step (a). Conversely, the data used to perform the evaluation in step (d) is not
    the “motion data” described in step (b), but rather is the data from the images received in step (a).
See Becton, Dickinson & Co. v. Tyco Healthcare Grp., LP, 616 F.3d 1249, 1254 (Fed. Cir. 2010)
    (“Where a claim lists elements separately, the clear implication of the claim language is that those
    elements are distinct components of the patented invention.” (internal quotations and citations
    omitted)).
    Reading claim 1 in light of the invention’s intended purpose along with the specification
    provides further support for L3’s construction. The First Patent Family claims a system in which
    the images from two different sources are aligned using only orientation data. See, e.g., ’012
    patent, claims 1, 17; ’103 patent, claim 1. By the time the ’230 patent was filed three years later,
    however, the named inventors had realized that use of orientation data alone may pose problems.
    ’230 patent at 1:35-43 (identifying disadvantages of a system using only sensor data to match
    images). Thus, the later-filed ’230 patent describes an improved two-step alignment method. This
    method first uses data from motion sensors to help align images from two different sources and
    then performs a second step of comparing the content of the images themselves; then, using that
    comparison to evaluate whether the alignment is correct, adjusts the alignment as necessary. See,
e.g., id. at Abstract (“The sensor-based location is checked (and possibly adjusted) based on a
    comparison of the images.”); 2:57-62 (“FIGS. 8A through 8K illustrate checking and/or correcting
    an IMU-based position for one video image within another video image. FIGS. 9A through 9C
    illustrate correction of an IMU-based position calculation based on image comparison results.”);
    3:1-5 (“FIGS. 13A and 13B show a goggles images and scope image, respectively, used to describe
    an alternative image comparison algorithm. FIG. 14 is a flow chart for an alternative image
    comparison algorithm.”); 7:19-22 (“As discussed below, the location and rotation of weapon view
    74 within user display 70 is determined by computer 30 based on output from sensors 13 and 18
    and based on comparison of the scope image with the goggles image.”); 9:45-47 (“For larger
    distances, image comparison position calculations (described below) compensate for errors caused
    by parallax.”); 10:5-16 (“In block 117, the IMU-based calculation for position and rotation of
    weapon view 74 within display 70 is checked using an image-based method.”). The ’230 patent
    summary provides an overview of this two-step alignment process:
    Data from the two images are then compared in order to evaluate the location
    determined from the sensor data. The sensor-based location is either confirmed, or
    a new location is found based on additional image comparisons. Once a location is
    selected (either a confirmed sensor-based location or a location found using image
    comparison), the two images are displayed such that the second source image (or a
    portion of that image) overlays a corresponding portion of the first source image.
    Locations obtained using image comparisons are used to calibrate (adjust) the
    manner in which subsequent sensor-based locations are determined.
    ’230 patent at 2:6-17.
    L3’s proposed construction gives proper meaning to the ’230 patent’s alignment process
    by delineating the use of motion data for the first step of the alignment process and the use of
    image data for the second step.
    In contrast, SAIC cannot cite to any intrinsic or extrinsic evidence to contradict L3’s
    construction; instead, SAIC argues in conclusory fashion that the phrase “based on a comparison
    of data from the first and second video source images” needs no construction. Pl.’s Supp. Cl.
    Constr. Br. at 21-22. Indeed, the portions of the specification SAIC references appear to support
    L3’s position that step (d) “evaluates” the accuracy of the initial alignment derived from motion
    data using the “content and contrast” of the images from the first and second video source. See
    ’230 patent at 10:21-26 (“To address these concerns, the relative orientation of goggles 11 and
    scope 17 can be independently deduced by processing image data from image generator 57 and
    scope 17 if there is sufficient image content and contrast and if similar imaging technologies (e.g.
    microbolometers, CCD, etc.) are used.”).
Accordingly, the Court construes “based on a comparison of data from the first and second
video source images” to mean “based on a comparison of image data (e.g., content and contrast)
from the first and second video source images.”
    V. “MOTION DATA”
SAIC: “data indicative of motion, including at least orientation data”
Government: Indefinite
Microsoft: N/A
L3: Indefinite
Court’s Construction: “data indicative of motion, including at least orientation data”
    The Government contends that the term “motion data” is a word or term with no established
    meaning outside the patents. Government’s Opening Cl. Constr. Br. at 15. Accordingly, the
    Government argues that, because there is no definition or even mention of “motion data” in the
    specification, a POSITA reviewing the specification and prosecution history at the time of the
    invention would not be informed “with reasonable certainty” of the claimed invention’s objective
    scope. Id. at 18. Moreover, according to the Government, SAIC’s construction of “motion data”
    as “data indicative of motion, including at least orientation data” is circular because it includes
    “data” and “motion” in the definition of “motion data,” then unjustifiably adds “including at least
    orientation data,” opening the door for additional, undefined “data” to be swept up within the
    scope. Id. at 17-19. The Government also contends that SAIC’s proposed construction would be
    repetitive and nonsensical because replacing “motion data” with SAIC’s proposed construction (as
    shown in underline) in claim 1, clause (b) of the ’230 patent would read as follows: “(b) receive
    data indicative of motion, including at least orientation data indicative of motion of the first and
    second video sources.” Id. at 16. The Government also relies on extrinsic evidence, particularly
the testimony of Ulrich Neumann, Ph.D., 15 Ronald Azuma, Ph.D., 16 and Gregory Welch, Ph.D., 17
    to support its assertion. See id. at 15-19.
    SAIC argues that motion data is readily understandable from the context of the patents and
    the surrounding claim language. Pl.’s Responsive Cl. Constr. Br. at 9. Additionally, SAIC argues
    that its construction is consistent with extrinsic evidence. Specifically, SAIC points to Dr. Welch’s
    testimony that the patents teach utilization of orientation data to determine the location of the
    weapon image within the goggles image. Pl.’s Responsive Cl. Constr. Br. at 10-11; see also Pl.’s
    Opening Cl. Constr. Br. at 33-34.
    The Court agrees with SAIC’s construction and holds that “motion data,” as used in the
    patents, is not indefinite. The intrinsic evidence supports SAIC’s construction. This term only
    appears in the Second Patent Family claims, particularly claims 1-3, 5, 15, 17, 19, 29, 31, and 33
    of the ’230 patent, and claims 1-2, 7-8, and 13-14 of the ’752 patent. Importantly, in the ’230 and
    ’752 patents, this term appears for the first time in the claims and cannot be found anywhere in the
    specification. The Government contends that “motion data” does not have a defined meaning
    outside the patents, and thus the specification must define the term to provide reasonable certainty.
Government’s Opening Cl. Constr. Br. at 15 (quoting Acacia Media Techs. Corp. v. New
Destiny Internet Grp., 405 F. Supp. 2d 1127, 1136 (N.D. Cal. 2005)).
15 Dr. Neumann is the Government’s claim construction expert. Defendant’s Disclosure of Claim
Construction Expert (ECF No. 48).
16 Dr. Azuma is a non-party, fact witness subpoenaed by the Government; however, the parties
agree that he is also a “recognized pioneer and innovator in augmented reality.” Azuma Dep. at
100:22-103:2, 136:5-8.
17 Dr. Welch is SAIC’s claim construction expert. Disclosure of Claim Construction Expert (ECF
No. 49).
The Government’s reliance
    on Acacia is misplaced. Acacia involved a patent for a data transmission system in which the
    plaintiff sought to define a “sequence encoder” as a “time encoder” to avoid an indefiniteness
ruling. 405 F. Supp. 2d at 1134-36. While “time encoder” was used in one embodiment, the
district court found no evidence that this was to be the only embodiment. Id. at 1136. Acacia held
    that,
    [i]f a patentee uses a broad undefined term (such as ‘[motion data]’) in claiming an
    invention, when the validity of the patent is called into question in a legal
    proceeding, the owner of the patent cannot avoid invalidity by adopting a more
    limited definition (such as ‘[orientation data]’), unless that limitation can be fairly
    inferred from the specification.
    
Id. Acacia is simply inapposite to the current case. SAIC is not seeking to limit “motion data” to
    “orientation data” but rather is arguing that orientation data is one type of data that must always
    be included within “motion data,” as that term is used within the patents. Pl.’s Cl. Constr. Br. at
    32-33. More importantly, unlike the plaintiff in Acacia that sought to use an embodiment to limit
    the scope of a claim, here, SAIC’s construction can fairly be inferred from the specification.
The Acacia court acknowledged that not all undefined terms are indefinite. 405 F. Supp. 2d
at 1136. As illustrated by the Federal Circuit’s decisions in Bancorp Servs., L.L.C. v. Hartford
Life Ins. Co., 359 F.3d 1367 (Fed. Cir. 2004), and Network Commerce, Inc. v. Microsoft Corp.,
422 F.3d 1353 (Fed. Cir. 2005), an undefined term with no specialized meaning in the field of the
    invention is not indefinite where the meaning of the term is fairly inferred from the patent. In
    Bancorp, the Federal Circuit addressed whether the phrase “surrender value protected investment
    credits” in a patent that described a system for tracking the value of life insurance policies was
indefinite. 359 F.3d at 1372. The phrase was not defined in the patent and did not have an
established definition in the industry. Id. at 1372-73. However, the Federal Circuit held the term
    was definite because (1) the phrase’s component terms had “well-recognized meanings which
    allow the reader to infer the meaning of the entire phrase with reasonable confidence,” and (2) the
meaning of the phrase was “fairly inferable” from the specification and the dependent claims. Id.
at 1372-74. Likewise, in Network Commerce, the Federal Circuit held the term “download
    component” definite, despite being undefined, where the patents provided sufficient context as to
how “download component” functioned in the claimed method. 422 F.3d at 1360-61.
    Here, ’230 patent claim 1 states that the system is comprised of “a controller coupled to
    the first and second video sources and to the display, wherein the controller is configured to . . .
    (b) receive motion data indicative of motion of the first and second video sources . . . .” ’230
    patent at 24:32-37. Step (b) of ’230 claim 1 identifies motion data. ’230 patent at 24:36-37. All
    subsequent uses of “motion data” are derived from how motion data is used in step (b). See, e.g.,
id. at 24:38-41.
    Step (c) of claim 1 continues that the controller will “identify, based on the received motion
    data, a part of a first video source image that potentially represents a portion of the external
    environment represented in a part of a second video source image.” Id. at 24:38-41. Thus, step
    (c) of ’230 claim 1 identifies how motion data is used.
    Read in light of the specification, it is clear that, at minimum, the motion data in steps (b)
    and (c) of claim 1 must measure or account for the relative orientation or alignment of both video
sources. As discussed supra Section IV, the First Patent Family claims a system in which the
    images from two different sources are aligned using only orientation data. See, e.g., ’012 patent,
    claims 1, 17; ’103 patent, claim 1. The ’230 patent purports to correct errors associated with the
    First Patent Family by adding an additional “evaluation” step to “check” the alignment initially
performed using orientation data. See, e.g., ’230 patent at 24:42-44. Thus, claim 1 steps (b) and
    (c) are intended to incorporate the First Patent Family’s use of orientation data. For instance, in
    describing the First Patent Family, the Second Patent Family discloses that “[s]ensors coupled to
    the rifle and to the goggles provide data indicating movement of the goggles and rifle. [T]he sensor
    data is then used to determine the relative orientation of the two sources and calculate a location
    for the rifle image within the image seen through the goggles.” ’230 patent at 1:25-34. In
    summarizing the invention, the ’230 patent uses almost identical language to describe the first step
of the Second Patent Family’s two-step alignment process. The summary of the invention states that “[i]n
    at least some embodiments . . . , [s]ensors coupled to the two video sources provide data to the
    computer that indicates the spatial orientations of those sources. Using the sensor data, the
    computer determines a location for placing a video image (or a portion thereof) from a second of
    the sources (e.g., a rifle-mounted source) in the video image from a first of the sources (e.g., a
    goggles mounted source).” ’230 patent at 1:58-2:3.
    It is evident that the Second Patent Family did not intend to disavow the First Family’s use
    of orientation data, but rather sought to supplement the use of orientation data with an image check.
    This is further evidenced by the remainder of the specification, which is replete with references to
sensors used to determine the orientation of the first and second video sources. See, e.g., ’230
    patent at 1:64-66; 5:28-6:48; 8:21-10:15; 23:63-65; Fig. 3. Though other types of motion data,
    such as position data (see, e.g., ’230 patent at 5:60-6:4), are referenced, the consistent reference to
    orientation data in each embodiment provides a POSITA with reasonable certainty that motion
    data includes at least orientation data.
    Defendant argues that SAIC’s construction of “including orientation data” is not supported
by intrinsic data because “when the inventors sought to use ‘orientation data’ or even ‘position
data’ they did so expressly.” Government’s Opening Cl. Constr. Br. at 17-18. Therefore, the
    Government contends the inventors’ omission of orientation data from the claims indicates that
    the inventors intended to use motion data differently than SAIC proposes. Id. Additionally, the
    Government cites to a preferred embodiment which explains that ultra-wideband radios can be
    used rather than using separate orientation sensors. Markman Hearing Tr. at 139:22-140:24. Thus,
    according to the Government, “at least orientation data” would exclude this embodiment.
    These arguments miss the point. SAIC is not arguing that orientation equals motion data.
    Motion data may include position data. See, e.g., ’230 patent at 8:21-28 (“After initial calibration,
    computer 30 receives position data for system 10 from GPS chipset 31 (FIG. 2) and/or from data
communication chipset 32 in block 105.”). However, SAIC’s construction simply requires that
orientation data be used. The ultra-wideband radio embodiment cited by the Government is not
to the contrary. The specification consistently states that devices other than
    IMUs can be used to determine orientation. See, e.g., ’230 patent at 4:17-41. The embodiment
    does not state that orientation is not used; rather, it states that orientation data based on relative
    alignment of ultra-wideband radios can be used instead of IMUs. Id. Accordingly, contrary to the
    Government’s arguments, SAIC’s construction is consistent with the intrinsic evidence.
The Government relies on the testimony of Drs. Neumann, Azuma, and Welch to support
    its assertion that “motion data” is indefinite. As explained below, the cited extrinsic evidence is
    consistent with SAIC’s claim construction and does not amount to clear and convincing evidence
    of indefiniteness.
    First, the Government relies on statements made by Dr. Azuma during his deposition that
    “motion data” is “not . . . a well-defined term in the field.” Azuma Dep. at 126:20-127:12, 80:23-
    81:20. This statement standing alone does not amount to clear and convincing evidence that
    “motion data” as used in the patents is indefinite—primarily because, as the Government concedes,
    Dr. Azuma did not read the patents prior to his deposition. Azuma Dep. at 238:8-25 (“I have not
    read the SAIC patents. . . . I am not familiar with those particular patents.”). Moreover, when
    asked how Dr. Azuma used “motion data,” Dr. Azuma’s explanation comports with SAIC’s
    proposed usage of the term. Specifically, Dr. Azuma stated that “motion data” includes, inter alia,
    orientation data. Azuma Dep. at 160:3-16. Thus, to the extent Dr. Azuma’s testimony is at all
    relevant, it comports with SAIC’s construction of motion data.
    Next, the Government relies on a statement made by Dr. Welch during his deposition that
    SAIC’s proposed construction is certainly “not elegant” when applied to the claims. Joint
    Submission of Cl. Constr. Experts, Ex. 2 Gregory Welch Deposition Transcript (Welch Dep.)
    (ECF No. 79-2) at 156:7-17. Again, this statement is not clear and convincing evidence of
    indefiniteness. Moreover, Dr. Welch ultimately opined that the asserted patents teach that sensor
    data indicating the movement of weapon and goggle components (i.e., data indicative of motion)
    includes orientation data and that this orientation data is used to determine a location of the weapon
    image within the goggles image. See Pl.’s Opening Cl. Constr. Br., Ex. L Declaration of Gregory
    Welch (Welch Decl.) (ECF No. 90-15) ¶ 63 (identifying patent disclosures in support of SAIC’s
    construction of motion data).
    Lastly, to support its position, the Government points to the following exchange between
    Dr. Neumann and Plaintiff’s counsel during Dr. Neumann’s deposition:
    Q. So the original question where you asked me to point you to some pieces of the
    specification was would you agree that the specification of the ‘230 patent that we
    had marked as Exhibit 8 discloses use of a rotation data to indicate relative motion
    between the two – the two video sources.
    A. Okay
    Q. Would you agree with that?
    A. The patent uses – the description uses specific terms like roll, pitch and yaw.
    Okay. Motion data can be many things. There are not just a single roll, pitch and
    yaw in this patent. There are multiples. There is a sub G. There’s the sub S.
    There’s – there’s no sub anything. There’s just yaw and roll described. There are
    so many different types of data described that I think it adds to the confusion of
    what motion data means.
    Q. Well – okay. So the sub G and the sub S refer – correlate to each of the video
    sources, I believe. Would you agree with that?
    A. They – as I recall, they deal with the gun motion and the S was –
    Q. The scope, I believe. Although I would have to –
    A. See, I’m not sure anymore. But yes. There’s two different things that are
    moving. So we have two different suffixes.
    Q. Okay.
    A. Then there’s the data values from sensors indicate vertical rise pitch. So there’s
    just – there’s so many different types of motion data mentioned, things that could
    be motion data mentioned, which one – which one is it?
    Q. Why couldn’t it be any of those? Just – why is that a problem for you?
    …
    A. If you pick the wrong data, you may not be able to accomplish your purpose.
    It’s really important in technical documents to be clear. This signal goes from here
    to here. You can’t just say a signal goes from here to here. It could be any signal.
    Okay. When they say a motion data is used for this or for that, which data?
    …
    A. I mean, that the essence of – that’s the essence of why I say it’s indefinite. There
    is no definition.
    Joint Submission of Cl. Constr. Experts, Ex. 2 Ulrich Neumann Deposition Transcript (Neumann
    Dep.) (ECF No. 97-1) at 219:10-221:9.
    Dr. Neumann’s testimony does not amount to clear and convincing evidence that the term
    “motion data” is indefinite. An expert may articulate the meaning of a term to a POSITA, but then
    the court must conduct a legal analysis to see if that same meaning fits with the term “in the context
    of the specific patent claim under review,” because “experts may be examined to explain terms of
    art . . . but they cannot be used to prove the proper or legal construction of any instrument of
writing.” See Teva Pharms. USA, Inc. v. Sandoz, Inc., 574 U.S. 318, 331 (2015) (internal
    quotations and citations omitted). Although Dr. Neumann expressed concern that “motion data”
    may include data beyond orientation data, Dr. Neumann does not dispute that orientation data is
    subsumed within the broader category of motion data. He simply concluded that there are “so
many different types of motion data mentioned [in the ’230 patent’s specification, i]f you pick the
    wrong data, you may not be able to accomplish your purpose.” See Neumann Dep. at 220:14-23.
    It is well-established, however, that a term is not indefinite simply because it is broad. See BASF
Corp. v. Johnson Matthey Inc., 875 F.3d 1360, 1367 (Fed. Cir. 2017) (“But the inference of
indefiniteness simply from the scope finding is legally incorrect: ‘breadth is not indefiniteness.’”
(quoting SmithKline Beecham Corp. v. Apotex Corp., 403 F.3d 1331, 1341 (Fed. Cir. 2005))). As
    Dr. Neumann acknowledges, the specification explicitly discloses multiple types of motion data.
Neumann Dep. at 219:10-221:9. Contrary to Dr. Neumann’s assertions, these extensive
    disclosures are a source of guidance rather than confusion. A POSITA seeking to interpret the
    bounds of motion data need simply refer to the patent document, which consistently refers to
    “orientation data” about the first and second video source. Aside from the numerosity of examples
    of motion data and a lack of an explicit definition, Dr. Neumann was unable to articulate how the
    term motion data as used in the patent is indefinite. Instead, Dr. Neumann summarily concluded
    that “[i]f you pick the wrong data, you may not be able to accomplish your purpose.” See id. at
    221:22-23 (emphasis added). To prove indefiniteness by clear and convincing evidence, the
Government must do more. See Apple Inc. v. Samsung Elecs. Co., 786 F.3d 983, 1003 (Fed. Cir.
2015) (attempting to discredit the patentee’s experts is not sufficient to find claim indefinite), rev’d
on separate grounds, 137 S. Ct. 429 (2016), remanded to 678 Fed. App’x 1012 (2017); Microsoft
Corp. v. i4i Ltd. P’ship, 564 U.S. 91, 104-05 (2011) (referencing many instances where the Court
    has required a heightened standard of proof to overcome patent’s presumption of validity).
    In sum, Drs. Neumann, Azuma, and Welch were each able to identify orientation data as a
    component of “motion data.” Accordingly, for the reasons stated above, the Court finds that the
    term “motion data” is definite, and its proper construction is “data indicative of motion, including
    at least orientation data.”
    VI. “IN REGISTRATION WITH” / “REGISTERING”
SAIC: “in proper alignment and position, so as to coincide and not be substantially offset” 18
Government: Indefinite
Microsoft: N/A
L3: Indefinite
Court’s Construction: Indefinite
    L3 and the Government argue that the registration terms “in registration with” and
    “registering” are indefinite because the terms are (1) subjective and the patents fail to disclose
    parameters for acceptable degrees of error or inform a POSITA when registration is achieved, and
    (2) the patents fail to disclose how registration is accomplished. Government’s Opening Cl.
    Constr. Br. at 39-40; L3’s Opening Cl. Constr. Br. at 22. The Government submits that, without
    such criteria, a POSITA reviewing the intrinsic record would not be able to determine the objective
    scope of the registration terms with reasonable certainty. Government’s Responsive Cl. Constr.
    Br. at 24-25.
18 In the parties’ Joint Claim Construction Statement, SAIC proposed the following construction:
“in proper alignment and position, so as to coincide and not be substantially offset.” Joint Cl.
Constr. Statement, Ex. 1 Joint Cl. Constr. Chart (ECF No. 63-1) at 8. In its opening claim
construction brief, SAIC substituted “substantially” with “distinctly” so that its proposed
construction reads, “in proper alignment and position, so as to coincide and not be substantially
(i.e., distinctly) offset.” Pl.’s Opening Cl. Constr. Br. at 20. At oral argument, the Government
stated that it believed that this change should not affect this Court’s indefiniteness analysis.
Markman Hearing Tr. at 230:23-231:7.
SAIC argues that “registration” means “in proper alignment and position, so as to coincide
and not be substantially offset.” Pl.’s Opening Cl. Constr. Br. at 20, 30. SAIC further contends
that “registration” is definite because the patents disclose examples of what is and what is not
proper alignment to inform when registration is achieved. Pl.’s Opening Cl. Constr. Br. at 21-23
(citing ’012 patent at 2:15-18, 3:24-4:6, 6:25-27, 6:38-7:12, Figs. 1, 4, and 8; ’230 patent at 10:11-
15; ’012 Patent File History at 10-11). Additionally, the patents disclose exemplary methods for
using orientation data to show how registration can be accomplished. Pl.’s Opening Cl. Constr.
Br. at 21, 23 (citing ’012 patent at Abstract, 2:30-38, 4:51-54, 6:38-45, 7:30-42, 8:5-48, Figs. 8,
9A-12B; ’230 patent at Abstract, 3:46-50, 3:67-4:16, Figs. 6A-H).
This Court agrees with the Government and L3. Pursuant to statute, a specification must
“conclude with one or more claims particularly pointing out and distinctly claiming the subject
matter which the inventor or a joint inventor regards as the invention.” 35 U.S.C. § 112(b). A
claim fails to satisfy this statutory requirement and is thus invalid for indefiniteness if its language,
when read in light of the specification and the prosecution history, “fail[s] to inform, with
reasonable certainty, those skilled in the art about the scope of the invention.” Nautilus, 572 U.S.
at 901.
Here, the patents fail to provide objective criteria for a POSITA to determine with
reasonable certainty when registration is accomplished. The intrinsic evidence does not contain
any criteria or other description by which to measure or know when “wherein boundaries of the
    displayed images are in registration with boundaries of portions of the visual field represented by
the displayed images.” See Berkheimer v. HP Inc., 881 F.3d 1360, 1364 (Fed. Cir. 2018) (finding
    the term “minimal redundancy” indefinite because the patent lacked objective boundaries). The
    First Patent Family does not define registration. Instead, it uses examples to disclose when
    registration is accomplished. For example, ’012 Patent Figure 4 depicts an “image produced by
    an illustrative embodiment of the invention,” in which images from a weapon sight feed are in
    proper position and alignment (i.e., coincide and are not distinctly offset) with images seen via a
    HUD. ’012 patent at 3:56-57, Fig. 4. Conversely, in describing the prior art system of Figure 1,
    the specification discloses that “[b]oth images depict the same subjects, a group of soldiers
    accompanying an armored personnel carrier (APC)[,]” but “[t]he two images are distinctly offset,
    with . . . the same target appearing in different places in the field of view . . . .” Id. at 2:9-18.
    The Second Patent Family incorporates the First Family’s disclosure and consistently
    explains that “‘registration’ refers to positioning of a scope image (or portion of that scope image)
    within a goggles image so that the two images are properly aligned and positioned, and one image
    coincides with the other.” ’230 patent at 10:11-15.
    Here, neither SAIC’s cited examples, nor its proposed definition adequately inform a
    POSITA of the invention’s metes and bounds with reasonable certainty. Registration depends on
    the perspective of a particular application or user, the method of registration used, and the needs
    and precision required by the particular use in which the user is engaged. See Neumann Dep. at
    93:7-15; Welch Dep. at 203:22-212:16. At his deposition, Dr. Neumann explained that “in a
    nutshell” the field of registration involves computations designed to measure and address
    registration errors. Neumann Dep. at 93:22-94:7. In essence, Dr. Neumann testified, “registration”
    is a term of degree in that it is context-dependent and measurable. See Neumann Dep. at 86-87:4,
    120:25-121:10. While terms of degree are not inherently indefinite, the patent must provide some
    objective criteria for a POSITA to determine the scope of the invention with reasonable certainty.
    Interval Licensing LLC, 766 F.3d at 1370-71 (citations omitted).
Neither the ’230 patent’s use of the terms “proper alignment” and “distinctly offset,” nor
SAIC’s proposed construction of “substantially offset,” provides a POSITA with objective criteria
to determine whether “registration” is achieved. As explained by both SAIC’s and the
    Government’s experts, there are a wealth of registration techniques with different variations and
    parameters. Neumann Dep. at 120:25-129:16; see also Welch Dep. at 203:22-212:16. What is
    considered proper alignment using one measurement technique will not be considered proper
    alignment using another technique. See Neumann Dep. at 120:25-129:16; Welch Dep. at 203:22-
    212:16.
    The First Patent Family does not disclose any registration technique or combination of
    techniques for measuring whether registration has been accomplished. Rather the patents rely on
    the high-level flow chart in Figure 8 and direct a POSITA to registration techniques that “are well
    known in the art.” See, e.g., ’012 patent at 9:36-52 (“well known rigid or non-rigid image
    registration techniques . . . to register the images by, for example, finding common visual elements
    between them.”); ’012 patent at 6:62-64 (“Various algorithms for rotating an image by a certain
    number of degrees are well known in the art.”).
The Second Patent Family includes similarly high-level flow charts. The flow charts in
Figures 5A and 5B explain the registration process for the Second Patent Family. ’230 patent at
    7:46-11:29. As noted, registration is initially accomplished using motion data which is defined
    broadly as “including at least orientation data.” See supra Section V: Motion Data. The alignment
    accomplished with motion data is then “evaluated” using image data and adjusted as needed before
    it is displayed on the HUD. Id. at 11:30-49. The written description states that “[t]he steps shown
    in FIGS. 5A and 5B (and in other flow charts described below) can be reordered, combined, split,
    replaced, etc.” Id. at 7:46-49. While the Second Patent Family references some of the metrics
    traditionally used to measure registration, these metrics must have objective bounds, and nothing
    in the patent explains when certain metrics would be used over others. See, e.g., ’230 patent at
    10:48-11:30 (discussing the use of Brouwer’s fixed point to check positioning of the images); see
    also Neumann Decl. ¶¶ 65-66. The Second Patent Family also implies that additional unnamed
    metrics may be used. For example, the ’230 patent mentions that peak to sidelobe ratio (PSR) is
    one metric to determine registration, but also mentions that “numerous definitions of PSR are
    known in the art . . . .” ’230 patent at 11:11-17. However, the ’230 patent does not reference PSR
    or any other objective metric when defining “registration.”
    These high-level flow charts give very little guidance to a POSITA as to how registration
    is objectively measured. This lack of guidance prevents a POSITA from ascertaining the scope of
    registration with reasonable certainty because different registration methodologies involve
    different parameters for determining whether registration has been accomplished. Ball Metal
    Beverage Container Corp. v. Crown Packaging Tech., Inc., 838 F. App’x 538, 542-43 (Fed. Cir.
    2020) (“Under our case law, then, a claim may be invalid as indefinite when (1) different known
    methods exist for calculating a claimed parameter, (2) nothing in the record suggests using one
    method in particular, and (3) application of the different methods result in materially different
    outcomes for the claim's scope such that a product or method may infringe the claim under one
    method but not infringe when employing another method. Such a claim lacks the required degree
    of precision ‘to afford clear notice of what is claimed, thereby apprising the public of what is still
    open to them.’” (quoting Nautilus, 572 U.S. at 909)). Dr. Neumann opined that there are multiple
    methods of “evaluation of registration error and a minimization process.” Neumann Decl. ¶ 54.
    “For example, cross-correlation measures, Fourier phase correlation, and point mapping are
    common registration metrics.” Id. Dr. Welch also agreed that different image registration
    techniques will lead to different results. Specifically, Dr. Welch testified that:
    Q. Well, it’s fair to say if you take the same pair of images and you register them
    using different transformation, you might get different results. Fair?
    A. Well, as I said earlier, a lot of different things will affect whether you get -- the
    results would be different or not. In fact, it seems highly unlikely that any two --
    you know, it would be very small things that would vary that would give you
    different results.
    The transformations that I see here are no different than I think the
    transformations, at least some of these that were talked about in, I think the ‘012
    patent where it said that these were common, when it was talking about rotations,
    image rotations. These are the sort of things I think is what was meant there.
    Q. Maybe I missed it, but so it’s fair to say that using different transformations,
    you might get different results . . . for registration?
    A. You would get different results, and as I said earlier, I think -- I think you would
    likely get different results. As I said earlier, the differences would -- the impact of
    those differences or the importance of those differences would depend on the use
    case, the -- you know, the people who are developing the system.
    Welch Dep. at 211:9-212:16.
    In sum, testimony from both experts establishes that registration can be measured multiple
    ways and that different measuring techniques will yield different results in determining whether
    registration has occurred. Both experts also agreed that determining whether registration is
    achieved is dependent upon the application and the user’s tolerance for registration errors. Nothing
    in the patent itself provides a POSITA with any objective measure as to the bounds of registration.
    Without any objective measure, a POSITA is left “to consult the unpredictable vagaries of any one
    person’s opinion” to determine whether registration occurred. Dow Chem. Co. v. Nova Chemicals
Corp. (Canada), 803 F.3d 620, 635 (Fed. Cir. 2015) (internal quotations and citation omitted)
    (holding term indefinite where invention did not disclose a method to determine whether a claim
    parameter was met).
    Notwithstanding the patents’ failure to provide an objective metric to this highly contextual
    term, SAIC argues that the patents provide reasonable certainty by disclosing examples of a
    registered image and an unregistered image. Pl.’s Opening Cl. Constr. Br. at 21.
    This argument is without merit. While SAIC is correct in that a claim term can be rendered
    definite through the use of examples that provide points of comparison, those examples must
    provide some objective criteria by which a POSITA can determine the scope of a claim with
reasonable certainty. Sonix Tech. Co., Ltd. v. Publ’ns Int’l, Ltd., 844 F.3d 1370, 1379 (Fed. Cir.
    2017) (holding “visually negligible” definite because examples in the specification provided
    objective criteria by which a POSITA could identify the scope of the invention with reasonable
certainty); Guangdong Alison Hi-Tech Co. v. Int’l Trade Comm’n, 936 F.3d 1353, 1360–62 (Fed.
    Cir. 2019) (holding the term “lofty fibrous batting” definite where the specification provided seven
    detailed examples for comparison, and the parties’ expert testimony supported the conclusion that
    a POSITA could objectively identify characteristics of the term); One-E-Way, Inc. v. Int’l Trade
Comm’n, 859 F.3d 1059, 1066 (Fed. Cir. 2017) (finding “virtually free from interference” definite
    where statements in the specification and prosecution history indicated that the phrase meant “free
    from eavesdropping,” which provided an objective standard to inform a POSITA of the scope of
    the invention with reasonable certainty). Indeed, the mere existence of examples in the written
    description will not always render a claim definite. Sonix Tech., 844 F.3d at 1380.
    As explained above, “registration” is understood on a continuum. See Neumann Dep. at
    86:21-87:4, 120:25-124:3, 175:9-176:4, 181:5-8, 193:12-16. Here, the examples provided in the
    patents at issue do not provide objective criteria to inform a POSITA of the scope of registration.
    The parties agree the patents do not claim perfect registration. See Welch Dep. at 165:24-166:25
    (“There is no perfect registration.”), 167:5-168:10 (“[P]erfect registration doesn’t exist[,]” even in
    the context of the asserted patents), 169:8-25 (“[T]here’s no system that is going to match every
    pixel in intensity pixel for pixel.”). Outside of perfect registration, the concept of registration here
    is context dependent. Welch Dep. at 203:22-204:6. Because registration depends on context, there
    are no inherent objective parameters that a POSITA can use to determine the scope of the term.
    See Interval Licensing LLC, 766 F.3d at 1371-74. The patent, therefore, needs to provide some
    objective criteria for assessing an acceptable variance from perfect registration for a POSITA to
    determine the scope of the registration terms with reasonable certainty. Id.
    The First Patent Family provides two examples, which are incorporated by reference in the
    Second Patent Family. ’012 patent at Figs. 1, 4. In Figure 4 of the First Patent Family, the patents
    state that the images in that figure are registered but do not disclose any offset. See Welch Dep.
    at 165:24-166:25. In the counterexample taken from the prior art, Figure 1 illustrates a very
    significant offset, which the patents describe as not in registration. ’012 patent at 2:4-28. The
    tremendous gap between these two examples creates a zone of uncertainty. See Nautilus, 572 U.S.
    at 909-10. With examples only providing boundaries at the extremes, a skilled artisan is left to
    wonder what other images could fall between these two figures and still be considered registered.
See, e.g., Automated Pack’g Systems, Inc. v. Free Flow Pack’g Int’l, Inc., No. 18-CV-00356-EMC,
2018 WL 3659014, at *18 (N.D. Cal. Aug. 2, 2018) (finding examples of an inserting device that
    could not be as small as a needle, nor as big as a baseball bat, were too extreme to provide a
    POSITA with objective criteria to determine the scope of the patent claims); Power Integrations,
Inc. v. ON Semiconductor Corp., No. 16-CV-06371-BLF, 2018 WL 5603631, at *20 (N.D. Cal.
    Oct. 26, 2018) (holding the term “moderate power threshold value” indefinite where the patent
    provided an extreme range of very low values to the maximum peak value).
    The patents’ disclosure of registration errors also does not provide the required “reasonable
    certainty.” See Nautilus, 572 U.S. at 901. The patent specifications discuss parallax as a possible
    source of registration error. See, e.g., ’012 patent at 8:49-9:35. Specifically, the ’012 patent
    mentions parallax “error of about 2.9 degrees in the placement of the video frame” when the target
    is at 10 meters and “is to some extent a non-issue [as t]he system proposed would likely be used
    for targets greater than 10 meters more often than not.” Id. at 8:60-67. In its brief, SAIC contends
    that the ’012 patent—and a similar discussion exists for the ’230 patent—describing an error of
    2.9 degrees at 10 meters and then fewer degrees at greater target distances is an example of criteria
    from which a POSITA can assess an acceptable degree of error, e.g., 2.9 degrees or less in the
    video frame. Pl.’s Opening Cl. Constr. Br. at 27; see also ’012 patent at 8:49-64, ’230 patent at
    9:31-47. The problem with this argument is that the patent described an error of 2.9 degrees at 10
    meters assuming the scope image and goggle image were “perfectly aligned,” ’012 patent at 8:52-
    53, which the parties’ experts acknowledge is impossible. See Welch Dep. at 166:8 (“There is no
    perfect registration.”); Neumann Dep. at 186:24-187:11 (“[I]t’s fair to say there will be deviation,
    there will be error [in registration.]”). Thus, a POSITA could not rely on the disclosure of “2.9
    degrees or less” as an objective criterion for when registration is achieved. See Neumann Decl.
    (ECF No. 66) ¶ 57 (explaining why the disclosure of parallax error does not provide “objective
    boundaries and a POSITA would not be able to discern [the registration terms] with reasonable
    certainty”).
    The lack of guidance provided by the examples in the patents is confirmed by the extrinsic
    evidence. Both SAIC’s and the Government’s experts were unable to identify objective criteria
    for determining registration. Dr. Neumann opined, “[t]he intrinsic evidence does not contain any
    criteria or other description by which to measure or know when [the claim term] ‘wherein the
    boundaries of the displayed images are in registration with boundaries of portions of the visual
    field represented by the displayed images.’” Neumann Decl. ¶ 53 (emphasis omitted). When
    discussing the patents’ disclosure of registration errors, Dr. Neumann also opined that the
    registration of overlapping images is impacted by several issues, such as “photonic (intensity)
    variations and image acquisition . . . differences[,]” not just parallax. Neumann Decl. ¶ 54.
When SAIC’s expert, Dr. Welch, was asked whether the patents provide any basis for
determining what level of offset is acceptable, he stated, “[s]itting here right now, I don’t recall
that there’s anywhere where the patents disclose something like acceptable offsets.” Welch Dep.
at 170:2-18. To put this comment in context, Dr. Welch came to this conclusion directly following
a question about Figure 4 of the ’012 patent. Specifically, Dr. Welch was asked “[s]o you can’t
estimate whether there’s any offset between the two images that are allegedly registered in Figure
4 of the ’012 patent; correct?” Welch Dep. at 169:8-11. Dr. Welch responded:
A: I as a human would have to understand what we mean by offset, because there
are who knows how many pixels in that inset region. There’s a lot of information
in the middle, and we’d have -- and around the boundaries, and we’d have to sort of
agree on what’s important or not important because there’s no system that is going
to match every pixel in intensity pixel for pixel. So we’d have to agree on what it
means to be okay. What I’m saying is the patent teaches that this is what they mean
    by registration.
    Welch Dep. at 169:13-25. The exchange demonstrates that Dr. Welch does not view Figures 1
    and 4 as objective guidance to inform a POSITA of the objective boundaries of registration. Dr.
    Welch does not specifically reference the patents’ disclosure of 2.9 degrees or less as an objective
    criterion for when registration is achieved in either his declaration or his deposition. In fact, in his
    declaration, Dr. Welch asserts that examples for addressing parallax and calibration methods are
    unnecessary for a POSITA to understand the patents’ claims with reasonable certainty. Welch
    Decl. ¶ 57. This assertion is inconsistent with Federal Circuit precedent.
The Federal Circuit’s decision in Berkheimer v. HP Inc., 881 F.3d 1360 (Fed. Cir. 2018), is
    instructive. In Berkheimer, the Federal Circuit held the use of “minimal redundancy” to be an
    indefinite term of degree for which there must be objective boundaries. Id. at 1364. The patent in
Berkheimer related to digitally processing and archiving files in a digital asset management
    system, which eliminated redundancy of common text and graphical elements. Id. at 1362.
    Relying on intrinsic evidence as well as an expert declaration, the district court had found that
    minimal redundancy was “highly subjective” and would not provide a POSITA with objective
    criteria for when minimal redundancy was achieved. Id. at 1363. The Federal Circuit agreed. Id.
    The Federal Circuit first noted that the specification inconsistently described the invention. Id. at
    1363-64. In certain places, the specification described the system as “minimizing redundant
    objects,” while elsewhere it stated that redundancy was “eliminat[ed].” Id. at 1364. In the
    prosecution history, the inventor had stated that the claim at issue desired to eliminate redundancy
    but used the term “minimal” because eliminating redundancy was unlikely. Id. Moreover, the
    only example included in the specification exhibited no redundancy. Id. Accordingly, the Federal
    Circuit stated that “[t]he specification contains no point of comparison for skilled artisans to
    determine an objective boundary of ‘minimal’ when the archive includes some redundancies.” Id.
    (citation and emphasis omitted). The Federal Circuit held that terms of degree require objective
    boundaries to inform a POSITA of the scope of the invention, and no objective boundary was
    provided where the invention failed to inform “how much is minimal.” Id. (emphasis in original).
Here, as in Berkheimer, neither the patents’ disclosure of examples nor their disclosure of registration errors provides
    reasonable certainty of the bounds of registration. As was the case in Berkheimer, because perfect
    registration cannot be accomplished, the patents must disclose an acceptable level of “offset.” The
examples here are too extreme to provide objective criteria to a POSITA; and, as in Berkheimer,
    expert testimony supports this conclusion. Both experts agree that (1) registration is context
    dependent, (2) the patents do not claim perfect registration, and (3) the patents do not disclose an
    acceptable offset. Accordingly, for the reasons stated above, the Court holds the terms “in
    registration with” and “registering” to be indefinite, as no objective way exists to calculate how
    much offset is acceptable.
    *****
    CONCLUSION
    For the foregoing reasons, the Court construes:
    1. “video images” / “video source image” / “video data of images” as “digital, analog,
    or nonstandard video frames;”
    2. “transparent display” as “a display that has the effect of being transparent or
    translucent, allowing simultaneous viewing of the underlying visual field and other
    images or information;”
    3. “overlay” in accordance with its plain and ordinary meaning, “e.g., Overlaying:
    positioned over or upon. Overlay/overlays: are over or upon/is over or upon;”
    4. “based on a comparison” as “based on a comparison of image data (e.g., content and
    contrast) from the first and second video source images;” and
    5. “motion data” as “data indicative of motion, including at least orientation data.”
    Additionally, the Court holds the terms “in registration with” / “registering” indefinite.
    The parties are DIRECTED to file a Joint Status Report by August 23, 2021, proposing a
    schedule for further proceedings.
    IT IS SO ORDERED.
    s/ Eleni M. Roumel
    ELENI M. ROUMEL
    Judge
    Dated: August 6, 2021
    Washington, D.C.
    
