US9970766B2 - Platform-mounted artificial vision system - Google Patents

Platform-mounted artificial vision system

Info

Publication number
US9970766B2
Authority
US
United States
Prior art keywords
data
frames
sequential
sequential frames
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/041,849
Other versions
US20160219245A1 (en)
Inventor
Dustin D. Baumgartner
Bruce J. Schachter
Kathryn B. Stewart
Michael M. Becker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northrop Grumman Systems Corp
Original Assignee
Northrop Grumman Systems Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northrop Grumman Systems Corp filed Critical Northrop Grumman Systems Corp
Priority to US14/041,849
Assigned to NORTHROP GRUMMAN SYSTEMS CORPORATION reassignment NORTHROP GRUMMAN SYSTEMS CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BAUMGARTNER, DUSTIN D., BECKER, MICHAEL M., SCHACHTER, Bruce J., STEWART, KATHRYN B.
Assigned to DEFENSE ADVANCED RESEARCH PROJECTS AGENCY, UNITED STATES GOVERNMENT reassignment DEFENSE ADVANCED RESEARCH PROJECTS AGENCY, UNITED STATES GOVERNMENT CONFIRMATORY LICENSE (SEE DOCUMENT FOR DETAILS). Assignors: NORTHROP GRUMMAN SYSTEMS CORPORATION
Publication of US20160219245A1
Application granted granted Critical
Publication of US9970766B2

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C23/00Combined instruments indicating more than one navigational value, e.g. for aircraft; Combined measuring devices for measuring two or more variables of movement, e.g. distance, speed or acceleration
    • G06T5/003
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/20Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6812Motion detection based on additional sensors, e.g. acceleration sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/683Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/23229
    • H04N5/23258
    • H04N5/23267
    • H04N5/23293
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/30Transforming light or analogous information into electric information
    • H04N5/33Transforming infrared radiation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20201Motion blur correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Definitions

  • the present invention relates generally to artificial vision systems, and specifically to a platform-mounted artificial vision system.
  • an artificial vision system can be mounted on a stationary platform and can be implemented for security measures.
  • an artificial vision system can be mounted on a moving platform (e.g., an aircraft), such as to assist a pilot in navigation of the moving platform.
  • Common degraded visual conditions affecting rotorcraft include brownout (sand in atmosphere), whiteout (snow in atmosphere), smoke, rain, mist, fog, turbulence, darkness, and helicopter rotor blade obstruction.
  • Such degraded visual conditions can result in a crash of the moving platform as the pilot's vision is obscured by the degraded visual condition.
  • landing a helicopter in brownout conditions can be particularly perilous.
  • One embodiment includes an artificial vision system mounted on a platform.
  • the system includes an image system comprising a video source that is configured to capture a plurality of sequential images.
  • the image system also includes an image processing system configured, via at least one processor, to process the plurality of sequential images to calculate situational awareness (SA) data with respect to each of the plurality of sequential images and to convert the processed plurality of sequential images to visible images.
  • the system further includes a video display system configured to display the visible images associated with the processed plurality of sequential images and to visibly identify the SA data relative to the platform.
  • Another embodiment includes a non-transitory computer readable medium configured to store instructions that, when executed by a processor, are configured to implement a method for providing artificial vision assistance for navigating a moving platform.
  • the method includes capturing a plurality of sequential frames of infrared (IR) image data via an IR video source mounted on the moving platform and calculating SA data associated with each of the plurality of sequential frames relative to the moving platform.
  • the method also includes converting the plurality of sequential frames to visible images and displaying the visible images on a video display system to assist in navigation of the moving platform.
  • the method further includes identifying the SA data relative to the moving platform on the visible images.
  • the system includes a self-enclosed image system.
  • the self-enclosed image system includes an infrared IR video source that is configured to capture a plurality of sequential frames of IR image data and an inertial measurement unit configured to generate inertial data associated with the moving platform.
  • the self-enclosed image system further includes an image processing system configured via at least one processor and based on the inertial data to stabilize the plurality of sequential frames, to calculate SA data with respect to each of the plurality of sequential frames, to enhance the plurality of sequential images with respect to environment-based occlusion of the video source based on recursively processing consecutive stabilized images of the plurality of sequential frames, and to convert the plurality of sequential frames into visible images.
  • the system further includes a video display system configured to display the visible images and to visibly identify the SA data relative to the moving platform.
  • FIG. 1 illustrates an example of an artificial vision system.
  • FIG. 2 illustrates an example of an image system.
  • FIG. 3 illustrates an example of a synthetic vision situational awareness component.
  • FIG. 4 illustrates an example of a lucky region imaging component.
  • FIG. 5 illustrates an example of a method for providing artificial vision assistance for navigating a moving platform.
  • the present invention relates generally to artificial vision systems, and specifically to a platform-mounted artificial vision system.
  • the artificial vision system can be mounted, for example, on a moving platform, such as an aircraft (e.g., helicopter).
  • the artificial vision system includes an image system, which can be a self-contained image system (e.g., image system package).
  • the image system includes a video source, which can be configured as a forward-looking infrared (FLIR) video source, that is configured to capture a plurality of sequential frames of image data.
  • the term “artificial vision system” refers to a vision system that provides processed video images of a scene that can be viewed by a user in lieu of or in addition to the user's own vision.
  • the image system also includes one or more processors configured as an image processing system.
  • the image processing system is configured to process the sequential frames with minimum lag, such as to provide stabilization of the sequential frames and/or to calculate situational awareness (SA) data associated with the sequential frames relative to the platform.
  • the image processing system can also be configured to provide lucky region image processing, such as to enhance the sequential frames with respect to environment-based occlusion of the video source, such as based on a degraded visual environment (DVE), based on recursively processing consecutive stabilized images of the sequential frames.
  • the enhancement can be based on inertial data associated with the platform (i.e., a moving platform), such as generated by an inertial measurement unit (IMU) that is included within the self-contained image system.
  • the image system can also convert the sequential frames to visible images, such that the visible images can be displayed on a video display system, such as to assist a pilot in navigation of the platform (i.e., moving platform).
  • the image processing system can include a synthetic vision SA component configured to generate the SA data.
  • the synthetic SA component can be configured to calculate three-dimensional locations of objects relative to the platform based on parallax associated with consecutive stabilized images of the plurality of sequential frames based on the inertial data of the moving platform.
  • the image system can thus identify the SA data on the visible images, such as by highlighting the objects on the visible images, to further assist the pilot in navigating the platform, such as during take-off and landing maneuvers.
  • the image processing system can include a lucky region imaging component that can provide the enhancement to the sequential frames with respect to environment-based occlusion of the video source.
  • the lucky region imaging component can assign a weight to each pixel associated with a current one of the sequential frames based on the inertial data and on a selected mean image latency.
  • the lucky region imaging component can then apply a recursive imaging algorithm on the sequential frames based on the weight assigned to each pixel of the current one of the sequential frames relative to an immediately preceding previously processed one of the sequential frames to enhance the plurality of sequential frames.
  • FIG. 1 illustrates an example of an artificial vision system 10 .
  • the artificial vision system 10 can be mounted on a platform.
  • the platform can be a stationary platform, such that the artificial vision system 10 can be implemented for security purposes.
  • the artificial vision system 10 can be mounted on a moving platform, such as an aircraft (e.g., a helicopter), to assist an associated pilot in navigation of the moving platform, such as in a degraded visual environment (DVE), such as for the purpose of landing the aircraft.
  • the artificial vision system 10 includes an image system 12 that can be configured as a self-contained package.
  • the image system 12 includes a video source 14 that is configured to capture a plurality of sequential frames of an environment scene.
  • the video source 14 can be configured as a forward-looking infrared (IR) video source, such that the sequential frames are sequential IR images.
  • the image system 12 can be mounted on at least one location of the moving platform, such as on a nose of a helicopter, such that the video source 14 captures the sequential frames from approximately the same perspective as the pilot of the moving platform.
  • the image system 12 includes an inertial measurement unit (IMU) 16 that is configured to generate inertial data associated with the sensor within the moving platform, such as movement in six-degrees of motion (e.g., yaw, pitch, roll, and vector motion in three-dimensions).
  • the inertial data can be more accurate with respect to the image processing of the sequential frames captured by the video source 14 , as described in greater detail herein.
  • the image system 12 also includes a memory 18 and one or more processors 20 that are configured as an image processing system 22 .
  • the memory 18 can be configured to store the instructions that implement the image processing system 22 via the processor(s) 20 , and can be implemented to buffer one or more sequential frames captured by the video source 14 and the processing thereof.
  • the image processing system 22 is configured to process the sequential frames and to provide the processed sequential frames as visible video data IMG to a video display system 24 .
  • the video display system 24 can be configured as a video monitor that provides visible images to a user to view the environment scene, such as to assist a pilot in navigating the moving platform.
  • the image processing system 22 can be configured to generate situational awareness (SA) data.
  • the SA data can include three-dimensional locations of objects relative to the platform based on parallax associated with consecutive sequential frames based on the inertial data generated by the IMU 16 of the moving platform.
  • the image processing system 22 can thus modify the visible video data IMG to identify the SA data on the visible images provided by the video display system 24 , such as by highlighting the objects on the visible images, to further assist a pilot in navigating the moving platform.
  • the image processing system 22 can be configured to enhance the sequential frames with respect to environment-based occlusion of the video source. Accordingly, the pilot can use the video display system 24 to assist in safely navigating the moving platform in a DVE, such as during aircraft landing and take-off.
  • FIG. 2 illustrates an example of an image system 50 .
  • the image system 50 can correspond to the image system 12 in the example of FIG. 1 . Therefore, the image system 50 can be implemented as a self-contained unit mounted on a platform, such as a moving platform, to capture sequential frames of an environment scene and provide visible video data IMG to a video display system (e.g., the video display system 24 ). Thus, the image system 50 can provide artificial vision for a user, such as a pilot of the moving platform.
  • the image system 50 includes a FLIR video source 52 that is configured to capture a plurality of sequential IR frames of image data of the environment scene.
  • the FLIR video source 52 generates image data VID INIT that can correspond to fourteen-bit IR image data provided at sixty frames per second (fps).
  • the FLIR video source 52 can be configured to implement image pre-processing on the captured IR images, such that the image data VID INIT is pre-processed.
  • the pre-processing of the FLIR video source 52 can include correction of non-uniformity (e.g., based on variability during the fabrication of an associated focal plane array (FPA)) and pixel errors.
  • the image system 50 also includes an IMU 54 that is configured to generate inertial data MVMT associated with motion of the moving platform on which the image system 50 can be mounted (e.g., on one or more locations of a helicopter).
  • the inertial data MVMT generated by the IMU 54 can be associated with movement in six-degrees of motion (e.g., yaw, pitch, roll, and vector motion in three-dimensions) of the FLIR video source 52 based on being included within the self-contained package of the image system 50 , such as to provide a more accurate measurement of the movement of the FLIR video source 52 for processing of the image data VID INIT .
  • the image data VID INIT and the inertial data MVMT are each provided to an image processing system 56 .
  • the image processing system 56 can be configured as a set of software modules that are executed by one or more processors (e.g., the processor(s) 20 in the example of FIG. 1 ), such as in a system-on-chip (SOC) arrangement in the self-contained image system 50 .
  • the image data VID INIT is provided to a stabilization component 58 that is configured to provide video frame stabilization of the image data VID INIT to generate stabilized image data VID ST .
  • the stabilization component 58 can employ a Structure-From-Motion technique to estimate a pose of the FLIR video source 52 for each of the sequential frames of the image data VID INIT based on the inertial data MVMT.
  • two consecutive IR images can be aligned based on a homography corresponding to a projective transformation that can be applied to render an image from a given pose into the perspective of another pose.
  • the stabilization component 58 can estimate a homography using robust salient features that are detected within each IR image of the image data VID INIT . The features are detected on the current IR image and can then be correlated with features detected in the previous image.
  • the homography can be determined to spatially align overlapping pixels in the current IR image to those of the previous IR image.
  • homography can capture frame-to-frame displacement caused by movement of the platform/FLIR video source 52 and/or gimbals pointing angle inaccuracies.
  • the stabilized image data VID ST can thus include a homography between a current frame and one or more (e.g., two) previous frames.
  • the stabilized image data VID ST is provided to a synthetic vision SA component 60 .
  • the synthetic vision SA component 60 is configured to process the stabilized image data VID ST based on the inertial data MVMT to calculate SA data associated with the sequential frames, such as can be implemented to assist in processing of the image data VID ST and/or to assist in navigation of the moving platform.
  • FIG. 3 illustrates an example of a synthetic vision SA component 100 .
  • the synthetic vision SA component 100 can correspond to the synthetic vision SA component 60 in the example of FIG. 2 , and can thus be implemented in the image system 50 to calculate SA data with respect to the FLIR video source 52 , and thus the moving platform.
  • the synthetic vision SA component 100 receives the inertial data MVMT and the stabilized image data VID ST , which can correspond to the fourteen-bit IR image frames received at sixty fps that includes the pose information of the FLIR video source 52 , and outputs image data VID SA , which can correspond to the stabilized image data VID ST that includes the SA data.
  • the synthetic vision SA component 100 includes a range map generator 102 , an obstacle detector 104 , and an image overlay component 106 .
  • the range map generator 102 is configured to generate an approximate three-dimensional range map that is constructed using consecutive frames of the stabilized image data VID ST as the moving platform moves.
  • the passive three-dimensional range map can include relative range information to objects in the scene.
  • the obstacle detector 104 is configured to approximate an actual range to the objects in the three-dimensional range map by comparing an apparent motion of the objects in the three-dimensional range map based on the known motion of the moving platform as provided by the inertial data MVMT over a given amount of time. In this manner, the obstacle detector 104 can determine the location of the obstacles based on parallax associated with consecutive stabilized images of the stabilized image data VID ST based on the inertial data MVMT.
  • the obstacle detector 104 is configured to determine the three-dimensional location of the obstacles in the three-dimensional range map based on high saliency regions that stand out from their surroundings on the three-dimensional range map. Obstacle locations can be maintained in three-dimensional coordinates even after obstacles are completely obscured, such as during the severest brownout conditions.
  • the image overlay component 106 is configured to extract a distance to detected obstacles from the passive three-dimensional range map and insert distance data into the stabilized image data VID ST , such that the stabilized image data VID ST can be annotated with data associated with the obstacles.
  • the image overlay component 106 thus provides the annotated data as SA data VID SA , which can include the stabilized image data VID ST that is annotated with the data associated with the obstacles.
  • the image data VID SA can be processed to provide identification of the obstacles as overlays on the visible images displayed on the video display system 24 (e.g., via the visible video data IMG).
  • the obstacles can be highlighted on the displayed visible images separate from a respective background of the displayed visible images, such as based on having different colors, brightness, text overlays (e.g., displaying information), graphical cues, and/or other information.
  • the highlighting and/or other information of the overlaid obstacles can be updated based on the inertial data MVMT and the known range to the obstacles, and can thus be used by the pilot as reference points that would otherwise have been obscured by the DVE conditions.
  • the image data VID SA and the inertial data MVMT are provided to a lucky region imaging component 62 .
  • the lucky region imaging component 62 is configured to enhance the frames of the image data VID SA with respect to environment-based occlusion of the FLIR video source 52 based on recursively processing the consecutive stabilized frames of the image data VID SA based on the inertial data MVMT.
  • the environment-based occlusion of the FLIR video source 52 can be a result of the platform being in a DVE, such that the FLIR video source 52 can be occluded by sand (i.e., brownout), snow (i.e., whiteout), or a variety of other vision obscuring conditions.
  • the image processing system 56 may omit processing by the lucky region imaging component 62 in the absence of DVE conditions, such as for the moving platform (e.g., helicopter) flying in clear weather and/or too high to stir up dust clouds.
  • FIG. 4 illustrates an example of a lucky region imaging component 150 .
  • the lucky region imaging component 150 can correspond to the lucky region imaging component 62 in the example of FIG. 2 , and can thus be implemented in the image system 50 to enhance the frames of the image data VID SA with respect to environment-based occlusion of the FLIR video source 52 .
  • the lucky region imaging component 150 receives the inertial data MVMT and the image data VID SA , which can correspond to the fourteen-bit IR image frames received at sixty fps, and which may include the SA data associated with the image data.
  • the lucky region imaging component 150 outputs image data VID LK , which can correspond to the enhanced image data VID SA .
  • the lucky region imaging component 150 includes a weighting component 152 , a latency component 154 , and a lucky region identifier 156 .
  • the lucky region imaging component 150 is configured to implement a recursive lucky region imaging algorithm to enhance the frames of the image data VID SA .
  • a revised current frame of image data is formed from its own pixel contents, as well as the pixel contents of a previously processed data frame (e.g., an immediately preceding processed image frame).
  • the proportions of a current frame and a previously processed frame, at each frame time and at each pixel, can be computed based primarily upon the inertial data MVMT.
  • the weighting component 152 receives the inertial data MVMT and applies weight values to the pixels of the current frame and the immediately preceding processed frame, wherein a sum of the weight applied to the current frame and the weight applied to the preceding frame at a given pixel location equals one (i.e., 1.0). In this manner, the recursive algorithm may only access the current frame and the immediately preceding processed frame, which can be stored in the memory 18 , thus minimizing storage requirements.
  • the weight values that are assigned by the weighting component 152 can vary across a given frame.
  • the weight values are computed by the weighting component 152 for each pixel of each new frame that is processed.
  • the corresponding weight is computed from an estimated optical flow for the pixel, which can be derived from the inertial data MVMT.
  • the optical flow value at each pixel can be computed as a function of the camera's roll, pitch, and yaw, velocity vectors, and/or an estimated range to the scene component associated with the given pixel of the frame.
  • for pixels with higher estimated optical flow, the weight value applied to the current frame can be raised to reduce blurring in the resultant processed image frame.
  • the optical flow within the frame can be lowest at a focus-of-expansion point, which can also be the point in the frame corresponding to a direction where the moving platform is headed.
  • the optical flow in the imagery can be highest toward the peripheral areas of the display, which can be the areas of the frame corresponding to a highest motion parallax.
  • the latency component 154 is configured to calculate the mean latency L associated with consecutive frames of the image data VID SA .
  • Latency can correspond to an average age of image data VID SA exiting the recursive algorithm relative to an age of a newest frame of image data VID SA entering the recursive algorithm.
  • the latency of the recursive algorithm can be defined as a function of the average age of the frames going into the recursive algorithm plus the time that it takes to process the frames.
  • the average age of the processed frames can be directly determined by an average weight applied to a current frame of the image data VID SA when combined with the immediately preceding processed frame of the image data VID SA . Lower latency can result when the weights applied to the current frame by the weighting component 152 are increased.
  • conversely, latency increases when the weights applied to the current frame are reduced, and the weights can be adjusted to maintain the mean latency L over short time intervals.
  • during rapid movement of the moving platform, latency can be reduced through choice of weight values by the weighting component 152 , and during slow movement of the moving platform (e.g., a helicopter hovering), the latency can be increased through choice of weight values. This can be accomplished, for example, while maintaining a desired mean latency L on the order of tens of milliseconds (e.g., 33 milliseconds).
  • a longer mean latency L can cause disorientation for piloting a moving platform based on the video images displayed by the video display system 24 being noticeably out of synchronization with observable real-time.
  • too short a mean latency L can prevent the recursive lucky region imaging algorithm from operating effectively, thus only allowing single frame processing instead of the combining of information from consecutive frames.
  • the mean latency L can thus be calculated from the average weight applied to the current frame and the frame period (an illustrative sketch of one such relation follows this list).
  • the weights that are assigned to the pixels of the frames are provided to the lucky region identifier 156 , demonstrated as via a signal WGT.
  • the lucky region identifier 156 is configured to implement the recursive lucky region imaging process.
  • if the horizontal pixel resolution of the FLIR video source 52 is greater than the horizontal pixel resolution of the video display system 24 , or an available display area, only a portion of the image data VID LK may be mapped to the video display system 24 , such as centered on a focus of expansion point.
  • the focus of expansion point can be computed from the inertial data MVMT, for example, and can thus be based on motion of the moving platform.
  • the lucky region imaging component 150 can be configured to add symbology data to the frames of the image data VID LK .
  • the focus of expansion point can be identified by the lucky region imaging component 150 , such that it can be displayed on the video display system 24 .
  • symbology data can include the horizon, the speed of the moving platform, or any of a variety of other types of identifiers to assist navigation of the moving platform (e.g., based on the Brown-Out Symbology System (BOSS) cues associated with the United States Air Force).
  • the enhanced image data VID LK is provided to a wavelet enhancement component 64 .
  • the wavelet enhancement component 64 is configured to decompose the monochrome image data VID LK into high spatial frequency, middle spatial frequency, and low spatial frequency bands (an illustrative band-mapping sketch follows this list).
  • the respective spatial frequency bands are demonstrated as image data VID WV .
  • the image data VID WV is provided to a video processor 66 that is configured to convert the image data VID WV into data suitable for display as visible images, demonstrated as the visible video data IMG in the example of FIG. 2 .
  • the video processor 66 can be configured to convert the respective high spatial frequency, middle spatial frequency, and low spatial frequency bands into an RGB color space, such that the visible video data IMG can be provided to the video display system 24 as color images. Accordingly, the pilot of the moving platform can use the enhanced visible images provided via the visible video data IMG to assist in navigating the moving platform. As a result, the pilot or user of the artificial vision system 10 can implement the video display system 24 to assist in viewing a scene or navigating a moving platform in conditions of limited or no naked-eye visibility (e.g., DVE conditions).
  • in view of the foregoing structural and functional features described above, a methodology in accordance with various aspects of the present invention will be better appreciated with reference to FIG. 5 . While, for purposes of simplicity of explanation, the methodology of FIG. 5 is shown and described as executing serially, it is to be understood and appreciated that the present invention is not limited by the illustrated order, as some aspects could, in accordance with the present invention, occur in different orders and/or concurrently with other aspects from that shown and described herein. Moreover, not all illustrated features may be required to implement a methodology in accordance with an aspect of the present invention.
  • FIG. 5 illustrates an example of a method 200 for providing artificial vision assistance for navigating a moving platform.
  • a plurality of sequential frames of infrared (IR) image data (e.g., the image data VID INIT ) is captured via an IR video source (e.g., the FLIR video source 52 ) mounted on the moving platform.
  • SA data associated with each of the plurality of sequential frames relative to the moving platform is calculated (e.g., via the synthetic vision SA component 60 ).
  • the plurality of sequential frames are converted to visible images (e.g., the visible video data IMG).
  • the visible images are displayed on a video display system (e.g., the video display system 24 ) to assist in navigation of the moving platform.
  • the SA data relative to the moving platform is identified on the visible images.
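The mean-latency bullet above refers to a calculation whose formula does not survive in this text. As a hedged sketch only (an assumption, not the patent's stated expression): if the recursive algorithm applies an average weight w-bar to the current frame, the frame period is T (about 16.7 ms at 60 fps), and a frame takes t_proc to process, an exponentially weighted history implies a mean output age of roughly

    L \approx t_{\mathrm{proc}} + T \, \frac{1 - \bar{w}}{\bar{w}}

For example, a mean current-frame weight of 1/3 gives T(1 - w-bar)/w-bar of about 33 ms, consistent with the example mean latency cited above; the patent's actual expression may differ.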
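The wavelet-enhancement and video-processor bullets above describe splitting the monochrome IR image into low, middle, and high spatial-frequency bands and rendering them as a color image. The sketch below uses a simple difference-of-Gaussians band split as a stand-in for the wavelet decomposition (the patent does not give the specific wavelet or band-to-channel mapping); OpenCV and NumPy are assumed.

    import cv2
    import numpy as np

    def bands_to_rgb(ir_frame):
        """Split a monochrome frame into low/mid/high spatial-frequency bands
        and map them to a color image (difference-of-Gaussians stand-in for
        the wavelet step; band-to-channel mapping is illustrative)."""
        img = ir_frame.astype(np.float32)
        fine = cv2.GaussianBlur(img, (0, 0), sigmaX=2)
        coarse = cv2.GaussianBlur(img, (0, 0), sigmaX=8)
        low, mid, high = coarse, fine - coarse, img - fine

        def to8(band):
            return cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        # one frequency band per color channel of the displayed image
        return cv2.merge([to8(low), to8(mid), to8(high)])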

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

One embodiment includes an artificial vision system mounted on a platform. The system includes an image system comprising a video source that is configured to capture a plurality of sequential images. The image system also includes an image processing system configured, via at least one processor, to process the plurality of sequential images to calculate situational awareness (SA) data with respect to each of the plurality of sequential images and to convert the processed plurality of sequential images to visible images. The system further includes a video display system configured to display the visible images associated with the processed plurality of sequential images and to visibly identify the SA data relative to the platform.

Description

This invention was made with Government support under Contract No. W31P4Q-09-C-0539. The Government has certain rights in this invention.
TECHNICAL FIELD
The present invention relates generally to artificial vision systems, and specifically to a platform-mounted artificial vision system.
BACKGROUND
Artificial vision systems can be implemented for a variety of reasons. As an example, an artificial vision system can be mounted on a stationary platform and can be implemented for security measures. As another example, an artificial vision system can be mounted on a moving platform (e.g., an aircraft), such as to assist a pilot in navigation of the moving platform. As an example, both military and commercial rotorcraft survivability is significantly impacted while operating in a degraded visual environment (DVE). Common degraded visual conditions affecting rotorcraft, for example, include brownout (sand in atmosphere), whiteout (snow in atmosphere), smoke, rain, mist, fog, turbulence, darkness, and helicopter rotor blade obstruction. Such degraded visual conditions can result in a crash of the moving platform as the pilot's vision is obscured by the degraded visual condition. As an example, landing a helicopter in brownout conditions can be particularly perilous.
SUMMARY
One embodiment includes an artificial vision system mounted on a platform. The system includes an image system comprising a video source that is configured to capture a plurality of sequential images. The image system also includes an image processing system configured, via at least one processor, to process the plurality of sequential images to calculate situational awareness (SA) data with respect to each of the plurality of sequential images and to convert the processed plurality of sequential images to visible images. The system further includes a video display system configured to display the visible images associated with the processed plurality of sequential images and to visibly identify the SA data relative to the platform.
Another embodiment includes a non-transitory computer readable medium configured to store instructions that, when executed by a processor, are configured to implement a method for providing artificial vision assistance for navigating a moving platform. The method includes capturing a plurality of sequential frames of infrared (IR) image data via an IR video source mounted on the moving platform and calculating SA data associated with each of the plurality of sequential frames relative to the moving platform. The method also includes converting the plurality of sequential frames to visible images and displaying the visible images on a video display system to assist in navigation of the moving platform. The method further includes identifying the SA data relative to the moving platform on the visible images.
Another embodiment includes an artificial vision system mounted on a moving platform. The system includes a self-enclosed image system. The self-enclosed image system includes an infrared IR video source that is configured to capture a plurality of sequential frames of IR image data and an inertial measurement unit configured to generate inertial data associated with the moving platform. The self-enclosed image system further includes an image processing system configured via at least one processor and based on the inertial data to stabilize the plurality of sequential frames, to calculate SA data with respect to each of the plurality of sequential frames, to enhance the plurality of sequential images with respect to environment-based occlusion of the video source based on recursively processing consecutive stabilized images of the plurality of sequential frames, and to convert the plurality of sequential frames into visible images. The system further includes a video display system configured to display the visible images and to visibly identify the SA data relative to the moving platform.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an example of an artificial vision system.
FIG. 2 illustrates an example of an image system.
FIG. 3 illustrates an example of a synthetic vision situational awareness component.
FIG. 4 illustrates an example of a lucky region imaging component.
FIG. 5 illustrates an example of a method for providing artificial vision assistance for navigating a moving platform.
DETAILED DESCRIPTION
The present invention relates generally to artificial vision systems, and specifically to a platform-mounted artificial vision system. The artificial vision system can be mounted, for example, on a moving platform, such as an aircraft (e.g., helicopter). The artificial vision system includes an image system, which can be a self-contained image system (e.g., image system package). The image system includes a video source, which can be configured as a forward-looking infrared (FLIR) video source, that is configured to capture a plurality of sequential frames of image data. As described herein, the term “artificial vision system” refers to a vision system that provides processed video images of a scene that can be viewed by a user in lieu of or in addition to the user's own vision. The image system also includes one or more processors configured as an image processing system. The image processing system is configured to process the sequential frames with minimum lag, such as to provide stabilization of the sequential frames and/or to calculate situational awareness (SA) data associated with the sequential frames relative to the platform. The image processing system can also be configured to provide lucky region image processing, such as to enhance the sequential frames with respect to environment-based occlusion of the video source, such as based on a degraded visual environment (DVE), based on recursively processing consecutive stabilized images of the sequential frames. The enhancement can be based on inertial data associated with the platform (i.e., a moving platform), such as generated by an inertial measurement unit (IMU) that is included within the self-contained image system. The image system can also convert the sequential frames to visible images, such that the visible images can be displayed on a video display system, such as to assist a pilot in navigation of the platform (i.e., moving platform).
As an example, the image processing system can include a synthetic vision SA component configured to generate the SA data. For example, the synthetic SA component can be configured to calculate three-dimensional locations of objects relative to the platform based on parallax associated with consecutive stabilized images of the plurality of sequential frames based on the inertial data of the moving platform. The image system can thus identify the SA data on the visible images, such as by highlighting the objects on the visible images, to further assist the pilot in navigating the platform, such as during take-off and landing maneuvers. As another example, the image processing system can include a lucky region imaging component that can provide the enhancement to the sequential frames with respect to environment-based occlusion of the video source. As an example, the lucky region imaging component can assign a weight to each pixel associated with a current one of the sequential frames based on the inertial data and on a selected mean image latency. The lucky region imaging component can then apply a recursive imaging algorithm on the sequential frames based on the weight assigned to each pixel of the current one of the sequential frames relative to an immediately preceding previously processed one of the sequential frames to enhance the plurality of sequential frames.
FIG. 1 illustrates an example of an artificial vision system 10. The artificial vision system 10 can be mounted on a platform. As an example, the platform can be a stationary platform, such that the artificial vision system 10 can be implemented for security purposes. As another example, the artificial vision system 10 can be mounted on a moving platform, such as an aircraft (e.g., a helicopter), to assist an associated pilot in navigation of the moving platform, such as in a degraded visual environment (DVE), such as for the purpose of landing the aircraft.
The artificial vision system 10 includes an image system 12 that can be configured as a self-contained package. In the example of FIG. 1, the image system 12 includes a video source 14 that is configured to capture a plurality of sequential frames of an environment scene. For example, the video source 14 can be configured as a forward-looking infrared (IR) video source, such that the sequential frames are sequential IR images. As an example, the image system 12 can be mounted on at least one location of the moving platform, such as on a nose of a helicopter, such that the video source 14 captures the sequential frames from approximately the same perspective as the pilot of the moving platform. In the example of the artificial vision system 10 being located on a moving platform, the image system 12 includes an inertial measurement unit (IMU) 16 that is configured to generate inertial data associated with the sensor within the moving platform, such as movement in six-degrees of motion (e.g., yaw, pitch, roll, and vector motion in three-dimensions). By including the IMU 16 in the image system 12, as opposed to receiving the inertial data from an external source (e.g., a flight computer associated with the moving platform), the inertial data can be more accurate with respect to the image processing of the sequential frames captured by the video source 14, as described in greater detail herein.
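Purely as an illustration (the patent does not define a data format for the inertial data), a six-degree-of-motion sample from the packaged IMU 16 might be represented as follows; all field names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class InertialSample:
        """One six-degree-of-motion inertial sample tied to a video frame
        (hypothetical layout; the patent defines no data format)."""
        timestamp_s: float  # sample time aligned with the frame time
        roll_rad: float     # orientation about the sensor axes
        pitch_rad: float
        yaw_rad: float
        vx_mps: float       # translational velocity components in three dimensions
        vy_mps: float
        vz_mps: float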
The image system 12 also includes a memory 18 and one or more processors 20 that are configured as an image processing system 22. The memory 18 can be configured to store the instructions that implement the image processing system 22 via the processor(s) 20, and can be implemented to buffer one or more sequential frames captured by the video source 14 and the processing thereof. The image processing system 22 is configured to process the sequential frames and to provide the processed sequential frames as visible video data IMG to a video display system 24. As an example, the video display system 24 can be configured as a video monitor that provides visible images to a user to view the environment scene, such as to assist a pilot in navigating the moving platform. As an example, the image processing system 22 can be configured to generate situational awareness (SA) data. For example, the SA data can include three-dimensional locations of objects relative to the platform based on parallax associated with consecutive sequential frames based on the inertial data generated by the IMU 16 of the moving platform. The image processing system 22 can thus modify the visible video data IMG to identify the SA data on the visible images provided by the video display system 24, such as by highlighting the objects on the visible images, to further assist a pilot in navigating the moving platform. As another example, the image processing system 22 can be configured to enhance the sequential frames with respect to environment-based occlusion of the video source. Accordingly, the pilot can use the video display system 24 to assist in safely navigating the moving platform in a DVE, such as during aircraft landing and take-off.
FIG. 2 illustrates an example of an image system 50. The image system 50 can correspond to the image system 12 in the example of FIG. 1. Therefore, the image system 50 can be implemented as a self-contained unit mounted on a platform, such as a moving platform, to capture sequential frames of an environment scene and provide visible video data IMG to a video display system (e.g., the video display system 24). Thus, the image system 50 can provide artificial vision for a user, such as a pilot of the moving platform.
The image system 50 includes a FLIR video source 52 that is configured to capture a plurality of sequential IR frames of image data of the environment scene. In the example of FIG. 2, the FLIR video source 52 generates image data VIDINIT that can correspond to fourteen-bit IR image data provided at sixty frames per second (fps). As an example, the FLIR video source 52 can be configured to implement image pre-processing on the captured IR images, such that the image data VIDINIT is pre-processed. For example, the pre-processing of the FLIR video source 52 can include correction of non-uniformity (e.g., based on variability during the fabrication of an associated focal plane array (FPA)) and pixel errors. The image system 50 also includes an IMU 54 that is configured to generate inertial data MVMT associated with motion of the moving platform on which the image system 50 can be mounted (e.g., on one or more locations of a helicopter). The inertial data MVMT generated by the IMU 54 can be associated with movement in six-degrees of motion (e.g., yaw, pitch, roll, and vector motion in three-dimensions) of the FLIR video source 52 based on being included within the self-contained package of the image system 50, such as to provide a more accurate measurement of the movement of the FLIR video source 52 for processing of the image data VIDINIT.
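The pre-processing above is described only as non-uniformity and pixel-error correction; the sketch below shows one common approach, a two-point non-uniformity correction with bad-pixel replacement, assuming per-pixel gain/offset calibration tables (NumPy/SciPy). This is not necessarily the method used by the FLIR video source 52.

    import numpy as np
    from scipy.ndimage import median_filter

    def preprocess_ir_frame(raw14, gain, offset, bad_pixel_mask):
        """Two-point non-uniformity correction with bad-pixel replacement.

        raw14          : raw 14-bit focal-plane-array frame (uint16 array)
        gain, offset   : per-pixel calibration tables, e.g. derived from two
                         uniform reference (blackbody) frames (assumption)
        bad_pixel_mask : boolean array flagging dead or noisy pixels
        """
        corrected = gain * raw14.astype(np.float32) + offset
        # replace flagged pixels with the median of their 3x3 neighborhood
        neighborhood_median = median_filter(corrected, size=3)
        return np.where(bad_pixel_mask, neighborhood_median, corrected)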
The image data VIDINIT and the inertial data MVMT are each provided to an image processing system 56. The image processing system 56 can be configured as a set of software modules that are executed by one or more processors (e.g., the processor(s) 20 in the example of FIG. 1), such as in a system-on-chip (SOC) arrangement in the self-contained image system 50. In the example of FIG. 2, the image data VIDINIT is provided to a stabilization component 58 that is configured to provide video frame stabilization of the image data VIDINIT to generate stabilized image data VIDST. As an example, the stabilization component 58 can employ a Structure-From-Motion technique to estimate a pose of the FLIR video source 52 for each of the sequential frames of the image data VIDINIT based on the inertial data MVMT. Once the stabilization component 58 determines a pose of the FLIR video source 52, two consecutive IR images can be aligned based on a homography corresponding to a projective transformation that can be applied to render an image from a given pose into the perspective of another pose. As an example, the stabilization component 58 can estimate a homography using robust salient features that are detected within each IR image of the image data VIDINIT. The features are detected on the current IR image and can then be correlated with features detected in the previous image. From this correlation, the homography can be determined to spatially align overlapping pixels in the current IR image to those of the previous IR image. Thus, homography can capture frame-to-frame displacement caused by movement of the platform/FLIR video source 52 and/or gimbals pointing angle inaccuracies. The stabilized image data VIDST can thus include a homography between a current frame and one or more (e.g., two) previous frames.
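A minimal sketch of the feature-based alignment step described above, assuming OpenCV and NumPy (the patent names no library, and the pose prior from the inertial data is omitted here). The 14-bit IR frames are assumed to be rescaled to 8 bits for feature detection.

    import cv2
    import numpy as np

    def align_previous_frame(prev_ir14, curr_ir14):
        """Warp the previous IR frame into the current frame's perspective.

        prev_ir14, curr_ir14: 14-bit IR frames stored as uint16 arrays.
        Returns (warped_previous_frame, homography), or the unwarped frame and
        None when no reliable homography is found. Illustrative sketch only.
        """
        # scale 14-bit data to 8 bits for feature detection (assumption)
        prev8 = cv2.convertScaleAbs(prev_ir14, alpha=255.0 / 16383.0)
        curr8 = cv2.convertScaleAbs(curr_ir14, alpha=255.0 / 16383.0)

        orb = cv2.ORB_create(nfeatures=1000)
        kp_prev, des_prev = orb.detectAndCompute(prev8, None)
        kp_curr, des_curr = orb.detectAndCompute(curr8, None)
        if des_prev is None or des_curr is None:
            return prev8, None

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_prev, des_curr),
                         key=lambda m: m.distance)[:200]
        if len(matches) < 4:
            return prev8, None

        src = np.float32([kp_prev[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_curr[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        if H is None:
            return prev8, None

        h, w = curr8.shape[:2]
        return cv2.warpPerspective(prev8, H, (w, h)), H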
The stabilized image data VIDST is provided to a synthetic vision SA component 60. The synthetic vision SA component 60 is configured to process the stabilized image data VIDST based on the inertial data MVMT to calculate SA data associated with the sequential frames, such as can be implemented to assist in processing of the image data VIDST and/or to assist in navigation of the moving platform.
FIG. 3 illustrates an example of a synthetic vision SA component 100. The synthetic vision SA component 100 can correspond to the synthetic vision SA component 60 in the example of FIG. 2, and can thus be implemented in the image system 50 to calculate SA data with respect to the FLIR video source 52, and thus the moving platform. The synthetic vision SA component 100 receives the inertial data MVMT and the stabilized image data VIDST, which can correspond to the fourteen-bit IR image frames received at sixty fps that includes the pose information of the FLIR video source 52, and outputs image data VIDSA, which can correspond to the stabilized image data VIDST that includes the SA data. In the example of FIG. 3, the synthetic vision SA component 100 includes a range map generator 102, an obstacle detector 104, and an image overlay component 106.
The range map generator 102 is configured to generate an approximate three-dimensional range map that is constructed using consecutive frames of the stabilized image data VIDST as the moving platform moves. The passive three-dimensional range map can include relative range information to objects in the scene. Based on the three-dimensional range map, the obstacle detector 104 is configured to approximate an actual range to the objects in the three-dimensional range map by comparing an apparent motion of the objects in the three-dimensional range map based on the known motion of the moving platform as provided by the inertial data MVMT over a given amount of time. In this manner, the obstacle detector 104 can determine the location of the obstacles based on parallax associated with consecutive stabilized images of the stabilized image data VIDST based on the inertial data MVMT. Thus, the obstacle detector 104 is configured to determine the three-dimensional location of the obstacles in the three-dimensional range map based on high saliency regions that stand out from their surroundings on the three-dimensional range map. Obstacle locations can be maintained in three-dimensional coordinates even after obstacles are completely obscured, such as during the severest brownout conditions.
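The patent does not give the range math; under a standard motion-parallax assumption, the range to a feature can be approximated from its pixel disparity between two stabilized frames and the platform translation (baseline) reported by the inertial data, as sketched below. Function and parameter names are illustrative.

    import numpy as np

    def approximate_range(disparity_px, baseline_m, focal_length_px):
        """Approximate the range to a scene feature from motion parallax.

        disparity_px    : apparent pixel shift of the feature between two
                          stabilized frames (rotation already removed)
        baseline_m      : platform translation between the frames, taken
                          from the inertial data MVMT
        focal_length_px : camera focal length expressed in pixels

        Uses the standard parallax relation range ~= f * B / d (an
        assumption; the patent does not state its range computation).
        """
        d = np.maximum(np.asarray(disparity_px, dtype=np.float64), 1e-6)
        return focal_length_px * baseline_m / d

    # e.g., a 4-pixel shift over a 1 m baseline with a 2000-pixel focal
    # length suggests an object roughly 500 m away; nearer obstacles shift more.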
The image overlay component 106 is configured to extract a distance to detected obstacles from the passive three-dimensional range map and insert the distance data into the stabilized image data VIDST, such that the stabilized image data VIDST can be annotated with data associated with the obstacles. The image overlay component 106 thus provides the annotated frames as the image data VIDSA, which can include the stabilized image data VIDST annotated with the data associated with the obstacles. Accordingly, the image data VIDSA can be processed to provide identification of the obstacles as overlays on the visible images displayed on the video display system 24 (e.g., via the visible video data IMG). For example, the obstacles can be highlighted on the displayed visible images separate from a respective background of the displayed visible images, such as based on having different colors, brightness, text overlays (e.g., displaying information), graphical cues, and/or other information. Accordingly, as the platform moves in DVE conditions (e.g., brownout), the highlighting and/or other information of the overlaid obstacles can be updated based on the inertial data MVMT and the known range to the obstacles, and can thus be used by the pilot as reference points that would otherwise have been obscured by the DVE conditions.
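A minimal sketch of this kind of obstacle annotation is shown below, assuming OpenCV drawing primitives; the bounding-box representation, colors, and names are illustrative assumptions only.

```python
import cv2

def annotate_obstacles(display_frame, obstacles):
    """Draw each detected obstacle and its extracted range on the displayable
    frame (illustrative sketch).

    obstacles: iterable of (x, y, w, h, range_m) tuples in image coordinates.
    """
    for (x, y, w, h, range_m) in obstacles:
        # Highlight the obstacle separately from the background.
        cv2.rectangle(display_frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        # Text overlay carrying the range taken from the 3-D range map.
        cv2.putText(display_frame, f"{range_m:.0f} m", (x, max(y - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return display_frame
```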
Referring back to the example of FIG. 2, the image data VIDSA and the inertial data MVMT are provided to a lucky region imaging component 62. The lucky region imaging component 62 is configured to enhance the frames of the image data VIDSA with respect to environment-based occlusion of the FLIR video source 52 based on recursively processing the consecutive stabilized frames of the image data VIDSA based on the inertial data MVMT. As an example, the environment-based occlusion of the FLIR video source 52 can be a result of the platform being in a DVE, such that the FLIR video source 52 can be occluded by sand (i.e., brownout), snow (i.e., whiteout), or a variety of other vision obscuring conditions. However, it is to be understood that the image processing system 56 may omit processing by the lucky region imaging component 62 in the absence of DVE conditions, such as for the moving platform (e.g., helicopter) flying in clear weather and/or too high to stir up dust clouds.
FIG. 4 illustrates an example of a lucky region imaging component 150. The lucky region imaging component 150 can correspond to the lucky region imaging component 62 in the example of FIG. 2, and can thus be implemented in the image system 50 to enhance the frames of the image data VIDSA with respect to environment-based occlusion of the FLIR video source 52. The lucky region imaging component 150 receives the inertial data MVMT and the image data VIDSA, which can correspond to the fourteen-bit IR image frames received at sixty fps, and which may include the SA data associated with the image data. The lucky region imaging component 150 outputs image data VIDLK, which can correspond to the enhanced image data VIDSA. In the example of FIG. 4, the lucky region imaging component 150 includes a weighting component 152, a latency component 154, and a lucky region identifier 156.
The lucky region imaging component 150 is configured to implement a recursive lucky region imaging algorithm to enhance the frames of the image data VIDSA. With the recursive algorithm, a revised current frame of image data is formed from its own pixel contents, as well as the pixel contents of a previously processed data frame (e.g., an immediately preceding processed image frame). The proportions of the current frame and the previously processed frame, at each frame time and at each pixel, can be computed based primarily upon the inertial data MVMT. The weighting component 152 receives the inertial data MVMT and applies weight values to the pixels of the current frame and the immediately preceding processed frame, wherein the sum of the weight applied to the current frame and the weight applied to the preceding frame at a given pixel location equals one (i.e., 1.0). In this manner, the recursive algorithm need only access the current frame and the immediately preceding processed frame, which can be stored in the memory 18, thus minimizing storage requirements.
The weight values that are assigned by the weighting component 152 can vary across a given frame. The weight values are computed by the weighting component 152 for each pixel of each new frame that is processed. At each pixel location, for each frame processed, the corresponding weight is computed from an estimated optical flow for the pixel, which can be derived from the inertial data MVMT. For example, the optical flow value at each pixel can be computed as a function of the camera's roll, pitch, and yaw rates, its velocity vector, and/or an estimated range to the scene component associated with the given pixel of the frame. When the optical flow at a given pixel is high, the weight value can be raised to reduce blurring in the resultant processed image frame. As an example, the optical flow within the frame can be lowest at a focus-of-expansion point, which can also be the point in the frame corresponding to the direction in which the moving platform is headed. The optical flow in the imagery can be highest toward the peripheral areas of the display, which can be the areas of the frame corresponding to the highest motion parallax. As an example, the optical flow for a given pixel (x,y) can be computed by the weighting component 152 as follows, with an illustrative sketch of these equations provided after the definitions below:
u(x,y) = (x·Vz − f·Vx)/z(x,y) + (x·y·ωx)/f + y·ωz − ωy·(f + x²/f)  Equation 1
v(x,y) = (y·Vz − f·Vy)/z(x,y) + (x·y·ωy)/f − x·ωz + ωx·(f + y²/f)  Equation 2
Where:
    • u(x,y) is the optical flow for the pixel (x,y) in the x direction;
    • v(x,y) is the optical flow for the pixel (x,y) in the y direction;
    • z(x,y) is an estimated range through the pixel (x,y) to a first scene intersection point;
    • f is a focal length of the FLIR video source 52;
    • (Vx, Vy, Vz) corresponds to linear motion of the FLIR video source 52 as provided by the inertial data MVMT; and
    • (ωx, ωy, ωz) corresponds to rotational motion of the FLIR video source 52 as provided by the inertial data MVMT.
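For reference, Equations 1 and 2 map directly onto array operations. The sketch below assumes pixel coordinates measured from the principal point (image center) and a range map z(x,y) of the same shape as the frame; the function and parameter names are hypothetical.

```python
import numpy as np

def predicted_optical_flow(width, height, f, V, omega, z_map):
    """Evaluate Equations 1 and 2: per-pixel optical flow (u, v) predicted
    from camera ego-motion, given linear velocity V = (Vx, Vy, Vz), rotational
    rates omega = (wx, wy, wz), focal length f (pixels), and an estimated
    range map z_map with one entry per pixel."""
    Vx, Vy, Vz = V
    wx, wy, wz = omega
    # Pixel coordinates measured from the principal point (image center).
    xs = np.arange(width) - width / 2.0
    ys = np.arange(height) - height / 2.0
    x, y = np.meshgrid(xs, ys)

    u = (x * Vz - f * Vx) / z_map + (x * y * wx) / f + y * wz - wy * (f + x**2 / f)
    v = (y * Vz - f * Vy) / z_map + (x * y * wy) / f - x * wz + wx * (f + y**2 / f)
    return u, v
```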
Based on the optical flow for the given pixel (x,y), the weights can be calculated for the given pixel (x,y) as follows:
wi(x,y) = ψ[ui(x,y), vi(x,y); L̄; w̄i−1 … w̄i−n]  Equation 3
Where:
    • i is a frame number;
    • wi(x,y) is the weight that is applied to the pixel (x,y), where 0 ≤ wi ≤ 1;
    • L̄ is a mean latency for the consecutive frames;
    • w̄i−1 … w̄i−n are frame-averaged weights for the n previous frames; and
    • ψ is a function that combines the input terms.
Equation 3 is but one manner in which the weights can be calculated by the weighting component 152. As another example, the weight terms w(x,y) applied to each pixel (x,y) can be determined with an additional component related to the consistency of the imagery in a region about the pixel (x,y) in the current frame as compared to the same region in the previously processed preceding frame. In this example, an additional factor is applied based on the consistency of a gradient vector at the pixel (x,y) compared to the gradient vector at the same pixel (x,y) in the preceding frame. For example, the weighting component 152 can be configured to implement the following equation to calculate the weight w(x,y), with one illustrative form of ψ sketched after the definition below:
wi(x,y) = ψ[ui(x,y), vi(x,y); L̄; w̄i−1 … w̄i−n; (gi(x,y)·gi−1(x,y))/(∥gi(x,y)∥ ∥gi−1(x,y)∥)]  Equation 4
Where:
    • (gi(x,y)·gi−1(x,y))/(∥gi(x,y)∥ ∥gi−1(x,y)∥) is a gradient-vector (g) based luckiness measure.
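The combining function ψ is not specified further here, so the sketch below is purely illustrative: it raises the weight where the predicted flow is large (to limit motion blur), lowers it where the gradient-consistency (luckiness) measure of Equation 4 is high, and rescales the result toward a target frame-averaged weight tied to the selected latency. Every constant and name in it is an assumption.

```python
import numpy as np

def example_weight_map(u, v, grad_curr, grad_prev,
                       target_mean_weight=0.5, flow_scale=4.0):
    """Illustrative stand-in for the combining function psi of Equations 3/4.

    u, v:       predicted optical flow components (arrays of shape (H, W))
    grad_curr:  image gradient of the current frame, shape (H, W, 2)
    grad_prev:  image gradient of the preceding processed frame, shape (H, W, 2)
    Returns a per-pixel weight map in [0, 1] for the current frame.
    """
    # Large predicted motion -> weight the current frame more to avoid blur.
    flow_magnitude = np.hypot(u, v)
    w_motion = 1.0 - np.exp(-flow_magnitude / flow_scale)

    # Gradient-consistency ("luckiness") term of Equation 4: cosine of the
    # angle between the current and preceding gradient vectors at each pixel.
    dot = np.sum(grad_curr * grad_prev, axis=-1)
    norms = np.linalg.norm(grad_curr, axis=-1) * np.linalg.norm(grad_prev, axis=-1)
    consistency = np.clip(dot / (norms + 1e-9), 0.0, 1.0)

    # Consistent (lucky) regions tolerate more temporal averaging, i.e. a
    # lower weight on the current frame.
    w = np.clip(w_motion * (1.0 - 0.5 * consistency), 0.0, 1.0)

    # Rescale so the frame-averaged weight tracks the selected latency target.
    mean_w = max(float(w.mean()), 1e-6)
    return np.clip(w * (target_mean_weight / mean_w), 0.0, 1.0)
```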
The latency component 154 is configured to calculate the mean latency L̄ associated with consecutive frames of the image data VIDSA. Latency can correspond to an average age of the image data VIDSA exiting the recursive algorithm relative to the age of the newest frame of the image data VIDSA entering the recursive algorithm. For example, the latency of the recursive algorithm can be defined as a function of the average age of the frames going into the recursive algorithm plus the time that it takes to process the frames. The average age of the processed frames can be directly determined by the average weight applied to a current frame of the image data VIDSA when combined with the immediately preceding processed frame of the image data VIDSA. Lower latency results when the weights applied to the current frame by the weighting component 152 are increased; conversely, higher latency results when the weights applied to the current frame are reduced, and the weights can be adjusted to maintain the mean latency L̄ over short time intervals. As an example, during rapid motion of the moving platform, the latency can be reduced through the choice of weight values by the weighting component 152, and during slow movement of the moving platform (e.g., a helicopter hovering), the latency can be increased through the choice of weight values. This can be accomplished, for example, while maintaining a desired mean latency L̄ on the order of tens of milliseconds (e.g., 33 milliseconds). Too long a mean latency L̄ can cause disorientation for piloting a moving platform, based on the video images displayed by the video display system 24 being noticeably out of synchronization with observable real time. However, too short a mean latency L̄ can prevent the recursive lucky region imaging algorithm from operating effectively, allowing only single-frame processing instead of combining information from consecutive frames. As an example, the mean latency L̄ can be calculated as follows, with a numeric check sketched after the definitions below:
L̄ = P + ΔT·Σ_{n=0..∞} (1 − α)^(n+1)·n  Equation 5
Where:
    • ΔT is equal to 1 divided by a frame rate (e.g., 16.5 milliseconds for sixty frames per second);
    • P is a processing time to reconstruct a new frame (e.g., 16 milliseconds);
    • n is a frame number, with n=0 corresponding to a current frame and larger numbers corresponding to older frames;
    • α is a latency control parameter, which is set to 1.0 for no lucky imaging latency, but can be set to approximately 0.5 with lucky imaging latency.
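As a numeric check, the short sketch below evaluates the reconstructed Equation 5 by truncating the series; with P = 16 ms, ΔT = 16.5 ms, and α = 0.5 it yields roughly 32.5 ms, consistent with the 33-millisecond mean latency noted above, and with α = 1.0 the latency reduces to the processing time P alone.

```python
def mean_latency_ms(P_ms, dT_ms, alpha, terms=200):
    """Approximate the mean latency of Equation 5 by truncating the series."""
    return P_ms + dT_ms * sum((1.0 - alpha) ** (n + 1) * n for n in range(terms))

print(mean_latency_ms(16.0, 16.5, 0.5))  # ~32.5 ms, i.e. on the order of 33 ms
print(mean_latency_ms(16.0, 16.5, 1.0))  # 16.0 ms: processing time only
```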
The weights that are assigned to the pixels of the frames are provided to the lucky region identifier 156, demonstrated as a signal WGT in the example of FIG. 4. The lucky region identifier 156 is configured to implement the recursive lucky region imaging process. As an example, the lucky region identifier 156 can be configured to implement the following equation:
f̄i(x,y) = wi(x,y)·fi(x,y) + [1 − wi(x,y)]·f̄i−1(x,y)  Equation 6
Where:
    • f̄i is a processed version of the current frame; and
    • f̄i−1 is a processed version of the preceding frame.
As another example, the lucky region identifier 156 can be configured to implement the following equation, which is a slightly simplified version of Equation 6:
f̄i(x,y) = α(x,y)·fi(x,y) + [1 − α(x,y)]·f̄i−1(x,y), where 0 ≤ α ≤ 1  Equation 7
As an example, the latency control parameter α can be set to approximately 0.5 to implement Equation 7. In the example of FIG. 4, the lucky region identifier 156 includes a buffer 158 configured to buffer a given processed frame of the image data VIDLK, such that it can be implemented as the preceding frame in a next calculation (i.e., to obtain the f̄i−1 term). In Equation 6, lower values of the weight term wi(x,y) at a pixel location (x,y) correspond to pixels that are "luckier" spatially and temporally, and thus result in increased latency. The output of the lucky region identifier 156 thus corresponds to the enhanced image data VIDLK. As an example, through the processing of the lucky region imaging component 150, the output frame rate does not necessarily have to match the input frame rate, even if the recursive algorithm implemented by the lucky region identifier 156 is applied to each incoming frame of the image data VIDSA. For example, the input frame rate can be sixty frames per second, while the output frame rate can be thirty frames per second. As a result, the processing requirement of subsequent stages of the image processing system 56 can be reduced. However, based on the processing capability of the processor(s) 20 and/or the video display system 24, the frame rate can be the same at the input and the output of the lucky region imaging component 150.
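A compact sketch of the recursive combination of Equation 6 follows, assuming NumPy; it retains only the preceding processed frame in a buffer, as described above, and the class and variable names are illustrative.

```python
import numpy as np

class LuckyRegionIdentifier:
    """Recursive lucky-region combination of Equation 6 (illustrative sketch)."""

    def __init__(self):
        self.prev_processed = None  # buffer for the preceding processed frame

    def process(self, frame, weight_map):
        """Combine the raw current frame with the preceding processed frame.

        frame:      current stabilized frame (floating-point array)
        weight_map: per-pixel weights w_i(x, y) in [0, 1] for the current frame
        """
        if self.prev_processed is None:
            processed = frame.astype(np.float64)  # first frame: nothing to blend
        else:
            processed = weight_map * frame + (1.0 - weight_map) * self.prev_processed
        self.prev_processed = processed  # retained for the next recursion step
        return processed

# The recursion can run on every incoming frame (e.g., sixty fps) while only
# every second result is handed downstream, yielding a thirty fps output rate.
```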
As another example, if the horizontal pixel resolution of the FLIR video source 52 is greater than the horizontal pixel resolution of the video display system 24, or an available display area, only a portion of the image data VIDLK may be mapped to the video display system 24, such as centered on a focus of expansion point. The focus of expansion point can be computed from the inertial data MVMT, for example, and can thus be based on motion of the moving platform. In addition, the lucky region imaging component 150 can be configured to add symbology data to the frames of the image data VIDLK. As one example, the focus of expansion point can be identified by the lucky region imaging component 150, such that it can be displayed on the video display system 24. Other examples of symbology data can include horizon, speed of the moving platform, or any of a variety of other types of identifiers to assist navigation of the moving platform (e.g., based on the Brown-Out Symbology System (BOSS) cues associated with the United States Air Force).
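One common way to place the focus-of-expansion point from the inertial data is to project the linear velocity through the camera model, FOE ≈ (f·Vx/Vz, f·Vy/Vz) relative to the principal point. The sketch below uses that relation to center a display-sized crop of a wider sensor frame; it is an illustrative assumption rather than the specific computation used here, and the names are hypothetical.

```python
import numpy as np

def focus_of_expansion(V, f, width, height):
    """Approximate focus-of-expansion pixel from linear velocity V = (Vx, Vy, Vz)
    and focal length f (pixels), measured from the image center."""
    Vx, Vy, Vz = V
    if abs(Vz) < 1e-6:
        return width // 2, height // 2  # negligible forward motion: use center
    return int(width / 2 + f * Vx / Vz), int(height / 2 + f * Vy / Vz)

def crop_about_foe(frame, V, f, disp_w, disp_h):
    """Map only a display-sized window of a wider sensor frame to the display,
    centered on the focus of expansion (assumes the sensor exceeds the display)."""
    h, w = frame.shape[:2]
    cx, cy = focus_of_expansion(V, f, w, h)
    x0 = int(np.clip(cx - disp_w // 2, 0, w - disp_w))
    y0 = int(np.clip(cy - disp_h // 2, 0, h - disp_h))
    return frame[y0:y0 + disp_h, x0:x0 + disp_w]
```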
Referring back to the example of FIG. 2, the enhanced image data VIDLK is provided to a wavelet enhancement component 64. The wavelet enhancement component 64 is configured to decompose the monochrome image data VIDLK into high spatial frequency, middle spatial frequency, and low spatial frequency bands. In the example of FIG. 2, the respective spatial frequency bands are demonstrated as image data VIDWV. The image data VIDWV is provided to a video processor 66 that is configured to convert the image data VIDWV into data suitable for display as visible images, demonstrated as the visible video data IMG in the example of FIG. 2. For example, the video processor 66 can be configured to convert the respective high spatial frequency, middle spatial frequency, and low spatial frequency bands into an RGB color space, such that the visible video data IMG can be provided to the video display system 24 as color images. Accordingly, the pilot of the moving platform can use the enhanced visible images provided via the visible video data IMG to assist in navigating the moving platform. As a result, the pilot or user of the artificial vision system 10 can implement the video display system 24 to assist in viewing a scene or navigating a moving platform in conditions of limited or no naked-eye visibility (e.g., DVE conditions).
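To illustrate the band-to-color mapping described above, the sketch below separates a monochrome frame into low, middle, and high spatial-frequency bands using a simple Gaussian-blur band split (a stand-in for the wavelet decomposition, which is not detailed here) and assigns the bands to color channels; it assumes OpenCV and NumPy, and the band cutoffs and channel assignment are arbitrary.

```python
import cv2
import numpy as np

def bands_to_color(frame):
    """Split a monochrome frame into low/middle/high spatial-frequency bands
    and map them to an 8-bit three-channel color image (illustrative sketch)."""
    img = frame.astype(np.float32)
    low = cv2.GaussianBlur(img, (0, 0), sigmaX=8.0)   # low-frequency band
    mid_blur = cv2.GaussianBlur(img, (0, 0), sigmaX=2.0)
    mid = mid_blur - low                              # middle-frequency band
    high = img - mid_blur                             # high-frequency band

    def to_u8(band):
        lo, hi = float(band.min()), float(band.max())
        if hi <= lo:
            return np.zeros(band.shape, np.uint8)
        return ((band - lo) / (hi - lo) * 255.0).astype(np.uint8)

    # Assign high -> blue, middle -> green, low -> red (OpenCV BGR order);
    # the channel assignment is arbitrary here.
    return cv2.merge([to_u8(high), to_u8(mid), to_u8(low)])
```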
In view of the foregoing structural and functional features described above, a methodology in accordance with various aspects of the present invention will be better appreciated with reference to FIG. 5. While, for purposes of simplicity of explanation, the methodology of FIG. 5 is shown and described as executing serially, it is to be understood and appreciated that the present invention is not limited by the illustrated order, as some aspects could, in accordance with the present invention, occur in different orders and/or concurrently with other aspects from that shown and described herein. Moreover, not all illustrated features may be required to implement a methodology in accordance with an aspect of the present invention.
FIG. 5 illustrates an example of a method 200 for providing artificial vision assistance for navigating a moving platform. At 202, a plurality of sequential frames of infrared (IR) image data (e.g., the image data VIDINIT) are captured via an IR video source (e.g., the FLIR video source 52) mounted on the moving platform. At 204, SA data associated with each of the plurality of sequential frames relative to the moving platform is calculated (e.g., via the synthetic vision SA component 60). At 206, the plurality of sequential frames are converted to visible images (e.g., the visible video data IMG). At 208, the visible images are displayed on a video display system (e.g., the video display system 24) to assist in navigation of the moving platform. At 210, the SA data relative to the moving platform is identified on the visible images.
What have been described above are examples of the invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the invention are possible. Accordingly, the invention is intended to embrace all such alterations, modifications, and variations that fall within the scope of this application, including the appended claims.

Claims (14)

What is claimed is:
1. An artificial vision system mounted on a moving platform, the system comprising:
an image system comprising a video source to capture a plurality of sequential frames of image data and an image processing system to, via at least one processor, process the plurality of sequential frames to calculate situational awareness (SA) data with respect to each of the plurality of sequential frames based on inertial data and to convert the processed plurality of sequential frames to visible images and to process the plurality of sequential frames to calculate the SA data with respect to each of the plurality of sequential frames based on the inertial data associated with motion of the moving platform, wherein the image processing system further comprises a lucky region imaging component to enhance the plurality of sequential frames with respect to environment-based occlusion of the video source based on recursively processing the consecutive stabilized frames of the image data based on the inertial data and to assign a weight to each pixel associated with a current one of the plurality of sequential frames based on the inertial data and on a selected mean image latency, and wherein the lucky region imaging component further to apply a recursive imaging algorithm on the plurality of sequential frames based on the weight assigned to each pixel of the current one of the plurality of sequential frames and based on the image data associated with the current one of the plurality of sequential frames relative to an immediately preceding processed one of the plurality of sequential frames to enhance the plurality of sequential frames substantially continuously; and
a video display system to display the visible images associated with the processed plurality of sequential frames and to visibly identify the SA data relative to the platform.
2. The system of claim 1, wherein the SA data comprises three-dimensional locations of obstacles with respect to the moving platform, wherein the image processing system comprises a synthetic vision SA component to generate a three-dimensional range map and to detect the obstacles on the three-dimensional range map based on parallax associated with consecutive stabilized frames of the image data and based on the inertial data.
3. The system of claim 2, wherein the image processing system further comprises a stabilization component to establish a homography between consecutive frames of the image data based on the inertial data to stabilize frame displacement between the plurality of sequential frames based on the homography to provide the consecutive stabilized frames.
4. The system of claim 2, wherein the synthetic vision SA component further to adjust data associated with the detected obstacles, such that the detected obstacles are highlighted on the displayed visible images separate from a respective background of the displayed visible images.
5. The system of claim 1, wherein the lucky region imaging component further to add symbology data to the plurality of sequential frames, the symbology data being overlayed onto the visible images on the video display system as at least a portion of the SA data.
6. The system of claim 1, wherein the lucky region imaging component further to calculate the weight assigned to each pixel based on computing an optical flow associated with a motion of each pixel associated with a current one of the plurality of sequential frames relative to at least one preceding sequential frame of the plurality of sequential frames.
7. The system of claim 1, wherein the video source is a forward-looking infrared (FLIR) video source to capture the plurality of sequential frames as a plurality of sequential infrared (IR) frames, wherein the image system further to convert the plurality of sequential IR frames into the visible images.
8. The system of claim 6, wherein the image processing system comprises a wavelet enhancement component to convert the plurality of sequential IR frames into an RGB color space based on separate respective IR spatial frequency bands associated with each of the plurality of sequential IR frames.
9. A non-transitory computer readable medium to store instructions that, when executed by a processor, implement a method for providing artificial vision assistance for navigating a moving platform, the method comprising:
generating inertial data via an image system comprising an inertial measurement unit (IMU);
capturing a plurality of sequential frames of infrared (IR) image data via an IR video source mounted on the moving platform;
calculating situational awareness (SA) data associated with each of the plurality of sequential frames based on the inertial data and relative to the moving platform;
converting the plurality of sequential frames to visible images;
displaying the visible images on a video display system to assist in navigation of the moving platform;
identifying the SA data relative to the moving platform on the visible images; and
recursively processing consecutive frames of the IR image data based on inertial data associated with the moving platform to enhance the plurality of sequential frames with respect to environment-based occlusion of the IR video source;
wherein recursively processing the consecutive frames comprises:
selecting a mean image latency with respect to the plurality of sequential frames based on the inertial data;
assigning a weight to each pixel associated with a current one of the plurality of sequential frames based on the inertial data and on the selected mean image latency; and
applying a recursive imaging algorithm to the plurality of sequential frames based on the weight assigned to each pixel of the current one of the plurality of sequential frames and based on image data associated with the current one of the plurality of sequential images relative to an immediately preceding processed one of the plurality of sequential frames.
10. The medium of claim 9, wherein calculating the SA data comprises:
generating a three-dimensional range map based on parallax associated with consecutive frames of the plurality of sequential frames; and
calculating three-dimensional locations of obstacles on the three-dimensional range map relative to the moving platform based on the parallax and based on inertial data associated with the moving platform.
11. The medium of claim 9, wherein identifying the SA data comprises overlaying symbology data onto the visible images on the video display system as at least a portion of the SA data, the symbology data being further arranged to assist in the navigation of the moving platform.
12. An artificial vision system mounted on a moving platform, the system comprising:
a self-enclosed image system comprising:
an infrared (IR) video source to capture a plurality of sequential frames of IR image data;
an inertial measurement unit to generate inertial data associated with the moving platform;
an image processing system via at least one processor and based on the inertial data to stabilize the plurality of sequential frames, to calculate situational awareness (SA) data with respect to each of the plurality of sequential frames based on the inertial data, to enhance the plurality of sequential images with respect to environment-based occlusion of the video source based on recursively processing consecutive stabilized images of the plurality of sequential frames, and to convert the plurality of sequential frames into visible images, the image processing system further comprising a lucky region imaging component to assign a weight to each pixel associated with a current one of the plurality of sequential images based on the inertial data and on a mean image latency, and wherein the lucky region imaging component further to apply a recursive imaging algorithm on the plurality of sequential images based on the weight assigned to each pixel of the current one of the plurality of sequential frames and based on image data associated with the current one of the plurality of sequential frames relative to an immediately preceding processed one of the plurality of sequential frames to enhance the plurality of sequential images; and
a video display system configured to display the visible images and to visibly identify the SA data relative to the moving platform.
13. The system of claim 12, wherein the SA data comprises three-dimensional locations of obstacles with respect to the moving platform, wherein the image processing system comprises a synthetic vision SA component to generate a three-dimensional range map and to detect the obstacles on the three-dimensional range map based on parallax associated with consecutive stabilized images of the plurality of sequential frames based on the inertial data.
14. The system of claim 1, wherein the image system further comprises an inertial measurement unit (IMU) to generate the inertial data independently of an IMU associated with the moving platform.
US14/041,849 2013-09-30 2013-09-30 Platform-mounted artificial vision system Active 2036-08-12 US9970766B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/041,849 US9970766B2 (en) 2013-09-30 2013-09-30 Platform-mounted artificial vision system

Publications (2)

Publication Number Publication Date
US20160219245A1 US20160219245A1 (en) 2016-07-28
US9970766B2 US9970766B2 (en) 2018-05-15

Family

ID=56433537

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/041,849 Active 2036-08-12 US9970766B2 (en) 2013-09-30 2013-09-30 Platform-mounted artificial vision system

Country Status (1)

Country Link
US (1) US9970766B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9598011B2 (en) 2014-01-09 2017-03-21 Northrop Grumman Systems Corporation Artificial vision system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6061068A (en) * 1998-06-30 2000-05-09 Raytheon Company Method and apparatus for providing synthetic vision using reality updated virtual image
US6330356B1 (en) * 1999-09-29 2001-12-11 Rockwell Science Center Llc Dynamic visual registration of a 3-D object with a graphical model
US20040083038A1 (en) * 2002-10-28 2004-04-29 Gang He Method for producing 3D perspective view avionics terrain displays
US20040169663A1 (en) * 2003-03-01 2004-09-02 The Boeing Company Systems and methods for providing enhanced vision imaging
US20070279755A1 (en) * 2006-06-01 2007-12-06 3M Innovative Properties Company Head-Up Display System
US8019490B2 (en) 2006-09-29 2011-09-13 Applied Minds, Llc Imaging and display system to aid helicopter landings in brownout conditions
US20080205791A1 (en) * 2006-11-13 2008-08-28 Ramot At Tel-Aviv University Ltd. Methods and systems for use in 3d video generation, storage and compression
US20100231705A1 (en) * 2007-07-18 2010-09-16 Elbit Systems Ltd. Aircraft landing assistance
US8023760B1 (en) 2007-12-06 2011-09-20 The United States Of America As Represented By The Secretary Of The Navy System and method for enhancing low-visibility imagery
US20090212994A1 (en) * 2008-02-27 2009-08-27 Honeywell International Inc. Systems and method for dynamic stabilization of target data detected from a moving platform
US20120044476A1 (en) * 2008-05-09 2012-02-23 Ball Aerospace & Technologies Corp. Systems and methods of scene and action capture using imaging system incorporating 3d lidar
US8149245B1 (en) 2008-12-16 2012-04-03 The United States Of America As Represented By The Secretary Of The Navy Adaptive linear contrast method for enhancement of low-visibility imagery
US20120249827A1 (en) 2011-03-31 2012-10-04 Drs Sustainment Systems, Inc. Method for Image Processing of High-Bit Depth Sensors

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Klein, et al.: "Use of 3D Conformal Symbology on HMD for a Safer Flight in Degraded Visual Environment"; pp. 1-10.

Also Published As

Publication number Publication date
US20160219245A1 (en) 2016-07-28

Similar Documents

Publication Publication Date Title
US10748338B2 (en) Image processing apparatus and image processing method
EP2261604B1 (en) Computer arrangement for and method of calculating motion vectors using range sensor data
US10529064B2 (en) Artificial vision system
CN104126299B (en) Video image stabilisation
US7511736B2 (en) Augmented reality navigation system
US20110298988A1 (en) Moving object detection apparatus and moving object detection method
WO2015146068A1 (en) Information display device, information display method, and program
JP2008511080A (en) Method and apparatus for forming a fused image
US20200391751A1 (en) Road surface detection apparatus, image display apparatus using road surface detection apparatus, obstacle detection apparatus using road surface detection apparatus, road surface detection method, image display method using road surface detection method, and obstacle detection method using road surface detection method
US20230232103A1 (en) Image processing device, image display system, method, and program
EP3533219B1 (en) Depth data adjustment based on non-visual pose data
KR101764106B1 (en) AVM system and method for compositing image with blind spot
JP5086824B2 (en) TRACKING DEVICE AND TRACKING METHOD
US9970766B2 (en) Platform-mounted artificial vision system
CN118648019A (en) Advanced temporal low-light filtering with global and local motion compensation
CN113011212B (en) Image recognition method and device and vehicle
KR20170020666A (en) AVM system and method for compositing image with blind spot
US10917585B2 (en) Method and system for facilitating transportation of an observer in a vehicle
US11727092B2 (en) Method, software product, device and system for integrating images
JP2008109283A (en) Vehicle periphery display device and method for presenting visual information
US20220413295A1 (en) Electronic device and method for controlling electronic device
JP2019029922A (en) Remote handling equipment
WO2022064576A1 (en) Position information acquisition device, head-mounted display, and position information acquisition method
WO2022064605A1 (en) Positional information acquisition system, positional information acquisition method, and positional information acquisition device

Legal Events

Date Code Title Description
AS Assignment

Owner name: NORTHROP GRUMMAN SYSTEMS CORPORATION, VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BAUMGARTNER, DUSTIN D.;SCHACHTER, BRUCE J.;STEWART, KATHRYN B.;AND OTHERS;REEL/FRAME:031311/0153

Effective date: 20130930

AS Assignment

Owner name: DEFENSE ADVANCED RESEARCH PROJECTS AGENCY, UNITED

Free format text: CONFIRMATORY LICENSE;ASSIGNOR:NORTHROP GRUMMAN SYSTEMS CORPORATION;REEL/FRAME:033245/0565

Effective date: 20131009

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4