EP2096602B1 - Methods and apparatus for runway segmentation using sensor analysis - Google Patents

Methods and apparatus for runway segmentation using sensor analysis

Info

Publication number
EP2096602B1
Authority
EP
European Patent Office
Prior art keywords
runway
blob
slope
template
corner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP09152623A
Other languages
German (de)
French (fr)
Other versions
EP2096602A1 (en)
Inventor
Rida M. Hamza
Mohammed Ibrahim Mohideen
Dinesh Ramegowda
Thea L. Feyereisen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honeywell International Inc
Original Assignee
Honeywell International Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honeywell International Inc filed Critical Honeywell International Inc
Publication of EP2096602A1 publication Critical patent/EP2096602A1/en
Application granted granted Critical
Publication of EP2096602B1 publication Critical patent/EP2096602B1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G06V 20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Description

    CROSS-REFERENCES TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 61/031,900, filed on February 27, 2008 .
  • TECHNICAL FIELD
  • The present invention relates generally to providing guidance to aircraft crew, and more particularly, relates to forming a combined sensor and synthetic navigation data that provides guidance to aircraft operators in limited or no visibility conditions.
  • BACKGROUND
  • The need to land aircraft, such as airplanes, helicopters, and spacecraft, in zero/zero conditions is driving sensor fusion and computer vision systems for next-generation head-up displays. Safely landing the aircraft requires accurate information about the location of a target (e.g., a runway). During an approach to a runway, the pilot must carefully control the navigation of the aircraft relative to a touchdown point. That is, pilots need to have good situational awareness (e.g., heads up) of the outside world through heavy fog, smoke, snow, dust, or sand, to detect runways and obstacles on runways and/or in the approach path for a safe landing.
  • Advanced synthetic vision is a major focus of aerospace industry efforts to improve aviation safety. Some current research is focused on developing new and improved Enhanced Vision Systems. In these research efforts, there were various attempts to fuse sensor data from different modalities (based upon certified sensor availability) with synthetic vision platforms to provide pilots with additional features so that they can easily navigate to an airport, identify potential hazards, take avoidance action, and/or obtain sufficient visual reference of the runway.
  • The navigation data from a synthetic vision system (SVS) database is generated by many sources including, but not limited to, a differential global positioning system (DGPS), an inertial reference system (IRS), an attitude-heading reference system (AHRS), and satellite and ground based devices (e.g., Instrument Landing Systems (ILS) and Microwave Landing Systems (MLS)). SVS modeling is advancing toward improving situational awareness in support of pilots' ability to navigate in all conditions by providing information such as pitch, roll, yaw, lateral and vertical deviation, barometric altitudes, and global positioning with runway heading, position, and dimensions. However, under low visibility conditions the pilot may not be able to visually verify the correctness of the navigation data and the SVS database. Because SVS data is based on archived information (captured earlier than the time of the flight), the data can be invalidated by changes to the scene, and thus some cues may be missing from the actual view. In addition, navigation data cannot be used to navigate the aircraft around obstacles on or near a runway because SVS models do not provide real-time information about obstacles. Moreover, only a limited number of runways are equipped with devices that provide navigation attributes with the accuracy required to safely make low approaches, and such high-end equipment (e.g., ILS) is costly and is not available at all airports or on all runways at a particular airport.
  • Hamza et al., in 'Augmented Vision Perception in Infrared: Algorithms and Applied Systems', Springer-Verlag, London, 2009, pages 243 to 267, disclose a method for runway positioning and moving-object detection prior to landing of an aircraft using an enhanced vision system.
  • US 6157876 discloses a method and apparatus for navigating an aircraft from an image of a runway. The method includes detecting edges in the image, the edges being correlated to determine a location of the runway within the image.
  • The article "Autonomous infrared-based guidance system for approach and landing" by H.-U. Doehler et al, Enhanced and Synthetic Vision 2004, SPIE discloses DLR's recent development of a considerable robust and reliable method to estimate the relative position of an aircraft with respect to a runway based on camera images only.
  • Accordingly, there is a need to analyze real-time sensor imagery and fuse sensor data with SVS data to provide pilots with additional features so that they can easily navigate to the airport, identify potential hazards, take avoidance action, and obtain sufficient visual reference of a runway in real-time. As such, various embodiments of the present invention are configured to provide visual cues from one or more sensor images to enable a safe landing approach and minimize the number of missed approaches during low visibility conditions.
  • BRIEF SUMMARY OF THE INVENTION
  • The present invention in its various aspects is as set out in the appended claims. Systems for determining whether a region of interest (ROI) includes a runway having a plurality of corners are provided. One system comprises a camera configured to capture an image of the ROI, an analysis module coupled to the camera and configured to generate a binary large object (BLOB) of at least a portion of the ROI, and a synthetic vision system (SVS) including a template of the runway. The system further comprises a segmentation module coupled to the analysis module and the SVS. The segmentation module is configured to determine if the ROI includes the runway based on a comparison of the template and the BLOB.
  • Various embodiments also provide methods for determining whether a binary large object (BLOB) represents a runway having a plurality of corners. One method comprises the steps of identifying a position for each corner on the BLOB and forming a polygon on the BLOB based on the position of each corner. The method further comprises the step of determining that the BLOB represents the runway based on a comparison of a template of the runway and the polygon.
  • Also provided are computer-readable mediums including instructions that, when executed by a processor, cause the processor to perform a method for determining whether a binary large object (BLOB) represents a runway having a plurality of corners. One computer-readable medium includes instructions for a method comprising the steps of identifying a position for each corner on the BLOB and forming a polygon on the BLOB based on the position of each corner. The method further comprises the step of determining that the BLOB represents the runway based on a comparison of a template of the runway and the polygon.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and
  • FIG. 1 is a block diagram of one embodiment of a system for segmenting a runway;
  • FIG. 2 is a diagram illustrating dynamic range calibration for image enhancement;
  • FIG. 3 is a block diagram of one embodiment of a validation algorithm included in the image enhancement section included in the system of FIG. 1;
  • FIG. 4 is a diagram illustrating an embodiment of a binary large object (BLOB) that represents a runway generated by the system of FIG. 1 from a captured image;
  • FIG. 5 is a block diagram illustrating one embodiment of a profile filter sector included within the system of FIG. 1;
  • FIG. 6 is a block diagram illustrating one embodiment of a corner-based segmentation algorithm included within the system of FIG. 1;
  • FIG. 7 is a diagram illustrating one embodiment of a Freeman chain code; and
  • FIG. 8 is a diagram illustrating one embodiment of a consistency matrix.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The following detailed description is merely exemplary in nature and is not intended to limit the invention, the application, and/or uses of the invention. Furthermore, there is no intention to be bound by any theory presented in the preceding background or the following detailed description.
  • An apparatus is provided for locating a runway by detecting the runway coordinates and edges within data representing a region of interest (ROI) provided by a synthetic vision sensor. The following aspects of the invention are described in conjunction with the pictorial illustrations and block diagrams, and those skilled in the art will appreciate that the scope of the invention is not limited to and/or by these illustrations. Modifications in the selection, design, and/or arrangement of the various components and steps discussed in what follows may be made without departing from the intended scope of the invention.
  • Turning now to the figures, FIG. 1 is a block diagram of one embodiment of a system 100 for segmenting a runway. At least in the illustrated embodiment, system 100 comprises a camera 110, a synthetic vision system (SVS) database 120, an analysis module 130, a correlation filter 140, a segmentation module 150, and a display 160 coupled to one another via a bus 170 (e.g., a wired and/or wireless bus).
  • Camera 110 may be any system, device, hardware (and software), or combination thereof capable of capturing an image of a region of interest (ROI) within an environment. In one embodiment, camera 110 is a forward-looking camera mounted on a vehicle (e.g., an aircraft, a spacecraft, etc.) with a predefined field of view that overlaps the same scene being surveyed by navigation data stored in SVS database 120.
  • SVS database 120 is configured to store navigational data 1210 representing one or more regions of interest (ROI) including a target (e.g., a runway, a landing strip, a landing pad, and the like) that is present in the environment being surveyed by camera 110. SVS database 120 is further configured to provide the portion of navigational data 1210 related to an ROI where a runway (i.e., a target) is presumably present in the forward vicinity of the present location of system 100.
  • SVS database 120 is also configured to store imagery data 1220 (e.g., target templates or target images) that mimics the corresponding real-world view of each target. That is, imagery data 1220 contains one or more target templates that illustrate how a target should look from one or more visual perspectives (e.g., one or more viewing angles, one or more viewing distances, etc.). SVS database 120 is further configured to provide imagery data 1220 (i.e., a template) representing the one or more visual perspectives of the target.
  • To provide navigational data 1210 and imagery data 1220, SVS database 120 is configured to sort through various sources of information (not shown) related to airports, ranging, and other similar information for the present ROI. Specifically, SVS database 120 is configured to obtain the present location of system 100 from an external source (e.g., a global positioning system (GPS)) and retrieve the corresponding portions of navigational data 1210 and imagery data 1220 (e.g., runway template) for the ROI related to the present location of system 100. Once the corresponding portions of navigational data 1210 and imagery data 1220 are retrieved, the corresponding portions of navigational data 1210 and imagery data 1220, along with the images captured by camera 110, are transmitted to analysis module 130.
  • Analysis module 130, at least in the illustrated embodiment, includes an image enhancement sector 1310, an adaptive threshold sector 1320, and a profile filter sector 1330. Since the images captured by camera 110 may contain noise (e.g., flicker noise, electronic noise, coding artifacts, quantization artifacts during digitization, etc.) that can influence the accuracy of the runway segmentation, image enhancement sector 1310 comprises one or more filters (e.g., a median filter, a zero-mean Gaussian filter, etc.) to filter the noise from the captured images.
  • More advanced filters applying an edge-preserving smoothing algorithm are available in the art and various embodiments of the invention contemplate such advanced edge-preserving smoothing algorithms. For example, one filter may use multi-peak histogram equalization where mid-nodes are locally determined and the affected regions (i.e. regions with contracted contrast) are substituted with generalized histogram intensities. Another filter may use a Kuwahara filter where a square systematic neighborhood is divided into four overlapping windows, with each window containing a central pixel that is replaced by the mean of the most homogeneous window (i.e., the window with the least standard deviation).
  • After filtering, image enhancement sector 1310 is configured to enhance the contrast of the captured ROI to bring uniformity into the analysis. In one embodiment, image enhancement sector 1310 is configured to "stretch" the contrast (also known as "contrast normalization") to improve the captured image. That is, the contrast in a captured image is improved by "stretching" the range of intensity values within the predefined ROI. For example, given the image I(x,y), the contrast-corrected image G(x,y) is defined as follows:
    G(x,y) = D_min + [(min(I(x,y), I_M) - max(I(x,y), I_m)) / (I_M - I_m)] · (D_max - D_min),
    where D_max and D_min are the desired limits for operation (e.g., assuming 8-bit resolution, the values are 255 and 0) and I_M and I_m are the maximum and minimum gray values of the image I(x,y), excluding outliers located at the tails of the image intensity distribution. These outliers are removed to limit their side effect on the choice of the desired contrast range.
  • For example, FIG. 2 shows one example of an output of image enhancement sector 1310 (via display 160). Portion 210 of FIG. 2 illustrates the choice of outliers and images 215, 220 illustrate the contrast enhancement, image 215 being the original image and image 220 being the filtered image. Portion 230 of FIG. 2 depicts a curve that represents the cumulative distribution of a normalized histogram. The data representing the ROI is enhanced by normalizing the contrast locally around the runway and its surroundings within the ROI prior to analysis.
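As a concrete illustration of the dynamic range calibration described above, the following sketch clips the outlier tails of the intensity distribution and linearly maps the remaining range onto the desired output limits. It is a minimal example under assumed conventions (8-bit output, fixed percentile cut-offs for picking I_m and I_M), not the exact patented implementation.

```python
import numpy as np

def stretch_contrast(roi, d_min=0, d_max=255, low_pct=1.0, high_pct=99.0):
    """Contrast-stretch an ROI while excluding outliers at the histogram tails.

    roi               : 2D array of gray values I(x, y)
    d_min, d_max      : desired output limits D_min and D_max (0 and 255 for 8 bits)
    low_pct, high_pct : percentiles used to estimate I_m and I_M without outliers
    """
    i_m, i_M = np.percentile(roi, [low_pct, high_pct])   # robust min/max gray values
    if i_M <= i_m:                                       # degenerate (flat) ROI
        return np.full(roi.shape, d_min, dtype=np.uint8)
    clipped = np.clip(roi, i_m, i_M)                     # limit the outliers' influence
    stretched = d_min + (clipped - i_m) / (i_M - i_m) * (d_max - d_min)
    return stretched.astype(np.uint8)
```

A call such as `stretch_contrast(roi)` would then hand the normalized ROI to the adaptive threshold sector.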
  • In addition to the target runway, the captured images may include extraneous objects (e.g., a secondary runway, a taxiway, a mountainous structure beside the airport, and/or any other structure with similar sensing characteristics within the associated IR wavelength). To reduce the potential impact that extraneous objects may have on the analysis, profile filter sector 1330 is configured to process the captured images to generate a binary image of the ROI including a binary large object (BLOB) that can be analyzed for the presence of a runway.
  • Profile filter sector 1330 is configured to generate a BLOB and to separate the runway from the rest of the background. To accomplish such separation, profile filter sector 1330 uses the following cues for background-foreground segmentation:
    (1) The proportion of template surface area to ROI surface area is approximated;
    (2) The runway BLOB center of mass should be closest to the template geometric center; and
    (3) The target runway structure should be similar in size to the template structure. Each of these three cues will now be discussed for clarification.
  • To approximate the proportionality of the template and ROI surface areas, profile filter sector 1330 is configured to estimate a probability density function of the intensity data representing the ROI in order to segment the foreground pixels from the background pixels. A histogram estimator is then used to approximate the density function so that the runway BLOB is represented by a grouping of pixels in the histogram that makes up a certain percentage of the overall ROI data above the adaptive threshold. The percentage of the BLOB is determined from synthetic runway proportion estimates from data sets based upon the positioning of system 100 with respect to the runway coordinates and orientation perspective.
  • A validation algorithm 300 (see FIG. 3) then determines whether a runway exists within the predefined ROI by applying one or more of the following validation checks:
    (1) Measuring the offset of the center of mass of the runway BLOB with respect to the template;
    (2) Verifying the actual size of the estimated BLOB with respect to the ROI and template; and
    (3) Verifying the shape of the BLOB. Failure to validate any one of these measures results in a report of no target within the specified region.
  • The threshold required to segment foreground pixels from background pixels is derived based upon the analysis of the cumulative histogram of intensities and its distribution within the ROI. First, the normalized area that the background region occupies within the ROI is estimated as:
    S_B = 1 - (α · S_p) / (ω · h),
    where ω and h represent the width and height of the associated ROI, respectively, and S_p is the area within the specified template polygon (block 310). The template size is scaled by α to account for variations in range, orientation, and synthetic noise margin.
  • Once the background area is estimated, the corresponding intensity level at which the cumulative histogram value equals the background area will be used as a cut-off value for thresholding (block 320). If H(g) = ∂F/∂g represents the normalized histogram of the ROI image (i.e., ∫ H(g) dg = 1) and F(g) represents the cumulative distribution function of H, then the threshold λ can be derived as: F(λ) = S_B, i.e., λ = F⁻¹(S_B).
  • Portion 230 of FIG. 2 illustrates the threshold selection from the cumulative distribution function curve. Based on the estimated threshold, a one-step binarization procedure is performed to obtain the initial binary image:
    B(x,y) = 0 if I(x,y) < λ, and B(x,y) = 1 otherwise.
    The resulting binary image is further processed using morphological operations to reduce speckle noise, to fill in for missing pixels, and/or to remove eroded outliers (block 325).
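The threshold selection and binarization above can be sketched in a few lines of code. The snippet below is a minimal illustration under assumed inputs (the enhanced ROI array, the template polygon area S_p, and the scale factor α), not the patented implementation; it uses SciPy's morphological operators for the clean-up step.

```python
import numpy as np
from scipy import ndimage

def binarize_roi(roi, template_area, alpha=1.0, bins=256):
    """Threshold an ROI so the foreground area roughly matches the template area S_p."""
    h, w = roi.shape
    s_b = 1.0 - (alpha * template_area) / (w * h)        # background fraction S_B

    # Cumulative distribution F(g) of the normalized intensity histogram H(g).
    hist, edges = np.histogram(roi.ravel(), bins=bins)
    cdf = np.cumsum(hist) / hist.sum()

    # Threshold lambda = F^-1(S_B): first gray level whose CDF reaches S_B.
    idx = int(np.searchsorted(cdf, s_b))
    lam = edges[min(idx, bins - 1)]

    # One-step binarization: B = 1 where I >= lambda, 0 otherwise.
    binary = roi >= lam

    # Morphological clean-up: remove speckle noise and fill small gaps (block 325).
    binary = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    binary = ndimage.binary_closing(binary, structure=np.ones((3, 3)))
    return binary.astype(np.uint8), lam
```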
  • Often, the binary image comprises more than one BLOB, possibly because the ROI covers more than one runway or a runway and a taxiway. To pick the target runway, the BLOB having a mass center that is closest to the template centroid is selected.
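When several BLOBs survive thresholding, the selection rule just described can be implemented with a connected-component labeling pass; the sketch below (using SciPy labeling, an assumed choice) keeps the component whose center of mass lies closest to the template centroid.

```python
import numpy as np
from scipy import ndimage

def select_runway_blob(binary, template_centroid):
    """Keep only the BLOB whose center of mass is closest to the template centroid.

    binary            : binary ROI image, possibly containing several BLOBs
    template_centroid : (row, col) centroid of the synthetic runway template
    """
    labels, n = ndimage.label(binary)
    if n == 0:
        return None                                   # no candidate BLOB at all
    centers = ndimage.center_of_mass(binary, labels, index=range(1, n + 1))
    dists = [np.hypot(r - template_centroid[0], c - template_centroid[1])
             for r, c in centers]
    best = int(np.argmin(dists)) + 1                  # component labels start at 1
    return (labels == best).astype(np.uint8)
```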
  • To compensate for inconsistent runway scenes (e.g., texture-less surfaces versus stripe-painting on some runway areas, lighting conditions, and the like), profile filter sector 1330, in one embodiment, performs an iterative binarization procedure with varied cut-off values until desired characteristics for the BLOB are obtained. Upon completion of initial thresholding, the surface area of the BLOB is measured and the threshold is fine-tuned again to match the size of the predicted template. That is, the iteration is continued until |Ŝ_p - S_p| reaches a minimum value, as shown in FIG. 3.
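The iterative refinement can be viewed as a small search over the cut-off value: nudge the threshold until the measured foreground area is as close as possible to the predicted template area S_p. The step size and search range below are illustrative assumptions, and the total foreground count is used as a simple proxy for the BLOB area.

```python
import numpy as np

def refine_threshold(roi, lam0, template_area, steps=20, delta=2.0):
    """Vary the cut-off value around lam0 to minimize |S_p_hat - S_p|.

    lam0          : initial threshold from the cumulative-histogram estimate
    template_area : predicted template area S_p in pixels
    delta         : gray-level step used to vary the cut-off value
    """
    best_lam, best_err = lam0, float("inf")
    for i in range(-steps, steps + 1):
        lam = lam0 + i * delta
        area = int((roi >= lam).sum())       # foreground area at this cut-off (S_p_hat proxy)
        err = abs(area - template_area)      # |S_p_hat - S_p|
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```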
  • It has been observed that in certain scenarios roads with a structure similar to a runway can pose a problem in identifying the actual runway. The difficulty arises due to the fact that these non-runway regions span the thermal profile of the runway and, hence, non-runway regions are mapped to contrast levels that are equal to, or nearly equal to, that of the runway region. This artifact is depicted in FIG. 4 where a series of lamp poles may be confused with the actual lighting signals located at the runway edges. FIG. 4 also depicts that, in addition to the runway region, there exists a road along one side of the runway. When processed, these non-runway structures near the target runway, in effect, may extend the boundaries of the runway BLOB 420. The impact of this noise extends the BLOB beyond the runway boundaries to include extraneous regions within the scene.
  • To further reduce the ambiguity associated with segmentation of the actual runway boundaries from those of the secondary driveways and taxiways, system 100 uses correlation filter 140 to perform one or more filtering processes. The one or more filtering processes predict the expected target orientation and shape based upon the dynamics of the system and how the system is navigating (e.g., a Kalman filter approach or an approach using a like filter). Specifically, correlation filter 140 is configured to preserve those binary pixels associated with the perspective shape of a runway presented by the runway template and its perspective orientation projection into the 2D imagery plane. For example, based on the perspective field of view of an aircraft as it is approaching the runway, it is true that the runway width (measured as pixel distance between left edge and right edge of runway) monotonically increases as the top edge of the runway (i.e., the side of the runway farthest from the aircraft) is traversed to the bottom edge of the runway (i.e., the side of the runway closest to the aircraft). FIG. 5 illustrates the operation of one embodiment of profile filter sector 1330.
  • In one embodiment, the process for distinguishing between the runway features and non-runway features begins by determining the center of mass of the BLOB to centralize the template on top of the BLOB (block 510). The major axis of the template splits the BLOB into two parts (e.g., left edges and right edges) (block 520). A least-squares estimate is used to identify both sides (e.g., right and left sides) of the BLOB that fit the runway profile (blocks 530, 540). After the two sides have been identified, the top and bottom corners of the BLOB are connected to form a polygon (block 550). BLOB profile points that are outside the polygon are considered outlying points and are ignored. The resulting polygon (e.g., a quadrilateral) is a pre-estimate for the runway edges that encloses the boundaries of the true BLOB.
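A rough sketch of blocks 510 through 550 follows: find the BLOB extent, split the edge pixels into left and right sets about the (assumed near-vertical) major axis, fit a line to each side by least squares, and connect the top and bottom intersections into a quadrilateral. The splitting rule and the use of np.polyfit are simplifying assumptions.

```python
import numpy as np

def fit_runway_polygon(blob):
    """Fit left/right runway edges to a BLOB and return four polygon corners.

    Returns corners as (row, col) pairs ordered top-left, top-right,
    bottom-right, bottom-left, or None if the BLOB is too small to fit.
    """
    rows, cols = np.nonzero(blob)
    if rows.size == 0:
        return None
    center_col = cols.mean()                              # block 510: centroid column
    top, bottom = rows.min(), rows.max()

    def edge_points(side):
        pts = []
        for r in np.unique(rows):                         # outermost pixel per row
            cs = cols[rows == r]
            cs = cs[cs <= center_col] if side == "left" else cs[cs >= center_col]
            if cs.size:
                pts.append((r, cs.min() if side == "left" else cs.max()))
        return np.array(pts, dtype=float)

    lines = []
    for side in ("left", "right"):                        # blocks 520-540: per-side fit
        pts = edge_points(side)
        if len(pts) < 2:
            return None
        slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)   # col = slope*row + b
        lines.append(((top, slope * top + intercept),
                      (bottom, slope * bottom + intercept)))
    (tl, bl), (tr, br) = lines                            # block 550: connect corners
    return [tl, tr, br, bl]
```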
  • Segmentation module 150 is configured to determine the appropriate boundaries for the polygon so that the polygon can be compared to the template. Determining the appropriate boundaries for the polygon for comparison to the template is important in effectively removing outliers from the BLOB profile. To overcome problems associated with non-uniform noise variations, the tolerance interval (or error range) is set as a function of the ranging perspective within the captured image. For example, a relatively large error margin is preferred for effective filtering of outliers near the top edge, while a relatively small error margin is required near the bottom edge so as to accommodate effects of pixel spillage since infrared image characteristics and other shape irregularities are typically seen near the bottom edge.
  • In general, a contour includes both vertices (i.e., discontinuous edge points) and continuous edge points. Because discontinuous vertices define the limits of a contour, discontinuous vertices are the dominant points in fitting a model into a boundary. Accordingly, the corners of the BLOB are used to simplify the process of fitting a model to runway boundaries. Runway segmentation using a corner-based segmentation algorithm 600 stored in segmentation module 150 is shown in FIG. 6.
  • From the detected BLOB, a contour is extracted by applying a Freeman chain code 700 to the BLOB (see FIG. 7). That is, the BLOB is initially scanned to obtain a topmost pixel, which is used as the starting point for contour tracing. Starting with this pixel and moving in, for example, a clockwise direction, each pixel is assigned a code based upon its direction from the previous pixel.
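The contour extraction can be sketched with a simple 8-connected border-following loop that emits Freeman codes (0 through 7) while walking the boundary clockwise from the topmost pixel. This is an illustrative, simplified tracer (it stops when it returns to the start pixel), not the exact implementation.

```python
import numpy as np

# Freeman 8-direction offsets, indexed by chain code (clockwise, starting east).
DIRS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def freeman_chain_code(blob, max_steps=100000):
    """Trace the outer contour of a BLOB; return (boundary points, chain codes)."""
    rows, cols = np.nonzero(blob)
    if rows.size == 0:
        return [], []
    start = (int(rows.min()), int(cols[rows == rows.min()].min()))  # topmost pixel
    contour, codes = [start], []
    current, prev_dir = start, 0
    for _ in range(max_steps):
        found = False
        # Scan the 8 neighbours clockwise, starting just past the backtrack direction.
        for k in range(8):
            d = (prev_dir + 5 + k) % 8
            r, c = current[0] + DIRS[d][0], current[1] + DIRS[d][1]
            if 0 <= r < blob.shape[0] and 0 <= c < blob.shape[1] and blob[r, c]:
                codes.append(d)
                contour.append((r, c))
                current, prev_dir = (r, c), d
                found = True
                break
        if not found or current == start:   # isolated pixel, or the loop has closed
            break
    return contour, codes
```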
  • For example and with reference again to FIG. 6, let the sequence of n digital points describe a closed boundary curve of a BLOB, B = {P_i(x, y); i = 1, ..., n}. To extract the corner points, a region of support covering both sides of a boundary pixel is defined. To qualify a boundary pixel as a corner point, a measure called a cornerity index, based on statistical and geometrical properties of the pixel, is defined (block 610). That is, let S_k(p_i) = {p_j : j = i-k, ..., i, ..., i+k} denote a small curve segment of B called the region of support of the point p_i, which is the center of the segment. The mass center point of the segment is given by the following equation and is used to measure the shift of each edge point:
    P̄_i = (x̄_i, ȳ_i), where x̄_i = (1/(2k+1)) Σ_{j=i-k}^{i+k} x_j and ȳ_i = (1/(2k+1)) Σ_{j=i-k}^{i+k} y_j.
  • As such, a corner point may be defined as any boundary point whose region-of-support midpoint has a larger shift when compared to other points on the boundary curve. Therefore, the cornerity index of P_i is defined to be the Euclidean distance
    d = √((x_i - x̄_i)² + (y_i - ȳ_i)²)
    between the mid-point P_i and its region-of-support mass center point P̄_i. The cornerity index indicates the prominence of a corner point (i.e., the larger the value of the cornerity index of a boundary point, the stronger the evidence that the boundary point is a corner). The computed cornerity indices are subjected to thresholding such that only boundary pixels with a strong cornerity index are retained. The threshold is derived as a function of the region-of-support length to be at least 3k/(10√2). (This limit is derived based upon a corner whose angle varies within 0 ≤ φ ≤ 3π/4, and a region of support of length 5 (i.e., k = 2).) The length of the region of support is selected as k = min(5, ω_R/10), where ω_R is the minimum expected runway width at all acquired ranges.
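Given the ordered boundary points returned by the chain-code tracer, the cornerity index of each point follows directly from the definitions above: the Euclidean distance between a boundary point and the mass center of its region of support. The default threshold below mirrors one reading of the text's limit and should be treated as an assumption.

```python
import numpy as np

def cornerity_indices(boundary, k):
    """Compute the cornerity index for every point on a closed boundary.

    boundary : sequence of (row, col) boundary points in traversal order
    k        : half-length of the region of support (segment length 2k + 1)
    """
    pts = np.asarray(boundary, dtype=float)
    n = len(pts)
    indices = np.empty(n)
    for i in range(n):
        # Region of support S_k(p_i): points p_{i-k} ... p_{i+k}, wrapping around.
        window = pts[np.arange(i - k, i + k + 1) % n]
        center = window.mean(axis=0)                  # mass center P_bar_i
        indices[i] = np.hypot(*(pts[i] - center))     # d = ||P_i - P_bar_i||
    return indices

def detect_corners(boundary, k, gamma=None):
    """Return indices of boundary points whose cornerity index exceeds gamma."""
    d = cornerity_indices(boundary, k)
    if gamma is None:
        gamma = 3.0 * k / (10.0 * np.sqrt(2.0))       # assumed reading of the limit
    return np.nonzero(d >= gamma)[0]
```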
  • To obtain the four corners of the runway, the detected corner points are partitioned into four quadrants formed by the major and minor axes of the BLOB (block 620). The slope of the BLOB major axis is assumed to be the same as that of the template, and the BLOB's geometric centroid serves as the origin.
  • Once the detected corner points are partitioned into four groups, the next step is to form a polygon such that the polygon has maximum BLOB area coverage. The process of finding such a polygon is divided into a two-step process in which the points in the left quadrants are analyzed separately from those in the right quadrants. The basic step includes populating a consistency matrix with corner candidates whose slopes, taken over the selected four corners, are within a certain margin based on the template estimates from the navigation data (see consistency matrix 810 in FIG. 8). The selection of the corners is done in pairs by matching template slopes using two corner pairs on the left side and two corner pairs on the right side (block 630). Next, a quadrilateral polygon with four points is selected from the consistency matrix such that the points have a best-fit measure with respect to the runway edges (block 640).
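A compact sketch of the quadrant partition and the consistency-matrix pairing follows: candidate corners are split into the four quadrants about the BLOB centroid, and for the left and right sides separately the upper/lower corner pair whose connecting slope best matches the corresponding template edge slope is retained. The slope convention, tolerance value, and scoring are illustrative assumptions.

```python
import numpy as np

def select_polygon_corners(corners, centroid, template_slopes, tol=0.15):
    """Pick one corner per quadrant so the left/right edges best match the template.

    corners         : array (m, 2) of candidate corner points (row, col)
    centroid        : BLOB geometric centroid (row, col), used as the axes origin
    template_slopes : (left_slope, right_slope) of the template edges, as d(col)/d(row)
    tol             : allowed slope mismatch (the "certain margin")
    """
    pts = np.asarray(corners, dtype=float)
    rel = pts - np.asarray(centroid, dtype=float)
    # Quadrants: upper/lower by row sign, left/right by column sign (major axis ~ vertical).
    quads = {
        "ul": pts[(rel[:, 0] < 0) & (rel[:, 1] < 0)],
        "ll": pts[(rel[:, 0] >= 0) & (rel[:, 1] < 0)],
        "ur": pts[(rel[:, 0] < 0) & (rel[:, 1] >= 0)],
        "lr": pts[(rel[:, 0] >= 0) & (rel[:, 1] >= 0)],
    }
    if any(len(q) == 0 for q in quads.values()):
        return None                       # a quadrant is empty: adjust k and retry upstream

    def best_pair(upper, lower, target):
        best, best_err = None, tol
        for u in upper:                   # consistency-matrix style pairing of candidates
            for l in lower:
                if l[0] == u[0]:
                    continue              # degenerate pair (same row)
                slope = (l[1] - u[1]) / (l[0] - u[0])
                err = abs(slope - target)
                if err <= best_err:
                    best, best_err = (tuple(u), tuple(l)), err
        return best

    left = best_pair(quads["ul"], quads["ll"], template_slopes[0])
    right = best_pair(quads["ur"], quads["lr"], template_slopes[1])
    if left is None or right is None:
        return None                       # no corner pair matches the template slopes
    return [left[0], right[0], right[1], left[1]]     # TL, TR, BR, BL
```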
  • The corner detection estimates are sensitive to the size of the support region. While too large a value for the support region will smooth out fine corner points, a small value will generate a large number of unwanted corner points. If there are no corner points in any one of the quadrants, the process is reiterated, adjusting the length of the region of support until convergence. After a predefined number of iterations, if the process does not converge to produce at least one corner in each quadrant, a "no runway" (i.e., "no target") report is generated.
  • While at least one exemplary embodiment has been presented in the foregoing detailed description of the invention, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the invention in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing an exemplary embodiment of the invention. It being understood that various changes may be made in the function and arrangement of elements described in an exemplary embodiment without departing from the scope of the invention as set forth in the appended claims.

Claims (5)

  1. A system for determining whether a region of interest, ROI, includes a runway having a plurality of corners, comprising:
    a camera configured to capture an image of the ROI; and
    an analysis module coupled to the camera and configured to generate a binary large object, BLOB, of at least a portion of the ROI,
    the BLOB being created by a profile filter sector 1330 configured to generate a BLOB and to separate the runway from background,
    wherein the profile filter sector operates so that the following conditions are met:
    (i) the proportion of template surface area to ROI surface area is approximated;
    (ii) the runway BLOB center of mass should be closest to the template geometric center; and
    (iii) the target runway structure should be similar in size to the template structure;
    a synthetic vision system, SVS, including a database containing a target image template of the runway wherein said image template comprises data and mimics a corresponding real-world view of the runway; and
    said image data contains one or more runway templates that illustrate how the runway should look from one or more visual perspectives; and
    a segmentation module coupled to the analysis module and the SVS, the segmentation module configured to determine if the ROI includes the runway based on a comparison of the target image template and the BLOB.
  2. The system of claim 1, wherein the analysis module is further configured to:
    identify a position for each corner on the BLOB and form a polygon on the BLOB based on the position of each corner; and
    divide the BLOB into a left portion containing a first upper section and a first lower section, and a right portion containing a second upper section and a second lower section.
  3. The system of claim 2, wherein the segmentation module is configured to:
    identify a first slope from a corner in the first upper section to a corner in the first lower section, a second slope from a corner in the second upper section to a corner in the second lower section, or both;
    compare the first slope with a third slope on a left side of the target image template, comparing the second slope with a fourth slope on a right side of the target image template, or both; and
    determine that the BLOB represents the runway if the first slope matches the third slope within a predetermined error, if the second slope matches the fourth slope within the predetermined error, or both.
  4. The system of claim 1, wherein the analysis module, in generating the BLOB, is configured to:
    stretch a contrast of the captured image to create a range of intensity values in the captured image;
    filter intensity values that are outside of the range of intensity values; and
    determine a shape of the BLOB based on the target image template.
  5. The system of claim 4, wherein the segmentation module is further configured to:
    determine a center of mass of the BLOB based on the target image template;
    adjust one or more boundaries of the target image template; and
    filter pixels in the BLOB that lie outside of the adjusted one or more boundaries of the target image template.
EP09152623A 2008-02-27 2009-02-11 Methods and apparatus for runway segmentation using sensor analysis Active EP2096602B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US3190008P 2008-02-27 2008-02-27
US12/336,976 US7826666B2 (en) 2008-02-27 2008-12-17 Methods and apparatus for runway segmentation using sensor analysis

Publications (2)

Publication Number Publication Date
EP2096602A1 EP2096602A1 (en) 2009-09-02
EP2096602B1 true EP2096602B1 (en) 2012-03-21

Family

ID=40578584

Family Applications (1)

Application Number Title Priority Date Filing Date
EP09152623A Active EP2096602B1 (en) 2008-02-27 2009-02-11 Methods and apparatus for runway segmentation using sensor analysis

Country Status (2)

Country Link
US (1) US7826666B2 (en)
EP (1) EP2096602B1 (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8385672B2 (en) * 2007-05-01 2013-02-26 Pictometry International Corp. System for detecting image abnormalities
US9262818B2 (en) 2007-05-01 2016-02-16 Pictometry International Corp. System for detecting image abnormalities
FR2924831B1 (en) * 2007-12-11 2010-11-19 Airbus France METHOD AND DEVICE FOR GENERATING A LACET SPEED ORDER FOR A GROUNDING AIRCRAFT
US8457812B2 (en) * 2008-06-20 2013-06-04 David Zammit-Mangion Method and system for resolving traffic conflicts in take-off and landing
US9105115B2 (en) * 2010-03-16 2015-08-11 Honeywell International Inc. Display systems and methods for displaying enhanced vision and synthetic images
US8914166B2 (en) 2010-08-03 2014-12-16 Honeywell International Inc. Enhanced flight vision system for enhancing approach runway signatures
US9726486B1 (en) * 2011-07-29 2017-08-08 Rockwell Collins, Inc. System and method for merging enhanced vision data with a synthetic vision data
GB201118694D0 (en) 2011-10-28 2011-12-14 Bae Systems Plc Identification and analysis of aircraft landing sites
US8744763B2 (en) * 2011-11-17 2014-06-03 Honeywell International Inc. Using structured light to update inertial navigation systems
US9165366B2 (en) 2012-01-19 2015-10-20 Honeywell International Inc. System and method for detecting and displaying airport approach lights
US9797981B2 (en) * 2012-03-06 2017-10-24 Nissan Motor Co., Ltd. Moving-object position/attitude estimation apparatus and moving-object position/attitude estimation method
EP2986940A4 (en) * 2013-04-16 2017-04-05 Bae Systems Australia Limited Landing site tracker
AU2013245477A1 (en) * 2013-10-16 2015-04-30 Canon Kabushiki Kaisha Method, system and apparatus for determining a contour segment for an object in an image captured by a camera
ES2563098B1 (en) * 2015-06-15 2016-11-29 Davantis Technologies Sl IR image enhancement procedure based on scene information for video analysis
FR3049744B1 (en) 2016-04-01 2018-03-30 Thales METHOD FOR SYNTHETICALLY REPRESENTING ELEMENTS OF INTEREST IN A VISUALIZATION SYSTEM FOR AN AIRCRAFT
CN108364311B (en) * 2018-01-29 2020-08-25 深圳市亿图视觉自动化技术有限公司 Automatic positioning method for metal part and terminal equipment
US11026585B2 (en) 2018-06-05 2021-06-08 Synaptive Medical Inc. System and method for intraoperative video processing
EP3921808A4 (en) * 2019-02-20 2022-03-30 Samsung Electronics Co., Ltd. Apparatus and method for displaying contents on an augmented reality device
US11555703B2 (en) 2020-04-22 2023-01-17 Honeywell International Inc. Adaptive gaussian derivative sigma systems and methods
US11650057B2 (en) * 2020-04-22 2023-05-16 Honeywell International Inc. Edge detection via window grid normalization
CN111507287B (en) * 2020-04-22 2023-10-24 山东省国土测绘院 Method and system for extracting road zebra crossing corner points in aerial image
CN117191011A (en) * 2023-08-17 2023-12-08 北京自动化控制设备研究所 Target scale recovery method based on inertial/visual information fusion

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3830496C1 (en) * 1988-09-08 1996-09-19 Daimler Benz Aerospace Ag Target detection and tracking device for aircraft navigation
SE503679C2 (en) * 1994-11-18 1996-07-29 Lasse Karlsen Acoustic wind meter
US5719567A (en) * 1995-05-30 1998-02-17 Victor J. Norris, Jr. System for enhancing navigation and surveillance in low visibility conditions
US6157876A (en) * 1999-10-12 2000-12-05 Honeywell International Inc. Method and apparatus for navigating an aircraft from an image of the runway
US6606563B2 (en) * 2001-03-06 2003-08-12 Honeywell International Inc. Incursion alerting system
FR2835314B1 (en) * 2002-01-25 2004-04-30 Airbus France METHOD FOR GUIDING AN AIRCRAFT IN THE FINAL LANDING PHASE AND CORRESPONDING DEVICE
US7113779B1 (en) * 2004-01-08 2006-09-26 Iwao Fujisaki Carrier

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
KORN B AND HECKER P: "Pilot Assistance Systems: Enhanced and Synthetic Vision for Automatic Situation Assessment", HCI-02 AERO PROCEEDINGS, 2002, pages 186 - 191 *
FRIEDRICH A ET AL: "Airport databases for 3D synthetic-vision flight-guidance displays: database design, quality assessment, and data generation", PROCEEDINGS OF SPIE, SPIE, USA, vol. 3691, 1 April 1999 (1999-04-01), pages 108 - 115, XP002406244, ISSN: 0277-786X, DOI: 10.1117/12.354413 *
GURU D S ET AL: "Boundary based corner detection and localization using new 'cornerity' index: A robust approach", PROCEEDINGS - 1ST CANADIAN CONFERENCE ON COMPUTER AND ROBOT VISION - PROCEEDINGS - 1ST CANADIAN CONFERENCE ON COMPUTER AND ROBOT VISION 2004 IEEE COMPUTER SOCIETY US, 2004, pages 417 - 423 *
SHANG J AND SHI Z: "Vision-based Runway Recognition for UAV Autonomous Landing", IJCSNS INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, pages 112 - 117 *
MENG DING ET AL: "A Method to Recognize and Track Runway in the Image Sequences Based on Template Matching", SYSTEMS AND CONTROL IN AEROSPACE AND ASTRONAUTICS, 2006. ISSCAA 2006. 1ST INTERNATIONAL SYMPOSIUM ON HARBIN, CHINA 19-21 JAN. 2006, PISCATAWAY, NJ, USA,IEEE, 19 January 2006 (2006-01-19), pages 1218 - 1221, XP010914030, ISBN: 978-0-7803-9395-0, DOI: 10.1109/ISSCAA.2006.1627585 *
SCHIEFELE J ET AL: "World-wide precision airports for SVS", PROCEEDINGS OF SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING - ENHANCED AND SYNTHETIC VISION 2004, vol. 5424, no. 1, 2004, pages 1 - 10, ISSN: 0277-786X, DOI: 10.1117/12.539542 *

Also Published As

Publication number Publication date
US20090214080A1 (en) 2009-08-27
US7826666B2 (en) 2010-11-02
EP2096602A1 (en) 2009-09-02

Similar Documents

Publication Publication Date Title
EP2096602B1 (en) Methods and apparatus for runway segmentation using sensor analysis
US10290219B2 (en) Machine vision-based method and system for aircraft docking guidance and aircraft type identification
Kang et al. Pothole detection system using 2D LiDAR and camera
US10255520B2 (en) System and method for aircraft docking guidance and aircraft type identification
EP2228666B1 (en) Vision-based vehicle navigation system and method
Shen et al. A vision-based automatic safe landing-site detection system
US8537338B1 (en) Street curb and median detection using LIDAR data
EP2884305A1 (en) Semantics based safe landing area detection for an unmanned vehicle
EP2207010A1 (en) House change judgment method and house change judgment program
US20210158157A1 (en) Artificial neural network learning method and device for aircraft landing assistance
CN110197173B (en) Road edge detection method based on binocular vision
CN107808524B (en) Road intersection vehicle detection method based on unmanned aerial vehicle
Nagarani et al. Unmanned Aerial vehicle’s runway landing system with efficient target detection by using morphological fusion for military surveillance system
CN117677976A (en) Method for generating travelable region, mobile platform, and storage medium
Miraliakbari et al. Automatic extraction of road surface and curbstone edges from mobile laser scanning data
Theuma et al. An image processing algorithm for ground navigation of aircraft
Kniaz A fast recognition algorithm for detection of foreign 3d objects on a runway
Bulatov et al. Segmentation methods for detection of stationary vehicles in combined elevation and optical data
Tandra et al. Robust edge-detection algorithm for runway edge detection
CN112020722A (en) Road shoulder identification based on three-dimensional sensor data
US20230282001A1 (en) Adaptive feature extraction to detect letters and edges on vehicle landing surfaces
JP7402756B2 (en) Environmental map generation method and environmental map generation device
EP4239587A1 (en) Adaptive feature extraction to detect letters and edges on vehicle landing surfaces
CN111611942B (en) Method for extracting and building database by perspective self-adaptive lane skeleton
KR102540624B1 (en) Method for create map using aviation lidar and computer program recorded on record-medium for executing method therefor

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20090211

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA RS

R17P Request for examination filed (corrected)

Effective date: 20090211

17Q First examination report despatched

Effective date: 20091021

AKX Designation fees paid

Designated state(s): BE DE FR GB

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): BE DE FR GB

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602009005957

Country of ref document: DE

Effective date: 20120516

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20130102

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602009005957

Country of ref document: DE

Effective date: 20130102

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 8

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 9

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: BE

Payment date: 20200225

Year of fee payment: 12

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20210228

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20220222

Year of fee payment: 14

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210228

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230525

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20230211

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230211

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240228

Year of fee payment: 16

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20240226

Year of fee payment: 16