CN117649446A - Method for realizing docking ranging of corridor bridge of parking apron based on machine vision
- Publication number
- CN117649446A (application CN202311621450.8A)
- Authority
- CN
- China
- Prior art keywords
- edge
- pixel
- image
- bridge
- aircraft
- Prior art date
- Legal status: Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64F—GROUND OR AIRCRAFT-CARRIER-DECK INSTALLATIONS SPECIALLY ADAPTED FOR USE IN CONNECTION WITH AIRCRAFT; DESIGNING, MANUFACTURING, ASSEMBLING, CLEANING, MAINTAINING OR REPAIRING AIRCRAFT, NOT OTHERWISE PROVIDED FOR; HANDLING, TRANSPORTING, TESTING OR INSPECTING AIRCRAFT COMPONENTS, NOT OTHERWISE PROVIDED FOR
- B64F1/00—Ground or aircraft-carrier-deck installations
- B64F1/30—Ground or aircraft-carrier-deck installations for embarking or disembarking passengers
- B64F1/305—Bridges extending between terminal building and aircraft, e.g. telescopic, vertically adjustable
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/02—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B5/00—Measuring arrangements characterised by the use of mechanical techniques
- G01B5/02—Measuring arrangements characterised by the use of mechanical techniques for measuring length, width or thickness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B5/00—Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied
- G08B5/22—Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; using electromagnetic transmission
- G08B5/36—Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; using electromagnetic transmission using visible light sources
- G08B5/38—Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; using electromagnetic transmission using visible light sources using flashing light
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Abstract
The invention relates to a method for realizing the docking ranging of a parking apron corridor bridge based on machine vision, which comprises the following steps: (1) acquiring real-time image information of the airplane and the corridor bridge by using a camera installed on the corridor bridge, and converting the real-time image information from an RGB space image to an HSV space image; (2) selecting an ROI region according to a set threshold value of the HSV space image, determining the region where the airplane and the corridor bridge platform are located, and converting the HSV space image into a gray level image; (3) performing edge detection on the ROI in the gray level image; (4) detecting straight lines in the image obtained after edge detection, and screening and optimizing the detected straight lines; (5) calculating the pixel distance from the corridor bridge platform to the aircraft edge; (6) calculating the included angle between the corridor bridge platform and the aircraft edge, and eliminating abnormal data in the included angle; (7) drawing the detected straight lines at the corresponding positions of the image, and issuing corresponding alarm reminders according to the detected distance and angle.
Description
Technical Field
The invention relates to the technical field of machine vision, in particular to the technical field of image processing, and particularly relates to a method for realizing parking apron corridor bridge docking distance measurement based on machine vision.
Background
The apron corridor bridge, also called a boarding bridge (airport boarding bridge), is an important piece of airport equipment and is widely used at airports around the world. It connects the airplane to the terminal building, so that passengers can walk directly into or out of the airplane from the bridge cabin, which is convenient and fast. Whatever the weather conditions and temperature, the corridor bridge protects passengers from wind, sun and rain. However, during docking with an aircraft, the apron corridor bridge often suffers from scratches and inaccurate docking, which inconveniences passengers and causes economic losses to airlines. Therefore, reminding the operator when the corridor bridge is too close or the angular deviation is too large, ensuring high-precision docking of the apron corridor bridge with the airplane, avoiding scratches and similar problems, and ensuring that passengers can board smoothly is one of the important requirements for aviation safety.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, and provides a method for realizing the docking ranging of the apron bridge of the parking apron based on machine vision, which has the advantages of low cost, high efficiency and strong stability.
In order to achieve the above purpose, the method for realizing the docking ranging of the apron corridor bridge based on machine vision comprises the following steps:
(1) Acquiring real-time image information of an airplane and a gallery by using a camera installed on the gallery, and converting the real-time image information from an RGB space image to an HSV space image;
(2) Selecting an ROI region according to a set threshold value of the HSV space image, determining the region where the plane and the corridor bridge platform are located, and converting the HSV space image into a gray level image;
(3) Performing edge detection on the ROI in the gray level diagram;
(4) Performing linear detection on the image obtained after edge detection, screening and optimizing the detected lines, and selecting the most suitable line as the edge of the aircraft and the edge of the gallery bridge platform;
(5) Calculating the pixel distance from the gallery bridge platform to the edge of the aircraft, and obtaining the actual distance between the gallery bridge platform and the edge of the aircraft by using the pixel distance;
(6) Calculating an included angle between the gallery bridge platform and the edge of the aircraft, and eliminating abnormal data existing in the included angle;
(7) Drawing the detected straight line on the corresponding position of the image, and carrying out corresponding alarm reminding according to the detected distance and angle.
Preferably, the step (1) specifically includes:
the camera is arranged at the central position of the entrance of the corridor bridge to collect real-time image information of the aircraft and the corridor bridge, and the acquired RGB space image is converted into an HSV space image according to the following mode:
when the maximum and minimum RGB components are equal, i.e. r=g=b, then the hue H=0;
when the maximum RGB component is r, then the hue H=((g-b)/(max-min))×60;
when the maximum RGB component is g, then the hue H=((b-r)/(max-min))×60+120;
when the maximum RGB component is b, then the hue H=((r-g)/(max-min))×60+240;
when the maximum and minimum RGB components are equal, i.e. r=g=b, then the saturation S=0, otherwise the saturation S=(max-min)/max, and the brightness V is equal to the maximum of the RGB components.
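The conversion described in step (1) can be sketched in code. The following is a minimal Python illustration of the per-pixel formulas above; the final wrap of negative hues into [0, 360) is an added safeguard not stated in the text, and in practice a whole frame would be converted at once with OpenCV.

```python
def rgb_to_hsv_pixel(r, g, b):
    """Convert one RGB pixel (components in [0, 1]) to HSV using the formulas above."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:                        # r = g = b
        h = 0.0
    elif mx == r:
        h = ((g - b) / (mx - mn)) * 60
    elif mx == g:
        h = ((b - r) / (mx - mn)) * 60 + 120
    else:                               # mx == b
        h = ((r - g) / (mx - mn)) * 60 + 240
    s = 0.0 if mx == 0 else (mx - mn) / mx
    v = mx
    return h % 360, s, v                # wrap any negative hue into [0, 360)

# In practice the whole frame is converted at once, for example with OpenCV:
# img_hsv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV)   # note: OpenCV scales H to [0, 179]
```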
Preferably, the step (2) specifically includes:
Color extraction is carried out on the airplane and corridor bridge areas respectively, the numerical ranges of the HSV space images of the airplane and corridor bridge are counted, the corresponding ROI areas are selected by setting HSV thresholds between [0, 100] and [175,48,255], and the other unselected areas are shielded with a mask, after which the HSV space image is converted into a gray level image.
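A minimal OpenCV sketch of this ROI-selection step is given below. The lower threshold quoted above, [0, 100], is incomplete in the text; the three-component value (0, 0, 100) used here is an assumption, and both bounds should be treated as placeholders to be tuned per installation.

```python
import cv2
import numpy as np

def select_roi_gray(img_rgb, lower=(0, 0, 100), upper=(175, 48, 255)):
    """Keep only pixels inside the HSV range, mask out the rest, and return a gray-level ROI image."""
    img_hsv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV)
    mask = cv2.inRange(img_hsv, np.array(lower), np.array(upper))   # ROI = pixels inside the HSV range
    roi_rgb = cv2.bitwise_and(img_rgb, img_rgb, mask=mask)          # shield the unselected areas
    img_gray = cv2.cvtColor(roi_rgb, cv2.COLOR_RGB2GRAY)
    return img_gray, mask
```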
Preferably, the step (3) specifically includes the following steps:
(3.1) first performing noise removal on the ROI area in the gray-scale image using a gaussian filter;
(3.2) finding edges in the gray scale image by using the first derivative, calculating the gradient and the direction of each pixel in the image, taking the gradient direction in the image as the edge direction, performing non-maximum suppression, only preserving the local maximum value of the gradient amplitude in the edge direction, and suppressing other values to zero;
(3.3) defining a high threshold and a low threshold for hysteresis thresholding: if the pixel gradient amplitude is greater than the high threshold, the pixel is marked as a strong edge; if the pixel gradient magnitude is between the high threshold and the low threshold, the pixel is marked as a weak edge; if the pixel gradient magnitude is less than the low threshold, the pixel is suppressed;
all the strong edges acquired and the weak edges connected thereto are determined as the final edge image of the ROI area in the gray scale map.
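Steps (3.1) to (3.3) correspond to the standard Canny pipeline; a compact OpenCV sketch is shown below, where cv2.Canny performs the gradient computation, non-maximum suppression and hysteresis thresholding internally. The kernel size and the two thresholds are illustrative assumptions, not values from the invention.

```python
import cv2

def detect_edges(img_gray, low_thresh=50, high_thresh=150):
    """Denoise the ROI with a Gaussian filter, then extract edges with hysteresis thresholding."""
    blurred = cv2.GaussianBlur(img_gray, (5, 5), 0)       # step (3.1): noise removal
    edges = cv2.Canny(blurred, low_thresh, high_thresh)   # steps (3.2)-(3.3): gradient, NMS, hysteresis
    return edges
```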
Preferably, the step (4) specifically includes the following steps:
(4.1) creating a two-dimensional Hough cumulative array, wherein each element represents a possible straight line within the parameter space;
(4.2) initially, all elements in the cumulative array are initialized to zero;
(4.3) for each edge point, calculating corresponding straight line parameters rho and theta according to the positions of the edge points, and adding relevant element values in the Hough cumulative array;
(4.4) for each discrete value of the polar angle θ, the polar diameter ρ is calculated as follows:
ρ=x×cos(θ)+y×sin(θ),
wherein (x, y) are the coordinates of the edge points respectively;
and (4.5) finding out the element corresponding to (rho, theta) in the Hough accumulation array, adding the corresponding value, finding out the element with the maximum value in the Hough accumulation array, determining an effective straight line, drawing the detected straight line on an original image after determining the parameters of the effective straight line, and taking the detected straight line as the edge of the aircraft and the edge of the gallery bridge platform.
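The accumulation described in (4.1) to (4.5) is the classical Hough transform. The sketch below builds the (ρ, θ) accumulator explicitly from the edge points; in practice cv2.HoughLines or cv2.HoughLinesP would typically be used instead. The angular resolution and the number of returned candidates are assumptions.

```python
import numpy as np

def hough_lines(edges, theta_steps=180, n_best=5):
    """Accumulate votes in (rho, theta) space and return the strongest line parameters."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(theta_steps))                  # discrete polar angles in [0, pi)
    acc = np.zeros((2 * diag + 1, theta_steps), dtype=np.int32)  # (4.1)-(4.2): accumulator set to zero
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):                                     # (4.3)-(4.5): one vote per edge point
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(theta_steps)] += 1
    flat = np.argsort(acc, axis=None)[-n_best:]                  # elements with the largest vote counts
    rho_idx, theta_idx = np.unravel_index(flat, acc.shape)
    return [(int(r) - diag, float(thetas[t])) for r, t in zip(rho_idx, theta_idx)]
```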
Preferably, the detected straight line is screened and optimized by:
discarding the straight line when the length of the obtained straight line obviously exceeds the length of the gallery bridge platform or the aircraft; in the approach process of the corridor bridge, the line segment of the corridor bridge platform and the line segment of the aircraft edge are always changed within a certain slope range, and if the slope between the corridor bridge platform and the aircraft edge exceeds a preset range, the line segment is abandoned; taking the detected midpoint of the line segment as the center of a circle, taking 5 pixel points as the radius, and discarding the line segment if the obvious abnormal HSV value exists in the drawn circle; if the two line segments are short in length and similar in slope, the two line segments are connected.
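A sketch of the screening rules above is given next. The length limit, slope band, HSV bounds and the 50% acceptance ratio are illustrative assumptions; the square patch approximates the 5-pixel circle around the midpoint, and the rule that merges two short, similarly sloped segments is omitted for brevity.

```python
import numpy as np

def keep_segment(seg, max_len, slope_range, hsv_img, hsv_lower, hsv_upper, radius=5):
    """Apply the length, slope and local-colour checks to one detected segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    if np.hypot(x2 - x1, y2 - y1) > max_len:                # longer than bridge platform / aircraft
        return False
    slope = np.inf if x2 == x1 else (y2 - y1) / (x2 - x1)
    if not (slope_range[0] <= slope <= slope_range[1]):     # outside the expected slope band
        return False
    cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)         # neighbourhood of ~5 px around the midpoint
    patch = hsv_img[max(cy - radius, 0):cy + radius + 1, max(cx - radius, 0):cx + radius + 1]
    in_range = ((patch >= np.array(hsv_lower)) & (patch <= np.array(hsv_upper))).all(axis=-1)
    return in_range.mean() > 0.5                            # discard if the local colour is clearly abnormal
```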
Preferably, the step (5) specifically includes the following steps:
(5.1) respectively calculating the distances from the leftmost end, the rightmost end and the middle point of the gallery bridge platform to the edge of the aircraft by using a calculation formula of the point-to-straight line distance, and selecting the shortest distance as the distance from the gallery bridge platform to the edge of the aircraft;
(5.2) converting the pixel distance into the actual distance between the gallery bridge platform and the aircraft as follows:
acquiring the pixel length and the real length of the gallery bridge platform scale: a section of scale is stuck on the gallery bridge platform, the real length of the scale is known and denoted Lr, and the pixel length of the scale is measured in the image and denoted L;
(5.3) calculating the pixel scale at the edge of the gallery bridge platform: dividing the actual length Lr of the scale by the pixel length L of the scale in the image to obtain the actual length corresponding to each pixel;
the pixel scale is calculated as follows:
Sr = Lr / L;
(5.4) calculating the actual width of the aircraft cabin door in the image: the pixel length of the cabin door width is measured in the image and denoted L1; multiplying the pixel length L1 of the cabin door width by the pixel size Sr gives the real width Lr1 of the cabin door;
(5.5) calibrating different distances by using the width of the cabin door as a scale: measuring the pixel length of the cabin door at different pixel distances, the pixel distances being noted as x1, x2, x3, x4, x5, ..., the cabin door pixel lengths being recorded as L1, L2, L3, L4, L5, ..., and calculating the pixel size at each pixel distance by the above formula, recorded as Sr1, Sr2, Sr3, Sr4, Sr5, ...;
(5.6) fitting a polynomial: the pixel distance x and the pixel size Sr are functionally related, a polynomial fit is used to approximate the function, and the obtained functional relation is as follows:
Sr(x)=f(x)
(5.7) calculating the actual distance: the actual distance Lr corresponding to a certain pixel distance xh is calculated using integration as follows:
Lr = ∫[0, xh] Sr(x) dx.
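A sketch of steps (5.1) to (5.4) is given below: the point-to-line distance in pixels and the pixel scale derived from the ruler on the platform and from the cabin-door width. All numeric values and variable names are invented for illustration; the polynomial fit and the integration of (5.6)-(5.7) are sketched later, in the detailed description.

```python
import numpy as np

def point_to_line_distance(pt, a, b):
    """Pixel distance from point pt to the straight line through points a and b (step 5.1)."""
    (px, py), (ax, ay), (bx, by) = pt, a, b
    return abs((bx - ax) * (py - ay) - (by - ay) * (px - ax)) / np.hypot(bx - ax, by - ay)

# Steps (5.2)-(5.3): pixel scale Sr = Lr / L from the scale stuck on the bridge platform.
Lr = 0.50        # real length of the scale in metres (assumed example value)
L = 125.0        # its measured pixel length in the image (assumed)
Sr = Lr / L      # actual length per pixel at the platform edge

# Step (5.4): real cabin-door width Lr1 = L1 * Sr from its measured pixel width L1.
L1 = 180.0       # measured pixel width of the cabin door (assumed)
Lr1 = L1 * Sr
```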
Preferably, the step (6) specifically includes the following steps:
(6.1) calculating the magnitude of the included angle between the aircraft edge and the gallery bridge platform edge by using a straight line representing the two edges;
(6.2) if the pixel distance between the plane edge and the corridor bridge platform edge in the two frames of images is increased by more than two pixels or the distance is reduced by more than one pixel, judging that the straight line of the following frame is detected in error, so that the data detected by the frame is not adopted;
(6.3) if the distance difference between the two frames of images is within a specified range but the gradient difference is large, multiplying the gradient of the straight line detected by the next frame by a coefficient to make the output video image line segment smooth;
and (6.4) if the lengths of the line segments detected by the aircraft edge are different, but the lengths of the line segments detected by the gallery bridge platform edge are consistent, setting the lengths of the line segments of the aircraft edge to be consistent with the lengths of the gallery bridge platform edge, and smoothing the output video image line segments so as to finish the elimination of abnormal data.
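A sketch of the angle computation and the frame-to-frame consistency checks of step (6) follows. The supplementary-angle rule reflects the detailed description below; the smoothing is expressed as a blend toward the previous slope, which is one possible reading of the unspecified coefficient in (6.3), and the value 0.3 is an assumption.

```python
import math

def line_angle_deg(slope_a, slope_b):
    """Angle between the aircraft edge and the bridge platform edge, given the slopes of the two lines."""
    ang = abs(math.degrees(math.atan(slope_a) - math.atan(slope_b)))
    return 180 - ang if ang > 90 else ang          # take the supplement if larger than 90 degrees

def accept_frame(prev_dist, new_dist):
    """Step (6.2): reject a detection whose pixel distance jumps implausibly between consecutive frames."""
    return not (new_dist - prev_dist > 2 or prev_dist - new_dist > 1)

def smooth_slope(prev_slope, new_slope, alpha=0.3):
    """Step (6.3): pull the new slope toward the previous one so the drawn segment changes smoothly."""
    return prev_slope + alpha * (new_slope - prev_slope)
```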
Preferably, the step (7) specifically includes the following steps:
Drawing the detected line segments at the corresponding positions of the image, and marking the detected distances and angles on the image; when the corridor bridge is far from the aircraft, blue lines are used to represent the detected line segments; when the corridor bridge comes within a certain distance, yellow lines are used; when the corridor bridge is close to a dangerous position, red lines are used, and flashing red text is displayed on the image to warn that the distance is too close.
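A sketch of step (7) with OpenCV is shown below: segments are drawn in blue, yellow or red depending on the measured distance, and a flashing warning is overlaid when the bridge is dangerously close. The distance thresholds and the blink period are illustrative assumptions.

```python
import cv2

def draw_alert(img, segment, dist_m, angle_deg, frame_idx, warn_m=2.0, danger_m=0.5):
    """Draw the detected segment colour-coded by distance and overlay a flashing warning when too close."""
    (x1, y1), (x2, y2) = segment
    if dist_m > warn_m:
        color = (255, 0, 0)        # blue (BGR): bridge still far from the aircraft
    elif dist_m > danger_m:
        color = (0, 255, 255)      # yellow: approaching
    else:
        color = (0, 0, 255)        # red: dangerously close
    cv2.line(img, (x1, y1), (x2, y2), color, 2)
    cv2.putText(img, f"{dist_m:.2f} m  {angle_deg:.1f} deg", (x1, y1 - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    if dist_m <= danger_m and (frame_idx // 5) % 2 == 0:   # blink every few frames
        cv2.putText(img, "TOO CLOSE", (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 0, 255), 3)
    return img
```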
The method for realizing the docking ranging of the parking apron corridor bridge based on machine vision mainly uses image processing means to detect the edge of the airplane and the edge of the corridor bridge platform and to obtain the distance and included angle between the two. The detection algorithm divides the ROI region based on an HSV space threshold value, which reduces the influence of stop lines on the ground on the detection. The algorithm works on real-time images; because the docking process of the aircraft and the corridor bridge is continuous across successive frames, the continuity of the detection results can also be taken into account, preventing large errors between consecutive detection results. The technical scheme has the advantages of being non-contact, low in cost and highly stable; it can assist the operator, realize timely monitoring and early warning, reduce the accident rate, reduce risk for the airline and save cost.
Drawings
Fig. 1 is a flow chart of a method of implementing tarmac bridge docking ranging based on machine vision of the present invention.
FIG. 2 is a flow chart of the present invention for obtaining a function of converting pixel distance to actual distance.
Detailed Description
In order to more clearly describe the technical contents of the present invention, a further description will be made below in connection with specific embodiments.
Before describing in detail embodiments that are in accordance with the present invention, it should be noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, the method for implementing the docking ranging of the apron bridge based on the machine vision comprises the following steps:
(1) Acquiring real-time image information of an airplane and a gallery by using a camera installed on the gallery, and converting the real-time image information from an RGB space image to an HSV space image;
(2) Selecting an ROI region according to a set threshold value of the HSV space image, determining the region where the plane and the corridor bridge platform are located, and converting the HSV space image into a gray level image;
(3) Performing edge detection on the ROI in the gray level diagram;
(4) Performing linear detection on the image obtained after edge detection, screening and optimizing the detected lines, and selecting the most suitable line as the edge of the aircraft and the edge of the gallery bridge platform;
(5) Calculating the pixel distance from the gallery bridge platform to the edge of the aircraft, and obtaining the actual distance between the gallery bridge platform and the edge of the aircraft by using the pixel distance;
(6) Calculating an included angle between the gallery bridge platform and the edge of the aircraft, and eliminating abnormal data existing in the included angle;
(7) Drawing the detected straight line on the corresponding position of the image, and carrying out corresponding alarm reminding according to the detected distance and angle.
As a preferred embodiment of the present invention, the step (1) specifically includes:
the camera is arranged at the central position of the entrance of the corridor bridge to collect real-time image information of the aircraft and the corridor bridge, and the acquired RGB space image is converted into an HSV space image according to the following mode:
when the maximum and minimum RGB components are equal, i.e. r=g=b, then the hue H=0;
when the maximum RGB component is r, then the hue H=((g-b)/(max-min))×60;
when the maximum RGB component is g, then the hue H=((b-r)/(max-min))×60+120;
when the maximum RGB component is b, then the hue H=((r-g)/(max-min))×60+240;
when the maximum and minimum RGB components are equal, i.e. r=g=b, then the saturation S=0, otherwise the saturation S=(max-min)/max, and the brightness V is equal to the maximum of the RGB components.
As a preferred embodiment of the present invention, the step (2) specifically includes:
Color extraction is carried out on the airplane and corridor bridge areas respectively, the numerical ranges of the HSV space images of the airplane and corridor bridge are counted, the corresponding ROI areas are selected by setting HSV thresholds between [0, 100] and [175,48,255], and the other unselected areas are shielded with a mask, after which the HSV space image is converted into a gray level image.
As a preferred embodiment of the present invention, the step (3) specifically includes the following steps:
(3.1) first performing noise removal on the ROI area in the gray-scale image using a gaussian filter;
(3.2) finding edges in the gray scale image by using the first derivative, calculating the gradient and the direction of each pixel in the image, taking the gradient direction in the image as the edge direction, performing non-maximum suppression, only preserving the local maximum value of the gradient amplitude in the edge direction, and suppressing other values to zero;
(3.3) defining a high threshold and a low threshold for hysteresis thresholding: if the pixel gradient amplitude is greater than the high threshold, the pixel is marked as a strong edge; if the pixel gradient magnitude is between the high threshold and the low threshold, the pixel is marked as a weak edge; if the pixel gradient magnitude is less than the low threshold, the pixel is suppressed;
all the strong edges acquired and the weak edges connected thereto are determined as the final edge image of the ROI area in the gray scale map.
As a preferred embodiment of the present invention, the step (4) specifically includes the following steps:
(4.1) creating a two-dimensional Hough cumulative array, wherein each element represents a possible straight line within the parameter space;
(4.2) initially, all elements in the cumulative array are initialized to zero;
(4.3) for each edge point, calculating corresponding straight line parameters rho and theta according to the positions of the edge points, and adding relevant element values in the Hough cumulative array;
(4.4) for each discrete value of the polar angle θ, the polar diameter ρ is calculated as follows:
ρ=x×cos(θ)+y×sin(θ),
wherein (x, y) are the coordinates of the edge points respectively;
and (4.5) finding out the element corresponding to (rho, theta) in the Hough accumulation array, adding the corresponding value, finding out the element with the maximum value in the Hough accumulation array, determining an effective straight line, drawing the detected straight line on an original image after determining the parameters of the effective straight line, and taking the detected straight line as the edge of the aircraft and the edge of the gallery bridge platform.
As a preferred embodiment of the present invention, the detected straight line is screened and optimized by:
discarding the straight line when the length of the obtained straight line obviously exceeds the length of the gallery bridge platform or the aircraft; in the approach process of the corridor bridge, the line segment of the corridor bridge platform and the line segment of the aircraft edge are always changed within a certain slope range, and if the slope between the corridor bridge platform and the aircraft edge exceeds a preset range, the line segment is abandoned; taking the detected midpoint of the line segment as the center of a circle, taking 5 pixel points as the radius, and discarding the line segment if the obvious abnormal HSV value exists in the drawn circle; if the two line segments are short in length and similar in slope, the two line segments are connected.
As a preferred embodiment of the present invention, the step (5) specifically includes the steps of:
(5.1) respectively calculating the distances from the leftmost end, the rightmost end and the middle point of the gallery bridge platform to the edge of the aircraft by using a calculation formula of the point-to-straight line distance, and selecting the shortest distance as the distance from the gallery bridge platform to the edge of the aircraft;
(5.2) converting the pixel distance into the actual distance between the gallery bridge platform and the aircraft as follows:
acquiring the pixel length and the real length of the gallery bridge platform scale: a section of scale is stuck on the gallery bridge platform, the real length of the scale is known and denoted Lr, and the pixel length of the scale is measured in the image and denoted L;
(5.3) calculating the pixel scale at the edge of the gallery bridge platform: dividing the actual length Lr of the scale by the pixel length L of the scale in the image to obtain the actual length corresponding to each pixel;
the pixel scale is calculated as follows:
Sr = Lr / L;
(5.4) calculating the actual width of the aircraft cabin door in the image: the pixel length of the cabin door width is measured in the image and denoted L1; multiplying the pixel length L1 of the cabin door width by the pixel size Sr gives the real width Lr1 of the cabin door;
(5.5) calibrating different distances by using the width of the cabin door as a scale: measuring the pixel length of the cabin door at different pixel distances, the pixel distances being noted as x1, x2, x3, x4, x5, ..., the cabin door pixel lengths being recorded as L1, L2, L3, L4, L5, ..., and calculating the pixel size at each pixel distance by the above formula, recorded as Sr1, Sr2, Sr3, Sr4, Sr5, ...;
(5.6) fitting a polynomial: the pixel distance x and the pixel size Sr are functionally related, a polynomial fit is used to approximate the function, and the obtained functional relation is as follows:
Sr(x)=f(x)
(5.7) calculating the actual distance: the actual distance Lr corresponding to a certain pixel distance xh is calculated using integration as follows:
Lr = ∫[0, xh] Sr(x) dx.
As a preferred embodiment of the present invention, the step (6) specifically includes the steps of:
(6.1) calculating the magnitude of the included angle between the aircraft edge and the gallery bridge platform edge by using a straight line representing the two edges;
(6.2) if the pixel distance between the plane edge and the corridor bridge platform edge in the two frames of images is increased by more than two pixels or the distance is reduced by more than one pixel, judging that the straight line of the following frame is detected in error, so that the data detected by the frame is not adopted;
(6.3) if the distance difference between the two frames of images is within a specified range but the gradient difference is large, multiplying the gradient of the straight line detected by the next frame by a coefficient to make the output video image line segment smooth;
and (6.4) if the lengths of the line segments detected by the aircraft edge are different, but the lengths of the line segments detected by the gallery bridge platform edge are consistent, setting the lengths of the line segments of the aircraft edge to be consistent with the lengths of the gallery bridge platform edge, and smoothing the output video image line segments so as to finish the elimination of abnormal data.
As a preferred embodiment of the present invention, the step (7) specifically includes the steps of:
Drawing the detected line segments at the corresponding positions of the image, and marking the detected distances and angles on the image; when the corridor bridge is far from the aircraft, blue lines are used to represent the detected line segments; when the corridor bridge comes within a certain distance, yellow lines are used; when the corridor bridge is close to a dangerous position, red lines are used, and flashing red text is displayed on the image to warn that the distance is too close.
In practical application, the following will describe in detail each step of the method for implementing the tarmac bridge docking ranging based on machine vision in the present technical solution:
(A) A five-megapixel camera is mounted at the center of the entrance of the corridor bridge. The camera installed on the corridor bridge acquires real-time image information of the airplane and the corridor bridge as RGB space images; each frame is processed separately, and the currently processed image is named img_RGB. The real-time image information obtained by the camera is processed by converting the RGB space image into an HSV space image, named img_HSV. The specific conversion is as follows: if the maximum and minimum RGB components are equal (r=g=b), then hue H=0. If the largest RGB component is r, hue H=((g-b)/(max-min))×60. If the largest RGB component is g, hue H=((b-r)/(max-min))×60+120. If the largest RGB component is b, hue H=((r-g)/(max-min))×60+240. If the maximum and minimum RGB components are equal, i.e. r=g=b, then the saturation S=0; otherwise, the saturation S=(max-min)/max. The brightness V is equal to the maximum of the RGB components;
(B) Color extraction is performed on the airplane and corridor bridge areas respectively, and the numerical ranges in HSV space are counted; through testing, the HSV threshold in this project generally lies between [0, 100] and [175,48,255]. The ROI region is selected according to the HSV threshold, choosing the region where the airplane and the corridor bridge platform are located. Masking the other areas of the image reduces the impact of redundant information on the subsequent detection results; for example, stop lines on the ground may interfere with the detection results. The HSV space image is then converted into a gray level image, named img_gray;
(C) And (3) performing edge detection on the ROI in the gray level image, wherein a canny operator is mainly used for edge detection, and a main contour in the image is obtained. The specific implementation mode is as follows: the image is first noise-removed using a gaussian filter to avoid the effect of noise on edge detection. The concept of the first derivative is used to find edges in the image, and the gradient and direction of each pixel in the image is calculated. Taking the gradient direction in the image as the edge direction and performing non-maximum suppression, only the local maximum of the gradient magnitude in the edge direction is preserved, while the other values will be suppressed to zero. To further screen edges, a hysteresis thresholding is used, which includes defining two thresholds, a high threshold and a low threshold. If the pixel gradient magnitude is greater than the high threshold, it is marked as a strong edge, if the gradient magnitude is between the high and low thresholds, it is marked as a weak edge, and if the gradient magnitude is less than the low threshold, it is suppressed. The weak edges may be further processed depending on whether they are connected to the strong edges or not. The weak edge points will be further processed to connect to the strong edges. Finally, all strong edges and weak edges connected to them will be extracted as the final edge image;
(D) And (C) detecting straight lines of the image obtained after the detection in the step (C), respectively detecting in the selected ROI areas of the airplane and the corridor bridge, and mainly detecting by using a Hough conversion detection straight line method to obtain a plurality of straight lines as alternatives. The specific implementation mode is as follows: a two-dimensional Hough cumulative array is created, with each element representing a possible straight line in the parameter space. Initially, all elements in the cumulative array are initialized to zero. For each edge point, the possible straight line parameters ρ and θ are calculated according to their positions, and the relevant element values are added to the Hough cumulative array. For each discrete value of polar angle θ, a polar diameter ρ is calculated: ρ=x×cos (θ) +y×sin (θ), where (x, y) is the coordinates of the edge point. The element corresponding to (ρ, θ) is found in the Hough cumulative array, and its value is incremented. The element with the maximum value is found in the Hough accumulation array, and a valid straight line is determined. After determining parameters of the straight line, the detected straight line is drawn on the original image.
(E) And (D) screening and optimizing the straight line detected in the step (D), selecting the most suitable straight line as the edge of the aircraft and the edge of the corridor bridge platform, wherein the main screening mode is based on the length, the slope and the color of the line segment, and discarding the line segment if the length of the line segment obviously exceeds the corridor bridge platform or the aircraft length. In the approach process of the corridor bridge, the line segment of the corridor bridge platform and the line segment of the aircraft edge always change within a certain slope range, and if the detected slope of the line segment exceeds the range, the line segment is abandoned. Taking the detected midpoint of the line segment as the center of a circle, taking 5 pixel points as the radius, and discarding the line segment if the obvious abnormal HSV value exists in the drawn circle. If the two line segments are short in length and similar in slope, connecting the two line segments;
(F) And calculating the distances from the gallery bridge platform to the edge of the aircraft, respectively calculating the distances from the leftmost end, the rightmost end and the midpoint of the gallery bridge platform to the edge of the aircraft, and selecting the shortest distance as the distance from the gallery bridge platform to the edge of the aircraft. The main calculation formula is a point-to-straight line distance calculation formula;
(G) The actual distance is obtained by using a functional relation between the pixel distance and the actual distance which are obtained in advance.
(H) The included angle between the corridor bridge platform and the edge of the aircraft is calculated. The straight lines detected for the corridor bridge platform edge and the airplane edge are substituted into the formula for the angle between two lines; if the calculated included angle is larger than 90 degrees, its supplementary angle is taken as the output angle;
(I) Detection errors possibly occurring are avoided, abnormal distance and included angle data are removed, and the output distance value is changed smoothly. The specific rejection standard is that if the pixel distance between the plane edge and the corridor bridge platform edge in the two frames of images is increased by more than two pixels or the distance is reduced by more than one pixel, the straight line detection error of the following frame is judged, and the data detected by the frame is not adopted. If the difference of the distance between two frames is within a prescribed range but the difference of the slopes is large, the slope of the detected straight line for the following frame is multiplied by a coefficient, so that the line segment change of the output video image is smoothed. The lengths of the line segments detected by the edges of the aircraft are different, but the lengths of the line segments detected by the edges of the gallery bridge platform are uniform, and the line segments of the aircraft edges are set to be consistent with the lengths of the edges of the gallery bridge platform, so that the line segments of the output video images change smoothly.
(J) Drawing the detected line segments at the corresponding positions of the images, and labeling the detected distances and angles on the images.
(K) When the distance is smaller than a certain value or the included angle is larger than a certain value, an alarm is issued. The specific implementation is as follows: when the corridor bridge is far from the airplane, blue lines are used to represent the detected line segments; when the corridor bridge comes within a certain distance, yellow lines are used; when the corridor bridge is close to a dangerous position, red lines are used, and flashing red text is displayed on the image to warn that the distance is too close.
In practical application, as preliminary work for the present technical solution, the functional relation between the pixel distance and the actual distance is obtained as follows:
(1) Acquiring the pixel length and the true length of the gallery bridge platform scale: a section of scale is stuck on the corridor bridge platform, the real length of the scale is known, and the length is Lr (unit: meter). The length of the pixel of the scale is measured in the image, assuming that the length is L (units: pixels).
(2) Calculating pixel dimensions at edges of the gallery bridge platform: the length Lr of the scale in practice is divided by the pixel length L (the number of pixels) of the scale in the image, to obtain the ratio of the actual length corresponding to each pixel to the pixel length.
The pixel scale is calculated as follows: Sr = Lr / L (unit: actual length/pixel).
(3) Calculating the real width of the aircraft door in the image: taking into account that the real length corresponding to one pixel differs at different distances, the width of the airplane cabin door is used as a scale for multi-point calibration. It is assumed that the aircraft door does not undergo a length deformation when the docking of the aircraft with the corridor bridge is completed, i.e. the pixel scale at the aircraft door is the same as the pixel scale at the corridor bridge platform scale. The pixel length of the door width is measured in the image, assuming that the length is L1 (unit: pixels); multiplying the pixel length L1 of the cabin door width (unit: pixels) by the pixel size Sr (unit: actual length/pixel) gives the actual width Lr1 of the cabin door (unit: meters).
(4) The width of the cabin door is used as a scale to calibrate different distances: the pixel distance between the aircraft and the corridor bridge platform is acquired by the method for detecting the corridor bridge platform edge and the aircraft edge. The pixel length of the cabin door is measured at different pixel distances; the pixel distances are noted as x1, x2, x3, x4, x5, ... (unit: pixels), the cabin door pixel lengths are recorded as L1, L2, L3, L4, L5, ... (unit: pixels), and the pixel size of the image at different pixel distances is calculated using the formula above and noted as Sr1, Sr2, Sr3, Sr4, Sr5, ... (unit: actual length/pixel).
(5) Fitting a polynomial: the pixel distance x and the pixel size Sr are functionally related, a polynomial fit is used to approximate the function, and the obtained functional relation is as follows:
Sr(x)=f(x)
(6) Calculating the actual distance: after the general functional relation is obtained, the actual distance can be calculated directly from the pixel distance. The actual distance Lr (unit: meters) corresponding to a certain pixel distance xh is calculated using integration as follows:
Lr = ∫[0, xh] Sr(x) dx.
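A sketch of steps (5) and (6) above is given next: the pixel size Sr(x) = f(x) is fitted with a polynomial from the calibration pairs and then integrated numerically to convert a pixel distance into an actual distance. The calibration values, polynomial degree and sample count are invented for illustration only.

```python
import numpy as np

# Calibration pairs (assumed example values): pixel distance x_i and pixel size Sr_i = Lr1 / L_i.
x_cal = np.array([50.0, 120.0, 200.0, 300.0, 420.0])         # pixel distances (pixels)
sr_cal = np.array([0.0042, 0.0047, 0.0055, 0.0066, 0.0081])  # metres per pixel at each distance

coeffs = np.polyfit(x_cal, sr_cal, deg=2)   # Sr(x) = f(x), a low-order polynomial fit
f = np.poly1d(coeffs)

def actual_distance(x_h, n=200):
    """Lr = integral of Sr(x) dx from 0 to x_h, evaluated numerically (result in metres)."""
    xs = np.linspace(0.0, x_h, n)
    return np.trapz(f(xs), xs)

print(actual_distance(250.0))   # actual gap in metres for a 250-pixel separation (example call)
```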
any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution device.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "examples," "specific examples," or "embodiments," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.
The method for realizing the docking ranging of the parking apron corridor bridge based on machine vision mainly uses image processing means to detect the edge of the airplane and the edge of the corridor bridge platform and to obtain the distance and included angle between the two. The detection algorithm divides the ROI region based on an HSV space threshold value, which reduces the influence of stop lines on the ground on the detection. The algorithm works on real-time images; because the docking process of the aircraft and the corridor bridge is continuous across successive frames, the continuity of the detection results can also be taken into account, preventing large errors between consecutive detection results. The technical scheme has the advantages of being non-contact, low in cost and highly stable; it can assist the operator, realize timely monitoring and early warning, reduce the accident rate, reduce risk for the airline and save cost.
In this specification, the invention has been described with reference to specific embodiments thereof. It will be apparent, however, that various modifications and changes may be made without departing from the spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Claims (9)
1. The method for realizing the docking ranging of the apron bridge of the parking apron based on the machine vision is characterized by comprising the following steps:
(1) Acquiring real-time image information of an airplane and a gallery by using a camera installed on the gallery, and converting the real-time image information from an RGB space image to an HSV space image;
(2) Selecting an ROI region according to a set threshold value of the HSV space image, determining the region where the plane and the corridor bridge platform are located, and converting the HSV space image into a gray level image;
(3) Performing edge detection on the ROI in the gray level diagram;
(4) Performing linear detection on the image obtained after edge detection, screening and optimizing the detected lines, and selecting the most suitable line as the edge of the aircraft and the edge of the gallery bridge platform;
(5) Calculating the pixel distance from the gallery bridge platform to the edge of the aircraft, and obtaining the actual distance between the gallery bridge platform and the edge of the aircraft by using the pixel distance;
(6) Calculating an included angle between the gallery bridge platform and the edge of the aircraft, and eliminating abnormal data existing in the included angle;
(7) And drawing the detected straight line on the corresponding position of the original RGB space image, and carrying out corresponding alarm reminding according to the detected distance and angle.
2. The method for implementing the docking ranging of the apron bridge based on the machine vision according to claim 1, wherein the step (1) is specifically:
the camera is arranged at the central position of the entrance of the corridor bridge to collect real-time image information of the aircraft and the corridor bridge, and the acquired RGB space image is converted into an HSV space image according to the following mode:
setting the maximum and minimum RGB components equal, i.e. r=g=b, then hue h=0;
when the maximum RGB component is r, then the hue h= ((g-b)/(max-min))x60;
when the maximum RGB component is g, then the hue h= ((b-r)/(max-min)) ×60+120;
when the largest RGB component is b, then hue h= ((r-g)/(max-min)) ×60+240;
when the maximum and minimum RGB components are equal, i.e. r=g=b, then the saturation s=0, otherwise the saturation s= (max-min)/max, and the brightness V is equal to the maximum of the RGB components.
3. The method for implementing the docking ranging of the apron bridge based on the machine vision according to claim 1, wherein the step (2) is specifically:
and respectively carrying out color extraction on the airplane and corridor bridge areas, counting the numerical ranges of HSV space images of the airplane and corridor bridge, selecting the corresponding ROI areas by setting HSV thresholds between [0, 100] and [175,48,255], and shielding other unselected areas by using a mask, so that the HSV space images are converted into gray images.
4. The method for implementing the docking ranging of the apron bridge based on the machine vision according to claim 3, wherein the step (3) specifically comprises the following steps:
(3.1) first performing noise removal on the ROI area in the gray-scale image using a gaussian filter;
(3.2) finding edges in the gray scale image by using the first derivative, calculating the gradient and the direction of each pixel in the image, taking the gradient direction in the image as the edge direction, performing non-maximum suppression, only preserving the local maximum value of the gradient amplitude in the edge direction, and suppressing other values to zero;
(3.3) defining a high threshold and a low threshold for hysteresis thresholding: if the pixel gradient amplitude is greater than the high threshold, then the strong edge is marked; if the current pixel gradient magnitude is between the high threshold and the low threshold, then the current pixel gradient magnitude is marked as a weak edge; if the current pixel gradient magnitude is less than the low threshold, the pixel gradient is suppressed;
all the strong edges acquired and the weak edges connected thereto are determined as the final edge image of the ROI area in the gray scale map.
5. The method for implementing the tarmac bridge docking ranging based on the machine vision according to claim 4, wherein the step (4) specifically comprises the following steps:
(4.1) creating a two-dimensional Hough cumulative array, wherein each element represents a possible straight line within the parameter space;
(4.2) initially, all elements in the cumulative array are initialized to zero;
(4.3) for each edge point, calculating corresponding straight line parameters rho and theta according to the positions of the edge points, and adding relevant element values in the Hough cumulative array;
(4.4) for each discrete value of the polar angle θ, the polar diameter ρ is calculated as follows:
ρ=x×cos(θ)+y×sin(θ),
wherein (x, y) are the coordinates of the edge points respectively;
and (4.5) finding out the element corresponding to (rho, theta) in the Hough accumulation array, adding the corresponding value, finding out the element with the maximum value in the Hough accumulation array, determining an effective straight line, drawing the detected straight line on the original RGB space image after determining the parameters of the effective straight line, and taking the detected straight line as the edge of the aircraft and the edge of the gallery platform.
6. The method for implementing tarmac bridge docking ranging based on machine vision according to claim 5, characterized in that the detected straight line is screened and optimized by:
discarding the straight line when the length of the obtained straight line obviously exceeds the length of the gallery bridge platform or the aircraft; in the approach process of the corridor bridge, the line segment of the corridor bridge platform and the line segment of the aircraft edge are always changed within a certain slope range, and if the slope between the corridor bridge platform and the aircraft edge exceeds a preset range, the line segment is abandoned; taking the detected midpoint of the line segment as the center of a circle, taking 5 pixel points as the radius, and discarding the line segment if the obvious abnormal HSV value exists in the drawn circle; if the two line segments are short in length and similar in slope, the two line segments are connected.
7. The method for implementing the tarmac bridge docking ranging based on the machine vision according to claim 5, wherein the step (5) specifically comprises the following steps:
(5.1) respectively calculating the distances from the leftmost end, the rightmost end and the middle point of the gallery bridge platform to the edge of the aircraft by using a calculation formula of the point-to-straight line distance, and selecting the shortest distance as the distance from the gallery bridge platform to the edge of the aircraft;
(5.2) converting the pixel distance into the actual distance between the gallery bridge platform and the aircraft as follows:
acquiring the pixel length and the real length of the gallery bridge platform scale: a section of scale is stuck on the gallery bridge platform, the real length of the scale is known and denoted Lr, and the pixel length of the scale is measured in the original RGB space image and denoted L;
(5.3) calculating the pixel scale at the edge of the gallery bridge platform: dividing the actual length Lr of the scale by the pixel length L of the scale in the original RGB space image to obtain the actual length corresponding to each pixel;
the pixel scale is calculated as follows:
Sr = Lr / L;
(5.4) calculating the real width of the aircraft cabin door in the original RGB space image: the pixel length of the cabin door width is measured in the original RGB space image and denoted L1; multiplying the pixel length L1 of the cabin door width by the pixel size Sr gives the real width Lr1 of the cabin door;
(5.5) calibrating different distances by using the width of the cabin door as a scale: measuring the pixel length of the cabin door at different pixel distances, the pixel distances being noted as x1, x2, x3, x4, x5, ..., the cabin door pixel lengths being recorded as L1, L2, L3, L4, L5, ..., and calculating the pixel size at each pixel distance by the above formula, recorded as Sr1, Sr2, Sr3, Sr4, Sr5, ...;
(5.6) fitting a polynomial: the pixel distance x and the pixel size Sr are functionally related, a polynomial fit is used to approximate the function, and the obtained functional relation is as follows:
Sr(x)=f(x)
(5.7) calculating the actual distance: the actual distance Lr corresponding to a certain pixel distance xh is calculated using integration as follows:
Lr = ∫[0, xh] Sr(x) dx.
8. The method for implementing the tarmac bridge docking ranging based on the machine vision according to claim 7, wherein the step (6) specifically comprises the following steps:
(6.1) calculating the magnitude of the included angle between the aircraft edge and the gallery bridge platform edge by using a straight line representing the two edges;
(6.2) if the pixel distance between the plane edge and the corridor bridge platform edge in the two frames of original RGB space images is increased by more than two pixels or the distance is reduced by more than one pixel, judging that the straight line of the next frame is detected in error, so that the data detected by the frame is not adopted;
(6.3) if the distance difference between the two frames of images is within a specified range but the gradient difference is large, multiplying the gradient of the straight line detected by the next frame by a coefficient to make the output video image line segment smooth;
and (6.4) if the lengths of the line segments detected by the aircraft edge are different, but the lengths of the line segments detected by the gallery bridge platform edge are consistent, setting the lengths of the line segments of the aircraft edge to be consistent with the lengths of the gallery bridge platform edge, and smoothing the output video image line segments so as to finish the elimination of abnormal data.
9. The method for implementing the tarmac bridge docking ranging based on the machine vision according to claim 8, wherein the step (7) specifically comprises the following steps:
drawing the detected line segments at the corresponding positions of the original RGB space image, and marking the detected distances and angles on the original RGB space image; when the corridor bridge is far from the airplane, blue lines are used to represent the detected line segments; when the corridor bridge comes within a certain distance, yellow lines are used; when the corridor bridge is close to a dangerous position, red lines are used, and flashing red text is displayed on the original RGB space image to warn that the distance is too close.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311621450.8A CN117649446A (en) | 2023-11-30 | 2023-11-30 | Method for realizing docking ranging of corridor bridge of parking apron based on machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311621450.8A CN117649446A (en) | 2023-11-30 | 2023-11-30 | Method for realizing docking ranging of corridor bridge of parking apron based on machine vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117649446A true CN117649446A (en) | 2024-03-05 |
Family
ID=90045984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311621450.8A Pending CN117649446A (en) | 2023-11-30 | 2023-11-30 | Method for realizing docking ranging of corridor bridge of parking apron based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117649446A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |