CN110047111B - Parking apron corridor bridge butt joint error measuring method based on stereoscopic vision
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
A method for measuring the docking error of an apron corridor bridge based on stereoscopic vision. The method comprises the steps of: establishing a system for measuring the docking error between an apron corridor bridge and an airplane cabin door; calibrating the cameras to obtain their internal and external parameters; collecting images of the docking process of the apron corridor bridge and the airplane cabin door with the cameras; preprocessing the images; obtaining feature points in the images; removing outliers to obtain target feature points; solving the spatial three-dimensional coordinates of the target feature points; reconstructing a three-dimensional model from the spatial three-dimensional coordinates; and applying the three-dimensional model in the terminal processor for real-time evaluation of the docking process between the corridor bridge and the airplane cabin door. The method is non-contact, provides full-field measurement, and offers high precision and high stability without being easily disturbed by the environment; it can measure the docking error between the apron corridor bridge and the airplane cabin door accurately and efficiently, and thereby ensures a high-precision connection between the apron corridor bridge and the airplane cabin door.
Description
Technical Field
The invention belongs to the technical field of binocular-vision three-dimensional coordinate measurement, and particularly relates to a method for measuring the docking error of an apron corridor bridge based on stereoscopic vision.
Background
The airport boarding bridge is important passenger boarding and disembarking equipment and, owing to its advantages, is widely adopted at airports around the world. It connects the airplane with the terminal building so that passengers can walk directly into or out of the airplane through the bridge compartment, which makes entering and leaving the cabin convenient, safe and comfortable. In any weather and at any temperature, the corridor bridge shields boarding and disembarking passengers from wind, sun and rain. However, when the apron corridor bridge docks with the airplane in the working area, problems such as scratching and inaccurate positioning easily occur, which inconvenience passengers and cause economic losses to airlines. Ensuring a high-precision, scratch-free connection between the apron corridor bridge and the airplane is therefore very important for smooth boarding, and is one of the key requirements of aviation safety.
Since the United States developed the Global Positioning System (GPS) in 1974 and applied it successfully in many fields, global navigation satellite systems (GNSS) have been actively developed worldwide. At present, the main systems are the GPS system of the United States, the GLONASS system of Russia, the Galileo system of Europe and the BeiDou system of China, and these positioning systems are widely used in the civil aviation industry. Most airports in China use positioning systems based on GPS, and the docking of the apron corridor bridge with the airplane is mostly realized by GPS positioning. The BeiDou satellite navigation system, independently developed and operated by China in recent years, has great potential in civil aviation and is of great significance for improving the international influence and technical level of Chinese civil aviation. Both BeiDou and GPS can meet the integrity-monitoring requirement of non-precision approach. However, the GPS positioning commonly adopted during the docking of the apron corridor bridge and the airplane has dead angles in its measuring range and is easily disturbed by external radio signals; that is, it suffers from low signal precision and susceptibility to interference.
Disclosure of Invention
In order to solve the above problems, the invention aims to provide a method for measuring the docking error of an apron corridor bridge based on stereoscopic vision.
To this end, the method for measuring the docking error of the apron corridor bridge based on stereoscopic vision provided by the invention comprises the following steps, performed in sequence:
step 1) establishing a docking error measuring system of an apron bridge and an airplane cabin door;
the system comprises two cameras, a synchronous strobe control device and a terminal processor; wherein: the two cameras are image acquisition devices comprising an illumination light source and cameras, and are respectively arranged at two sides of the airport connecting port of the air park corridor, and the cameras face the door of the airplane; the synchronous stroboscopic control device is an image acquisition synchronous control device, is respectively connected with the two cameras and is used for realizing the synchronous image acquisition of the two cameras; the terminal processor is respectively connected with the two cameras, and the terminal processor acquires and processes images of the docking process of the apron bridge and the airplane cabin door through the two cameras;
step 2) calibrating two cameras in the system for measuring the butting error of the apron bridge and the airplane cabin door to obtain the internal and external parameters of the cameras;
step 3) acquiring images of the docking process of the parking apron bridge and the airplane cabin door by using the two cameras;
step 4) preprocessing the image of the docking process of the apron bridge and the airplane cabin door to obtain a corrected image;
step 5) detecting the characteristic points in the corrected image to obtain the characteristic points;
step 6) removing outliers from the feature points to obtain target feature points;
step 7) solving the space three-dimensional coordinates of the target feature points;
step 8) reconstructing a three-dimensional model of the docking process of the apron corridor bridge and the airplane cabin door according to the spatial three-dimensional coordinates of the target characteristic points;
and 9) applying the three-dimensional model to real-time evaluation of the docking process of the terminal processor to the bridge and the airplane hatch to guide the docking process of the apron bridge and the airplane hatch.
In step 1), the two cameras are industrial digital cameras of the same model.
In step 2), the method for calibrating the two cameras in the system for measuring the docking error between the apron corridor bridge and the airplane cabin door, so as to obtain the internal and external parameters of the cameras, is as follows: the included angle between the main optical axes of the two cameras is set to 60 degrees; according to the binocular stereoscopic vision principle, the two cameras are first calibrated as a system based on the Zhang Zhengyou calibration method to obtain initial internal and external parameters, and the internal parameters are then refined with a BP-neural-network camera calibration method to obtain the final internal and external parameters of the cameras.
In step 3), the method for acquiring images of the docking process of the apron bridge and the airplane cabin door by using the two cameras is as follows:
under the control of the terminal processor, the two cameras are driven by the synchronous strobe control device to synchronously acquire, in real time, two-dimensional images at different time points of the docking process between the apron corridor bridge and the airplane cabin door, and to transmit them to the terminal processor.
In the step 4), the method for preprocessing the image of the docking process of the apron bridge and the airplane cabin door to obtain the corrected image comprises the following steps:
firstly, stereo rectification of the distorted images is performed through the epipolar constraint; then irrelevant information is filtered out, where techniques including image filtering and denoising, contrast enhancement and pseudo-color processing are applied to obtain the corrected image.
In step 5), the method for detecting the feature points in the corrected image and acquiring the feature points includes:
on the terminal processor, the airplane cabin door is circled with a rectangle on the corrected image, and the rectangular region is set as the target region; then feature point detection is performed on the image in the target region with the ORB algorithm to obtain the feature points and realize feature detection; the feature points are detected with the FAST algorithm within ORB, which is based on the gray values of the pixels around a candidate point: a circle of pixels around the candidate feature point is examined, and if enough of them differ sufficiently from the candidate in gray value, the candidate is considered a feature point:

N = Σ_{x ∈ circle(p)} [ |I(x) − I(p)| > ε_d ]

in the above formula, I(x) is the gray value of a pixel point on the circle, I(p) is the gray value of the candidate feature point p at the center of the circle, and ε_d is the gray difference threshold; if the number N of circle pixels whose gray difference exceeds ε_d is greater than a preset count threshold, the candidate feature point p is considered a feature point.
In step 6), the method for removing outliers from the feature points to obtain target feature points includes:
(1) calculating the center point of all the feature points;
(2) calculating the distance from each feature point to the center point, and computing the distance distribution;
(3) setting a threshold, comparing the distance from each feature point to the center point with the threshold, and removing the feature points exceeding the threshold, i.e. the outliers, to obtain the target feature points;
(4) calculating the intra-class divergence distance to verify the reliability of the target feature points.
In step 7), the method for solving the spatial three-dimensional coordinates of the target feature point includes:
the two cameras and the measured object form a triangular relation in space, and the space three-dimensional coordinates of the target characteristic points are solved by utilizing the coordinates of the imaging points of the target characteristic points on the image planes of the two cameras.
In step 9), the method for applying the three-dimensional model to the terminal processor's real-time evaluation of the docking process, so as to guide the docking of the apron corridor bridge and the airplane cabin door, is as follows:
the terminal processor uses the three-dimensional model to judge whether the apron corridor bridge moving in a given direction will collide with the airplane cabin door; if a collision is predicted, the anti-collision early-warning program in the terminal processor issues a warning to remind the bridge operators to re-plan the direction of motion; if no collision is predicted, the processor further judges whether the docking of the apron corridor bridge and the airplane cabin door can be achieved accurately; if not, the bridge operators adjust according to the instructions of the terminal processor until the apron corridor bridge docks with the airplane cabin door successfully.
The method for measuring the docking error of the apron corridor bridge based on stereoscopic vision exploits the advantages of stereo vision: it is non-contact, provides full-field measurement, achieves high precision and high stability, and is not easily affected by the environment. It can measure the docking error between the apron corridor bridge and the airplane cabin door with high precision and high efficiency, accurately quantify the error, and realize a high-precision connection between the two. Such a high-precision stereoscopic method enables accurate connection, reduces the accident rate, provides timely monitoring and early warning, and improves the safety and comfort of passengers; for airlines it reduces risk, saves cost and improves user satisfaction.
Drawings
Fig. 1 is a structural diagram of the system for measuring the docking error between the apron corridor bridge and the airplane cabin door adopted in the method for measuring the docking error of the apron corridor bridge based on stereoscopic vision.
Fig. 2 is a flow chart of the method for measuring docking errors of the apron bridge based on the stereoscopic vision.
Fig. 3 is a schematic diagram of observation space points of the binocular camera in the invention.
Detailed Description
The method for measuring the docking error of the apron bridge based on the stereoscopic vision provided by the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The method for measuring the docking error of the apron corridor bridge based on stereoscopic vision combines the advantages of vision measurement and vision-based imaging; it performs stable and high-precision detection and recognition of the docking process between the apron corridor bridge and the airplane cabin door, guarantees smooth docking operation, and improves the safety of the bridge's operating area.
As shown in fig. 2, the method for measuring the docking error of the apron corridor bridge based on the stereoscopic vision provided by the invention comprises the following steps in sequence:
step 1) establishing a docking error measuring system of an apron bridge and an airplane cabin door as shown in figure 1;
the system comprises two cameras 1, a synchronous strobe control device 2 and a terminal processor 3; wherein: the two cameras 1 are image acquisition devices comprising an illumination light source and cameras, and are respectively arranged at two sides of an airport connecting port of the parking apron bridge, and the cameras face the door of the airplane; the synchronous strobe control device 2 is an image acquisition synchronous control device, is respectively connected with the two cameras 1, and is used for realizing the synchronous acquisition of images of the two cameras 1; the terminal processor 3 is respectively connected with the two cameras 1, and acquires and processes images of the docking process of the apron bridge and the airplane cabin door through the two cameras 1; the two cameras 1 adopt industrial digital cameras with the same model;
step 2) calibrating two cameras 1 in the system for measuring the butting error of the apron bridge and the airplane cabin door to obtain the internal and external parameters of the cameras 1;
according to a binocular stereo vision measurement model, two cameras 1 and a measured object need to form a triangular relation in space, and the included angle of the main optical axes of the two cameras 1 is set to be 60 degrees; according to the binocular stereoscopic vision principle, the two cameras 1 are subjected to system calibration based on a Zhang Zhengyou calibration method to obtain initial internal and external parameters of the cameras 1, and then the cameras 1 are calibrated by adopting a BP neural network camera internal parameter calibration method to further improve the calibration precision of the cameras 1, reduce uncertainty in the subsequent measurement process and obtain the internal and external parameters of the cameras 1.
During calibration, the camera 1 in the invention is calibrated manually, but automatic calibration can also be adopted depending on the calibration tool used, for example the calibration toolkit of OpenCV, or an automatic calibration program written in advance on the computer. The calibration sequence of the camera 1 in the invention is as follows: first the two cameras 1 are numbered, and then the two cameras 1 are calibrated stereoscopically with the stereo-vision calibration tool in the Matlab toolbox. A calibration plate is placed in the common field of view of the two cameras 1, ten groups of photographs with different calibration-plate positions are taken, and after processing with the Matlab tool the internal parameters of the two cameras 1 are obtained: focal length, aperture, principal point, etc.; and the external parameters: relative distance, angle, rotation, etc. of the two cameras 1;
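The core of the Zhang Zhengyou method mentioned above is that each view of a planar calibration plate induces a homography between the plate and the image, from which the internal and external parameters are recovered. As a minimal numpy-only sketch (not the patent's Matlab workflow; the intrinsic matrix, pose and board pitch below are invented example values), the following estimates that homography by direct linear transform from synthetic board-to-image correspondences and checks that it reproduces the observed pixels:

```python
import numpy as np

# Example intrinsic matrix K (assumed values: 800 px focal length, principal point 320,240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])

def rot(ax, ay):
    """Rotation about x then y, used to pose the calibration plate."""
    cx, sx, cy, sy = np.cos(ax), np.sin(ax), np.cos(ay), np.sin(ay)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return Ry @ Rx

R = rot(0.1, 0.2)
t = np.array([-0.07, -0.07, 0.5])

# Planar calibration-board corners (Z = 0 in the board frame), 30 mm pitch
xs, ys = np.meshgrid(np.arange(6), np.arange(6))
board = np.stack([xs.ravel() * 0.03, ys.ravel() * 0.03], axis=1)  # (36, 2)

# Project the board with the true camera: p ~ K (r1*X + r2*Y + t)
ph = K @ (R[:, :2] @ board.T + t[:, None])
img = (ph[:2] / ph[2]).T  # observed pixel coordinates, shape (36, 2)

def homography_dlt(src, dst):
    """Estimate H mapping planar src points to dst pixels (direct linear transform)."""
    rows = []
    for (X, Y), (u, v) in zip(src, dst):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 3)

H = homography_dlt(board, img)

# Check: H maps the board points onto the observed pixels (up to scale)
pred = H @ np.hstack([board, np.ones((36, 1))]).T
pred = (pred[:2] / pred[2]).T
err = np.abs(pred - img).max()
```

In the full Zhang method, several such homographies from different plate poses are combined to solve for K, after which R and t per view follow from H = K [r1 r2 t]; the BP-network refinement described in the patent would then adjust the internal parameters further.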
step 3) acquiring images of the docking process of the apron bridge and the airplane cabin door by using the two cameras 1;
under the control of the terminal processor 3, the two cameras 1 are controlled by the synchronous strobe control device 2 to synchronously acquire two-dimensional images of different time points in the butt joint process of the apron bridge and the airplane cabin door in real time and transmit the two-dimensional images to the terminal processor 3;
step 4) preprocessing the image of the docking process of the apron bridge and the airplane cabin door to obtain a corrected image;
images acquired from real environments often have various noise interferences or distortion phenomena, so that the images must be preprocessed before subsequent operations are performed. Firstly, stereo correction is carried out on distortion parameters in an image through epipolar constraint; then filtering some irrelevant information, emphasizing key information, improving image quality and highlighting image features; wherein, the technique including filtering and drying, contrast increasing and pseudo-color processing of the image is applied to obtain a corrected image; the higher the image quality is, the more accurate the recording of the docking process of the apron bridge and the airplane cabin door is;
step 5) detecting the characteristic points in the corrected image to obtain the characteristic points;
on the terminal processor 3, the airplane cabin door is circled with a rectangle on the corrected image, and the rectangular region is defined as the target region; then feature point detection is performed on the image in the target region with the ORB (Oriented FAST and Rotated BRIEF) algorithm to obtain the feature points and realize feature detection. Compared with feature extraction algorithms such as KLT, SIFT, SURF and Harris, the overall performance of the ORB algorithm is the best in various evaluations; introducing ORB into the feature point detection enables high-precision and high-efficiency detection of the feature points in the target region. The invention adopts the FAST (Features from Accelerated Segment Test) algorithm within ORB to detect the feature points. The algorithm is based on the gray values of the pixels around a candidate point: a circle of pixels around the candidate feature point is examined, and if enough of them differ sufficiently from the candidate in gray value, the candidate is considered a feature point:

N = Σ_{x ∈ circle(p)} [ |I(x) − I(p)| > ε_d ]

In the above formula, I(x) is the gray value of a pixel point on the circle, I(p) is the gray value of the candidate feature point p at the center of the circle, and ε_d is the gray difference threshold; if the number N of circle pixels whose gray difference exceeds ε_d is greater than a preset count threshold, the candidate feature point p is considered a feature point.
Step 6) removing outliers from the feature points to obtain target feature points;
the outlier refers to a feature point that is significantly deviated from other feature points among all the detected feature points. The method comprises the following steps of eliminating a few poor characteristic points, namely outliers, far away from a central point by counting the distance distribution from the characteristic points to the central point by adopting a distance-based central point elimination method so as to enhance the robustness of the algorithm; if no outlier exists, continuing to the next step; the specific method comprises the following steps:
(1) Calculating the central points for all the feature points;
(2) Calculating the distance from each characteristic point to the central point, and counting the distance distribution;
(3) And setting a threshold, comparing the distance from each characteristic point to the central point with the threshold, and removing the characteristic points exceeding the threshold, namely outliers to obtain the target characteristic points.
(4) And calculating the divergence distance in the class to prove the reliability of the target characteristic point.
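Steps (1) to (4) above can be sketched as follows; the patent does not specify how the threshold in step (3) is set, so a mean-plus-k-standard-deviations rule on the distance distribution is assumed here, and the test data are synthetic:

```python
import numpy as np

def remove_outliers(points, k=2.0):
    """Steps (1)-(3): drop points whose distance to the centroid exceeds
    a threshold, assumed here to be mean + k*std of the distance distribution."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)                  # (1) center point
    d = np.linalg.norm(pts - center, axis=1)   # (2) distance distribution
    thresh = d.mean() + k * d.std()            # (3) threshold on distances
    inliers = pts[d <= thresh]
    # (4) intra-class divergence: mean squared distance within the kept class
    scatter = np.mean(np.sum((inliers - inliers.mean(axis=0)) ** 2, axis=1))
    return inliers, scatter

rng = np.random.default_rng(1)
cluster = rng.normal(loc=[100.0, 50.0], scale=2.0, size=(50, 2))  # door features
stray = np.array([[300.0, 300.0], [-50.0, 200.0]])                # two outliers
pts = np.vstack([cluster, stray])
inliers, scatter = remove_outliers(pts)
```

Because the two stray points inflate the mean and standard deviation of the distances far less than their own distances to the centroid, they land beyond the threshold and only the clustered door features survive; the small intra-class scatter of the survivors is the step-(4) reliability check.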
Step 7) solving the space three-dimensional coordinates of the target feature points;
Binocular stereo vision imitates the human eyes to acquire three-dimensional information and consists of two cameras 1, as shown in fig. 3. The two cameras 1 form a triangular relationship in space with the measured object, and the spatial three-dimensional coordinates of the target feature points can be solved from the coordinates of their imaging points on the image planes of the two cameras 1.
Here O_L X_L Y_L Z_L is the coordinate system of the left camera 1, its image plane coordinate system is o_l x_l y_l, and its optical axis direction is Z_L; similarly, O_R X_R Y_R Z_R is the coordinate system of the right camera 1, its image plane coordinate system is o_r x_r y_r, and its optical axis direction is Z_R. P_L and P_R are the coordinates of the imaging points of the target feature point on the image planes of the left camera 1 and the right camera 1 respectively, and the intersection point P_W of the two rays in the figure is the target feature point expressed in the world coordinate system X_W Y_W Z_W.
With reference to fig. 3, the positional relationship between the two cameras 1 can be expressed as

X_L = R · X_R + T

where R is the 3×3 rotation matrix from the right camera 1 coordinate system to the left camera 1 coordinate system, and T = (t_1, t_2, t_3)^T is the translation vector from the right camera 1 coordinate system to the left camera 1 coordinate system.
According to the camera perspective transformation model, taking the left camera 1 coordinate system as the world reference, the target feature point (X_W, Y_W, Z_W) and its imaging points on the image planes of the two cameras 1 satisfy

z_l · (x_l, y_l, 1)^T = K_L · (X_W, Y_W, Z_W)^T
z_r · (x_r, y_r, 1)^T = K_R · R^T · ((X_W, Y_W, Z_W)^T − T)

where K_L and K_R are the internal parameter matrices of the two cameras 1 and z_l, z_r are depth scale factors. Eliminating z_l and z_r yields four linear equations in (X_W, Y_W, Z_W), from which the spatial three-dimensional coordinates of the target feature point can be found by least squares.
step 8) reconstructing a three-dimensional model of the docking process of the apron corridor bridge and the airplane cabin door according to the spatial three-dimensional coordinates of the target characteristic points;
step 9) applying the three-dimensional model to real-time evaluation of the docking process of the terminal processor 3 to the bridge and the airplane hatch so as to guide the docking process of the apron bridge and the airplane hatch;
the terminal processor 3 judges whether the apron gallery bridge operated in a certain direction collides with the cabin door of the airplane or not by using the three-dimensional model; if the parking apron bridge operated in the direction is judged to collide with the airplane cabin door, an anti-collision early warning program in the terminal processor 3 sends out warning information to remind workers of the parking apron bridge to plan the operation direction again; if judge that the air park gallery bridge of this direction operation can not bump with the aircraft hatch door, judge again whether can accurately realize the butt joint of air park gallery bridge and aircraft hatch door, if can not realize, the staff of air park gallery bridge can adjust according to terminal processor 3's instruction, until realizing the successful butt joint of air park gallery bridge and aircraft hatch door.
While the invention has been described in connection with specific embodiments thereof, it will be understood that these should not be construed as limiting the scope of the invention, which is defined in the following claims, and any variations which fall within the scope of the claims are intended to be embraced thereby.
Claims (9)
1. A method for measuring the docking error of an apron corridor bridge based on stereoscopic vision, characterized in that the method comprises the following steps, performed in sequence:
step 1) establishing a docking error measuring system of an apron bridge and an airplane cabin door;
the system comprises two cameras (1), a synchronous strobe control device (2) and a terminal processor (3); wherein: the two cameras (1) are image acquisition devices comprising an illumination light source and cameras and are respectively arranged at two sides of the airport bridge of the parking apron corridor, and the cameras face the door of the airplane; the synchronous strobe control device (2) is an image acquisition synchronous control device, is respectively connected with the two cameras (1) and is used for realizing the synchronous acquisition of the images of the two cameras (1); the terminal processor (3) is respectively connected with the two cameras (1), and is used for collecting and processing images of the docking process of the apron bridge and the airplane cabin door through the two cameras (1);
step 2) calibrating two cameras (1) in the system for measuring the butting error of the apron bridge and the airplane cabin door to obtain the internal and external parameters of the cameras (1);
step 3) acquiring images of the docking process between the apron corridor bridge and the airplane cabin door with the two cameras (1);
step 4) preprocessing the images of the docking process to obtain corrected images;
step 5) detecting feature points in the corrected images;
step 6) removing outliers from the feature points to obtain the target feature points;
step 7) solving the spatial three-dimensional coordinates of the target feature points;
step 8) reconstructing a three-dimensional model of the docking process between the apron corridor bridge and the airplane cabin door from the spatial three-dimensional coordinates of the target feature points;
and step 9) applying, on the terminal processor (3), the three-dimensional model to the real-time evaluation of the docking process between the apron corridor bridge and the airplane cabin door, so as to guide that docking process.
2. The stereoscopic-vision-based apron corridor bridge docking error measuring method of claim 1, wherein in step 1) the two cameras (1) are industrial digital cameras of the same model.
3. The stereoscopic-vision-based apron corridor bridge docking error measuring method of claim 1, wherein in step 2) the method for calibrating the two cameras (1) to obtain their internal and external parameters is as follows: the angle between the main optical axes of the two cameras (1) is set to 60 degrees; according to the binocular stereoscopic vision principle, the two cameras (1) are first system-calibrated on the basis of Zhang Zhengyou's calibration method to obtain initial internal and external parameters, and the cameras (1) are then further calibrated with a BP-neural-network intrinsic-parameter calibration method to obtain the final internal and external parameters of the cameras (1).
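The "internal and external parameters" obtained by the calibration in claim 3 are the quantities of the standard pinhole camera model. A minimal pure-Python sketch of how they map a space point to pixel coordinates follows; the numeric values of K, R and t are hypothetical placeholders, not parameters from the patent:

```python
def project(point, K, R, t):
    """Project a 3-D world point to pixel coordinates with the pinhole
    model: intrinsic matrix K (internal parameters) and pose R, t
    (external parameters)."""
    X, Y, Z = point
    # Camera-frame coordinates: Xc = R * X + t
    xc = [sum(R[i][j] * c for j, c in enumerate((X, Y, Z))) + t[i]
          for i in range(3)]
    # Perspective division, then intrinsic mapping to pixels
    u = K[0][0] * xc[0] / xc[2] + K[0][2]
    v = K[1][1] * xc[1] / xc[2] + K[1][2]
    return u, v

# Hypothetical parameters: 700 px focal length, principal point (320, 240),
# camera frame aligned with the world frame.
K = [[700, 0, 320], [0, 700, 240], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0, 0, 0]
u, v = project((0.1, 0.2, 2.0), K, R, t)  # a point 2 m in front of the camera
```

Calibration is the inverse problem: Zhang's method recovers K, R and t from images of a planar pattern, after which projections like the one above become available.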
4. The stereoscopic-vision-based apron corridor bridge docking error measuring method of claim 1, wherein in step 3) the method for acquiring images of the docking process between the apron corridor bridge and the airplane cabin door with the two cameras (1) is as follows:
under the control of the terminal processor (3), the two cameras (1) are driven by the synchronous strobe control device (2) to synchronously acquire, in real time, two-dimensional images at successive time points of the docking process between the apron corridor bridge and the airplane cabin door, and to transmit them to the terminal processor (3).
5. The stereoscopic-vision-based apron corridor bridge docking error measuring method of claim 1, wherein in step 4) the method for preprocessing the images of the docking process to obtain corrected images is as follows:
firstly, stereo rectification is applied to the images, removing the lens distortion described by the distortion parameters and enforcing the epipolar constraint; then irrelevant information is filtered out, the corrected image being obtained by applying techniques including image filtering and denoising, contrast enhancement and pseudo-color processing.
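The filtering/denoising and contrast steps named in claim 5 can be sketched in pure Python on a grayscale image stored as nested lists. The 3×3 mean filter and linear contrast stretch below are illustrative choices only; the claim does not fix the particular filters:

```python
def mean_filter3x3(img):
    """Denoise a grayscale image (list of rows) with a 3x3 mean filter;
    border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9
    return out

def stretch_contrast(img, lo=0, hi=255):
    """Linearly stretch gray values to the full [lo, hi] range."""
    flat = [v for row in img for v in row]
    mn, mx = min(flat), max(flat)
    if mx == mn:
        return [[lo for _ in row] for row in img]
    scale = (hi - lo) / (mx - mn)
    return [[int(lo + (v - mn) * scale) for v in row] for row in img]

img = [[10] * 5 for _ in range(5)]
img[2][2] = 100                    # a single bright noise pixel
den = mean_filter3x3(img)          # the spike is averaged down
out = stretch_contrast(den)        # remaining range mapped to 0..255
```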
6. The stereoscopic-vision-based apron corridor bridge docking error measuring method of claim 1, wherein in step 5) the method for detecting feature points in the corrected image is as follows:
on the terminal processor (3), the airplane cabin door is enclosed with a rectangle on the corrected image, and this rectangular region is defined as the target region; feature point detection is then performed on the image within the target region using the ORB algorithm to obtain the feature points; within ORB, the feature points are detected with the FAST algorithm, which operates on the gray values of the pixels surrounding a candidate feature point: a circle of pixels around the candidate is examined, and if, within this surrounding region, the gray difference between a sufficient number of pixels and the candidate is large enough, the candidate is taken to be a feature point:

N = Σ_{x ∈ circle(p)} ( |I(x) − I(p)| > ε_d )

in the above formula, I(x) is the gray value of any pixel point on the circumference, I(p) is the gray value of the candidate feature point p at the circle center, ε_d is the gray difference threshold, and N counts the pixels on the circle whose gray difference exceeds ε_d; if N is greater than a given threshold, the pixel point p is considered a feature point.
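The FAST criterion of claim 6 can be sketched as follows. This simplified version only counts the circle pixels whose gray difference exceeds ε_d; production FAST-9/12 detectors additionally require those pixels to form a contiguous arc:

```python
# Offsets of the 16 pixels on the radius-3 Bresenham circle used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2),
          (-1, -3)]

def is_feature_point(img, px, py, eps_d=20, n_required=12):
    """Count the circle pixels whose gray difference from the candidate
    center exceeds eps_d; accept the candidate if the count N is large
    enough."""
    center = img[py][px]
    n = sum(1 for dx, dy in CIRCLE
            if abs(img[py + dy][px + dx] - center) > eps_d)
    return n >= n_required

flat = [[100] * 7 for _ in range(7)]   # uniform patch: no corner response
spot = [[200] * 7 for _ in range(7)]
spot[3][3] = 50                        # dark candidate against a bright ring
```

Here `is_feature_point(spot, 3, 3)` fires because all 16 circle pixels differ from the center by 150 > ε_d, while the uniform patch yields no detection.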
7. The stereoscopic-vision-based apron corridor bridge docking error measuring method of claim 1, wherein in step 6) the method for removing outliers from the feature points to obtain the target feature points is as follows:
(1) Calculating the center point of all the feature points;
(2) Calculating the distance from each feature point to the center point and counting the distance distribution;
(3) Setting a threshold, comparing each feature point's distance to the center point with the threshold, and removing the feature points that exceed it, namely the outliers, to obtain the target feature points;
(4) Calculating the intra-class divergence distance to verify the reliability of the target feature points.
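Steps (1) to (3) above can be sketched in pure Python. The claim leaves the threshold unspecified, so the mean-plus-one-standard-deviation rule used here is an illustrative assumption:

```python
import math

def remove_outliers(points, k=1.0):
    """(1) centroid of all feature points; (2) distance of each point to
    the centroid; (3) discard points whose distance exceeds a threshold,
    here mean + k * std of the distance distribution (assumed rule)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    mean = sum(dists) / len(dists)
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    thresh = mean + k * std
    return [p for p, d in zip(points, dists) if d <= thresh]

pts = [(0, 0), (1, 0), (0, 1), (1, 1), (100, 100)]  # one obvious outlier
kept = remove_outliers(pts)                          # tight cluster survives
```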
8. The stereoscopic-vision-based apron corridor bridge docking error measuring method of claim 1, wherein in step 7) the method for solving the spatial three-dimensional coordinates of the target feature points is as follows:
the two cameras (1) form a triangular relation in space with the measured object, and the spatial three-dimensional coordinates of each target feature point are solved from the coordinates of its imaging points on the image planes of the two cameras (1).
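The triangulation of claim 8 can be sketched for the idealized rectified case, in which the two image planes are parallel after the stereo rectification of claim 5. The focal length f (in pixels) and baseline b below are hypothetical values; the patent's converging 60° geometry would use the full calibrated projection matrices instead:

```python
def triangulate(xl, xr, y, f, b):
    """Recover a 3-D point from its two imaging points: xl and xr are
    the x-coordinates of the same feature point on the left and right
    image planes (principal point at the origin), y its common row.
    Returns (X, Y, Z) in the left-camera frame."""
    disparity = xl - xr      # shift of the point between the two images
    Z = f * b / disparity    # depth from similar triangles
    X = xl * Z / f
    Y = y * Z / f
    return X, Y, Z

# Hypothetical setup: f = 700 px, baseline b = 0.12 m.
X, Y, Z = triangulate(70.0, 35.0, 0.0, 700.0, 0.12)
```

The larger the disparity, the closer the point: the same similar-triangles relation that the two cameras and the measured object form in space.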
9. The stereoscopic-vision-based apron corridor bridge docking error measuring method of claim 1, wherein in step 9) the method by which the terminal processor (3) applies the three-dimensional model to the real-time evaluation of the docking process, so as to guide the docking of the apron corridor bridge with the airplane cabin door, is as follows:
the terminal processor (3) uses the three-dimensional model to judge whether the apron corridor bridge, moving in a given direction, will collide with the airplane cabin door; if a collision is predicted for that direction of movement, an anti-collision early-warning program in the terminal processor (3) issues a warning to remind the bridge operator to re-plan the direction of movement; if no collision is predicted, it is further judged whether the apron corridor bridge can be docked accurately with the airplane cabin door; if it cannot, the bridge operator adjusts the bridge according to the instructions of the terminal processor (3) until the apron corridor bridge is successfully docked with the airplane cabin door.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910313291.2A CN110047111B (en) | 2019-04-18 | 2019-04-18 | Parking apron corridor bridge butt joint error measuring method based on stereoscopic vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110047111A CN110047111A (en) | 2019-07-23 |
CN110047111B true CN110047111B (en) | 2022-12-27 |
Family
ID=67277834
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910313291.2A Active CN110047111B (en) | 2019-04-18 | 2019-04-18 | Parking apron corridor bridge butt joint error measuring method based on stereoscopic vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110047111B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110641721B (en) * | 2019-10-16 | 2021-02-02 | 北京天睿空间科技股份有限公司 | Boarding bridge parking method |
CN110750101A (en) * | 2019-10-16 | 2020-02-04 | 北京天睿空间科技股份有限公司 | Boarding bridge parking position setting method oriented to automatic operation |
CN111598950A (en) * | 2020-04-23 | 2020-08-28 | 四川省客车制造有限责任公司 | Automatic passenger train hinging method and system based on machine vision |
CN112528729B (en) * | 2020-10-19 | 2024-09-27 | 浙江大华技术股份有限公司 | Video-based aircraft bridge event detection method and device |
CN114559131A (en) * | 2020-11-27 | 2022-05-31 | 北京颖捷科技有限公司 | Welding control method and device and upper computer |
CN114708422B (en) * | 2022-02-14 | 2024-06-28 | 清华大学 | Cabin door coordinate calculation method and device based on binocular images |
CN118092478A (en) * | 2024-04-28 | 2024-05-28 | 浙江省圣翔协同创新发展研究院 | Method and system for controlling return voyage based on mobile parking apron |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2373669A1 (en) * | 2002-02-27 | 2003-08-27 | Indal Technologies Inc. | Imaging system for a passenger bridge of the like for docking automatically with an aircraft |
CN103419944A (en) * | 2012-05-25 | 2013-12-04 | 深圳中集天达空港设备有限公司 | Air bridge and automatic abutting method therefor |
CN105487092A (en) * | 2014-12-30 | 2016-04-13 | 大连现代高技术集团有限公司 | Navigation system for airport lounge bridge docking airplane hatch |
CN105812733A (en) * | 2016-03-15 | 2016-07-27 | 中国民用航空总局第二研究所 | Civil aviation air traffic control scene monitoring and guiding system |
CN106448275A (en) * | 2014-12-30 | 2017-02-22 | 大连现代高技术集团有限公司 | Visualization-based airplane berth real-time guiding system |
Non-Patent Citations (1)
Title |
---|
Design and Application of a Monitoring System for Airport Boarding Bridge Equipment; Fei Chunguo et al.; Measurement & Control Technology; 2016-05-18 (Issue 05); pp. 75-78 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110047111B (en) | Parking apron corridor bridge butt joint error measuring method based on stereoscopic vision | |
WO2022061945A1 (en) | Power line safe distance measurement method | |
CN105335733B (en) | Unmanned aerial vehicle autonomous landing visual positioning method and system | |
AU2019222803A1 (en) | Volume measurement apparatus and method | |
CN109084724A (en) | A kind of deep learning barrier distance measuring method based on binocular vision | |
CN108876744B (en) | Large-scale point cloud noise denoising method based on region segmentation | |
CN102798350B (en) | Method, device and system for measuring deflection of arm support | |
CN104215239A (en) | Vision-based autonomous unmanned plane landing guidance device and method | |
CN110926474A (en) | Satellite/vision/laser combined urban canyon environment UAV positioning and navigation method | |
CN110132226B (en) | System and method for measuring distance and azimuth angle of unmanned aerial vehicle line patrol | |
CN108681337B (en) | Unmanned aerial vehicle special for inspection of culverts or bridges and unmanned aerial vehicle inspection method | |
KR101160896B1 (en) | Discriminating system of the aircraft type using laser scanner and conforming system of aircraft self position | |
CN108986070A (en) | A kind of rock fracture way of extensive experimentation monitoring method based on high-speed video measurement | |
CN110641721B (en) | Boarding bridge parking method | |
CN106996748A (en) | Wheel diameter measuring method based on binocular vision | |
Aldao et al. | Metrological comparison of LiDAR and photogrammetric systems for deformation monitoring of aerospace parts | |
CN111707668A (en) | Tunnel detection and image processing method based on sequence image | |
CN106352817A (en) | Non-contact four-wheel positioner and positioning method thereof | |
CN111179335A (en) | Standing tree measuring method based on binocular vision | |
CN111310740B (en) | Pedestrian luggage volume measuring device under motion condition | |
WO2020244591A1 (en) | Method for docking boarding bridge with aircraft, electronic equipment and storage medium | |
CN209230716U (en) | A kind of volume measurement device | |
TWI632347B (en) | Method for integrating three-dimensional image and laser scanning ranging | |
Trisiripisal et al. | Stereo analysis for vision-based guidance and control of aircraft landing | |
CN205607332U (en) | Measuring device is striden to bridge crane crane span structure based on machine vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||