CN108225273B - Real-time runway detection method based on sensor priori knowledge - Google Patents
- Publication number
- CN108225273B (application CN201611153162.4A)
- Authority
- CN
- China
- Prior art keywords
- runway
- image
- point
- template
- search area
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/182—Network patterns, e.g. roads or rivers
Abstract
The invention belongs to the technical field of embedded computer image processing, and particularly relates to a real-time runway detection method based on sensor priori knowledge. Existing runway detection algorithms are mainly designed for satellite images and down-looking high-altitude aerial images, which do not match the forward-looking aerial images required by an airborne synthetic vision system; they generally detect on static images, so the computation time is long and the real-time requirement cannot be met. The invention therefore provides a real-time runway detection algorithm based on sensor priori knowledge: the flight position, attitude and other information provided by the airborne sensors are used in the calculation, so that the search area for runway detection is greatly reduced and the runway position in the image is detected quickly and accurately.
Description
Technical Field
The invention belongs to the technical field of embedded computer image processing, and particularly relates to a real-time runway detection method based on sensor priori knowledge.
Background
With the development of airborne avionics technology, many flight assistance systems have emerged. The enhanced vision system improves the pilot's perception of the external environment and enhances flight safety, and runway detection techniques play an important role in such systems. However, conventional runway detection methods are mainly designed for satellite images and down-looking high-altitude aerial images, which do not match the forward-looking aerial images required by an airborne synthetic vision system; they generally detect on static images, so the computation time is long and the real-time requirement cannot be met. A real-time and efficient runway detection method is therefore needed that can accurately and quickly detect the runway in a live image.
The invention provides a real-time runway detection method based on sensor priori knowledge to solve the problems.
Disclosure of Invention
The purpose of the invention is:
to solve the runway detection problem in airborne real-time imagery.
The technical solution of the invention is as follows:
Most prior runway detection methods treat the runway as one target among many and adopt a general target recognition method. Their main defects are: 1) high computational complexity, since searching, matching and other operations are carried out over the whole image; 2) for template-based methods, inefficient or inaccurate template acquisition, since templates are obtained either by manual labeling, which is inefficient, or from a template library, which is not accurate enough.
In an airborne system, however, a large amount of prior knowledge is available, such as camera attitude, camera position and imaging parameters. This information can be used to generate the template shape of the runway and to locate the template position in the image, thereby narrowing the search range and reducing computation.
In the enhanced vision system, different kinds of sensors provide flight information including attitude and position. The aircraft attitude sensors provide the pitch, roll and yaw angles of the aircraft, and the GPS provides the coordinates of the aircraft in the three-dimensional world. In addition, the parameters of the onboard camera and the coordinates of the corner points of the airport runway are fixed and known at the time the images are taken.
In summary, the algorithm provided by the patent can calculate the template shape and position of the airport runway in an ideal state in the first step, so as to determine a search range and reduce the calculation amount of runway detection. The specific calculation process is as follows:
a real-time runway detection method based on sensor priori knowledge comprises the following steps:
The first step: a runway corner point $P_W=(x_W,y_W,z_W)^T$ in the world coordinate system needs to be converted into a point $P_C=(x_C,y_C,z_C)^T$ in the camera coordinate system; the conversion consists of a rotation $R_W$ plus a translation $T_W$, which together give the corner point position in the camera coordinate system.
The translation $T_W$ is obtained directly from the GPS data:
$$T_W=(x_W,y_W,z_W)^T$$
The rotation matrix $R_W$ is slightly more involved to compute: the three attitude angles of the aircraft (roll $\phi$, pitch $\theta$ and yaw $\psi$) describe three successive rotations, so $R_W$ is obtained as the product of the three corresponding single-axis rotation matrices.
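As an illustration of the first step, the sketch below (Python with NumPy) builds the rotation from the three attitude angles and applies it together with the GPS translation. It assumes the common convention $P_C = R_W\,(P_W - T_W)$ and an $R_z(\psi)\,R_y(\theta)\,R_x(\phi)$ multiplication order; neither the exact form nor the order is fixed by the text above, and the function name is illustrative only.

```python
import numpy as np

def world_to_camera(P_w, T_w, roll, pitch, yaw):
    """Convert a runway corner from world to camera coordinates.

    P_w  : (3,) corner point in the world coordinate system
    T_w  : (3,) translation, taken directly from the GPS data
    roll, pitch, yaw : aircraft attitude angles in radians

    Assumption: P_c = R_w @ (P_w - T_w), with R_w = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    R_w = Rz @ Ry @ Rx
    return R_w @ (np.asarray(P_w, float) - np.asarray(T_w, float))
```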
The second step: according to the imaging principle of the camera and the internal and external parameters of the shooting camera, calculate the coordinate position of the runway corner point on the final image; the imaging calculation follows the equivalent geometric relationship below.
A point P on the target plane is projected to the point P' on the imaging plane, and the projection rays finally converge at the focal point. Starting from the point $P_C=(x_C,y_C,z_C)^T$ calculated in the first step, the similar-triangle relationship gives the image coordinates, where $x_{img}$ and $y_{img}$ are the coordinates of the target point on the imaging plane, $P_s$ is the pixel side length of the imaging plane, and $H_0$ and $W_0$ are the numbers of pixels in the height and width directions of the imaging plane; $x_{img}$ and $y_{img}$ are solved accordingly.
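A minimal sketch of the second step's projection by similar triangles. The offset of the principal point to the image centre ($W_0/2$, $H_0/2$) and the focal length argument f are assumptions; the text above only names $P_s$, $H_0$ and $W_0$.

```python
def project_to_pixel(P_c, f, Ps, W0, H0):
    """Project a camera-frame point onto the imaging plane (pinhole model).

    P_c    : (x_c, y_c, z_c) in camera coordinates, z along the optical axis
    f      : focal length, in the same metric unit as P_c (assumed parameter)
    Ps     : pixel side length of the imaging plane
    W0, H0 : number of pixels in the width and height directions

    Similar triangles give the metric image coordinates f*x_c/z_c and
    f*y_c/z_c; dividing by Ps converts to pixels, and the principal point
    is assumed to sit at the image centre.
    """
    x_c, y_c, z_c = P_c
    x_img = (f * x_c) / (z_c * Ps) + W0 / 2.0
    y_img = (f * y_c) / (z_c * Ps) + H0 / 2.0
    return x_img, y_img
```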
The third step: determine the shape and position of the runway template from the coordinate positions of the runway corner points on the image, and define a search area from the template shape and position; the specific calculation process is as follows:
Calculate the image coordinates of the four corner points of the airport runway and connect the four corresponding coordinates in sequence to obtain the airport runway template in the ideal state. In practical application, the data obtained from the aircraft attitude sensor and the GPS contain certain errors, so a search area is defined around the calculated position of the airport runway template; two parameters HE and WE control the size of the search area, and HE and WE are taken as 1/2 of the height and the width of the runway template, respectively.
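The template and search-area construction of the third step can be sketched as follows; only the margins HE and WE (half the template height and width) come from the text above, while the axis-aligned bounding box and the function name are illustrative.

```python
import numpy as np

def template_and_search_area(corners_px):
    """Build the ideal runway template and its enlarged search area.

    corners_px : (4, 2) array of the projected pixel coordinates of the four
                 runway corner points, in connection order.

    Returns the template polygon and a search box enlarged by
    WE = width/2 and HE = height/2 on each side.
    """
    template = np.asarray(corners_px, dtype=float)
    x_min, y_min = template.min(axis=0)
    x_max, y_max = template.max(axis=0)
    WE = 0.5 * (x_max - x_min)   # half of the template width
    HE = 0.5 * (y_max - y_min)   # half of the template height
    search_box = (x_min - WE, y_min - HE, x_max + WE, y_max + HE)
    return template, search_box
```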
The fourth step: within the search area, perform straight-line extraction on the image to obtain a line image with more prominent features. Specifically, the LSD algorithm is adopted for line extraction, with the following steps: 1) compute the image gradient; 2) pseudo-order the gradients; 3) grow and merge gradient regions; 4) fit straight lines.
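For the fourth step, an existing LSD implementation can stand in for the four sub-steps listed above; the sketch below uses OpenCV's LineSegmentDetector, whose availability depends on the OpenCV build, and crops the image to the search area first.

```python
import cv2

def extract_lines(gray_image, search_box):
    """Run LSD line extraction inside the search area only.

    gray_image : single-channel 8-bit image
    search_box : (x_min, y_min, x_max, y_max) from the third step
    """
    x0, y0, x1, y1 = (int(round(v)) for v in search_box)
    roi = gray_image[max(y0, 0):y1, max(x0, 0):x1]
    lsd = cv2.createLineSegmentDetector()      # gradient, pseudo-ordering,
    lines, _, _, _ = lsd.detect(roi)           # region growing, line fitting
    return lines    # (N, 1, 4) array of segments: x1, y1, x2, y2 (ROI coords)
```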
The fifth step: match the runway within the search area using the generated line image and the runway template, and finally compute the runway position. Specifically, the directional Chamfer matching method is adopted to match the runway in the search area, with the calculation formula
$$d_{DCM}(U,V)=\frac{1}{n}\sum_{u_i\in U}\min_{v_j\in V}\left(\left\|u_i-v_j\right\|+\lambda\,\left\|\phi(u_i)-\phi(v_j)\right\|\right)$$
where $U=\{u_i\}$ is the edge line image of the template, $V=\{v_j\}$ is the edge line image of the image to be detected, $n$ is the number of points in U, the directional Chamfer distance $d_{DCM}(U,V)$ between U and V is defined as the average distance from each point in U to the nearest edge in V, $\phi(x)$ denotes the edge direction at the edge point x, and $\lambda$ is the weight parameter between the position term and the direction term; the position where $d_{DCM}(U,V)$ is minimal is the final search result.
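The directional Chamfer distance of the fifth step follows directly from its definition; the brute-force sketch below is illustrative, and a practical real-time implementation would instead precompute a directional distance transform over the search area before sliding the template.

```python
import numpy as np

def directional_chamfer_distance(U, V, lam=1.0):
    """Directional Chamfer distance between two edge-point sets.

    U, V : arrays of shape (n, 3) and (m, 3); each row is (x, y, phi),
           phi being the edge direction at that edge point
    lam  : weight between the position term and the direction term

    Implements d_DCM(U, V) = 1/n * sum_i min_j(||u_i - v_j|| + lam*|phi_i - phi_j|).
    """
    U = np.asarray(U, dtype=float)
    V = np.asarray(V, dtype=float)
    pos = np.linalg.norm(U[:, None, :2] - V[None, :, :2], axis=2)   # (n, m) distances
    ang = np.abs(U[:, None, 2] - V[None, :, 2])                     # (n, m) direction gaps
    return float(np.mean(np.min(pos + lam * ang, axis=1)))
```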
The invention has the advantages that:
The algorithm targets the forward-looking aerial imagery of the aircraft during the approach and landing phase, which makes it convenient to apply in an airborne enhanced vision system. Using the prior knowledge provided by the airborne sensors, the algorithm predicts the runway template shape and the runway position in the airborne real-time image, thereby reducing the search area; the runway is matched only within this search area, which greatly improves detection accuracy, reduces the amount of computation, and achieves accurate real-time runway detection.
Drawings
Fig. 1 is a schematic block diagram of the present invention, and the algorithm is divided into five steps.
Fig. 2 is a simplified schematic diagram of the imaging principle of the camera in the present invention.
Fig. 3 is a schematic diagram illustrating the definition of a search area according to a runway template in the present invention.
Detailed Description
In an airborne enhanced synthetic vision system, the runway information in the forward-looking visible-light video needs to be detected and identified in real time. Taking this system as an example, the method described in this patent is applied as follows. The airborne enhanced synthetic vision system has one video input and one video output; for each input frame, the runway detection steps are as follows:
The first step: a runway corner point $P_W=(x_W,y_W,z_W)^T$ in the world coordinate system needs to be converted into a point $P_C=(x_C,y_C,z_C)^T$ in the camera coordinate system; the conversion consists of a rotation $R_W$ plus a translation $T_W$, which together give the corner point position in the camera coordinate system.
The translation $T_W$ is obtained directly from the GPS data:
$$T_W=(x_W,y_W,z_W)^T$$
The rotation matrix $R_W$ is slightly more involved to compute: the three attitude angles of the aircraft (roll $\phi$, pitch $\theta$ and yaw $\psi$) describe three successive rotations, so $R_W$ is obtained as the product of the three corresponding single-axis rotation matrices.
The second step: according to the imaging principle of the camera and the internal and external parameters of the shooting camera, calculate the coordinate position of the runway corner point on the final image; the imaging calculation follows the equivalent geometric relationship below.
A point P on the target plane is projected to the point P' on the imaging plane, and the projection rays finally converge at the focal point. Starting from the point $P_C=(x_C,y_C,z_C)^T$ calculated in the first step, the similar-triangle relationship gives the image coordinates, where $x_{img}$ and $y_{img}$ are the coordinates of the target point on the imaging plane, $P_s$ is the pixel side length of the imaging plane, and $H_0$ and $W_0$ are the numbers of pixels in the height and width directions of the imaging plane; $x_{img}$ and $y_{img}$ are solved accordingly.
The third step: determine the shape and position of the runway template from the coordinate positions of the runway corner points on the image, and define a search area from the template shape and position; the specific calculation process is as follows:
Calculate the image coordinates of the four corner points of the airport runway and connect the four corresponding coordinates in sequence to obtain the airport runway template in the ideal state. In practical application, the data obtained from the aircraft attitude sensor and the GPS contain certain errors, so a search area is defined around the calculated position of the airport runway template; two parameters HE and WE control the size of the search area, and HE and WE are taken as 1/2 of the height and the width of the runway template, respectively.
The fourth step: within the search area, perform straight-line extraction on the image to obtain a line image with more prominent features. Specifically, the LSD algorithm is adopted for line extraction, with the following steps: 1) compute the image gradient; 2) pseudo-order the gradients; 3) grow and merge gradient regions; 4) fit straight lines.
The fifth step: match the runway within the search area using the generated line image and the runway template, and finally compute the runway position. Specifically, the directional Chamfer matching method is adopted to match the runway in the search area, with the calculation formula
$$d_{DCM}(U,V)=\frac{1}{n}\sum_{u_i\in U}\min_{v_j\in V}\left(\left\|u_i-v_j\right\|+\lambda\,\left\|\phi(u_i)-\phi(v_j)\right\|\right)$$
where $U=\{u_i\}$ is the edge line image of the template, $V=\{v_j\}$ is the edge line image of the image to be detected, $n$ is the number of points in U, the directional Chamfer distance $d_{DCM}(U,V)$ between U and V is defined as the average distance from each point in U to the nearest edge in V, $\phi(x)$ denotes the edge direction at the edge point x, and $\lambda$ is the weight parameter between the position term and the direction term; the position where $d_{DCM}(U,V)$ is minimal is the final search result. The corresponding position is then marked on the input image, and the annotated image is output as the output image.
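To show how the five steps chain together for each frame of the input video, a hedged end-to-end sketch follows. It reuses the illustrative helper functions sketched in the description above (world_to_camera, project_to_pixel, template_and_search_area, extract_lines, directional_chamfer_distance); the template-sliding search of the fifth step is only indicated, not expanded.

```python
def detect_runway_in_frame(frame_gray, corners_world, gps_T_w, attitude, cam):
    """Per-frame runway detection pipeline (illustrative only).

    frame_gray    : grayscale input frame
    corners_world : four (3,) world coordinates of the runway corner points
    gps_T_w       : translation from the GPS data
    attitude      : (roll, pitch, yaw) from the attitude sensors, radians
    cam           : (f, Ps, W0, H0) camera parameters
    """
    f, Ps, W0, H0 = cam
    # Steps 1-2: project the known corner points into the image.
    corners_px = [project_to_pixel(world_to_camera(P, gps_T_w, *attitude),
                                   f, Ps, W0, H0) for P in corners_world]
    # Step 3: ideal template and enlarged search area.
    template, search_box = template_and_search_area(corners_px)
    # Step 4: line extraction restricted to the search area.
    lines = extract_lines(frame_gray, search_box)
    # Step 5: slide the template over the search area and keep the offset with
    # the smallest directional_chamfer_distance (search loop not expanded here).
    return template, search_box, lines
```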
Claims (1)
1. A real-time runway detection method based on sensor priori knowledge comprises the following steps:
The first step: a runway corner point $P_W=(x_W,y_W,z_W)^T$ in the world coordinate system needs to be converted into a point $P_C=(x_C,y_C,z_C)^T$ in the camera coordinate system; the conversion consists of a rotation $R_W$ plus a translation $T_W$, which together give the corner point position in the camera coordinate system.
The translation $T_W$ is obtained directly from the GPS data:
$$T_W=(x_W,y_W,z_W)^T$$
The rotation matrix $R_W$ is slightly more involved to compute: the three attitude angles of the aircraft (roll $\phi$, pitch $\theta$ and yaw $\psi$) describe three successive rotations, so $R_W$ is obtained as the product of the three corresponding single-axis rotation matrices.
The second step: according to the imaging principle of the camera and the internal and external parameters of the shooting camera, calculate the coordinate position of the runway corner point on the final image; the imaging calculation follows the equivalent geometric relationship below.
A point P on the target plane is projected to the point P' on the imaging plane, and the projection rays finally converge at the focal point. Starting from the point $P_C=(x_C,y_C,z_C)^T$ calculated in the first step, the similar-triangle relationship gives the image coordinates, where $x_{img}$ and $y_{img}$ are the coordinates of the target point on the imaging plane, $P_s$ is the pixel side length of the imaging plane, and $H_0$ and $W_0$ are the numbers of pixels in the height and width directions of the imaging plane; $x_{img}$ and $y_{img}$ are solved accordingly.
The third step: determine the shape and position of the runway template from the coordinate positions of the runway corner points on the image, and define a search area from the template shape and position; the specific calculation process is as follows:
Calculate the image coordinates of the four corner points of the airport runway and connect the four corresponding coordinates in sequence to obtain the airport runway template in the ideal state. In practical application, the data obtained from the aircraft attitude sensor and the GPS contain certain errors, so a search area is defined around the calculated position of the airport runway template; two parameters HE and WE control the size of the search area, and HE and WE are taken as 1/2 of the height and the width of the runway template, respectively.
The fourth step: within the search area, perform straight-line extraction on the image to obtain a line image with more prominent features. Specifically, the LSD algorithm is adopted for line extraction, with the following steps: 1) compute the image gradient; 2) pseudo-order the gradients; 3) grow and merge gradient regions; 4) fit straight lines.
The fifth step: match the runway within the search area using the generated line image and the runway template, and finally compute the runway position. Specifically, the directional Chamfer matching method is adopted to match the runway in the search area, with the calculation formula
$$d_{DCM}(U,V)=\frac{1}{n}\sum_{u_i\in U}\min_{v_j\in V}\left(\left\|u_i-v_j\right\|+\lambda\,\left\|\phi(u_i)-\phi(v_j)\right\|\right)$$
where $U=\{u_i\}$ is the edge line image of the template, $V=\{v_j\}$ is the edge line image of the image to be detected, $n$ is the number of points in U, the directional Chamfer distance $d_{DCM}(U,V)$ between U and V is defined as the average distance from each point in U to the nearest edge in V, $\phi(x)$ denotes the edge direction at the edge point x, and $\lambda$ is the weight parameter between the position term and the direction term; the position where $d_{DCM}(U,V)$ is minimal is the final search result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611153162.4A | 2016-12-14 | 2016-12-14 | Real-time runway detection method based on sensor priori knowledge |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108225273A CN108225273A (en) | 2018-06-29 |
CN108225273B (en) | 2020-06-30 |
Family
ID=62639019
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611153162.4A | Real-time runway detection method based on sensor priori knowledge | 2016-12-14 | 2016-12-14 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108225273B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109238265B (en) * | 2018-07-20 | 2020-08-11 | 民航中南空管设备工程公司 | Airport runway position measuring method |
CN109341685B (en) * | 2018-12-04 | 2023-06-30 | 中国航空工业集团公司西安航空计算技术研究所 | Fixed wing aircraft vision auxiliary landing navigation method based on homography transformation |
CN109341700B (en) * | 2018-12-04 | 2023-06-30 | 中国航空工业集团公司西安航空计算技术研究所 | Visual auxiliary landing navigation method for fixed-wing aircraft under low visibility |
CN110274622B (en) * | 2019-06-13 | 2021-08-27 | 北京环奥体育发展有限公司 | Method for calculating difficulty degree of track during marathon competition by field measurement and calculation |
CN113295164B (en) * | 2021-04-23 | 2022-11-04 | 四川腾盾科技有限公司 | Unmanned aerial vehicle visual positioning method and device based on airport runway |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101777181A (en) * | 2010-01-15 | 2010-07-14 | 西安电子科技大学 | Ridgelet bi-frame system-based SAR image airfield runway extraction method |
WO2013051967A2 (en) * | 2011-08-31 | 2013-04-11 | Kirillov Andrey Porfir Evich | Method for visual landing and kirillov device for visualizing takeoff or landing of an aircraft |
CN103162669A (en) * | 2013-03-01 | 2013-06-19 | 西北工业大学 | Detection method of airport area through aerial shooting image |
CN103577697A (en) * | 2013-11-12 | 2014-02-12 | 中国民用航空总局第二研究所 | FOD detection method based on road surface point cloud data |
CN105302146A (en) * | 2014-07-25 | 2016-02-03 | 空中客车运营简化股份公司 | Method and system for automatic autonomous landing of an aircraft |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5858741B2 (en) * | 2011-11-15 | 2016-02-10 | キヤノン株式会社 | Automatic tracking camera system |
Non-Patent Citations (3)

Title |
---|
J. Schonefeld et al., "Runway incursion prevention systems: A review of runway incursion avoidance and alerting system approaches," Progress in Aerospace Sciences, no. 51, 2012-04-07. |
Eric Frew et al., "Vision-Based Road-Following Using a Small Autonomous Aircraft," 2004 IEEE Aerospace Conference Proceedings, vol. 5, 2004. |
邸男 et al., "提取直线特征实现机场跑道实时检测" (Real-time airport runway detection by extracting straight-line features), 光学精密工程 (Optics and Precision Engineering), vol. 17, no. 9, 2009-09. |
Also Published As
Publication number | Publication date |
---|---|
CN108225273A (en) | 2018-06-29 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||