CN112348105B - Unmanned aerial vehicle image matching optimization method - Google Patents
Unmanned aerial vehicle image matching optimization method
- Publication number
- CN112348105B (application number CN202011289613.3A)
- Authority
- CN
- China
- Prior art keywords
- matrix
- image
- unmanned aerial
- points
- aerial vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The application discloses an unmanned aerial vehicle image matching optimization method comprising the following steps: detecting feature points of an unmanned aerial vehicle aerial image A and an unmanned aerial vehicle aerial image B to be matched with the SURF algorithm, setting the Hessian matrix threshold of the SURF algorithm, and obtaining N pairs of matching points; traversing the matching points in image A and image B, screening out M pairs of matching points uniformly distributed over the two images, and constructing an H matrix; obtaining an H1 matrix with the RANSAC algorithm and an H2 matrix with the LMEDS algorithm; evaluating the H1 matrix and the H2 matrix to obtain an optimal matrix; and checking the optimal matrix to obtain the most reasonable H matrix. Through this scheme, the method has the advantages of simple logic, low computational workload and accurate matching, and has high practical and popularization value in the technical field of image processing.
Description
Technical Field
The application relates to the technical field of image processing, in particular to an unmanned aerial vehicle image matching optimization method.
Background
With the rapid development and popularization of multi-axis unmanned aerial vehicles, aerial photography technology is being applied in more and more fields, and the matching and recognition of unmanned aerial vehicle aerial images has become a research hotspot. Image matching and recognition combine a variety of technologies and are widely applied in fields such as artificial intelligence, security, unmanned aerial vehicle aerial photography, driver assistance, remote sensing and computer vision.
At present, unmanned aerial vehicle images are mostly matched using feature points. For example, Chinese patent application No. 201810735162.8, "A rapid unmanned aerial vehicle image matching method based on fused local features", proceeds as follows: the reference image and the image to be matched are each partitioned into a 3×3 grid, dividing each image into 9 sub-regions, and invariant features are extracted within the sub-regions; feature vectors are extracted from the invariant feature regions using feature descriptors; initial homonymous (same-name) features are judged by comparing the similarity between feature vectors to obtain stable initial matches; the number of matching points in each grid cell is counted, and regional MSER feature matching is carried out for cells whose number of matching points is below a threshold; mismatched point pairs are deleted using the affine invariance of the Mahalanobis distance. This technique matches images directly through the feature points. In addition, in Wang Zhenhua's "A matching method applied to aerial images of unmanned aerial vehicles", image data and GPS information transmitted by the unmanned aerial vehicle over 4G are first received; next, the corresponding geographic-information feature point data are read from a database according to the GPS information; finally, the feature points are used to complete image matching. To accelerate matching, SURF features are adopted in place of SIFT features.
The above techniques perform direct matching with feature points and delete the mismatched points. Because the matched feature points are random, uniformly distributed feature points cannot be obtained, and the matched images suffer from repetition and missed matches. If the H matrix is computed only from local feature points, it is very easy to obtain an abnormal solution, resulting in excessive distortion of the image. Unmanned aerial vehicles are required to achieve 70%-80% overlap when collecting images, so feature points are distributed throughout the images.
Therefore, there is an urgent need for an unmanned aerial vehicle image matching optimization method with simple logic, a small computational workload and accurate matching.
Disclosure of Invention
In view of the above problems, the purpose of the application is to provide an unmanned aerial vehicle image matching optimization method, which adopts the following technical scheme:
an unmanned aerial vehicle image matching optimization method comprises the following steps:
detecting feature points of an unmanned aerial vehicle aerial image A and an unmanned aerial vehicle aerial image B to be matched with the SURF algorithm, setting the Hessian matrix threshold of the SURF algorithm, and obtaining N pairs of matching points, where N is a natural number greater than or equal to 4;
traversing the matching points in image A and image B, screening out M pairs of matching points uniformly distributed over the two images, and constructing an H matrix, where M is less than or equal to N;
the expression of the H matrix is as follows:
h under homogeneous coordinates 22 =1, and H is calculated by RANSAC algorithm and LMEDS algorithm 1 Matrix sum H 2 A matrix;
for H 1 Matrix sum H 2 Evaluating the matrix to obtain an optimal matrix;
and checking the optimal matrix to obtain the most reasonable H matrix.
Further, evaluating the H1 matrix and the H2 matrix to obtain the optimal matrix comprises the following steps:
if either of the H1 and H2 matrices satisfies |h20 + h21| > 0.0005, the corresponding matrix is eliminated;
for the H1 and H2 matrices that are not eliminated, the matrix with the smallest value of ||h00| + |h01| + |h10| + |h11| - 2| is selected as the optimal matrix.
Further, checking the optimal matrix to obtain the most reasonable H matrix comprises the following steps:
according to the optimal matrix, the conversion relation between coordinates before and after transformation of either image is obtained as:
P'i = H * Pi
where the four corner points of the original image have homogeneous coordinates P1(0, 0, 1), P2(w, 0, 1), P3(w, h, 1), P4(0, h, 1); w denotes the original width of the image and h denotes the original height; the four transformed points are P'i (i = 1, ..., 4);
the lengths of the four edges after transformation are obtained as:
d1 = ||P'1 - P'2||, d2 = ||P'2 - P'3||, d3 = ||P'3 - P'4||, d4 = ||P'4 - P'1||
where || · || denotes the Euclidean distance between two points;
if the length of any of the four edges changes after transformation by a factor of more than 2, the corresponding H matrix is eliminated, and the most reasonable H matrix is obtained.
Compared with the prior art, the application has the following beneficial effects:
(1) The application obtains N pairs of matching points by setting the Hessian matrix threshold parameter, and traverses the matched feature point pairs to select uniformly distributed feature matching points, ensuring reliable matching;
(2) The application skillfully adopts multiple algorithms to calculate the H matrix, and evaluates and checks the results. The algorithms used to calculate the H matrix are all iterative fitting procedures whose result is not necessarily the optimal solution, so computing the H matrix in the spirit of ensemble learning selects the best among the best and improves the accuracy of the final matching. In addition, if an H matrix solved by a single algorithm is used directly for projective transformation, large deformation may be produced, which clearly does not fit the scenes captured by the unmanned aerial vehicle and would introduce large errors into subsequent image stitching; by adopting different evaluation criteria for different application scenes, a better H matrix can be conveniently screened out.
In conclusion, the method has the advantages of simple logic, low computational workload and accurate matching, and has high practical and popularization value in the technical field of image processing.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered as limiting its scope of protection; other related drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of feature point screening according to the present application (1).
Fig. 2 is a schematic diagram of feature point screening according to the present application (2).
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described with reference to the accompanying drawings and examples, which include, but are not limited to, the following examples. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Examples
As shown in fig. 1 to 2, the present embodiment provides an unmanned aerial vehicle image matching optimization method, which includes the following steps:
firstly, matching by adopting a pyramid SURF method:
in this embodiment, graphs a and B are matched, and first, more stringent SURF feature point detection is used, where the Hessian matrix threshold parameter is set to 1200. If there are fewer feature points, then looser SURF feature point detection is used and the Hessian matrix threshold parameter is gradually reduced, e.g., set to 800, 400. In this embodiment, there are a total of N pairs of matching points. Pts1 represents the feature point set of the map A, pts2 represents the feature point set of the map B. If N is greater than or equal to 4, continuing. N < 4 is re-matched by lowering the Heisson threshold.
Step two, screening M pairs of matching points so that the feature points are uniformly distributed over the image: the image is divided into 16 × 16 regions; these regions are traversed repeatedly, taking one matching point from each region and skipping regions that contain no points, until M points have been taken. In this embodiment M = 50 is chosen; if N < M, all N pairs are used.
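A minimal sketch of this grid screening, assuming the matches are given as parallel coordinate arrays (the function name `screen_uniform_matches` and the round-robin cell traversal are this sketch's own construction; the embodiment only fixes the 16 × 16 grid and M = 50):

```python
import numpy as np

def screen_uniform_matches(pts1, pts2, m=50, grid=16):
    """Select up to m matching pairs spread uniformly over image A.

    The plane of image A is divided into grid x grid cells; the cells
    are traversed repeatedly, taking one not-yet-used match from each
    non-empty cell, until m pairs are collected (or all N are used).
    """
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    w = pts1[:, 0].max() + 1e-9
    h = pts1[:, 1].max() + 1e-9
    # Bucket each match (by its image-A location) into a grid cell.
    cells = {}
    for i, (x, y) in enumerate(pts1):
        key = (min(int(grid * x / w), grid - 1), min(int(grid * y / h), grid - 1))
        cells.setdefault(key, []).append(i)
    target = min(m, len(pts1))      # if N < M, take all N pairs
    chosen = []
    while len(chosen) < target:
        progressed = False
        for key in sorted(cells):
            if cells[key]:          # skip empty cells
                chosen.append(cells[key].pop(0))
                progressed = True
                if len(chosen) == target:
                    break
        if not progressed:
            break
    return pts1[chosen], pts2[chosen]
```

Because at most one point is taken from a cell per pass, clustered matches cannot crowd out points from sparser regions of the image.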
Third, calculating the H matrix with multiple algorithms, wherein the expression of the H matrix is:

H = | h00 h01 h02 |
    | h10 h11 h12 |
    | h20 h21 h22 |

with h22 = 1 under homogeneous coordinates; an H1 matrix and an H2 matrix are computed with the RANSAC algorithm and the LMEDS algorithm respectively.
Fourth, evaluating the H matrices and screening the optimal H:
(1) Shear deformation: if a candidate H matrix satisfies |h20 + h21| > 0.0005, it is rejected;
(2) Torsional deformation: among the H1 and H2 matrices that are not rejected, the matrix with the smallest value of ||h00| + |h01| + |h10| + |h11| - 2| is selected as the optimal matrix.
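The two screening criteria can be sketched as a small selection routine (in OpenCV the candidates would typically come from `cv2.findHomography(src, dst, cv2.RANSAC)` and `cv2.findHomography(src, dst, cv2.LMEDS)` — an assumption about tooling, since the patent names only the algorithms):

```python
import numpy as np

def select_best_homography(candidates, shear_limit=0.0005):
    """Screen candidate H matrices (e.g. H1 from RANSAC, H2 from LMEDS).

    Shear test: a candidate with |h20 + h21| > shear_limit is rejected.
    Torsion test: among the survivors, the matrix minimising
    ||h00| + |h01| + |h10| + |h11| - 2| is chosen; the identity (pure
    translation) gives |1| + |0| + |0| + |1| = 2, so this favours
    transforms close to a plain shift, as expected for overlapping
    UAV frames of the same scene.
    """
    arrays = [np.asarray(H, dtype=float) for H in candidates]
    survivors = [H for H in arrays if abs(H[2, 0] + H[2, 1]) <= shear_limit]
    if not survivors:
        return None  # both H1 and H2 failed the shear test

    def torsion(H):
        return abs(abs(H[0, 0]) + abs(H[0, 1]) + abs(H[1, 0]) + abs(H[1, 1]) - 2.0)

    return min(survivors, key=torsion)
```

Returning `None` when both candidates fail is this sketch's own convention; the patent simply proceeds to the check of step five with whichever matrix survives.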
Fifth, checking the H matrix:
(1) According to the optimal matrix, the conversion relation between coordinates before and after transformation of either image is obtained as:
P'i = H * Pi
where the four corner points of the original image have homogeneous coordinates P1(0, 0, 1), P2(w, 0, 1), P3(w, h, 1), P4(0, h, 1); w denotes the original width of the image and h denotes the original height; the four transformed points are P'i (i = 1, ..., 4).
(2) The lengths of the four edges after transformation are obtained as:
d1 = ||P'1 - P'2||, d2 = ||P'2 - P'3||, d3 = ||P'3 - P'4||, d4 = ||P'4 - P'1||
where || · || denotes the Euclidean distance between two points.
(3) If the length of any of the four edges changes after transformation by a factor of more than 2, the H matrix is rejected, that is:
if d1/w > 2 or d1/w < 0.5, reject;
if d2/h > 2 or d2/h < 0.5, reject;
if d3/w > 2 or d3/w < 0.5, reject;
if d4/h > 2 or d4/h < 0.5, reject.
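The corner check of step five follows directly from the formulas above (the function name and the boolean return convention are this sketch's own; the patent only specifies the four edge-ratio tests):

```python
import numpy as np

def check_homography(H, w, h):
    """Verify an H matrix by transforming the image corners.

    The corners (0,0), (w,0), (w,h), (0,h) are mapped through H with
    homogeneous normalisation, and the transformed edge lengths
    d1..d4 are compared with the original width/height.  H fails the
    check when any edge shrinks below half or grows beyond twice its
    original length.
    """
    H = np.asarray(H, dtype=float)
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]], dtype=float)
    mapped = (H @ corners.T).T
    mapped = mapped[:, :2] / mapped[:, 2:3]            # back to inhomogeneous coords
    d = [np.linalg.norm(mapped[i] - mapped[(i + 1) % 4]) for i in range(4)]
    ratios = [d[0] / w, d[1] / h, d[2] / w, d[3] / h]  # d1,d3 vs width; d2,d4 vs height
    return all(0.5 <= r <= 2.0 for r in ratios)
```

An identity-like matrix passes (all ratios near 1), while a matrix scaling the frame threefold would fail every ratio test.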
in this embodiment, through the first step to the fifth step, the most reasonable H matrix is finally obtained.
The above embodiments are only preferred embodiments of the present application and are not intended to limit its scope of protection; all changes made by applying the design principle of the present application without inventive effort shall fall within the scope of the present application.
Claims (3)
1. The unmanned aerial vehicle image matching optimization method is characterized by comprising the following steps of:
detecting feature points of an unmanned aerial vehicle aerial image A and an unmanned aerial vehicle aerial image B to be matched with the SURF algorithm, setting the Hessian matrix threshold of the SURF algorithm, and obtaining N pairs of matching points, where N is a natural number greater than or equal to 4;
traversing the matching points in image A and image B, screening out M pairs of matching points uniformly distributed over the two images, and constructing an H matrix, where M is less than or equal to N;
the expression of the H matrix being:

H = | h00 h01 h02 |
    | h10 h11 h12 |
    | h20 h21 h22 |

where h22 = 1 under homogeneous coordinates, and an H1 matrix and an H2 matrix are obtained with the RANSAC algorithm and the LMEDS algorithm respectively;
evaluating the H1 matrix and the H2 matrix to obtain an optimal matrix;
and checking the optimal matrix to obtain the most reasonable H matrix.
2. The unmanned aerial vehicle image matching optimization method of claim 1, wherein evaluating the H1 matrix and the H2 matrix to obtain the optimal matrix comprises the following steps:
if either of the H1 and H2 matrices satisfies |h20 + h21| > 0.0005, the corresponding matrix is eliminated;
for the H1 and H2 matrices that are not eliminated, the matrix with the smallest value of ||h00| + |h01| + |h10| + |h11| - 2| is selected as the optimal matrix.
3. The unmanned aerial vehicle image matching optimization method of claim 1, wherein checking the optimal matrix to obtain the most reasonable H matrix comprises the following steps:
according to the optimal matrix, the conversion relation between coordinates before and after transformation of either image is obtained as:
P'i = H * Pi
where the four corner points of the original image have homogeneous coordinates P1(0, 0, 1), P2(w, 0, 1), P3(w, h, 1), P4(0, h, 1); w denotes the original width of the image and h denotes the original height; the four transformed points are P'i, i = 1, ..., 4;
the lengths of the four edges after transformation are obtained as:
d1 = ||P'1 - P'2||, d2 = ||P'2 - P'3||, d3 = ||P'3 - P'4||, d4 = ||P'4 - P'1||
where || · || denotes the Euclidean distance between two points;
if the length of any of the four edges changes after transformation by a factor of more than 2, the corresponding H matrix is eliminated, and the most reasonable H matrix is obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011289613.3A CN112348105B (en) | 2020-11-17 | 2020-11-17 | Unmanned aerial vehicle image matching optimization method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112348105A CN112348105A (en) | 2021-02-09 |
CN112348105B true CN112348105B (en) | 2023-09-01 |
Family
ID=74364041
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011289613.3A Active CN112348105B (en) | 2020-11-17 | 2020-11-17 | Unmanned aerial vehicle image matching optimization method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112348105B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104574421A (en) * | 2015-01-29 | 2015-04-29 | 北方工业大学 | Large-breadth small-overlapping-area high-precision multispectral image registration method and device |
CN106940876A (en) * | 2017-02-21 | 2017-07-11 | 华东师范大学 | A kind of quick unmanned plane merging algorithm for images based on SURF |
CN108961162A (en) * | 2018-03-12 | 2018-12-07 | 北京林业大学 | A kind of unmanned plane forest zone Aerial Images joining method and system |
CN109697692A (en) * | 2018-12-29 | 2019-04-30 | 安徽大学 | One kind being based on the similar feature matching method of partial structurtes |
CN110458183A (en) * | 2019-06-25 | 2019-11-15 | 上海圭目机器人有限公司 | A kind of characteristic matching optimization algorithm of image adaptive |
WO2020199424A1 (en) * | 2019-04-01 | 2020-10-08 | 苏州中晟宏芯信息科技有限公司 | Optimal h-matrix generation method and device |
Non-Patent Citations (1)
Title |
---|
UAV image matching combining the SURF algorithm and the homography matrix; Wang Xiaohong et al.; Bulletin of Surveying and Mapping (07); full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||