CN109146936B - Image matching method, device, positioning method and system - Google Patents


Info

Publication number: CN109146936B
Application number: CN201810783004.XA
Authority: CN (China)
Legal status: Active (granted)
Inventor: 罗世彬
Assignee: Hunan Airtops Intelligent Technology Co ltd
Other versions: CN109146936A (application publication)
Prior art keywords: image, matching, tensor, pixel point, determining


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Image registration using feature-based methods
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10032: Satellite or aerial image; remote sensing


Abstract

The invention discloses an image matching method and device, and a positioning method and system based on them. The matching method comprises: calculating the tensor direction of each pixel point in a first image to obtain a first tensor direction diagram of the first image; calculating the tensor direction of each pixel point in a reference image, dividing the reference image into a plurality of sub-images of the same size as the first image, and determining a second tensor direction diagram for each sub-image; and calculating, one by one, the matching value between the first tensor direction diagram and the second tensor direction diagram of each sub-image, and taking the sub-image with the highest matching degree as the matching result. The matching method and device are widely applicable to matching between different heterogeneous images and offer high matching accuracy and speed. The positioning method and system offer high precision, low cost, little environmental influence, and low dependence on external signals.

Description

Image matching method, device, positioning method and system
Technical Field
The invention relates to the technical field of image matching and positioning, in particular to an image matching method, an image matching device, a positioning method and a positioning system.
Background
The unmanned aircraft industry is developing rapidly. Target detection based on unmanned aerial vehicles and other unmanned platforms is widely applied in fields such as the military, industry, agriculture, and emergency rescue: such platforms can not only capture clear images and videos of targets, but also perform intelligent analysis and automatic positioning of them.
Aerial ground-target positioning has long been a popular technology among researchers and engineers. It is mainly based on GPS, inertial navigation, laser radar ranging, and photogrammetry, which are generally used in combination to complete the final target positioning. At present, the related technologies mainly include combined positioning based on GPS/inertial navigation/laser radar ranging and combined positioning based on GPS/inertial navigation/photogrammetry. The latter can be further subdivided into combined positioning based on GPS/inertial navigation/intersection measurement and combined positioning based on GPS/inertial navigation/back-projection measurement.
(1) Combined positioning based on GPS/inertial navigation/laser radar ranging uses GPS to provide the position of the airborne platform, inertial navigation to provide the platform's attitude and heading, and a laser radar to measure the distance between the target and the platform; the target position is then calculated from the platform position, attitude, heading, the laser radar pointing direction, and the measured distance. Prior research on this technology includes: Summer Jing, Jiang Li xing, fangxiao faithful: airborne laser radar earth positioning error analysis [J], mapping science and technology report, 2011, 28(5): 365-; and wangjian army, xuli army, li xiao lu: influence of random measurement errors of attitude angles on point cloud positioning accuracy and three-dimensional imaging accuracy of airborne laser radar [C], advanced academic seminar of national laser radar earth observation, 2010. This combined technology achieves high positioning precision and can reach sub-meter level, but it still has the following shortcomings: the laser radar requires an electro-optical pod to accurately control its pointing and to calculate the angle between the target direction and the flight platform, and the inertial navigation component must accurately measure the platform's heading and attitude; otherwise, under long-range measurement conditions, a small angular deviation causes a large position deviation.
Therefore, this technology requires a high-cost, high-precision electro-optical pod and inertial navigation component, and laser radar hardware is itself expensive, so the overall system cost is high, which makes it unsuitable for low-cost platforms such as small unmanned aerial vehicles. In addition, laser radar ranging mainly measures point targets and is less convenient than aerial-image-based measurement, which can measure a target at any position in an aerial image.
(2) Combined positioning based on GPS/inertial navigation/intersection measurement uses GPS to provide the position of the airborne platform, inertial navigation to provide the platform's attitude and heading, and a camera to photograph the target from several angles so as to measure, by intersection, the relative position of the target and the aircraft; the absolute position of the target is then calculated from the platform position, attitude, heading, and that relative position. Prior research on this technology includes: ease, with bright stroke, plum in male: the multi-target intersection positioning method for automatically sorting the measured data [J], television technology, 2014, 38(11): 219-; sun shine, Li Zhi Qiang, Zhang Jianhua: target rendezvous positioning of the airborne photoelectric platform [J], China optics, 2015, 8(6): 988-; starting from peak, shanghai: principles and applications of photogrammetry research [M], scientific press, 2009. This technology still has the following shortcomings: because intersection measurement requires photographing the target from multiple angles, the flight path is constrained, and sometimes a large intersection angle cannot be obtained, which affects measurement precision. Furthermore, long-range measurement is susceptible to atmospheric refraction, which also severely degrades the positioning accuracy of intersection measurement. Typically, the target positioning error is on the order of tens of meters at ranges of a few kilometers.
(3) Combined positioning based on GPS/inertial navigation/back-projection measurement uses GPS to provide the position of the airborne platform, inertial navigation to provide the platform's attitude and heading, and a camera on an electro-optical pod to measure the direction angle between the target and the aircraft; the absolute position of the target is then calculated from the platform position, attitude, heading, flying height, and that direction angle. Prior research on this technology includes: Merino L, Caballero F, Martinez-de Dios J R, et al.: A cooperative perception system for multiple UAVs: Application to automatic detection of forest fires [J], Journal of Field Robotics, 2006, 23(3-4): 165-184; and Merino L, Caballero F, Martinez-de Dios J R, et al.: Cooperative fire detection using unmanned aerial vehicles [C]// Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA 2005), IEEE, 2005: 1884-1889. This technology still has the following shortcoming: it solves for the target position directly from the measured attitude, camera pointing direction, and flying height, so attitude errors, pointing errors, and height errors all strongly affect the positioning result. Usually, at ranges of several kilometers the target positioning error can exceed one hundred meters.
In the prior art, methods for image matching (especially heterogeneous image matching) mainly include PQ-HOG, HOPC, MI, MTM, NCC, GO, and ImpGO, but these methods still leave room for improvement in their adaptability to different heterogeneous images and in matching accuracy, especially when applied to positioning based on image matching.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the above problems in the prior art, the invention provides an image matching method and device that are widely applicable to matching between different heterogeneous images and offer high matching accuracy and speed, and, on that basis, a positioning method and system with high precision, low cost, little environmental influence, and low dependence on external signals.
In order to solve the technical problems, the technical scheme provided by the invention is as follows: an image matching method comprises the steps of calculating the tensor direction of each pixel point in a first image to obtain a first tensor direction diagram of the first image; calculating the tensor direction of each pixel point in a reference image, dividing the reference image into a plurality of sub-images with the same size as the first image, and determining a second tensor directional diagram of each sub-image; and calculating the matching values of the first tensor direction diagram and the second tensor direction diagram of the subgraph one by one, and determining the subgraph with the highest matching degree as a matching result according to the matching values.
Further, the tensor direction is calculated according to formula (1):
θ = (1/2)·arctan(2t12 / (t11 − t22))        (1)
In formula (1), θ is the calculated tensor direction value of the pixel point, and t11, t12, t22 are the tensor values of the pixel point.
Further, the tensor values are calculated according to formula (2):
t11 = Gσ * (Ix·Ix),  t12 = Gσ * (Ix·Iy),  t22 = Gσ * (Iy·Iy)        (2)

(where * denotes convolution with the Gaussian filter)
In formula (2), t11, t12, t22 are the tensor values of the pixel point; Gσ is a preset Gaussian filter with standard deviation σ; Ix is the partial derivative in the X direction of a preset local image area containing the pixel point; and Iy is the partial derivative in the Y direction of that area.
Further, the matching value is calculated according to formula (3):
S(O(t), O(w)) = Σ_{i=1..n×m} |θi − θi′|        (3)
In formula (3), S(O(t), O(w)) is the matching value of the first tensor direction diagram O(t) and the second tensor direction diagram O(w); θi is the tensor direction of pixel point i in the first tensor direction diagram; θi′ is the tensor direction of pixel point i in the second tensor direction diagram; and n and m are the numbers of pixel points in the X and Y directions of the first tensor direction diagram, respectively.
An image matching apparatus comprises a processor and a memory; the memory stores a program that, when executed by the processor, implements the method of any of the preceding claims.
An image matching positioning method comprises the following steps: acquiring a first image, wherein the first image is obtained by photographing a region to be positioned including a target to be positioned;
matching the first image with a predetermined reference image with coordinates according to the matching method of any one of claims 1 to 4, and determining the matching result of the first image in the reference image;
and determining the coordinates of the first image according to the matching result, and determining the coordinates of the target to be positioned in the first image.
Further, the method also comprises the step of correcting the image obtained by photographing; the correcting comprises performing direct downward-looking correction processing on the image.
Further, the method also comprises the step of correcting the image obtained by photographing; the correcting comprises correcting the orientation of the image so that the orientation of the image coincides with the orientation of the reference image, and/or: and adjusting the resolution of the image so that the resolution of the image is consistent with the resolution of the reference image.
An image matching positioning system comprises an image acquisition module, a matching module and a positioning module;
the image acquisition module is used for acquiring a first image, wherein the first image is obtained by photographing a region to be positioned containing a target to be positioned;
the matching module is used for matching the first image with a predetermined reference image with coordinates according to any one of the matching methods, and determining a matching result of the first image in the reference image;
and the positioning module is used for determining the coordinates of the first image according to the matching result and determining the coordinates of the target to be positioned in the first image.
Further, the image acquisition module is further configured to: correcting and/or revising the image obtained by photographing; the correction comprises the direct downward-looking correction processing of the image; the correcting comprises correcting the orientation of the image so that the orientation of the image coincides with the orientation of the reference image, and/or: and adjusting the resolution of the image so that the resolution of the image is consistent with the resolution of the reference image.
Compared with the prior art, the invention has the advantages that:
1. according to the matching method, the tensor direction diagram is obtained by calculating the tensor direction of the first image and the reference image, the matching between the first image and the reference image is realized through the tensor direction diagram, the fast and accurate matching between the first image and the reference image which are heterogeneous images can be realized, and the matching method can be widely suitable for the matching between various heterogeneous images.
2. According to the method, the aerial image (the first image) is matched with the reference image with the predetermined coordinates, the coordinate information of the aerial image is determined through matching, and the coordinates of the target to be positioned in the aerial image are determined according to the coordinates, so that on one hand, the positioning process does not depend on a Global Navigation Satellite System (GNSS) and does not depend on external information such as waypoints and the like, the positioning process can be completed through the aerial image and the pre-stored reference image, the dependence degree on external information is small, the influence of external factors such as environment is small, the limitation is small, and the reliability is good; on the other hand, by storing a reference image having high accuracy and high resolution in advance and matching the aerial image with the reference image, high-accuracy positioning can be realized.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention.
Fig. 2 is a matching diagram according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a matching test case according to an embodiment of the present invention.
FIG. 4 is a graph illustrating a comparative analysis of matching results according to an embodiment of the present invention.
FIG. 5 is a schematic positioning diagram according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating image rectification according to an embodiment of the present invention.
FIG. 7 is a graph comparing the technical effect of the embodiment of the present invention with the effect of the prior art.
Detailed Description
The invention is further described below with reference to the drawings and specific preferred embodiments of the description, without thereby limiting the scope of protection of the invention.
As shown in a dashed box in fig. 1, the image matching method of the present embodiment includes: calculating the tensor direction of each pixel point in the first image to obtain a first tensor direction diagram of the first image; calculating the tensor direction of each pixel point in the reference image, dividing the reference image into a plurality of sub-images with the same size as the first image, and determining a second tensor directional diagram of each sub-image; and calculating the matching values of the first tensor direction diagram and the second tensor direction diagram of the subgraphs one by one, and determining the subgraph with the highest matching degree as a matching result according to the matching values. In the present embodiment, the first image is preferably a rectangular image having a side length of 64 pixels to 96 pixels.
In the present embodiment, the tensor direction is calculated according to formula (1):
θ = (1/2)·arctan(2t12 / (t11 − t22))        (1)
In formula (1), θ is the calculated tensor direction value of the pixel point, and t11, t12, t22 are the tensor values of the pixel point.
The tensor values are calculated according to formula (2):
t11 = Gσ * (Ix·Ix),  t12 = Gσ * (Ix·Iy),  t22 = Gσ * (Iy·Iy)        (2)

(where * denotes convolution with the Gaussian filter)
In formula (2), t11, t12, t22 are the tensor values of the pixel point; Gσ is a preset Gaussian filter with standard deviation σ; Ix is the partial derivative in the X direction of a preset local image area containing the pixel point; and Iy is the partial derivative in the Y direction of that area. In this embodiment, the preset local image area containing the pixel point is an area centered on the pixel point whose side length is a preset number of pixel points, such as a 3 × 3 pixel area or a 5 × 5 pixel area.
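As an illustrative sketch (not code from the patent), the per-pixel tensor direction diagram in the spirit of formulas (1) and (2) can be computed with NumPy and SciPy; the Sobel derivative operator and the default σ value are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def tensor_orientation_map(img, sigma=1.5):
    """Per-pixel tensor direction diagram (sketch of formulas (1)-(2))."""
    img = np.asarray(img, dtype=np.float64)
    Ix = sobel(img, axis=1)  # partial derivative in the X direction
    Iy = sobel(img, axis=0)  # partial derivative in the Y direction
    # Gaussian-smoothed tensor components t11, t12, t22
    t11 = gaussian_filter(Ix * Ix, sigma)
    t12 = gaussian_filter(Ix * Iy, sigma)
    t22 = gaussian_filter(Iy * Iy, sigma)
    # arctan2 is a numerically robust form of (1/2)*arctan(2*t12/(t11-t22))
    return 0.5 * np.arctan2(2.0 * t12, t11 - t22)
```

For a uniform horizontal intensity ramp the returned map is zero everywhere, since t12 vanishes while t11 ≥ t22.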
The matching value is calculated according to formula (3):
S(O(t), O(w)) = Σ_{i=1..n×m} |θi − θi′|        (3)
In formula (3), S(O(t), O(w)) is the matching value of the first tensor direction diagram O(t) and the second tensor direction diagram O(w); θi is the tensor direction of pixel point i in the first tensor direction diagram; θi′ is the tensor direction of pixel point i in the second tensor direction diagram; and n and m are the numbers of pixel points in the X and Y directions of the first tensor direction diagram, respectively. The first and second tensor direction diagrams have the same size, i.e., both are tensor direction diagrams computed from images of n × m pixel points. According to formula (3), the smaller the matching value, the smaller the difference between the first and second tensor direction diagrams, and the higher the matching degree.
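The one-by-one comparison of the first tensor direction diagram against every sub-image can be sketched as a sliding-window search (illustrative only; wrapping the angular difference modulo π is an assumption not spelled out in formula (3), motivated by orientations being defined up to π):

```python
import numpy as np

def match_value(theta_t, theta_w):
    """Matching value of two same-size tensor direction diagrams (formula (3) sketch);
    a smaller value means a better match."""
    d = np.abs(theta_t - theta_w)
    d = np.minimum(d, np.pi - d)  # assumed wraparound: orientation is modulo pi
    return d.sum()

def best_match(theta_first, theta_ref):
    """Slide the first image's diagram over the reference diagram and return
    the top-left offset (row, col) of the best-matching sub-image."""
    n, m = theta_first.shape
    H, W = theta_ref.shape
    best, best_val = (0, 0), np.inf
    for r in range(H - n + 1):
        for c in range(W - m + 1):
            v = match_value(theta_first, theta_ref[r:r + n, c:c + m])
            if v < best_val:
                best_val, best = v, (r, c)
    return best, best_val
```

An exhaustive scan like this is O(HWnm); in practice the search can be accelerated, but the brute-force form shows the idea.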
In the present embodiment, as shown in fig. 2, the real-time image is an aerial image of a certain ground area, i.e., the first image in the method, and the reference image is an image that was obtained before the flight and whose coordinate position has already been determined; it may, for example, be a satellite remote sensing image whose coordinates (such as longitude and latitude) have been accurately determined. In this embodiment, the first image is an image of n × n pixel points. The first tensor direction diagram of the first image is obtained by the calculations of formulas (1) and (2). In the same manner, the tensor direction of each pixel point in the reference image is calculated, and the reference image is divided into n × n pixel regions, each corresponding to one sub-image, as shown by the dotted-line boxes in the reference image of fig. 2; each such region yields a corresponding second tensor direction diagram. Comparing the first tensor direction diagram with the second tensor direction diagram of each sub-image, i.e., calculating the matching value according to formula (3), identifies the sub-image whose second tensor direction diagram has the highest matching degree with the first tensor direction diagram, shown by the solid-line box in the reference image of fig. 2; the first image is thus matched to the region of the solid-line box, and image matching is complete.
After matching is completed, since the coordinates of the reference image are already known, the coordinates (rx, ry) of the upper-left corner of the first image are determined by the match, and the coordinates of any point (x, y) in the first image are then (rx + x, ry + y).
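The coordinate transfer just described is a simple offset addition; a minimal sketch (the function name is illustrative):

```python
def locate(match_offset, target_xy):
    """Map a pixel (x, y) inside the first image to reference-image
    coordinates, given the matched top-left offset (rx, ry)."""
    rx, ry = match_offset
    x, y = target_xy
    return (rx + x, ry + y)
```

For example, a target at (5, 7) in a first image matched at offset (100, 200) lies at (105, 207) in the reference image.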
In this embodiment, the matching method of the present invention (denoted PG) is compared with prior-art matching algorithms such as PQ-HOG, HOPC, MI, GO, and ImpGO on hundreds of different images, including but not limited to the heterogeneous image pairs shown in fig. 3. The comparison of average matching accuracy is shown in fig. 4, where average matching accuracy is the mean of the matching accuracies obtained over the images used. The comparison confirms that the average matching accuracy of this method is higher than that of the prior-art matching methods, clearly superior to algorithms such as GO, MI, and PQ-HOG in particular; compared with algorithms such as HOPC and ImpGO, the method shows a clear improvement for template sizes (the size of the first image) in the range of 64 to 96 pixels per side. The template size refers to the side length, in pixels, of the square template region. Since the accuracy of image matching directly determines the accuracy of positioning based on image matching, this improvement is of real significance. The PQ-HOG matching method is described in the prior-art literature: A. Sibiryakov, "Fast and high-performance template matching method," in CVPR, 2011, pp. 1417-1424. The HOPC matching method is described in the prior-art literature: Y. Ye and L. Shen, "HOPC: A novel similarity metric based on geometric structural properties for multi-modal remote sensing image matching," in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2016, vol. 3, pp. 9-16.
The MI matching method is described in the prior-art literature: P. Viola and W. Wells, "Alignment by maximization of mutual information," International Journal of Computer Vision, vol. 24, no. 2, pp. 137-154, 1997.
The image matching device of the embodiment comprises a processor and a memory, wherein the processor is used for executing the program stored in the memory, and the memory is used for storing the program which can realize the method.
As shown in fig. 1, the image matching positioning method of the present embodiment includes: acquiring a first image, wherein the first image is obtained by photographing a region to be positioned containing a target to be positioned;
matching the first image with a predetermined reference image with coordinates according to any one matching method, and determining a matching result of the first image in the reference image; and determining the coordinates of the first image according to the matching result, and determining the coordinates of the target to be positioned in the first image.
In the embodiment, the method further comprises the steps of correcting the image obtained by photographing; the correction includes a direct downward-looking correction process on the image. The method also comprises the step of correcting the image obtained by photographing; the correcting includes correcting the orientation of the image so that the orientation of the image coincides with the orientation of the reference image, and/or: the resolution of the image is adjusted so that the resolution of the image matches the resolution of the reference image.
In this embodiment, a specific positioning process illustrates the implementation of the positioning method. As shown in fig. 5, the target to be positioned is a vehicle A traveling along a road, with buildings and other terrain features (e.g., trees, rivers, hills) on both sides of the road. When the vehicle reaches the position shown in the actual ground situation diagram of fig. 5, positioning is required. An aircraft (such as an unmanned aerial vehicle) carrying a camera is sent aloft to photograph the area to be positioned containing vehicle A, shown by the dotted line in the actual ground situation diagram, yielding the aerial image (i.e., the first image) shown in fig. 5. The aircraft also carries devices such as an attitude sensor and an altimeter, which record the camera state parameters, including attitude, direction, and altitude, at the moment each aerial image is captured.
In this embodiment, because of the aircraft's flight state and the camera's mounting angle, the captured aerial image is not necessarily directly suitable for matching with the reference image. It is therefore rectified, using the camera's internal parameters (such as focal length and principal point), into a direct downward-looking image; this rectification can be performed by existing mature image processing algorithms. Through this process, the left image of fig. 6 is rectified into the right image of fig. 6, which further improves matching accuracy. At the same time, the direction, resolution, and scale of the aerial image are adjusted according to the corresponding parameters of the reference image, so that after adjustment they are close to or consistent with those of the reference image, facilitating the subsequent image matching. This adjustment can likewise be performed by existing mature image processing algorithms.
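The patent leaves the rectification to existing mature algorithms; one standard construction, shown here as a hedged sketch, is a rotation-only homography H = K·R·inv(K) that re-renders a tilted view as a down-looking one. All parameter names and the nearest-neighbor resampling are assumptions, not details from the patent:

```python
import numpy as np

def downlook_rectify(img, K, R):
    """Warp `img` to a direct downward-looking view via H = K @ R @ inv(K).

    K is the 3x3 camera intrinsic matrix (focal length, principal point);
    R is the 3x3 rotation taking the tilted camera frame to the down-looking
    frame. Resampling is nearest-neighbor for brevity.
    """
    H = K @ R @ np.linalg.inv(K)
    Hinv = np.linalg.inv(H)            # map output pixels back to the source
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts                   # homogeneous source coordinates
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    out = np.zeros_like(img)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out
```

With R equal to the identity the homography is the identity and the image is unchanged, which is a convenient sanity check.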
In the present embodiment, a reference image with coordinates, obtained in advance by aerial photography or satellite remote sensing and with its coordinates already determined, is shown as the reference image in fig. 5. Its coordinates are accurately gridded in the form of longitude and latitude (the dashed straight lines in the reference image of fig. 5); of course, the coordinate information could also be expressed in another coordinate system. Because the reference image was obtained in advance, ground features in it (such as woods and buildings) may not fully agree with the current actual ground situation, i.e., with the current aerial image; but the two still share common features (such as terrain, unchanged buildings, and roads). Using the image matching method of the invention, the aerial image can therefore be accurately matched to the reference image by computing image tensors, as shown by the dotted-line square in the reference image of fig. 5; and since the reference image has known coordinates, the coordinate information of the aerial image is thereby accurately determined as well.
In this embodiment, the position of the target to be positioned (i.e., vehicle A) in the aerial image is identified by image recognition, and since the coordinate information of the aerial image has been obtained through matching, the coordinates of vehicle A follow from a simple calculation, completing the positioning of the target.
As shown in fig. 7, the positioning process of this embodiment depends neither on the positioning signals of a global navigation satellite system (such as GPS or BeiDou) nor on waypoints. As long as the reference image of the area is stored in advance, positioning of the target can be completed conveniently and quickly once the aerial image is acquired through aerial photography. With a reference image of high coordinate precision and high resolution, the precision of positioning through image matching can be correspondingly high, reaching the sub-meter level. In addition, because the positioning process does not depend on external information, it has strong anti-interference capability, high stability, and good reliability.
The image matching positioning system comprises an image acquisition module, a matching module, and a positioning module. The image acquisition module is used for acquiring a first image, the first image being obtained by photographing a region to be positioned that contains a target to be positioned. The matching module is used for matching the first image with a predetermined reference image with coordinates according to any one of the above matching methods, and determining the matching result of the first image in the reference image. The positioning module is used for determining the coordinates of the first image according to the matching result, and determining the coordinates of the target to be positioned in the first image. The image acquisition module is further configured to correct and/or revise the image obtained by photographing. The correction comprises direct downward-view rectification of the image; the revision comprises adjusting the orientation of the image so that it coincides with the orientation of the reference image, and/or adjusting the resolution of the image so that it matches the resolution of the reference image.
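One possible decomposition of the described system into its three modules is sketched below. The class and method names are this sketch's own, and the matching module body is a fixed stub standing in for the tensor-direction search; none of this is prescribed by the patent text.

```python
class ImageAcquisitionModule:
    def acquire(self, raw_image):
        # Rectify to a direct downward view, then adjust orientation and
        # resolution to match the reference image (details omitted here).
        return raw_image

class MatchingModule:
    def match(self, first_image, reference_image):
        # Compare tensor direction maps and return the best-matching
        # sub-region's top-left pixel; a fixed stub stands in for the search.
        return (0, 0)

class PositioningModule:
    def locate(self, match_position, target_pixel):
        # Combine the match offset with the target's pixel position inside
        # the first image to get its pixel position in the reference image.
        return (match_position[0] + target_pixel[0],
                match_position[1] + target_pixel[1])
```

Separating acquisition, matching, and positioning this way lets each stage (rectification, tensor matching, coordinate conversion) be replaced or tested independently.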
The foregoing describes preferred embodiments of the invention and is not to be construed as limiting the invention in any way. Although the present invention has been described with reference to preferred embodiments, it is not limited thereto. Any simple modification, equivalent change, or refinement made to the above embodiments in accordance with the technical spirit of the present invention, without departing from the content of the technical scheme of the present invention, falls within the protection scope of the technical scheme of the present invention.

Claims (6)

1. An image matching positioning method, characterized by comprising:
acquiring a first image, wherein the first image is obtained by photographing a region to be positioned including a target to be positioned;
matching the first image with a predetermined reference image with coordinates according to a preset matching method, and determining a matching result of the first image in the reference image;
determining the coordinates of the first image according to the matching result, and determining the coordinates of the target to be positioned in the first image;
the preset matching method comprises the following steps: calculating the tensor direction of each pixel point in a first image to obtain a first tensor direction diagram of the first image; calculating the tensor direction of each pixel point in a reference image, dividing the reference image into a plurality of sub-images with the same size as the first image, and determining a second tensor directional diagram of each sub-image; calculating the matching values of the first tensor direction diagram and a second tensor direction diagram of the subgraph one by one, and determining the subgraph with the highest matching degree as a matching result according to the matching values;
the tensor direction is calculated and determined according to a formula shown in an equation (1):
θ = (1/2) · arctan( 2·t12 / (t11 − t22) )    (1)
in the formula (1), θ is the calculated tensor direction value of the pixel point, and t11, t12, t22 are the tensor values of the pixel point;
the matching value is calculated according to the formula shown in equation (3):
S(O(t), O(w)) = f(θi, θi′), i = 1, …, n×m    (3)    (the formula appears only as an image in the source; it expresses the matching value as a function of the tensor directions over all n×m pixel points)
in the formula (3), S(O(t), O(w)) is the matching value of the first tensor direction diagram O(t) and the second tensor direction diagram O(w), θi is the tensor direction of pixel point i in the first tensor direction diagram, θi′ is the tensor direction of pixel point i in the second tensor direction diagram, and n and m are respectively the numbers of pixel points in the X direction and the Y direction in the first tensor direction diagram.
2. The image matching positioning method according to claim 1, wherein: further comprising correcting the image obtained by photographing; the correcting comprises performing direct downward-view correction processing on the image.
3. The image matching positioning method according to claim 2, wherein: further comprising revising the image obtained by photographing; the revising comprises adjusting the orientation of the image so that the orientation of the image coincides with the orientation of the reference image, and/or: adjusting the resolution of the image so that the resolution of the image is consistent with the resolution of the reference image.
4. The image matching positioning method according to claim 3, wherein: the tensor values are calculated and determined according to the formula shown in equation (2):
t11 = Gσ ∗ (Ix·Ix),  t12 = Gσ ∗ (Ix·Iy),  t22 = Gσ ∗ (Iy·Iy)    (2)    (∗ denotes convolution)
in the formula (2), t11, t12, t22 are the tensor values of the pixel point, Gσ is a preset Gaussian filter with standard deviation σ, Ix is the partial derivative in the X direction of a preset local image area containing the pixel point, and Iy is the partial derivative in the Y direction of the preset local image area containing the pixel point.
5. An image-matching localization system, characterized by: the system comprises an image acquisition module, a matching module and a positioning module;
the image acquisition module is used for acquiring a first image, wherein the first image is obtained by photographing a region to be positioned containing a target to be positioned;
the matching module is used for matching the first image with a predetermined reference image with coordinates according to the matching method of any one of claims 1 to 4, and determining the matching result of the first image in the reference image;
and the positioning module is used for determining the coordinates of the first image according to the matching result and determining the coordinates of the target to be positioned in the first image.
6. The image-matched localization system of claim 5, wherein: the image acquisition module is further configured to correct and/or revise the image obtained by photographing; the correcting comprises performing direct downward-view correction processing on the image; the revising comprises adjusting the orientation of the image so that the orientation of the image coincides with the orientation of the reference image, and/or: adjusting the resolution of the image so that the resolution of the image is consistent with the resolution of the reference image.
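The matching method of claims 1 and 4 can be sketched end to end. The code below follows formula (2) for the Gaussian-smoothed gradient products and one standard reading of formula (1), θ = ½·atan2(2·t12, t11 − t22). Because the exact expression of formula (3) is not reproduced in this text, the matching value here is an assumed stand-in (mean absolute direction difference, lower is better). This is an illustrative sketch under those assumptions, not the patent's definitive implementation.

```python
import math

def tensor_direction_map(img, sigma=1.0, radius=1):
    """Per-pixel tensor direction: smooth Ix^2, Ix*Iy, Iy^2 with a Gaussian
    window (as in formula (2)), then theta = 0.5*atan2(2*t12, t11 - t22)."""
    h, w = len(img), len(img[0])
    # Central-difference gradients, clamped at the image border.
    Ix = [[(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
           for x in range(w)] for y in range(h)]
    Iy = [[(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
           for x in range(w)] for y in range(h)]
    g = [[math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
          for dx in range(-radius, radius + 1)]
         for dy in range(-radius, radius + 1)]

    def smooth(f, y, x):
        # Normalised Gaussian-weighted average of f over the local window.
        s = wsum = 0.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy = min(max(y + dy, 0), h - 1)
                xx = min(max(x + dx, 0), w - 1)
                wgt = g[dy + radius][dx + radius]
                s += wgt * f(yy, xx)
                wsum += wgt
        return s / wsum

    theta = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            t11 = smooth(lambda a, b: Ix[a][b] * Ix[a][b], y, x)
            t22 = smooth(lambda a, b: Iy[a][b] * Iy[a][b], y, x)
            t12 = smooth(lambda a, b: Ix[a][b] * Iy[a][b], y, x)
            theta[y][x] = 0.5 * math.atan2(2 * t12, t11 - t22)
    return theta

def match_score(theta_a, theta_b):
    """Assumed stand-in for formula (3): mean absolute direction difference
    over all n*m pixels; the patent's exact expression is not given here."""
    diff, count = 0.0, 0
    for row_a, row_b in zip(theta_a, theta_b):
        for a, b in zip(row_a, row_b):
            diff += abs(a - b)
            count += 1
    return diff / count

def best_match(first, reference):
    """Slide the first image's tensor direction map over every same-size
    sub-image of the reference and keep the best-scoring offset."""
    th_first = tensor_direction_map(first)
    fh, fw = len(first), len(first[0])
    best = None
    for oy in range(len(reference) - fh + 1):
        for ox in range(len(reference[0]) - fw + 1):
            sub = [row[ox:ox + fw] for row in reference[oy:oy + fh]]
            score = match_score(th_first, tensor_direction_map(sub))
            if best is None or score < best[0]:
                best = (score, ox, oy)
    return best
```

Matching direction maps rather than raw intensities is what makes the scheme tolerant of the appearance differences between an aerial image and an older reference image: a road edge changes brightness between the two images, but its orientation does not.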
CN201810783004.XA 2018-07-17 2018-07-17 Image matching method, device, positioning method and system Active CN109146936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810783004.XA CN109146936B (en) 2018-07-17 2018-07-17 Image matching method, device, positioning method and system


Publications (2)

Publication Number Publication Date
CN109146936A CN109146936A (en) 2019-01-04
CN109146936B true CN109146936B (en) 2021-04-27

Family

ID=64800745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810783004.XA Active CN109146936B (en) 2018-07-17 2018-07-17 Image matching method, device, positioning method and system

Country Status (1)

Country Link
CN (1) CN109146936B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111238488A * 2020-03-18 2020-06-05 Hunan Yunding Intelligent Technology Co., Ltd. Aircraft accurate positioning method based on heterogeneous image matching
CN114612555A (en) * 2022-03-17 2022-06-10 杭州弥深智能科技有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN115932823B (en) * 2023-01-09 2023-05-12 中国人民解放军国防科技大学 Method for positioning aircraft to ground target based on heterogeneous region feature matching

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2417629A (en) * 2004-08-26 2006-03-01 Sharp Kk Data processing to detect transformation
WO2006099339A2 (en) * 2005-03-14 2006-09-21 The Board Of Trustees Of The University Of Illinois A fiber coherence index apparatus and method for imaging and characterizing fibrous structures
CN102609949A (en) * 2012-02-16 2012-07-25 南京邮电大学 Target location method based on trifocal tensor pixel transfer
CN104463098B (en) * 2014-11-04 2018-01-30 中国矿业大学(北京) With the structure tensor direction histogram feature recognition coal petrography of image
CN106407315B (en) * 2016-08-30 2019-08-16 长安大学 A kind of vehicle autonomic positioning method based on street view image database
CN107609507B (en) * 2017-09-08 2020-11-13 哈尔滨工业大学 Remote sensing image target identification method based on characteristic tensor and support tensor machine

Also Published As

Publication number Publication date
CN109146936A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
US10860871B2 (en) Integrated sensor calibration in natural scenes
CN105335733B (en) Unmanned aerial vehicle autonomous landing visual positioning method and system
US20110282580A1 (en) Method of image based navigation for precision guidance and landing
CN109146936B (en) Image matching method, device, positioning method and system
CN109341686B (en) Aircraft landing pose estimation method based on visual-inertial tight coupling
CN107607091A (en) A kind of method for measuring unmanned plane during flying flight path
CN111426320A (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
Pecho et al. UAV usage in the process of creating 3D maps by RGB spectrum
Lo et al. The direct georeferencing application and performance analysis of UAV helicopter in GCP-free area
Kinnari et al. GNSS-denied geolocalization of UAVs by visual matching of onboard camera images with orthophotos
CN114820793A (en) Target detection and target point positioning method and system based on unmanned aerial vehicle
Hosseinpoor et al. Pricise target geolocation based on integeration of thermal video imagery and rtk GPS in UAVS
CN112950671A (en) Real-time high-precision parameter measurement method for moving target by unmanned aerial vehicle
Kim et al. Target detection and position likelihood using an aerial image sensor
Curro et al. Automated aerial refueling position estimation using a scanning LiDAR
Cheng et al. High precision passive target localization based on airborne electro-optical payload
Jingjing et al. Research on autonomous positioning method of UAV based on binocular vision
KR102392258B1 (en) Image-Based Remaining Fire Tracking Location Mapping Device and Method
Kang et al. Positioning Errors of Objects Measured by Convolution Neural Network in Unmanned Aerial Vehicle Images
Ishii et al. Autonomous UAV flight using the Total Station Navigation System in Non-GNSS Environments
Jung et al. Vision based navigation using road-intersection image
CN110887475B (en) Static base rough alignment method based on north polarization pole and polarized solar vector
CN113551671B (en) Real-time high-precision measurement method for attitude and position of unmanned aerial vehicle
Jensen et al. Using aerial images to calibrate the inertial sensors of a low-cost multispectral autonomous remote sensing platform (AggieAir)
CN108489467A (en) A kind of bulilt-up area domain aerial survey unmanned plane photo control point coordinate measuring method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant