CN113624231A - Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft
- Publication number: CN113624231A (application CN202110783533.1A)
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/005—Navigation with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/1652—Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
- G01C21/1656—Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
Abstract
The invention provides an inertial vision integrated navigation positioning method based on heterogeneous image matching, and an aircraft, wherein the method comprises the following steps: performing ortho-image correction on the real-time image and the reference image respectively, and unifying the scaling sizes of the real-time image and the reference image; acquiring a first projected and quantized gradient direction histogram of the real-time image and a second projected and quantized gradient direction histogram of the reference image respectively; detecting the similarity between the first and second projected and quantized gradient direction histograms, wherein the position of the maximum Hamming-distance-based similarity value within the second histogram is the image matching position of the real-time image in the reference image; and converting the image matching position into the position of the unmanned aerial vehicle as an observed quantity, and constructing a Kalman filter based on the observed quantity to realize inertial vision integrated navigation positioning of the unmanned aerial vehicle. The technical scheme of the invention solves the technical problem that accurate and stable matching between heterogeneous images is difficult to realize in the prior art.
Description
Technical Field
The invention relates to the technical field of computer vision research, in particular to an inertial vision integrated navigation positioning method based on heterogeneous image matching and an aircraft.
Background
The reference image used for image matching is a visible-light satellite image, while the real-time image is generally an infrared image that can be acquired around the clock; this is therefore a typical heterogeneous image matching problem. Images from different sources have different imaging mechanisms, so even images of the same scene can differ considerably, and special processing is needed to match them. Meanwhile, as the flight attitude of the aircraft changes, the camera does not capture the real-time image from a strictly nadir (look-down) viewpoint, so the real-time image and the orthographic reference image pre-loaded on the aircraft differ by geometric transformations such as scale, rotation, affine and perspective. Before matching, the images therefore need certain preprocessing and correction operations, and the matching algorithm must also be designed with a certain degree of invariance to the geometric deformation that remains after correction. However, existing visual matching methods have difficulty achieving accurate and stable matching between heterogeneous images, so the autonomy of the combined navigation algorithm is poor.
Disclosure of Invention
The invention provides an inertial vision integrated navigation positioning method based on heterogeneous image matching and an aircraft, which can solve the technical problem that accurate and stable matching is difficult to realize among heterogeneous images in the prior art.
According to one aspect of the invention, an inertial vision integrated navigation positioning method based on heterogeneous image matching is provided, comprising the following steps: performing ortho-image correction on the real-time image and the reference image respectively by using inertial navigation attitude information and laser ranging height information, and unifying the scaling sizes of the real-time image and the reference image; acquiring a first projected and quantized gradient direction histogram of the real-time image and a second projected and quantized gradient direction histogram of the reference image respectively; detecting the similarity between the first and second projected and quantized gradient direction histograms based on a similarity measurement principle, and retrieving in the second histogram the position of the maximum Hamming-distance-based similarity value with respect to the first histogram, this position being the image matching position of the real-time image in the reference image; and converting the image matching position into the position of the unmanned aerial vehicle, taking the converted position as an observed quantity, and constructing a Kalman filter based on the observed quantity to realize inertial vision integrated navigation positioning of the unmanned aerial vehicle.
Further, acquiring the first projection and the quantized gradient direction histogram of the real-time image specifically includes: performing Gaussian filtering on the real-time image; extracting gray image gradient characteristics of the real-time image after Gaussian filtering by adopting a Sobel operator to obtain a gradient image of the real-time image; counting a gradient histogram of the real-time image based on the gradient image of the real-time image; projecting and quantizing the gradient histogram of the real-time image to obtain a first projected and quantized gradient direction histogram; the obtaining of the second projection and the quantized gradient direction histogram of the reference map specifically includes: performing Gaussian filtering on the reference map; extracting the gray image gradient feature of the reference image after Gaussian filtering by adopting a Sobel operator to obtain a gradient image of the reference image; counting a gradient histogram of the reference map based on the gradient image of the reference map; the gradient histogram of the reference map is projected and quantized to obtain a second projected and quantized gradient direction histogram.
Further, both the real-time map and the reference map can be ortho-rectified based on

$$z_{\bar{c}}\,\bar{p} = K\,R_{c}^{\bar{c}}\,K^{-1}\,z_{c}\,p,\qquad K=\begin{bmatrix} f/s_c & 0 & c_c \\ 0 & f/s_r & c_r \\ 0 & 0 & 1 \end{bmatrix},$$

where $p=[c,\,r,\,1]^{T}$ is a pixel of the non-orthoimage in homogeneous coordinates, $\bar{p}=[\bar{c},\,\bar{r},\,1]^{T}$ is the corresponding pixel of the ortho-image, $P_w$ is the homogeneous coordinate of the spatial point P in the world coordinate system, $z_c$ is the z-component of the coordinates of the spatial point P in the camera coordinate system, $f$ is the optical system principal distance, $s_c$ is the distance between adjacent pixels in the column direction on the image sensor, $s_r$ is the distance between adjacent pixels in the row direction of the image sensor, $[c_c, c_r]^{T}$ is the principal point of the image, $R_c^{\bar{c}}$ is the rotation matrix between the two cameras with identical intrinsics, $R_w^{c}$ is the rotation matrix from the world coordinate system to the camera coordinate system, and $t_w^{c}$ is the translation vector from the world coordinate system to the camera coordinate system.
Further, the scale factor $k$ of the real-time map relative to the reference map can be obtained from $k = \mu l / f$, where $\mu$ is the pixel size, $f$ is the focal length of the camera, and $l$ is the distance between the optical center of the camera and the shooting point.
Further, the similarity $S(x, y)$ between the first and second projected and quantized gradient direction histograms can be computed from

$$S(x,y) = f\Big(\sum_{(u,v)\in R_T} d\big(T(u,v),\, I(x+u,\, y+v)\big)\Big),$$

where $d(a, b)$ is a difference metric, $f$ is a monotonic function, $(u, v)$ ranges over the set of pixel coordinates of the real-time map, $S(x, y)$ is the similarity metric value of the real-time map at reference-map position $(x, y)$, $T$ is the real-time map, and $I$ is the reference map.
Further, the conversion relation between the image matching position and the position of the unmanned aerial vehicle is

$$r^{n} = C_d^{n}\, r^{d} = [x,\ -h,\ y]^{T},$$

where $h$ is the flying height of the aircraft, $x$ and $y$ give the positional relation between the point directly below the camera and the imaged scene center, $r^{d}$ is the projection of the center point of the camera shooting position in the camera body coordinate system, $r^{n}$ is the projection of $r^{d}$ in the geographic coordinate system, and $C_d^{n}$ is the matrix transforming from the camera body coordinate system to the geographic coordinate system.
Further, in the kalman filter, the state vector includes a north speed error, a sky speed error, an east speed error, a latitude error, an altitude error, a longitude error, a north misalignment angle, a sky misalignment angle, an east misalignment angle, an x-axis installation error, a y-axis installation error, a z-axis installation error, and a laser ranging scale factor error.
Further, in the Kalman filter, the observation matrix is $H(k) = [H_1\ H_2\ H_3\ H_4\ H_5]$, where $r_N$ is the north distance, $r_U$ is the vertical (sky-direction) distance, and $r_E$ is the east distance.
According to another aspect of the invention, an aircraft is provided, and the aircraft uses the inertial vision combined navigation positioning method based on heterogeneous image matching for combined navigation positioning.
By applying the technical scheme of the invention, an inertial vision integrated navigation positioning method based on heterogeneous image matching is provided, which develops a heterogeneous image matching positioning technique for the problem of full-time navigation positioning of an unmanned aerial vehicle under GPS-denied conditions. Firstly, flight altitude information is captured with a laser ranging sensor, which solves the problem that scale information cannot be initialized under the low-maneuver conditions of the cruise segment, and the scales of the real-time image and the reference image are unified through the fusion of inertial/laser-ranging/image information. Secondly, structural features are extracted and binary-coded with a method based on projected and quantized gradient direction histograms, which solves the problem of mismatching caused by inconsistent image information when matching heterogeneous images. Finally, a geometric positioning method based on an inertial/laser range finder is provided, which solves the problem of large attitude error when solving the PNP pose from two-dimensional planar control points during high-altitude flight. The method achieves stable matching between heterogeneous images and improves the autonomy of the combined navigation algorithm.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 illustrates a side view of a coordinate system provided in accordance with a particular embodiment of the present invention;
FIG. 2 illustrates a top view of a coordinate system provided in accordance with a particular embodiment of the present invention;
FIG. 3 illustrates a schematic diagram of a captured image structure provided in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the structure of an image rectification result provided by an embodiment of the invention;
FIG. 5 illustrates a schematic diagram of a comparison of infrared and visible structural features provided in accordance with an embodiment of the present invention;
FIG. 6 illustrates a block flow diagram for projection and quantized gradient direction histogram generation provided in accordance with a specific embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating a two-dimensional Gaussian function curve provided in accordance with an embodiment of the present invention;
FIG. 8 shows a schematic diagram of a cell unit provided in accordance with an embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating the structure of the position of a real-time map in a reference map according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram illustrating a positional relationship between a drone location point and an image matching point according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
As shown in fig. 1 to 10, according to a specific embodiment of the present invention, there is provided an inertial vision integrated navigation positioning method based on heterogeneous image matching, comprising: performing ortho-image correction on the real-time image and the reference image respectively by using inertial navigation attitude information and laser ranging height information, and unifying the scaling sizes of the real-time image and the reference image; acquiring a first projected and quantized gradient direction histogram of the real-time image and a second projected and quantized gradient direction histogram of the reference image respectively; detecting the similarity between the first and second projected and quantized gradient direction histograms based on a similarity measurement principle, and retrieving in the second histogram the position of the maximum Hamming-distance-based similarity value with respect to the first histogram, this position being the image matching position of the real-time image in the reference image; and converting the image matching position into the position of the unmanned aerial vehicle, taking the converted position as an observed quantity, and constructing a Kalman filter based on the observed quantity to realize inertial vision integrated navigation positioning of the unmanned aerial vehicle.
By applying this configuration, an inertial vision integrated navigation positioning method based on heterogeneous image matching is provided, which develops a heterogeneous image matching positioning technique for the problem of full-time navigation positioning of an unmanned aerial vehicle under GPS-denied conditions. Firstly, flight altitude information is captured with a laser ranging sensor, which solves the problem that scale information cannot be initialized under the low-maneuver conditions of the cruise segment, and the scales of the real-time image and the reference image are unified through the fusion of inertial/laser-ranging/image information. Secondly, structural features are extracted and binary-coded with a method based on projected and quantized gradient direction histograms, which solves the problem of mismatching caused by inconsistent image information when matching heterogeneous images. Finally, a geometric positioning method based on an inertial/laser range finder is provided, which solves the problem of large attitude error when solving the PNP pose from two-dimensional planar control points during high-altitude flight. The method achieves stable matching between heterogeneous images and improves the autonomy of the combined navigation algorithm.
In the invention, in order to realize the inertial vision integrated navigation positioning based on the heterogeneous image matching, firstly, the inertial navigation attitude information and the laser ranging height information are utilized to respectively perform orthoscopic image correction on the real-time image and the reference image, and the scaling sizes of the real-time image and the reference image are unified.
Specifically, the coordinate systems include the camera coordinate system (c system), the ortho camera coordinate system ($\bar{c}$ system), the inertial navigation carrier coordinate system (b system) and the geographic coordinate system (n system), each defined as follows.
Camera coordinate system (c system): the image-space principal point of the optical system is taken as the origin $o_c$. When viewed directly against the optical system, the $x_c$ axis is parallel to the transverse axis (long side) of the imaging plane coordinate system, positive to the left; the $y_c$ axis is parallel to the longitudinal axis (short side) of the imaging plane coordinate system, positive downward; the $z_c$ axis points toward the viewer and forms a right-hand coordinate system with the $x_c$ and $y_c$ axes.
Ortho camera coordinate system ($\bar{c}$ system): suppose there is a camera performing an orthographic projection, whose generated image is an ortho image requiring no correction; the three axes of this coordinate system then point east, south and down, respectively.
Inertial navigation carrier coordinate system (b system): the inertial navigation unit is installed on the aircraft carrier, with the coordinate system origin $O_b$ taken at the inertial navigation center of mass; $X_b$ is along the longitudinal axis of the carrier, positive forward; $Y_b$ is along the vertical axis of the carrier, positive upward; $Z_b$ is along the lateral axis of the carrier, positive to the right.
Geographic coordinate system (n system): the coordinate system origin $O_n$ is taken at the aircraft center of mass; the $X_n$ axis points north, the $Y_n$ axis points up (sky direction), and the $Z_n$ axis points east.
According to the pinhole camera model, the projection from the homogeneous coordinate $P_w$ of the spatial point P in the world coordinate system to its homogeneous image coordinate can be described as

$$z_c \begin{bmatrix} c \\ r \\ 1 \end{bmatrix} = K\,\big[\,R_w^{c}\;|\;t_w^{c}\,\big]\,P_w,\qquad K=\begin{bmatrix} f/s_c & 0 & c_c \\ 0 & f/s_r & c_r \\ 0 & 0 & 1 \end{bmatrix},$$

where $c$ and $r$ are the column and row coordinates of the spatial point P in the image coordinate system, $z_c$ is the z-component of the coordinates of the spatial point P in the camera coordinate system, $f$ is the optical system principal distance, $s_c$ is the distance between adjacent pixels in the column direction on the image sensor, $s_r$ is the distance between adjacent pixels in the row direction of the image sensor, and $[c_c, c_r]^{T}$ is the principal point of the image. The optical system is designed on the pinhole camera model, with the image sensor as part of the optical system; the image sensor carries a CMOS photosensitive array, each image pixel corresponds to one element of the sensor array, and the spacing between two elements is called the pixel size, i.e., the distance between adjacent pixels. $R_w^{c}$ is the rotation matrix describing the rotation from the world coordinate system to the camera coordinate system, $t_w^{c}$ is the translation vector from the world coordinate system to the camera coordinate system, i.e., the coordinates of the world origin in the camera coordinate system, and $P_w$ is the homogeneous coordinate of the spatial point P in the world coordinate system.
Suppose there are two cameras with identical internal parameters, denoted c and $\bar{c}$, which image the ground at different angles from the same place, where the image generated by $\bar{c}$ is an ortho image. According to the pinhole imaging model, the image coordinates of the spatial point P in the world coordinate system formed by the two cameras are respectively

$$z_c\,p = K\,\big[\,R_w^{c}\;|\;t_w^{c}\,\big]\,P_w,\qquad z_{\bar{c}}\,\bar{p} = K\,\big[\,R_w^{\bar{c}}\;|\;t_w^{\bar{c}}\,\big]\,P_w.$$

The position and attitude matrices can be transformed as follows:

$$R_c^{\bar{c}} = R_w^{\bar{c}}\,(R_w^{c})^{T}.$$

That is, given the internal parameter matrix K of the camera and the rotation matrix $R_c^{\bar{c}}$, the non-orthoimage $p$ can be converted into the ortho-image $\bar{p}$ by

$$z_{\bar{c}}\,\bar{p} = K\,R_c^{\bar{c}}\,K^{-1}\,z_c\,p.$$
Thus, both the real-time map and the reference map can be ortho-rectified based on $z_{\bar{c}}\,\bar{p} = K\,R_c^{\bar{c}}\,K^{-1}\,z_c\,p$, where $p$ is a non-orthoimage pixel and $\bar{p}$ the corresponding ortho-image pixel in homogeneous coordinates, $P_w$ is the homogeneous coordinate of the spatial point P in the world coordinate system, $z_c$ is the z-component of the coordinates of P in the camera coordinate system, $f$ is the optical system principal distance, $s_c$ and $s_r$ are the distances between adjacent pixels in the column and row directions on the image sensor, $[c_c, c_r]^{T}$ is the principal point of the image, $R_c^{\bar{c}}$ is the rotation matrix between the two cameras with identical intrinsics, $R_w^{c}$ is the rotation matrix from the world coordinate system to the camera coordinate system, and $t_w^{c}$ is the translation vector from the world coordinate system to the camera coordinate system.
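For illustration, a minimal sketch of this correction step in Python (assuming OpenCV and NumPy; the function and the attitude values are illustrative, not the patent's own implementation). It warps a non-ortho image toward an ortho view through the homography $H = K\,R_c^{\bar{c}}\,K^{-1}$ derived above:

```python
import cv2
import numpy as np

def ortho_rectify(image, K, R_c_cbar):
    """Warp a non-ortho image to an approximate ortho view.

    K        : 3x3 camera intrinsic matrix.
    R_c_cbar : 3x3 rotation from the real camera frame (c) to the
               virtual ortho camera frame (c-bar), built here from
               inertial attitude angles.
    """
    # Homography between two views that share the same optical center.
    H = K @ R_c_cbar @ np.linalg.inv(K)
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))

# Illustrative attitude (radians); the axis convention is an assumption.
pitch, roll = -0.02, 0.05
R_pitch, _ = cv2.Rodrigues(np.array([pitch, 0.0, 0.0]))  # rotation about x
R_roll, _ = cv2.Rodrigues(np.array([0.0, roll, 0.0]))    # rotation about y
R = R_roll @ R_pitch
```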
After the real-time image and the reference image have each undergone ortho-image correction, their scaling sizes must be unified. To unify the pixel resolution of the real-time image and the reference image, both are scaled to 1 m/pixel. The reference map is obtained directly from commercial map software, while for the real-time map a laser range finder measures the distance between the optical center of the camera and the shooting point, which is processed in real time. The scale factor of the real-time map relative to the reference map can be obtained from $k = \mu l / f$, where $\mu$ is the pixel size, $f$ is the focal length of the camera, and $l$ is the distance between the optical center of the camera and the shooting point.
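A small sketch of the scale unification under the reconstruction $k=\mu l/f$ above (the product $\mu l/f$ is the ground distance covered by one pixel, in meters; OpenCV assumed, all values illustrative):

```python
import cv2

def unify_scale(image, mu, f, l, target_gsd=1.0):
    """Resample an image so one pixel covers `target_gsd` meters.

    mu : pixel size on the sensor (m)
    f  : camera focal length (m)
    l  : optical-center-to-scene distance from laser ranging (m)
    """
    gsd = mu * l / f             # meters per pixel of the raw image
    scale = gsd / target_gsd     # resampling factor toward 1 m/pixel
    h, w = image.shape[:2]
    return cv2.resize(image, (max(1, int(w * scale)), max(1, int(h * scale))))
```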
Further, after the scaling sizes of the real-time map and the reference map are unified, the first projected and quantized gradient direction histogram of the real-time map and the second projected and quantized gradient direction histogram of the reference map can be obtained respectively. In the present invention, the gray-scale difference between images from different sources is relatively large, but the structural features in the images remain close, as shown in fig. 5(a) and 5(b). The image gradient reflects these structural features well, as shown in fig. 5(c) and 5(d), so the real-time image and the reference image can be compared effectively by describing both with HOG features.
Specifically, in the present invention, acquiring the first projection and the quantized gradient direction histogram of the real-time image specifically includes: performing Gaussian filtering on the real-time image; extracting gray image gradient characteristics of the real-time image after Gaussian filtering by adopting a Sobel operator to obtain a gradient image of the real-time image; counting a gradient histogram of the real-time image based on the gradient image of the real-time image; projecting and quantizing the gradient histogram of the real-time image to obtain a first projected and quantized gradient direction histogram; the obtaining of the second projection and the quantized gradient direction histogram of the reference map specifically includes: performing Gaussian filtering on the reference map; extracting the gray image gradient feature of the reference image after Gaussian filtering by adopting a Sobel operator to obtain a gradient image of the reference image; counting a gradient histogram of the reference map based on the gradient image of the reference map; the gradient histogram of the reference map is projected and quantized to obtain a second projected and quantized gradient direction histogram. The specific flow of projection and quantized gradient direction histogram acquisition is described in detail below with respect to a real-time map or a reference map.
The image is first denoised by Gaussian filtering. In image processing, a two-dimensional Gaussian function is used to construct a two-dimensional convolution kernel and process the pixel values; the two-dimensional Gaussian function can be described as

$$G(x, y) = \frac{1}{2\pi\sigma^2}\,e^{-\frac{x^2 + y^2}{2\sigma^2}}.$$

The local pixels of the image around a point $P(\mu_1, \mu_2)$ are its 3×3 neighborhood $P(\mu_1+i,\ \mu_2+j)$, $i, j \in \{-1, 0, 1\}$.

When the image is filtered, a convolution operation is carried out at each pixel point: pixel $P(\mu_1, \mu_2)$ is convolved with a convolution kernel computed from the Gaussian function, which can be expressed as

| $K(\mu_1-1,\mu_2-1)$ | $K(\mu_1,\mu_2-1)$ | $K(\mu_1+1,\mu_2-1)$ |
| $K(\mu_1-1,\mu_2)$ | $K(\mu_1,\mu_2)$ | $K(\mu_1+1,\mu_2)$ |
| $K(\mu_1-1,\mu_2+1)$ | $K(\mu_1,\mu_2+1)$ | $K(\mu_1+1,\mu_2+1)$ |

where K is the result after normalization. After convolution, the value of the corresponding pixel point is

$$P'(\mu_1, \mu_2) = \sum_{i=-1}^{1}\sum_{j=-1}^{1} K(\mu_1+i,\ \mu_2+j)\; P(\mu_1+i,\ \mu_2+j).$$
this operation will be performed on all pixel values of the image to obtain a gaussian filtered image, which is a sliding convolution kernel since it is usually always deconvolved sequentially, from left to right and from top to bottom. When sliding to the boundary, the position corresponding to the convolution kernel has no pixel value, so before convolution, the edge of the original image is usually expanded first, and then convolution operation is performed. The augmented portion may be dropped or automatically zero-filled when convolved.
These properties show that the Gaussian smoothing filter is a very effective low-pass filter in both the spatial and frequency domains, and it is used efficiently in practical image processing. Specifically:
(1) generally, the edge direction of an image is not known a priori, and therefore, it is not possible to determine that one direction requires more smoothing than the other before filtering.
(2) The gaussian function is a single valued function. This property is important because the edge is a local feature of the image, and if the smoothing operation still has a large effect on pixels that are far from the center of the operator, the smoothing operation will distort the image.
(3) The fourier transform spectrum of the gaussian function is single-lobed. This property is a direct consequence of the fact that the gaussian fourier transform is equal to the gaussian itself. The single lobe of the Gaussian Fourier transform means that the smoothed image is not contaminated by unwanted high frequency signals, while retaining most of the desired signal.
(4) The width of the Gaussian filter (which determines the degree of smoothing) is characterized by the parameter σ, and its relation to the degree of smoothing is very simple: the larger σ, the wider the filter, and the stronger the smoothing. By adjusting σ, a trade-off can be made between excessive blurring of image features (over-smoothing) and excessive undesirable abrupt changes left in the smoothed image by noise and fine texture (under-smoothing).
(5) Due to the separability of the Gaussian function, a large Gaussian filter can be implemented efficiently. The two-dimensional Gaussian convolution can be performed in two steps: first convolving the image with a one-dimensional Gaussian, then convolving the result with the same one-dimensional Gaussian in the perpendicular direction. Thus, the computational burden of two-dimensional Gaussian filtering grows linearly with the filter template width rather than quadratically.
Secondly, after Gaussian filtering is carried out on the image, a Sobel operator is adopted to extract the gradient characteristics of the gray level image, and the specific calculation is as follows:
Gx=(-1)×f(x-1,y-1)+0×f(x,y-1)+1×f(x+1,y-1)
+(-2)×f(x-1,y)+0×f(x,y)+2×f(x+1,y)
+(-1)×f(x-1,y+1)+0×f(x,y+1)+1×f(x+1,y+1)
=[f(x+1,y-1)+2×f(x+1,y)+f(x+1,y+1)]-[f(x-1,y-1)+2×f(x-1,y)+f(x-1,y+1)]
Gy=1×f(x-1,y-1)+2×f(x,y-1)+1×f(x+1,y-1)
+0×f(x-1,y)+0×f(x,y)+0×f(x+1,y)
+(-1)×f(x-1,y+1)+(-2)×f(x,y+1)+(-1)×f(x+1,y+1)
=[f(x-1,y-1)+2×f(x,y-1)+f(x+1,y-1)]-[f(x-1,y+1)+2×f(x,y+1)+f(x+1,y+1)]
where f(x, y) denotes the gray value of the image at point (x, y). The gradient magnitude is $G=\sqrt{G_x^2 + G_y^2}$ and the gradient direction is $\theta = \arctan(G_y / G_x)$. The practical range of the gradient direction is $[-\pi, \pi]$; the actually solved θ range is mapped to $[0, \pi)$. Light-to-dark and dark-to-light transitions produce gradient directions in a mutually complementary angle relationship, so this mapping ensures that the infrared image descriptor is not affected by light-dark inversion. After obtaining the gradient image, the gradient histogram is computed, then projected and quantized.
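A sketch of this gradient extraction and the direction mapping into $[0, \pi)$ (OpenCV's Sobel operator assumed; folding with the modulus implements the light/dark-inversion invariance described above):

```python
import cv2
import numpy as np

def gradient_features(gray):
    """Sobel gradients, magnitude, and direction folded into [0, pi)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    theta = np.arctan2(gy, gx)       # range [-pi, pi]
    theta = np.mod(theta, np.pi)     # complementary directions coincide
    return mag, theta
```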
The image is divided into small connected regions, called cell units (cells); here the cell size is 8 × 8 pixels. A histogram of gradient directions is computed for each cell unit. The statistical method is as follows: the gradient direction range $[0, \pi)$ is divided into 8 sections (bins), each spanning π/8, as shown in fig. 8; then, according to which bin the gradient direction of each pixel point in the cell falls into, the corresponding bin count is increased by one. In this way each pixel in the cell unit performs a weighted projection (mapping to a fixed angle range) in the histogram using its gradient direction, yielding the Histogram of Oriented Gradients (HOG) of the cell unit. The statistics form a w × h × 8-dimensional feature vector, recorded as $T_{HOG}$, where w and h are the numbers of cells along the column and row directions of the image, and $T_{HOG}$ is represented as a matrix with w × h rows and 8 columns.
Finally, after the image feature description is obtained, the w × h × 8-dimensional feature description is projected with an 8 × 24 projection matrix and quantized into 24-bit binary codes.
This yields the projected gradient direction histogram statistical matrix $T'_{HOG}$. Setting all entries greater than 0 in the matrix to 1 and all entries less than 0 to 0, each row of the matrix becomes a quantized 24-bit binary code, giving the projected and quantized gradient direction histogram statistics.
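The per-cell histogram and the projection/quantization can be sketched as follows. The 8 × 24 projection matrix of the original filing is not reproduced on this page, so a fixed random ±1 matrix stands in as a placeholder assumption:

```python
import numpy as np

def cell_hog(theta, cell=8, bins=8):
    """Count gradient directions over [0, pi) into 8 bins per 8x8 cell."""
    h, w = theta.shape
    ch, cw = h // cell, w // cell
    hog = np.zeros((ch * cw, bins))
    idx = np.minimum((theta / (np.pi / bins)).astype(int), bins - 1)
    for i in range(ch):
        for j in range(cw):
            block = idx[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            np.add.at(hog[i * cw + j], block.ravel(), 1)  # count per bin
    return hog  # shape (num_cells, 8), i.e. T_HOG

rng = np.random.default_rng(0)
P = rng.choice([-1.0, 1.0], size=(8, 24))  # placeholder projection matrix

def quantize(hog, P):
    projected = hog @ P                       # T'_HOG, (num_cells, 24)
    bits = (projected > 0).astype(np.uint8)   # sign -> {0, 1}
    # Pack each row of 24 bits into an int for fast XOR comparison later.
    return np.array([int("".join(map(str, row)), 2) for row in bits])
```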
Further, in the present invention, after the projected and quantized gradient direction histograms of the real-time map and the reference map are obtained, the similarity between the first and second projected and quantized gradient direction histograms is detected based on the similarity measurement principle, and the position of the maximum Hamming-distance-based similarity value with respect to the first histogram is retrieved in the second histogram; this position is the image matching position of the real-time map in the reference map. The similarity $S(x, y)$ between the two histograms can be determined according to

$$S(x,y) = f\Big(\sum_{(u,v)\in R_T} d\big(T(u,v),\, I(x+u,\, y+v)\big)\Big),$$

where $d(a, b)$ is a difference metric, $f$ is a monotonic function, $(u, v)$ ranges over the pixel coordinates of the real-time map, $S(x, y)$ is the similarity metric value of the real-time map at reference-map position $(x, y)$, $T$ is the real-time map, and $I$ is the reference map.
Specifically, let the set of real-time image template pixel coordinates be $R_T = \{(u_i, v_i)\,|\,i \in [1, N]\}$, where N is the number of pixels in the template and $(u_i, v_i)$ are pixel coordinates. The reference image is $R_I = \{(x, y)\,|\,x \in [0, N_x-1],\ y \in [0, N_y-1]\}$, where $N_x$ and $N_y$ are the numbers of columns and rows of the reference image. The template is T and the reference image is I. The degree of similarity between T and I is denoted S(x, y):

$$S(x,y) = f\Big(\sum_{(u,v)\in R_T} d\big(T(u,v),\, I(x+u,\, y+v)\big)\Big),$$

where $d(a, b)$ is a difference metric and $f$ is a monotonic function; the pixel-by-pixel sliding computation of the similarity matrix S of the real-time image over the reference image is completed by varying the value of $(x, y)$.
When computing the similarity measure, the difference between the 24-bit binary features generated by the projection is computed using the Hamming distance instead of directly using the gray values of the image:

$$d(a, b) = \mathrm{BitCount}(a \oplus b),$$

where BitCount counts the number of 1s in the string obtained by XORing binary string a with binary string b. When d(a, b) = 0, a and b are identical; when d(a, b) = 24, a and b differ the most. Clearly, d(a, b) is a measure of difference.
Binary strings a and b may also be compared by the similarity

$$S(a, b) = 24 - d(a, b).$$

When S(a, b) = 24, a and b are identical; when S(a, b) = 0, a and b differ the most. If all binary strings of the two images are compared and their similarities summed, then the larger the sum, the higher the similarity between the two images.
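A sketch of the Hamming-distance similarity and the sliding retrieval over the reference map's code grid (pure Python on the packed 24-bit codes from the previous sketch):

```python
def hamming24(a, b):
    return bin(a ^ b).count("1")     # BitCount of the XOR

def code_similarity(a, b):
    return 24 - hamming24(a, b)      # 24 = identical, 0 = most different

def match(tmpl, ref, t_shape, r_shape):
    """Slide the template code grid over the reference code grid and
    return the offset with the largest summed similarity."""
    th, tw = t_shape
    rh, rw = r_shape
    best, best_xy = -1, (0, 0)
    for y in range(rh - th + 1):
        for x in range(rw - tw + 1):
            s = sum(code_similarity(tmpl[i * tw + j],
                                    ref[(y + i) * rw + (x + j)])
                    for i in range(th) for j in range(tw))
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy, best
```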
Further, after the image matching position of the real-time map in the reference map is obtained, it can be converted into the position of the unmanned aerial vehicle and used as the observed quantity. The conversion relation between the image matching position and the position of the unmanned aerial vehicle is

$$r^{n} = C_d^{n}\, r^{d} = [x,\ -h,\ y]^{T},$$

where $h$ is the flying height of the aircraft, $x$ and $y$ give the positional relation between the point directly below the camera and the imaged scene center, $r^{d}$ is the projection of the center point of the camera shooting position in the camera body coordinate system, $r^{n}$ is the projection of $r^{d}$ in the geographic coordinate system, and $C_d^{n}$ is the matrix transforming from the camera body coordinate system to the geographic coordinate system. A Kalman filter is then constructed based on the observed quantity to realize inertial vision integrated navigation positioning of the unmanned aerial vehicle.
Specifically, in the present invention, the positional relation between the unmanned aerial vehicle positioning point and the image matching point is shown in fig. 10. The camera body coordinate system d (front-up-right) is defined with the camera center point as its origin; the d system is fixedly connected with the inertial navigation carrier coordinate system (b system), with the x, y and z axes of the two systems respectively aligned. The rotation angles between the d system and the b system are the installation errors between the camera and the inertial navigation unit. The geographic coordinate system is n (north-up-east). The projection of the central point of the camera shooting position in the camera body coordinate system is $r^{d}$, where $l$ is the distance from the camera center point to the center point of the shooting position along the optical axis. $r^{d}$ projected into the geographic coordinate system is

$$r^{n} = C_d^{n}\, r^{d} = C_b^{n}\,C_d^{b}\, r^{d} = [x,\ -h,\ y]^{T},$$

where $h$ is the flying height of the aircraft, $x$ and $y$ give the positional relation between the point directly below the camera and the imaged position, $C_d^{n}$ is the matrix transforming from the camera body coordinate system (d system) to the geographic coordinate system (n system), $C_b^{n}$ is the matrix transforming from the inertial navigation carrier coordinate system (b system) to the geographic coordinate system (n system), and $C_d^{b}$ is the matrix transforming from the camera body coordinate system (d system) to the inertial navigation carrier coordinate system (b system). Therefore, using $l$ from the laser ranging, the inertial navigation attitude $C_b^{n}$, and the pre-calibrated installation error angles in $C_d^{b}$, the image matching position can be converted into the position of the camera, i.e., the unmanned aerial vehicle positioning point.
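A sketch of this geometric conversion (NumPy; the boresight direction in the d frame is an assumption for illustration, and the n frame is taken as North-Up-East per the definitions above):

```python
import numpy as np

def nadir_offset(C_b_n, C_d_b, l, boresight_d=np.array([0.0, -1.0, 0.0])):
    """Offsets from the camera nadir point to the imaged scene center.

    C_b_n : carrier(b) -> geographic(n) attitude matrix from inertial nav.
    C_d_b : pre-calibrated camera-body(d) -> carrier(b) installation matrix.
    l     : laser-ranged distance to the scene center along the boresight.
    Returns (x north offset, y east offset, h flight height).
    """
    r_d = l * boresight_d            # scene center expressed in the d frame
    r_n = C_b_n @ C_d_b @ r_d        # = [x, -h, y] per the relation above
    return r_n[0], r_n[2], -r_n[1]

# Usage: subtract (x, y) from the image-matched ground position to obtain
# the point directly below the UAV, then pair it with h as the observation.
```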
Then, a Kalman filter is constructed based on the observed quantity to realize inertial vision integrated navigation positioning of the unmanned aerial vehicle. In the Kalman filtering model, the continuous-time state equation of the system is

$$\dot{X}(t) = F(t)\,X(t) + W(t),$$

where F(t) is the state transition matrix of the continuous state equation at time t and W(t) is the random noise vector of the system at time t. The 13-dimensional filter state vector comprises the north-up-east velocity errors (unit: m/s), the latitude error (unit: rad), the altitude error (unit: m), the longitude error (unit: rad), the north-up-east misalignment angles (unit: rad), the installation errors between the camera and the inertial navigation unit (unit: rad), and the laser ranging scale coefficient error. It is defined as

$$X = [\delta V_N,\ \delta V_U,\ \delta V_E,\ \delta L,\ \delta H,\ \delta\lambda,\ \phi_N,\ \phi_U,\ \phi_E,\ \alpha_x,\ \alpha_y,\ \alpha_z,\ \delta k]^{T},$$

where $\delta V_N$ is the north velocity error, $\delta V_U$ the up (sky-direction) velocity error, $\delta V_E$ the east velocity error, $\delta L$ the latitude error, $\delta H$ the altitude error, $\delta\lambda$ the longitude error, $\phi_N$ the north misalignment angle, $\phi_U$ the up misalignment angle, $\phi_E$ the east misalignment angle, $\alpha_x$, $\alpha_y$ and $\alpha_z$ the x-, y- and z-axis installation errors, and $\delta k$ the laser ranging scale coefficient error.
In F(t), $V_N$ is the north velocity, $V_U$ the up velocity, $V_E$ the east velocity, $R_m$ and $R_n$ the two principal radii of curvature of the reference ellipsoid, $L$ the latitude, $H$ the height, $\omega_{ie}$ the Earth rotation rate, and $f_x^{n}$, $f_y^{n}$, $f_z^{n}$ the projections in the geographic coordinate system (n system) of the accelerations along the x-, y- and z-axes of the carrier coordinate system (b system) relative to the inertial coordinate system (i system).
The image matching position and the laser ranging are used as the observation information (3-dimensional):

$$Z(k) = \begin{bmatrix} L_{INS} - L_{REF} \\ H_{INS} - H_{REF} \\ \lambda_{INS} - \lambda_{REF} \end{bmatrix} = \begin{bmatrix} \delta L_{INS} - \delta L_{REF} \\ \delta H_{INS} - \delta H_{REF} \\ \delta\lambda_{INS} - \delta\lambda_{REF} \end{bmatrix},$$

where $\delta L$ is the latitude error, $\delta H$ the altitude error and $\delta\lambda$ the longitude error; $L_{INS}$, $H_{INS}$, $\lambda_{INS}$ are the latitude, height and longitude measured by inertial navigation; $L_{REF}$, $H_{REF}$, $\lambda_{REF}$ are the latitude, height and longitude from image matching; $\delta L_{INS}$, $\delta H_{INS}$, $\delta\lambda_{INS}$ are the latitude, height and longitude errors of the inertial navigation measurement; and $\delta L_{REF}$, $\delta H_{REF}$, $\delta\lambda_{REF}$ are the latitude, height and longitude errors of the image matching.
$\delta r_l^{n}$ is the position error introduced when converting the image matching position to the position directly below the aircraft according to the laser ranging and the inertial navigation attitude information. With attitude error angle $\phi$ and installation error angle $\eta$, the erroneous transformation matrices are, to first order, $\tilde{C}_b^{n} = (I - [\phi\times])\,C_b^{n}$ and $\tilde{C}_c^{b} = (I - [\eta\times])\,C_c^{b}$, so that

$$\delta r_l^{n} = \tilde{r}_l^{n} - r_l^{n},\qquad \tilde{r}_l^{n} = \tilde{C}_b^{n}\,\tilde{C}_c^{b}\,r_l^{c},\qquad r_l^{n} = C_b^{n}\,C_c^{b}\,r_l^{c},$$

where $\phi$ is the attitude error angle, $\eta$ is the installation error angle, $r_l^{n}$ is the projection of the image matching location in the geographic coordinate system (n system), $\tilde{r}_l^{n}$ is the projection of the image matching location with errors in the geographic coordinate system, $C_b^{n}$ is the matrix transforming from the inertial navigation carrier coordinate system (b system) to the geographic coordinate system (n system), $\tilde{C}_b^{n}$ is the same matrix with errors, $C_c^{b}$ is the matrix transforming from the camera coordinate system (c system) to the inertial navigation carrier coordinate system (b system), $\tilde{C}_c^{b}$ is the same matrix with errors, and $r_l^{c}$ is the projection of the image matching position in the camera coordinate system (c system).

Substituting this error model into the observation equation and expanding to first order yields terms in $\phi_N$, $\phi_U$, $\phi_E$ (the north, up and east attitude error angles), $r_N$, $r_U$, $r_E$ (the north, up and east distances), and $\eta_N$, $\eta_U$, $\eta_E$ (the north, up and east installation error angles).
From this, the observation matrix H can be written as

$$H(k) = [H_1\ \ H_2\ \ H_3\ \ H_4\ \ H_5].$$
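A schematic Kalman filter skeleton with the 13-dimensional state and 3-dimensional observation described above (the exact entries of F, the H blocks, and the noise covariances are not reproduced on this page, so placeholders are used):

```python
import numpy as np

class IntegratedNavKF:
    """State: [dVN dVU dVE dL dH dLon phiN phiU phiE ax ay az dk]."""

    def __init__(self):
        self.x = np.zeros(13)
        self.P = np.eye(13) * 1e-2   # placeholder initial covariance
        self.Q = np.eye(13) * 1e-6   # placeholder process noise density
        self.R = np.eye(3) * 1e-4    # placeholder measurement noise

    def predict(self, F, dt):
        Phi = np.eye(13) + F * dt    # first-order discretization of F(t)
        self.x = Phi @ self.x
        self.P = Phi @ self.P @ Phi.T + self.Q * dt

    def update(self, z, H):
        """z = [L_INS-L_REF, H_INS-H_REF, lon_INS-lon_REF]; H is 3x13."""
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ (z - H @ self.x)
        self.P = (np.eye(13) - K @ H) @ self.P
```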
According to another aspect of the invention, an aircraft is provided, and the aircraft uses the inertial vision combined navigation positioning method based on heterogeneous image matching for combined navigation positioning. The combined navigation positioning method provided by the invention can solve the problem of large attitude error in the calculation process of the PNP pose of the two-dimensional plane control point in high-altitude flight, realizes stable matching between different source images and improves the autonomy of a combined navigation algorithm. Therefore, the combined navigation positioning method is applied to the aircraft, and the working performance of the aircraft can be greatly improved.
For further understanding of the present invention, the following describes the inertial visual integrated navigation positioning method based on heterogeneous image matching provided by the present invention in detail with reference to fig. 1 to 10.
As shown in fig. 1 to 10, according to an embodiment of the present invention, an inertial visual integrated navigation positioning method based on heterogeneous image matching is provided, which specifically includes the following steps.
Step one, performing ortho-image correction on the real-time image and the reference image respectively by using inertial navigation attitude information and laser ranging height information, and unifying the scaling sizes of the real-time image and the reference image. In this embodiment, the real-time image and the ortho reference image are inconsistent in viewing angle and in scale and cannot be used directly for matching and positioning; therefore the inertial navigation attitude information and laser ranging height information are fused, which satisfies the effect of scale-invariant feature transformation while greatly improving the matching precision.
Step two, acquiring the first projected and quantized gradient direction histogram of the real-time image and the second projected and quantized gradient direction histogram of the reference image respectively. In this embodiment, the histogram of oriented gradients (HOG) is a description operator based on shape and edge features, used for object detection in computer vision and digital image processing. Its basic idea is that gradient information reflects the edge information of an image target well, and local gradient magnitudes characterize the local appearance and shape of the image. However, because the descriptor generation process is long and cannot meet the requirement of real-time matching, the gradient features are projected and quantized into a binary description, which increases the matching speed.
Step three, detecting the similarity between the first and second projected and quantized gradient direction histograms based on the similarity measurement principle, and retrieving in the second histogram the position of the maximum Hamming-distance-based similarity value with respect to the first histogram; this position is the image matching position of the real-time image in the reference image. In this embodiment, after the HOG features are computed for the real-time image and the reference image respectively, the Hamming-distance retrieval method is used to detect the similarity of the two images, and the maximum similarity value matching the real-time image is found in the reference image, i.e., the position of the real-time image in the reference image.
Step four, converting the image matching position into the position of the unmanned aerial vehicle, taking it as the observed quantity, and constructing a Kalman filter based on the observed quantity to realize inertial vision integrated navigation positioning of the unmanned aerial vehicle. In this embodiment, the position point obtained by image matching is the position imaged by the camera; when the aircraft has pitch or roll, this is not the position of the unmanned aerial vehicle, so it is first converted to the point directly below the vehicle, and the visual position information is then used as the observed quantity to construct a Kalman filter, realizing a continuous and autonomous navigation and positioning function.
In summary, the invention provides an inertial vision integrated navigation positioning method based on heterogeneous image matching, which develops a heterogeneous image matching positioning technique for the problem of full-time navigation positioning of an unmanned aerial vehicle under GPS-denied conditions. Firstly, flight altitude information is captured with a laser ranging sensor, which solves the problem that scale information cannot be initialized under the low-maneuver conditions of the cruise segment, and the scales of the real-time image and the reference image are unified through the fusion of inertial/laser-ranging/image information. Secondly, structural features are extracted and binary-coded with a method based on projected and quantized gradient direction histograms, which solves the problem of mismatching caused by inconsistent image information when matching heterogeneous images. Finally, a geometric positioning method based on an inertial/laser range finder is provided, which solves the problem of large attitude error when solving the PNP pose from two-dimensional planar control points during high-altitude flight. The method achieves stable matching between heterogeneous images and improves the autonomy of the combined navigation algorithm.
Spatially relative terms, such as "above", "over", "on top of", "upper" and the like, may be used herein for ease of description to describe one device's or feature's spatial relationship to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, devices described as "above" or "over" other devices or configurations would then be oriented "below" or "under" the other devices or configurations. Thus, the exemplary term "above" can encompass both an orientation of "above" and "below". The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
It should be noted that the terms "first", "second", and the like are used to define the components, and are only used for convenience of distinguishing the corresponding components, and the terms have no special meanings unless otherwise stated, and therefore, the scope of the present invention should not be construed as being limited.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (9)
1. An inertial visual integrated navigation positioning method based on heterogeneous image matching is characterized by comprising the following steps:
respectively carrying out ortho-image correction on the real-time image and the reference image by using inertial navigation attitude information and laser ranging height information, and unifying the scales of the real-time image and the reference image;
respectively acquiring a first projection and quantization gradient direction histogram of the real-time image and a second projection and quantization gradient direction histogram of the reference image;
detecting the similarity between the first projection and quantization gradient direction histogram and the second projection and quantization gradient direction histogram based on the similarity measurement principle, and searching the second projection and quantization gradient direction histogram for the position where the Hamming distance value of the first projection and quantization gradient direction histogram is maximal, this position being the image matching position of the real-time image in the reference map (a sketch of this search follows this claim);
and converting the image matching position into the position of the unmanned aerial vehicle, taking this position as an observed quantity, and constructing a Kalman filter based on the observed quantity to realize inertial vision integrated navigation positioning of the unmanned aerial vehicle.
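A minimal sketch of the Hamming-distance search in the third step, assuming the two histograms have already been packed into uint8 binary code maps (one byte per cell); the function name and the exhaustive sliding search are illustrative choices, not the patent's implementation:

```python
import numpy as np

def best_match_position(realtime_code, reference_code):
    """Slide the binary code map of the real-time image over the reference
    code map and return the (x, y) offset with the most agreeing bits,
    i.e. the smallest raw Hamming distance (the claim's "maximum" is read
    here as the maximum of the similarity score)."""
    th, tw = realtime_code.shape
    rh, rw = reference_code.shape
    best, best_xy = -1, (0, 0)
    for y in range(rh - th + 1):
        for x in range(rw - tw + 1):
            window = reference_code[y:y + th, x:x + tw]
            # XOR plus bit-count over the packed codes = Hamming distance
            dist = np.unpackbits(window ^ realtime_code).sum()
            score = realtime_code.size * 8 - dist   # agreeing bits
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy
```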
2. The method according to claim 1, wherein obtaining the first projection and quantization gradient direction histogram of the real-time map specifically comprises: performing Gaussian filtering on the real-time map; extracting the gray-scale gradient features of the Gaussian-filtered real-time map with a Sobel operator to obtain a gradient image of the real-time map; counting a gradient histogram of the real-time map based on its gradient image; and projecting and quantizing the gradient histogram of the real-time map to obtain the first projection and quantization gradient direction histogram. Obtaining the second projection and quantization gradient direction histogram of the reference map specifically comprises: performing Gaussian filtering on the reference map; extracting the gray-scale gradient features of the Gaussian-filtered reference map with a Sobel operator to obtain a gradient image of the reference map; counting a gradient histogram of the reference map based on its gradient image; and projecting and quantizing the gradient histogram of the reference map to obtain the second projection and quantization gradient direction histogram.
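The pipeline of claim 2 could look roughly as follows in Python with OpenCV; the bin count, cell size, and threshold-by-mean quantization rule are assumptions for illustration, since the claim does not fix them:

```python
import cv2
import numpy as np

def projected_quantized_gradient_code(img, bins=8, cell=8):
    """Gaussian-smooth, take Sobel gradients, build a per-cell gradient
    direction histogram, then quantize it into a binary code
    (one bit per direction bin)."""
    smoothed = cv2.GaussianBlur(img, (5, 5), 1.0)
    gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1, ksize=3)
    mag, ang = cv2.cartToPolar(gx, gy)           # angle in [0, 2*pi)
    bin_idx = (ang * bins / (2 * np.pi)).astype(int) % bins
    h, w = img.shape
    code = np.zeros((h // cell, w // cell), dtype=np.uint8)
    for cy in range(h // cell):
        for cx in range(w // cell):
            sl = np.s_[cy * cell:(cy + 1) * cell, cx * cell:(cx + 1) * cell]
            # Magnitude-weighted direction histogram of this cell
            hist = np.bincount(bin_idx[sl].ravel(),
                               weights=mag[sl].ravel(), minlength=bins)
            # Quantize: set a bit for every direction above the cell mean
            code[cy, cx] = np.packbits(hist > hist.mean())[0]
    return code
```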
3. The method for inertial visual integrated navigation and positioning based on heterogeneous image matching according to claim 1, wherein the real-time map and the reference map are both ortho-rectified based on

$\tilde p \simeq K R_{\tilde c c} K^{-1} p$, with $z_c\, p = K \left[ R_{cw} \;\; t_{cw} \right] \tilde P$ and $K = \begin{bmatrix} f/s_c & 0 & c_c \\ 0 & f/s_r & c_r \\ 0 & 0 & 1 \end{bmatrix}$,

where $p$ is a pixel of the non-ortho image, $\tilde p$ is the corresponding pixel of the ortho image, $\tilde P$ is the homogeneous coordinate of the spatial point P in the world coordinate system, $z_c$ is the z-component of the coordinates of P in the camera coordinate system, $f$ is the optical system principal distance, $s_c$ is the distance between adjacent pixels in the column direction on the image sensor, $s_r$ is the distance between adjacent pixels in the row direction, $[c_c, c_r]^T$ is the principal point of the image, $R_{\tilde c c}$ is the rotation matrix between the two cameras with identical internal parameters (the actual camera and the virtual ortho camera), $R_{cw}$ is the rotation matrix from the world coordinate system to the camera coordinate system, and $t_{cw}$ is the translation vector from the world coordinate system to the camera coordinate system.
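Under the reconstruction above, ortho-rectification reduces to warping the image by the homography $K R K^{-1}$. A sketch with OpenCV, where the rotation from the actual camera to the virtual nadir camera would come from the inertial attitude (the function and parameter names are illustrative assumptions):

```python
import cv2
import numpy as np

def ortho_rectify(img, f, s_c, s_r, c_c, c_r, R_tilde_c):
    """Warp a tilted camera image to a virtual nadir (ortho) camera with
    the same intrinsics, via the homography K @ R @ K^-1."""
    K = np.array([[f / s_c, 0.0,     c_c],
                  [0.0,     f / s_r, c_r],
                  [0.0,     0.0,     1.0]])
    H = K @ R_tilde_c @ np.linalg.inv(K)
    h, w = img.shape[:2]
    return cv2.warpPerspective(img, H, (w, h))
```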
4. The method according to claim 3, wherein the scale factor $k$ between the real-time image and the reference image is acquired according to $k = \frac{\mu l}{f}$, where $\mu$ is the size of the image element (pixel), $f$ is the focal length of the camera, and $l$ is the distance between the optical center of the camera and the shot point.
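A worked example of this scale factor (the sensor and lens values are assumed purely for illustration):

```python
# Ground sample distance of the real-time image, k = mu * l / f (claim 4)
mu = 3.45e-6      # pixel size [m] (assumed sensor)
f = 0.016         # focal length [m]
l = 500.0         # optical-center-to-shot-point distance [m], from laser ranging
k = mu * l / f    # ~0.108 m of ground per pixel
```

Resampling the real-time image by the ratio of $k$ to the reference map's known ground resolution would then unify the two scales, as claim 1 requires.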
5. The method according to any one of claims 1 to 4, wherein the similarity $S(x, y)$ between the first projection and quantization gradient direction histogram and the second projection and quantization gradient direction histogram is determined according to

$S(x, y) = f\!\left( \sum_{(u, v)} d\big( T(u, v),\; I(x + u,\, y + v) \big) \right)$,

where $d(a, b)$ is a difference metric, $f$ is a monotonic function, $(u, v)$ ranges over the set of pixel coordinates of the real-time map, $S(x, y)$ is the similarity value of the real-time map at position $(x, y)$ of the reference map, $T$ is the real-time map, and $I$ is the reference map.
6. The method for inertial visual integrated navigation and positioning based on heterogeneous image matching according to claim 1, wherein the conversion relationship between the image matching position and the position of the unmanned aerial vehicle is

$\begin{bmatrix} x \\ y \end{bmatrix} = \dfrac{h}{r_z^n} \begin{bmatrix} r_x^n \\ r_y^n \end{bmatrix}, \qquad r^n = C_b^n\, r^d$,

where $h$ is the flight height of the aircraft, $x$ and $y$ describe the positional relation between the point directly below the camera and the shot image, $r^d$ is the projection of the center point of the camera shot in the body coordinate system, $r^n$ is its projection in the geographic coordinate system, and $C_b^n$ is the transformation matrix from the body coordinate system to the geographic coordinate system.
7. The method according to any one of claims 1 to 6, wherein in the Kalman filter, the state vector comprises a north velocity error, an up (sky-direction) velocity error, an east velocity error, a latitude error, an altitude error, a longitude error, a north misalignment angle, an up misalignment angle, an east misalignment angle, an x-axis installation error, a y-axis installation error, a z-axis installation error, and a laser ranging scale factor error.
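A sketch of how this 13-dimensional state vector and a position-only visual observation might be wired into the measurement update; the index layout and names are assumptions, since the claim only lists the states:

```python
import numpy as np

# Assumed index layout of the claim-7 state vector:
# [dVn, dVu, dVe, dLat, dAlt, dLon, phiN, phiU, phiE, dMx, dMy, dMz, dKlaser]
N_STATES = 13

# The visual fix observes position only, so the measurement matrix H
# picks out the latitude, altitude, and longitude error states.
H = np.zeros((3, N_STATES))
H[0, 3] = H[1, 4] = H[2, 5] = 1.0

def kalman_update(x, P, z, R):
    """Standard Kalman measurement update, where z is the difference
    between the inertial position and the visual position fix."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(N_STATES) - K @ H) @ P
    return x, P
```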
9. An aircraft, characterized in that the aircraft performs integrated navigation and positioning using the inertial visual integrated navigation positioning method based on heterogeneous image matching according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110783533.1A CN113624231B (en) | 2021-07-12 | 2021-07-12 | Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110783533.1A CN113624231B (en) | 2021-07-12 | 2021-07-12 | Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113624231A true CN113624231A (en) | 2021-11-09 |
CN113624231B CN113624231B (en) | 2023-09-12 |
Family
ID=78379514
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110783533.1A Active CN113624231B (en) | 2021-07-12 | 2021-07-12 | Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113624231B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110150324A1 (en) * | 2009-12-22 | 2011-06-23 | The Chinese University Of Hong Kong | Method and apparatus for recognizing and localizing landmarks from an image onto a map |
CN102788579A (en) * | 2012-06-20 | 2012-11-21 | 天津工业大学 | Unmanned aerial vehicle visual navigation method based on SIFT algorithm |
KR20150005253A (en) * | 2013-07-05 | 2015-01-14 | 충남대학교산학협력단 | Camera Data Generator for Landmark-based Vision Navigation System and Computer-readable Media Recording Program for Executing the Same |
CN103644904A (en) * | 2013-12-17 | 2014-03-19 | 上海电机学院 | Visual navigation method based on SIFT (scale invariant feature transform) algorithm |
CN104966281A (en) * | 2015-04-14 | 2015-10-07 | 中测新图(北京)遥感技术有限责任公司 | IMU/GNSS guiding matching method of multi-view images |
CN107527328A (en) * | 2017-09-01 | 2017-12-29 | 扆冰蕾 | A kind of unmanned plane image geometry processing method for taking into account precision and speed |
CN108320304A (en) * | 2017-12-18 | 2018-07-24 | 广州亿航智能技术有限公司 | A kind of automatic edit methods and system of unmanned plane video media |
CN108763263A (en) * | 2018-04-03 | 2018-11-06 | 南昌奇眸科技有限公司 | A kind of trade-mark searching method |
CN108805906A (en) * | 2018-05-25 | 2018-11-13 | 哈尔滨工业大学 | A kind of moving obstacle detection and localization method based on depth map |
WO2020059220A1 (en) * | 2018-09-21 | 2020-03-26 | 日立建機株式会社 | Coordinate conversion system and work machine |
CN111238488A (en) * | 2020-03-18 | 2020-06-05 | 湖南云顶智能科技有限公司 | Aircraft accurate positioning method based on heterogeneous image matching |
CN111504323A (en) * | 2020-04-23 | 2020-08-07 | 湖南云顶智能科技有限公司 | Unmanned aerial vehicle autonomous positioning method based on heterogeneous image matching and inertial navigation fusion |
CN112837353A (en) * | 2020-12-29 | 2021-05-25 | 北京市遥感信息研究所 | Heterogeneous image matching method based on multi-order characteristic point-line matching |
Non-Patent Citations (1)
Title |
---|
王民钢; 孙传新: "基于图像匹配的飞行器导航定位算法及仿真" [Aircraft navigation and positioning algorithm based on image matching and its simulation], 计算机仿真 (Computer Simulation), no. 05, pages 86-89 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114111795A (en) * | 2021-11-24 | 2022-03-01 | 航天神舟飞行器有限公司 | Unmanned aerial vehicle self-navigation based on terrain matching |
CN115127554A (en) * | 2022-08-31 | 2022-09-30 | 中国人民解放军国防科技大学 | Unmanned aerial vehicle autonomous navigation method and system based on multi-source vision assistance |
CN115932823A (en) * | 2023-01-09 | 2023-04-07 | 中国人民解放军国防科技大学 | Aircraft ground target positioning method based on heterogeneous region feature matching |
CN116518981B (en) * | 2023-06-29 | 2023-09-22 | 中国人民解放军国防科技大学 | Aircraft visual navigation method based on deep learning matching and Kalman filtering |
Also Published As
Publication number | Publication date |
---|---|
CN113624231B (en) | 2023-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10515458B1 (en) | Image-matching navigation method and apparatus for aerial vehicles | |
CN113624231B (en) | Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft | |
CN110930508B (en) | Two-dimensional photoelectric video and three-dimensional scene fusion method | |
CN101598556B (en) | Unmanned aerial vehicle vision/inertia integrated navigation method in unknown environment | |
CN109708649B (en) | Attitude determination method and system for remote sensing satellite | |
CN112419374B (en) | Unmanned aerial vehicle positioning method based on image registration | |
CN107560603B (en) | Unmanned aerial vehicle oblique photography measurement system and measurement method | |
CN115187798A (en) | Multi-unmanned aerial vehicle high-precision matching positioning method | |
CN108917753B (en) | Aircraft position determination method based on motion recovery structure | |
CN113313659B (en) | High-precision image stitching method under multi-machine cooperative constraint | |
CN112710311A (en) | Automatic planning method for three-dimensional live-action reconstruction aerial camera points of terrain adaptive unmanned aerial vehicle | |
Chen et al. | Real-time geo-localization using satellite imagery and topography for unmanned aerial vehicles | |
CN114926552B (en) | Method and system for calculating Gaussian coordinates of pixel points based on unmanned aerial vehicle image | |
Sai et al. | Geometric accuracy assessments of orthophoto production from uav aerial images | |
Xinmei et al. | Passive measurement method of tree height and crown diameter using a smartphone | |
CN112950671A (en) | Real-time high-precision parameter measurement method for moving target by unmanned aerial vehicle | |
CN110160503B (en) | Unmanned aerial vehicle landscape matching positioning method considering elevation | |
CN112927294B (en) | Satellite orbit and attitude determination method based on single sensor | |
Zhou et al. | Automatic orthorectification and mosaicking of oblique images from a zoom lens aerial camera | |
Kikuya et al. | Attitude determination algorithm using Earth sensor images and image recognition | |
CN117073669A (en) | Aircraft positioning method | |
Božić-Štulić et al. | Complete model for automatic object detection and localisation on aerial images using convolutional neural networks | |
CN116309821A (en) | Unmanned aerial vehicle positioning method based on heterologous image registration | |
CN114494039A (en) | Underwater hyperspectral push-broom image geometric correction method | |
Liu et al. | Adaptive re-weighted block adjustment for multi-coverage satellite stereo images without ground control points |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||