CN114842074A - Unmanned aerial vehicle image positioning method based on model matching - Google Patents


Info

Publication number
CN114842074A
Authority
CN
China
Prior art keywords
point cloud
unmanned aerial vehicle
three-dimensional model
point cloud data
Prior art date
Legal status
Pending
Application number
CN202210646966.7A
Other languages
Chinese (zh)
Inventor
石昌俊 (Shi Changjun)
吕品 (Lyu Pin)
蔡小飞 (Cai Xiaofei)
沙建东 (Sha Jiandong)
Current Assignee
Nanjing Sensi Technology Co., Ltd.
Original Assignee
Nanjing Sensi Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Nanjing Sensi Technology Co., Ltd.
Priority to CN202210646966.7A
Publication of CN114842074A
Current legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05: Geographic models
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/08: Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/40: Image enhancement or restoration using histogram techniques
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/10044: Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an unmanned aerial vehicle image positioning method based on model matching, which comprises the following steps: load a high-precision three-dimensional model map of the unmanned aerial vehicle positioning space; if the map loads successfully, acquire the lidar point cloud data at time k, match the lidar point cloud data at time k against the high-precision map, and solve the pose of the unmanned aerial vehicle; if the loading fails, match the lidar point cloud data at time k against the sub-map of the spatial three-dimensional model built at time k-1, and solve the pose of the unmanned aerial vehicle; then, based on the mounting relationship between the lidar and the vision sensor, the gimbal angle, and the unmanned aerial vehicle pose, compute the position of the captured image in the three-dimensional model, and output the captured image together with its position information in the three-dimensional model. By the method, unmanned inspection can be carried out in a satellite-denied environment, images and their positions in the three-dimensional model are obtained, and inspection efficiency and accuracy are improved.

Description

Unmanned aerial vehicle image positioning method based on model matching
Technical Field
The invention relates to the technical field of intelligent inspection by unmanned aerial vehicle systems, and in particular to an unmanned aerial vehicle image positioning method based on model matching.
Background
Enclosed-space inspection means that an inspection body carries out examination, auditing, testing, and the like on the internal structures and equipment of a space; the required technical certificate can be obtained only after the corresponding inspection.
The traditional manual inspection method suffers from drawbacks such as low efficiency and high cost, whereas an unmanned system can localize itself autonomously in a satellite-signal-denied environment based on lidar and associate each captured image with its shooting position, effectively improving inspection efficiency and accuracy.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle image positioning method based on model matching, which achieves high-precision autonomous positioning based on lidar when a high-precision three-dimensional model map is available, and computes the position of the captured image in the three-dimensional model through the mounting relationship between the lidar and the vision sensor and the gimbal angle.
In order to achieve the purpose, the invention provides the following scheme:
an unmanned aerial vehicle image positioning method based on model matching comprises the following steps:
loading a high-precision three-dimensional model map of the unmanned aerial vehicle positioning space; if the map is loaded successfully, acquiring lidar point cloud data at time k, matching the lidar point cloud data at time k against the high-precision map, and solving the pose of the unmanned aerial vehicle;
if the loading fails, matching the lidar point cloud data at time k against the sub-map of the spatial three-dimensional model built at time k-1, and solving the pose of the unmanned aerial vehicle;
computing, based on the mounting relationship between the lidar and the vision sensor, the gimbal angle, and the unmanned aerial vehicle pose, the position of the captured image in the three-dimensional model, and outputting the captured image together with its position information in the three-dimensional model.
Preferably, the high-precision map of the three-dimensional model comprises the point cloud data and the fast point feature histogram (FPFH) of the point cloud.
Preferably, matching the lidar point cloud data at time k against the high-precision map and solving the pose of the unmanned aerial vehicle comprise the following steps:
calculating the fast point feature histogram (FPFH) of each point in the lidar point cloud data at time k; finding the corresponding points of a number of sampling points based on the FPFH and calculating a transformation matrix; projecting the sampling points through the transformation matrix, calculating the distance errors between the projected sampling points and their corresponding points, and solving the unmanned aerial vehicle pose from the distance errors and the transformation matrix.
Preferably, the fast point feature histogram FPFH is calculated as:

$$\mathrm{FPFH}(p) = \mathrm{SPFH}(p) + \frac{1}{k_i} \sum_{i=1}^{k_i} \frac{1}{\omega_i}\,\mathrm{SPFH}(p_i)$$

where p is the point to be computed, p_i is the i-th neighborhood point of p, k_i is the number of neighborhood points, SPFH is the simplified point feature histogram, and ω_i denotes the distance between p and p_i.
Preferably, calculating the distance error between a sampling point and its corresponding point after the projection transformation comprises:

$$H(l_i) = \begin{cases} \frac{1}{2}\, l_i^2, & |l_i| \le m_l \\ \frac{1}{2}\, m_l \left( 2\,|l_i| - m_l \right), & |l_i| > m_l \end{cases}$$

where m_l is a preset value and l_i is the distance difference between the i-th sampling point and its corresponding point after the projection transformation.
Preferably, when the distance error and the score are smaller than the set thresholds or the set number of calculations is reached, the transformation matrix at the minimum score is selected as the point cloud registration result; the score is calculated as:

$$\mathrm{score} = \sum_{i=1}^{n} H(l_i)$$

where l_i is the distance difference between the i-th sampling point and its corresponding point after the projection transformation.
Preferably, based on the transformation matrix at the minimum score, the lidar point cloud data at time k are projected, each transformed point is placed into the nearest spatial grid cell, and the transformation matrix is optimized; the constructed optimization function is:

$$T^* = \arg\max_{T} \sum_{k} \bar{p}\left( T x_k \right)$$

where T* is the optimal transformation matrix, p̄ is the probability density function, T is the pose transformation matrix, and x_k is the input point cloud; the above formula is iterated with Newton's optimization method to obtain the optimal solution.
Preferably, obtaining the spatial grid comprises:
gridding the space occupied by the point cloud data in the high-precision map and calculating the probability density function of the point cloud data in each grid cell:

$$\bar{p}(x_k) = d_1 \exp\left( -\frac{d_2}{2}\,(x_k - \mu)'\,\Sigma_k^{-1}\,(x_k - \mu) \right)$$

where p̄ is the probability density function, x_k is the input point cloud, d_1 and d_2 are normalization constants of the probability density function, (·)' denotes the transpose operation, and μ and Σ_k are calculated as:

$$\mu = \frac{1}{m} \sum_{j=1}^{m} y_j, \qquad \Sigma_k = \frac{1}{m} \sum_{j=1}^{m} (y_j - \mu)(y_j - \mu)'$$

where m is the number of points in the grid cell and y_j is the j-th point in the cell.
Preferably, matching the lidar point cloud data at time k against the three-dimensional model sub-map constructed at time k-1 comprises:
gridding the space occupied by the point cloud data at time k-1 and calculating the probability density function of the point cloud data in each grid cell:

$$\bar{p}(x_k) = d_1 \exp\left( -\frac{d_2}{2}\,(x_k - \mu)'\,\Sigma_k^{-1}\,(x_k - \mu) \right)$$

where p̄ is the probability density function;
performing the projection transformation on the lidar point cloud data at time k, placing each transformed point into the nearest spatial grid cell, and optimizing the transformation matrix with the constructed optimization function:

$$T^* = \arg\max_{T} \sum_{k} \bar{p}\left( T x_k \right)$$

where T* is the optimal transformation matrix, iterated with Newton's optimization method to obtain the optimal solution; and updating the three-dimensional model sub-map with the optimal transformation matrix T* and the lidar point cloud data at time k to obtain the updated three-dimensional model sub-map MS(k).
Preferably, calculating the position of the captured image in the three-dimensional model based on the pose of the unmanned aerial vehicle comprises:
calculating, based on the mounting relationship between the lidar and the vision sensor, the gimbal angle, and the unmanned aerial vehicle pose, the position of the captured image in the three-dimensional model:

$$P_n = R_b^n \left( R_g^k \left( R_c^l\, P_c + t_c^l \right) \right) + t_b^n$$

where R_c^l and t_c^l are the rotation matrix and translation vector between the lidar and the vision sensor, R_g^k is the rotation matrix for the gimbal-angle change at time k, P_c is the coordinate of the captured image in the vision-sensor coordinate system, R_b^n is the rotation matrix of the unmanned aerial vehicle in the navigation coordinate system, t_b^n is the translation vector of the unmanned aerial vehicle in the navigation coordinate system, and P_n is the position of the captured image in the three-dimensional model.
The invention has the beneficial effects that:
by the method, the problems of low unmanned detection accuracy caused by poor positioning accuracy of the unmanned aerial vehicle in a satellite signal rejection environment and low manual detection efficiency and high cost can be solved, images in the three-dimensional model and corresponding positions of the images in the three-dimensional model can be obtained by the method, and the detection efficiency and accuracy can be improved effectively.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. The drawings described below are obviously only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those skilled in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The flow of the method of the invention is shown in FIG. 1; the specific steps are as follows:
the method comprises the following steps: the high-precision map M loaded with the three-dimensional model includes both Point cloud data and a Fast Point Feature Histogram (FPFH) of the Point cloud. And jumping to the step two if the loading is successful, and jumping to the step three if the loading is failed.
Step two: and collecting laser radar point cloud data S (k) at the moment k, matching the point cloud data S (k) with the point cloud data M, and resolving the unmanned aerial vehicle pose at the moment k.
1) Calculate the FPFH of each point in S(k):

$$\mathrm{FPFH}(p) = \mathrm{SPFH}(p) + \frac{1}{k_i} \sum_{i=1}^{k_i} \frac{1}{\omega_i}\,\mathrm{SPFH}(p_i)$$

where p is the point to be computed, p_i is the i-th neighborhood point of p, k_i is the number of neighborhood points, SPFH is the simplified point feature histogram, and ω_i denotes the distance between p and p_i. From S(k), randomly select n sampling points whose pairwise distances are greater than a set threshold d, so that the selected sampling points have FPFHs that are as distinct as possible.
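As an illustration of this weighting step, the following NumPy sketch assumes the SPFH histograms and the neighborhood structure have already been computed; the names `spfh`, `neighbors`, and `dists` are hypothetical, not taken from the patent.

```python
import numpy as np

def fpfh(spfh: np.ndarray, neighbors: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """Weight each point's SPFH with its neighbors' SPFHs (the FPFH step).

    spfh      -- (N, B) precomputed SPFH histograms, B bins per point
    neighbors -- (N, K) indices of the K neighborhood points of each point
    dists     -- (N, K) distances from each point to its K neighbors
    """
    n, k = neighbors.shape
    out = spfh.copy()
    for i in range(n):
        # FPFH(p) = SPFH(p) + (1/k) * sum_j (1/omega_j) * SPFH(p_j)
        w = 1.0 / np.maximum(dists[i], 1e-12)  # guard against zero distances
        out[i] = spfh[i] + (w[:, None] * spfh[neighbors[i]]).sum(axis=0) / k
    return out
```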
2) Find the corresponding points of the n sampling points in the point cloud data of M according to the FPFH, and compute the transformation matrix T̂ for this match from the n sampling-point/corresponding-point pairs. Project the n sampling points through T̂, compute the distance error between each projected point and its corresponding point, and judge the point cloud registration quality from the distance errors:

$$\mathrm{score} = \sum_{i=1}^{n} H(l_i)$$

where H(l_i) is given by:

$$H(l_i) = \begin{cases} \frac{1}{2}\, l_i^2, & |l_i| \le m_l \\ \frac{1}{2}\, m_l \left( 2\,|l_i| - m_l \right), & |l_i| > m_l \end{cases}$$

where m_l is a preset value and l_i is the distance difference between the i-th sampling point and its corresponding point after the projection transformation.
3) Repeat 2) until the distance error and score fall below the set thresholds or the set number of calculations is reached, and select the transformation matrix T̂* with the minimum score as the point cloud registration result.
4) Grid the space occupied by the point cloud data in M and compute the probability density function of the point cloud data in each grid cell:

$$\bar{p}(x_k) = d_1 \exp\left( -\frac{d_2}{2}\,(x_k - \mu)'\,\Sigma_k^{-1}\,(x_k - \mu) \right)$$

where p̄ is the probability density function, x_k is the input point cloud, d_1 and d_2 are normalization constants of the probability density function, and μ and Σ_k are computed as:

$$\mu = \frac{1}{m} \sum_{j=1}^{m} y_j, \qquad \Sigma_k = \frac{1}{m} \sum_{j=1}^{m} (y_j - \mu)(y_j - \mu)'$$

where m is the number of points in the grid cell and y_j is the j-th point in the cell.
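A sketch of these per-cell statistics under the assumption of a fixed-size, axis-aligned grid; the cell size and the minimum point count per cell are illustrative choices, not specified in the patent.

```python
import numpy as np
from collections import defaultdict

def grid_statistics(points: np.ndarray, cell: float = 1.0):
    """Compute (mu, Sigma) for the map points falling in each grid cell.

    points -- (N, 3) point cloud of the map M
    cell   -- grid cell edge length (assumed value)
    Returns {cell_index: (mu, Sigma)} for cells holding enough points.
    """
    cells = defaultdict(list)
    for p in points:
        cells[tuple(np.floor(p / cell).astype(int))].append(p)

    stats = {}
    for idx, pts in cells.items():
        if len(pts) < 5:                 # too few points for a stable covariance
            continue
        y = np.asarray(pts)
        mu = y.mean(axis=0)              # mu = (1/m) sum_j y_j
        d = y - mu
        sigma = d.T @ d / len(y)         # Sigma = (1/m) sum_j (y_j - mu)(y_j - mu)'
        stats[idx] = (mu, sigma)
    return stats
```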
5) Using the transformation matrix T̂* computed in 3), project S(k), place each transformed point into the nearest spatial grid cell, and optimize the transformation matrix so as to maximize the probability that the input point cloud lies on the target point cloud. The constructed optimization function is:

$$T^* = \arg\max_{T} \sum_{k} \bar{p}\left( T x_k \right)$$

where T* is the optimal transformation matrix; the above formula is iterated with Newton's optimization method to obtain the optimal solution.
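The following sketch evaluates this objective for one candidate pose, reusing the grid statistics from the sketch above; a full implementation would also supply the gradient and Hessian that Newton's method needs. All names are illustrative assumptions.

```python
import numpy as np

def ndt_objective(T, scan, stats, cell=1.0, d1=1.0, d2=1.0):
    """Sum of per-point NDT probabilities for a candidate transform T.

    T     -- (4, 4) homogeneous pose transformation matrix
    scan  -- (N, 3) input point cloud S(k)
    stats -- {cell_index: (mu, Sigma)} as returned by grid_statistics()
    """
    transformed = scan @ T[:3, :3].T + T[:3, 3]
    total = 0.0
    for x in transformed:
        idx = tuple(np.floor(x / cell).astype(int))  # nearest grid cell
        if idx not in stats:
            continue
        mu, sigma = stats[idx]
        diff = x - mu
        # p_bar(x) = d1 * exp(-d2/2 * (x - mu)' Sigma^{-1} (x - mu))
        total += d1 * np.exp(-0.5 * d2 * diff @ np.linalg.solve(sigma, diff))
    return total
```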
6) Jump to step four.
Step three: and collecting laser radar point cloud data S (k) at the moment k, matching the S (k) with the three-dimensional model sub-map MS (k-1) constructed at the moment k-1, calculating the pose of the unmanned aerial vehicle at the moment k, and updating the three-dimensional model sub-map.
Gridding the space occupied by point cloud data in MS (k-1), and calculating the probability density function of the point cloud data in each grid:
Figure BDA0003686321020000083
wherein the content of the first and second substances,
Figure BDA0003686321020000084
is a probability density function.
And (k) performing projection transformation on the S (k), putting the transformed point cloud into a space grid with the closest distance, and optimizing a transformation matrix to maximize the possibility that the input point cloud is positioned on the target point cloud. The constructed optimization function is as follows:
Figure BDA0003686321020000085
T * the optimal transformation matrix is obtained, and the above formula is iterated by a Newton optimization method to obtain the optimal solution. According to the optimal transformation matrix T * And S (k) updating the three-dimensional model sub-map to obtain MS (k).
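A minimal sketch of the sub-map update, assuming the sub-map is represented as a plain point array; the patent does not specify the representation.

```python
import numpy as np

def update_submap(MS_prev: np.ndarray, scan: np.ndarray, T_star: np.ndarray) -> np.ndarray:
    """Append the scan S(k), transformed by the optimal pose T*, to the sub-map.

    MS_prev -- (M, 3) sub-map MS(k-1)
    scan    -- (N, 3) lidar point cloud S(k)
    T_star  -- (4, 4) optimal transformation matrix T*
    """
    registered = scan @ T_star[:3, :3].T + T_star[:3, 3]  # move S(k) into the map frame
    return np.vstack([MS_prev, registered])               # MS(k)
```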
Jump to step four.
Step four: the method comprises the steps that a visual sensor carried by the unmanned aerial vehicle is used for shooting a three-dimensional model image, and the position of the image in the three-dimensional model is calculated based on the installation relation of a laser radar and the visual sensor, the angle of a holder and the pose of the unmanned aerial vehicle at the moment k.
The transformation matrix T* of the unmanned aerial vehicle at time k comprises the rotation matrix R_b^n and the translation vector t_b^n of the unmanned aerial vehicle in the navigation coordinate system. Taking the lidar coordinate system to coincide with the unmanned aerial vehicle body coordinate system, the position of the image in the three-dimensional model is computed as:

$$P_n = R_b^n \left( R_g^k \left( R_c^l\, P_c + t_c^l \right) \right) + t_b^n$$

where R_c^l and t_c^l are the rotation matrix and translation vector between the lidar and the vision sensor, R_g^k is the rotation matrix for the gimbal-angle change at time k, P_c is the coordinate of the captured image in the vision-sensor coordinate system, and P_n is the position of the captured image in the three-dimensional model.
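A sketch of this chain of transforms as reconstructed above; the composition order is an assumption inferred from the listed variables, with the lidar frame taken to coincide with the body frame as the patent states.

```python
import numpy as np

def image_position_in_model(P_c, R_c_l, t_c_l, R_g_k, R_b_n, t_b_n):
    """Map a point from the vision-sensor frame into the navigation/model frame.

    P_c          -- (3,) point in the vision-sensor coordinate system
    R_c_l, t_c_l -- camera-to-lidar rotation (3, 3) and translation (3,)
    R_g_k        -- (3, 3) rotation for the gimbal-angle change at time k
    R_b_n, t_b_n -- body-to-navigation rotation (3, 3) and translation (3,)
    """
    p_lidar = R_c_l @ P_c + t_c_l  # vision sensor -> lidar/body frame
    p_body = R_g_k @ p_lidar       # apply the gimbal rotation at time k
    return R_b_n @ p_body + t_b_n  # body frame -> navigation frame: P_n
```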
Step five: and outputting the shot image and the position information of the shot image in the three-dimensional model, and jumping to the second step under the condition that the M is successfully loaded, or jumping to the third step.
The above-described embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope; various modifications and improvements made by those skilled in the art to the technical solutions of the present invention without departing from its spirit fall within the protection scope defined by the claims.

Claims (10)

1. An unmanned aerial vehicle image positioning method based on model matching, characterized by comprising:
loading a high-precision three-dimensional model map of the unmanned aerial vehicle positioning space; if the loading succeeds, acquiring lidar point cloud data at time k, matching the lidar point cloud data at time k against the high-precision map, and solving the pose of the unmanned aerial vehicle;
if the loading fails, matching the lidar point cloud data at time k against the sub-map of the spatial three-dimensional model built at time k-1, and solving the pose of the unmanned aerial vehicle;
computing, based on the mounting relationship between the lidar and the vision sensor, the gimbal angle, and the unmanned aerial vehicle pose, the position of the captured image in the three-dimensional model, and outputting the captured image together with its position information in the three-dimensional model.
2. The model-matching-based unmanned aerial vehicle image positioning method of claim 1, wherein the high-precision map of the three-dimensional model comprises the point cloud data and the fast point feature histogram (FPFH) of the point cloud.
3. The model-matching-based unmanned aerial vehicle image positioning method of claim 1, wherein matching the lidar point cloud data at time k against the high-precision map and solving the unmanned aerial vehicle pose comprise:
calculating the fast point feature histogram (FPFH) of each point in the lidar point cloud data at time k; finding the corresponding points of a number of sampling points based on the FPFH and calculating a transformation matrix; projecting the sampling points through the transformation matrix, calculating the distance errors between the projected sampling points and their corresponding points, and solving the unmanned aerial vehicle pose from the distance errors and the transformation matrix.
4. The model-matching-based unmanned aerial vehicle image positioning method of claim 3, wherein the fast point feature histogram FPFH is calculated as:

$$\mathrm{FPFH}(p) = \mathrm{SPFH}(p) + \frac{1}{k_i} \sum_{i=1}^{k_i} \frac{1}{\omega_i}\,\mathrm{SPFH}(p_i)$$

where p is the point to be computed, p_i is the i-th neighborhood point of p, k_i is the number of neighborhood points, SPFH is the simplified point feature histogram, and ω_i denotes the distance between p and p_i.
5. The model-matching-based unmanned aerial vehicle image positioning method of claim 3, wherein calculating the distance error between a sampling point and its corresponding point after the projection transformation comprises:

$$H(l_i) = \begin{cases} \frac{1}{2}\, l_i^2, & |l_i| \le m_l \\ \frac{1}{2}\, m_l \left( 2\,|l_i| - m_l \right), & |l_i| > m_l \end{cases}$$

where m_l is a preset value and l_i is the distance difference between the i-th sampling point and its corresponding point after the projection transformation.
6. The model-matching-based unmanned aerial vehicle image positioning method of claim 5, wherein, when the distance error and the score are smaller than the set thresholds or the set number of calculations is reached, the transformation matrix at the minimum score is selected as the point cloud registration result; the score is calculated as:

$$\mathrm{score} = \sum_{i=1}^{n} H(l_i)$$

where l_i is the distance difference between the i-th sampling point and its corresponding point after the projection transformation.
7. The model-matching-based unmanned aerial vehicle image positioning method of claim 6, wherein, based on the transformation matrix at the minimum score, the lidar point cloud data at time k are projected, each transformed point is placed into the nearest spatial grid cell, and the transformation matrix is optimized; the constructed optimization function is:

$$T^* = \arg\max_{T} \sum_{k} \bar{p}\left( T x_k \right)$$

where T* is the optimal transformation matrix, p̄ is the probability density function, T is the pose transformation matrix, and x_k is the input point cloud; the above formula is iterated with Newton's optimization method to obtain the optimal solution.
8. The model-matching-based unmanned aerial vehicle image positioning method of claim 7, wherein obtaining the spatial grid comprises:
gridding the space occupied by the point cloud data in the high-precision map and calculating the probability density function of the point cloud data in each grid cell:

$$\bar{p}(x_k) = d_1 \exp\left( -\frac{d_2}{2}\,(x_k - \mu)'\,\Sigma_k^{-1}\,(x_k - \mu) \right)$$

where p̄ is the probability density function, x_k is the input point cloud, d_1 and d_2 are normalization constants of the probability density function, (·)' denotes the transpose operation, and μ and Σ_k are calculated as:

$$\mu = \frac{1}{m} \sum_{j=1}^{m} y_j, \qquad \Sigma_k = \frac{1}{m} \sum_{j=1}^{m} (y_j - \mu)(y_j - \mu)'$$

where m is the number of points in the grid cell and y_j is the j-th point in the cell.
9. The model-matching-based unmanned aerial vehicle image positioning method of claim 1, wherein matching the lidar point cloud data at time k against the three-dimensional model sub-map constructed at time k-1 comprises:
gridding the space occupied by the point cloud data at time k-1 and calculating the probability density function of the point cloud data in each grid cell:

$$\bar{p}(x_k) = d_1 \exp\left( -\frac{d_2}{2}\,(x_k - \mu)'\,\Sigma_k^{-1}\,(x_k - \mu) \right)$$

where p̄ is the probability density function;
performing the projection transformation on the lidar point cloud data at time k, placing each transformed point into the nearest spatial grid cell, and optimizing the transformation matrix with the constructed optimization function:

$$T^* = \arg\max_{T} \sum_{k} \bar{p}\left( T x_k \right)$$

where T* is the optimal transformation matrix, iterated with Newton's optimization method to obtain the optimal solution; and updating the three-dimensional model sub-map with the optimal transformation matrix T* and the lidar point cloud data at time k to obtain the updated three-dimensional model sub-map MS(k).
10. The model-matching-based unmanned aerial vehicle image positioning method of claim 1, wherein calculating the position of the captured image in the three-dimensional model based on the pose of the unmanned aerial vehicle comprises:
calculating, based on the mounting relationship between the lidar and the vision sensor, the gimbal angle, and the unmanned aerial vehicle pose, the position of the captured image in the three-dimensional model:

$$P_n = R_b^n \left( R_g^k \left( R_c^l\, P_c + t_c^l \right) \right) + t_b^n$$

where R_c^l and t_c^l are the rotation matrix and translation vector between the lidar and the vision sensor, R_g^k is the rotation matrix for the gimbal-angle change at time k, P_c is the coordinate of the captured image in the vision-sensor coordinate system, R_b^n is the rotation matrix of the unmanned aerial vehicle in the navigation coordinate system, t_b^n is the translation vector of the unmanned aerial vehicle in the navigation coordinate system, and P_n is the position of the captured image in the three-dimensional model.
CN202210646966.7A 2022-06-09 2022-06-09 Unmanned aerial vehicle image positioning method based on model matching Pending CN114842074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210646966.7A CN114842074A (en) 2022-06-09 2022-06-09 Unmanned aerial vehicle image positioning method based on model matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210646966.7A CN114842074A (en) 2022-06-09 2022-06-09 Unmanned aerial vehicle image positioning method based on model matching

Publications (1)

Publication Number Publication Date
CN114842074A true CN114842074A (en) 2022-08-02

Family

ID=82573696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210646966.7A Pending CN114842074A (en) 2022-06-09 2022-06-09 Unmanned aerial vehicle image positioning method based on model matching

Country Status (1)

Country Link
CN (1) CN114842074A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116203554A (en) * 2023-05-06 2023-06-02 武汉煜炜光学科技有限公司 Environment point cloud data scanning method and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination