CN113538579A - Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information - Google Patents

Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information

Info

Publication number
CN113538579A
CN113538579A (application CN202110797651.8A)
Authority
CN
China
Prior art keywords
map
point cloud
mobile robot
image
phase correlation
Prior art date
Legal status
Granted
Application number
CN202110797651.8A
Other languages
Chinese (zh)
Other versions
CN113538579B (en)
Inventor
王越
许学成
陈泽希
熊蓉
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110797651.8A priority Critical patent/CN113538579B/en
Publication of CN113538579A publication Critical patent/CN113538579A/en
Application granted granted Critical
Publication of CN113538579B publication Critical patent/CN113538579B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20056 Discrete and fast Fourier transform, [DFT, FFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a mobile robot positioning method based on an unmanned aerial vehicle map and ground binocular information, and belongs to the field of mobile robot positioning. In the method, a binocular camera on the mobile robot is used to obtain a ground point cloud image of the robot's current position; meanwhile, a rough position estimate provided by on-board sensors is used to crop a local image from the bird's-eye-view map built by the unmanned aerial vehicle. The two images are passed through a deep phase correlation network, which computes their phase correlation and converts it into a probability distribution map, so that accurate positioning of the robot can be achieved with a particle filter positioning algorithm. The method can correct the rough position estimate given by on-board sensors such as GPS and the odometer, eliminates the adverse effect of external factors such as illumination and occlusions on the positioning result, and greatly improves the robustness of autonomous positioning of the mobile robot.

Description

Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information
Technical Field
The invention belongs to the field of mobile robot positioning, and particularly relates to a mobile robot positioning method based on an unmanned aerial vehicle map and ground binocular information.
Background
Self-positioning is a very important part of a mobile robot system, especially in rescue scenarios such as earthquakes and landslides. Because of its large payload and its mode of locomotion, a ground mobile robot is strongly constrained in its motion and must detour around obstacles. In a rescue scene with many obstacles, positioning and navigation are difficult: the obstacles block the robot's view, the environment cannot be perceived over a large range, and navigation planning becomes impossible. Cooperation with an unmanned aerial vehicle is therefore very important. An unmanned aerial vehicle is more agile and has a wide field of view, but a small payload. In the cooperation between a ground robot and an aerial robot, the unmanned aerial vehicle can act as a scout that explores the way ahead.
In view of the above, designing a method that enables the mobile robot to position itself by means of a map constructed by the unmanned aerial vehicle and its on-board binocular camera is a technical problem that urgently needs to be solved in the prior art.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a mobile robot positioning method based on an unmanned aerial vehicle map and binocular camera information.
In order to achieve the above purpose, the invention specifically adopts the following technical scheme:
a mobile robot positioning method based on unmanned plane maps and ground binocular information comprises the following steps:
s1: utilizing an unmanned aerial vehicle to carry out full coverage detection on a robot moving area, obtaining an image sequence of a downward-looking camera and parameters in a flight process, and recovering a bird's-eye view map of the robot moving area through a sparse point cloud map;
s2: detecting an area in front of the position by using a binocular camera carried on the mobile robot to form a ground point cloud rich in texture information, and observing the ground point cloud at a bird's-eye view angle to obtain a ground point cloud image;
s3: the mobile robot estimates its own position according to a sensor carried by the mobile robot, and a local aerial view map with the same size as the ground point cloud image is cropped from the aerial view map, centered on the position of the mobile robot;
s4: inputting the ground point cloud image and the local aerial view map into a depth phase correlation network, extracting robust features in the ground point cloud image and the local aerial view map through convolution operations, converting the extracted features into feature maps with the same size as the original images through deconvolution operations, removing the translation components of the feature maps of the ground point cloud image and the local aerial view map through a Fourier transform operation, converting the rotation components into translations through a log-polar transform operation, and finally obtaining a phase correlation map through a phase correlation operation;
S5: performing Softmax operation on the phase correlation diagram to convert the phase correlation diagram into 0-1 distribution to obtain a probability distribution diagram;
s6: and on the basis of the probability distribution map, positioning the accurate position of the mobile robot on the map based on a particle filter positioning method.
Preferably, in S1, the unmanned aerial vehicle first surveys the robot moving area: it flies over the area to be detected and returns after covering it, and sends the image sequence of the downward-looking camera, the flight IMU data and the camera parameters back to a ground workstation. The ground workstation first estimates the pose of each frame in the image sequence by means of SLAM technology, then constructs a sparse point cloud map of the ground by feature point matching, and finally interpolates the sparse point cloud with the images and constructs a Mesh surface, recovering the bird's-eye view map of the area to be detected.
Preferably, in S2, the depth information of the area in front of the location is estimated by a binocular camera mounted on the mobile robot to form a point cloud, the image information of the left eye camera is added to the formed point cloud in the form of texture to form a ground point cloud rich in texture information, and the ground point cloud is observed at a bird's-eye view angle to obtain a ground point cloud image at the bird's-eye view angle.
Preferably, in S3, the mobile robot estimates the position of the mobile robot according to GPS or odometer.
Preferably, in S4, the depth-phase correlation network includes 8 different U-Net networks, wherein a specific method for outputting the phase correlation map for the input ground point cloud image and the local bird's-eye view map is as follows:
s401: taking a first U-Net network and a second U-Net network which are trained in advance as two feature extractors, respectively taking a local aerial view map and a ground point cloud image as respective original input images of the two feature extractors, and extracting isomorphic features in the two original input images to obtain an isomorphic first feature image and an isomorphic second feature image;
s402: respectively carrying out Fourier transform on the first characteristic diagram and the second characteristic diagram obtained in the S401, and then taking respective magnitude spectrums;
s403: respectively carrying out log-polar coordinate transformation on the two magnitude spectrums obtained in the S402 to convert the two magnitude spectrums from a Cartesian coordinate system to a log-polar coordinate system, so that the rotation transformation between the two magnitude spectrums under the Cartesian coordinate system is mapped to the translation transformation in the y direction in the log-polar coordinate system;
s404: performing phase correlation solving on the amplitude spectrums subjected to the coordinate transformation in the step S403 to obtain a translation transformation relation between the two amplitude spectrums, and performing retransformation according to a mapping relation between a Cartesian coordinate system and a logarithmic polar coordinate system in the step S403 to obtain a rotation transformation relation between the local aerial view map and the ground point cloud image;
S405: taking a third U-Net network and a fourth U-Net network which are trained in advance as two feature extractors, respectively taking a local aerial view map and a ground point cloud image as respective original input images of the two feature extractors, and extracting isomorphic features in the two original input images to obtain an isomorphic third feature image and an isomorphic fourth feature image;
s406: performing Fourier transform on the third characteristic diagram and the fourth characteristic diagram obtained in the step S405 respectively, and then taking respective magnitude spectrums;
s407: respectively carrying out log-polar coordinate transformation on the two magnitude spectrums obtained in the step S406 to convert the two magnitude spectrums from a Cartesian coordinate system to a log-polar coordinate system, so that scaling transformation under the Cartesian coordinate system between the two magnitude spectrums is mapped to translation transformation in the x direction in the log-polar coordinate system;
s408: performing phase correlation solving on the amplitude spectrums subjected to the coordinate transformation in the step S407 to obtain a translation transformation relation between the two amplitude spectrums, and performing retransformation according to a mapping relation between a Cartesian coordinate system and a logarithmic polar coordinate system in the step S407 to obtain a scaling transformation relation between the local aerial view map and the ground point cloud image;
s409: performing corresponding rotation and scaling transformation on the ground point cloud image according to the rotation transformation relation and the scaling transformation relation obtained in S404 and S408 to obtain a new ground point cloud image;
S410: taking a fifth U-Net network and a sixth U-Net network which are trained in advance as two feature extractors, respectively taking a local aerial view map and a new ground point cloud image as respective original input images of the two feature extractors, and extracting isomorphic features in the two original input images to obtain an isomorphic fifth feature map and an isomorphic sixth feature map;
s411: performing phase correlation solving on the fifth feature map and the sixth feature map obtained in the step S410 to obtain a first phase correlation map, which is used for further calculating a translation transformation relationship in the x direction between the local aerial view map and the ground point cloud image;
s412: taking a pre-trained seventh U-Net network and an eighth U-Net network as two feature extractors, respectively taking a local aerial view map and a new ground point cloud image as respective original input images of the two feature extractors, extracting isomorphic features in the two original input images, and obtaining a seventh feature map and an eighth feature map which are isomorphic and only keep the translation transformation relation between the original input images;
s413: performing phase correlation solving on the seventh feature map and the eighth feature map obtained in the step S412 to obtain a second phase correlation map, which is used for further calculating a translation transformation relation in the y direction between the local aerial view map and the ground point cloud image;
S414: and after superposition and summation, the first phase correlation diagram and the second phase correlation diagram are normalized, and the normalized phase correlation diagram is used as a final output phase correlation diagram for performing Softmax operation.
Preferably, in the deep phase correlation network the 8 U-Net networks are independent of each other, and each U-Net network extracts a feature map of the same size as the input original image through 4 encoder layers that perform down-sampling through convolution operations and 4 decoder layers that perform up-sampling through deconvolution operations; the 8 U-Net networks are trained in advance, and the total training loss function is the weighted sum of the rotation transformation relation loss, the scaling transformation relation loss, the x-direction translation transformation relation loss and the y-direction translation transformation relation loss between the local aerial view map and the ground point cloud image.
Preferably, the weight of each of the four losses in the total loss function is 1, and the L1 loss is used for each of the four losses.
Preferably, in S6, the method for locating the accurate position of the mobile robot on the map based on the particle filter locating method is as follows:
s61: firstly, scattering a preset number of points near the current position of the mobile robot, wherein each point represents an assumed position of the mobile robot;
S62: mapping the points into the probability distribution map, wherein the probability value of one point in the probability distribution map represents the weight of the point, and the higher the weight is, the higher the possibility that the mobile robot is at the position is;
s63: after the weight of the particles is obtained, resampling operation is carried out according to the weight, so that the particles with large weight continuously exist, and the particles with small weight are gradually filtered;
s64: the mobile robot moves all the particles according to the estimated motion, and the particles perform weight updating calculation according to the probability distribution map;
s65: and continuously iterating and repeating the steps S63 and S64 to enable the particles to gradually gather near the real position, and determining the accurate position of the mobile robot on the map by using the position center of the final gathered particles after iteration is finished.
Compared with the prior art, the invention has the following beneficial effects:
according to the method, a binocular camera of the mobile robot is used for obtaining a ground point cloud image of the position of the mobile robot, meanwhile, a position rough estimation value determined by a vehicle-mounted sensor is used for intercepting a local image from an aerial view map of the unmanned aerial vehicle, the two images are subjected to phase correlation through a depth phase correlation network and converted into a probability distribution map, and therefore accurate positioning of the robot can be achieved through a particle filter positioning algorithm. The method can correct the position rough estimation value determined by vehicle-mounted sensors such as a GPS, a speedometer and the like, eliminates the adverse effect of external factors such as illumination, shelters and the like on the positioning result, and greatly improves the robustness of the autonomous positioning of the mobile robot.
Drawings
FIG. 1 is a flow chart of steps of a mobile robot positioning method based on an unmanned aerial vehicle map and ground binocular information;
fig. 2 is a model framework diagram of a deep phase correlation network.
Fig. 3 is a map constructed by drones in one example.
Fig. 4 is a ground point cloud image obtained by a binocular camera and a local bird's-eye map cut at a corresponding position in one example.
Detailed Description
The invention will be further elucidated and described with reference to the drawings and the detailed description. The technical features of the embodiments of the present invention can be combined correspondingly without mutual conflict.
The invention designs a method for positioning a mobile robot by means of unmanned aerial vehicle detection and a binocular camera. The inventive concept is as follows: an unmanned aerial vehicle first surveys the robot moving area to form a corresponding bird's-eye-view map; the mobile robot then builds a ground point cloud image from the detection data of its binocular camera, a local bird's-eye-view map around the robot's position is cropped from the full bird's-eye-view map, and an end-to-end matching model is trained on pairs of local bird's-eye-view maps and ground point cloud images so that the model can match the two types of images and thereby achieve positioning. The model has a certain generalization capability: in practical application, the ground point cloud image constructed on the ground and the local bird's-eye-view map of the current detection area only need to be fed into the previously trained model to generate a phase correlation map, from which a probability distribution map for positioning is generated, and accurate positioning of the robot can then be achieved through a particle filter positioning algorithm.
The following is a detailed description of specific implementations of the above-described positioning method.
As shown in fig. 1, in a preferred embodiment of the present invention, a method for positioning a mobile robot based on a map of an unmanned aerial vehicle and ground binocular information is provided, which includes the following specific steps:
s1: and (3) carrying out full coverage detection on the robot moving area by using an unmanned aerial vehicle, obtaining an image sequence of a downward-looking camera and parameters in the flight process, and recovering a bird view map of the robot moving area through a sparse point cloud map.
Specific methods for converting unmanned aerial vehicle detection data into a bird's-eye-view map exist in the prior art. In this embodiment, the unmanned aerial vehicle is used to survey the robot moving area: it flies over the region to be detected and returns after covering it, and sends the obtained downward-looking camera image sequence, flight IMU data and camera intrinsic and extrinsic parameters back to a ground workstation. The ground workstation estimates the pose of each frame in the image sequence by SLAM technology, constructs a sparse point cloud map of the ground by feature point matching, and finally interpolates the sparse point cloud with the images and constructs a Mesh surface to recover a bird's-eye-view map of the region to be detected.
The method for acquiring the aerial view map by using the unmanned aerial vehicle can greatly expand the application range of the invention, can acquire the accurate map of the moving area of the robot in a targeted manner, avoids the defects of incomplete coverage of the satellite map, insufficient resolution and the like, and has higher flexibility.
S2: and detecting an area in front of the position by using a binocular camera carried on the mobile robot to form a ground point cloud rich in texture information, and observing the ground point cloud at a bird's-eye view angle to obtain a ground point cloud image.
In this embodiment, the binocular camera mounted on the mobile robot may be used to estimate depth information of an area in front of the location and form a point cloud, image information of the left eye camera is given to the formed point cloud in a texture form to form a ground point cloud rich in texture information, and the ground point cloud is observed at a bird's-eye view angle to obtain a ground point cloud image at the bird's-eye view angle.
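As an illustration of this step, the following Python sketch shows one way to project such a textured point cloud onto a top-down image; the function name, the grid resolution and the assumption that the cloud is given as NumPy arrays of coordinates and left-camera colours are illustrative and not part of the original disclosure.

```python
import numpy as np

def pointcloud_to_birdseye(points, colors, resolution=0.05, size=256):
    """Project a textured ground point cloud onto a top-down (bird's-eye) image.

    points : (N, 3) array of x, y, z coordinates in the robot frame (metres)
    colors : (N, 3) array of RGB values taken from the left-eye camera image
    resolution : metres per pixel of the output image (assumed value)
    size : output image side length in pixels (256, matching the patent's setting)
    """
    img = np.zeros((size, size, 3), dtype=np.uint8)
    # Place the robot at the bottom-centre of the image and convert
    # metric coordinates to pixel indices.
    u = (points[:, 0] / resolution + size / 2).astype(int)   # lateral axis
    v = (size - 1 - points[:, 1] / resolution).astype(int)   # forward axis
    valid = (u >= 0) & (u < size) & (v >= 0) & (v < size)
    img[v[valid], u[valid]] = colors[valid]
    return img
```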
S3: the mobile robot estimates its own position from the sensors mounted on it, and a local bird's-eye-view map having the same size as the ground point cloud image obtained in S2 is cut out from the bird's-eye-view map obtained in S1, centered on the estimated position of the mobile robot.
The mobile robot can estimate its own position with an on-board sensor; for example, a GPS positioning device or an odometer can provide an approximate position of the robot. However, the accuracy of a GPS device is limited by the device itself and by the environment in which the mobile robot operates, large errors tend to occur under external interference, and the odometer can only provide a rough position estimate. The mobile robot can therefore only roughly estimate its own position, and this estimate needs to be corrected by the subsequent steps of the invention.
In the invention, this correction is achieved by image matching between the ground point cloud image and the local bird's-eye-view map. The ground point cloud image is derived from the scene around the robot's true position, while the local bird's-eye-view map is derived from the map constructed by the unmanned aerial vehicle; if the ground point cloud image can be registered to the local bird's-eye-view map, accurate positioning of the robot can be achieved from the map information. However, the full bird's-eye-view map is too large, and registering against it without a target region is too inefficient, so the robot's preliminary position estimate is used to narrow down the image region to be registered. Considering the requirements of the subsequent image registration, an image of the same size as the ground point cloud image is cropped, centered on the estimated position, for registration with the ground point cloud image. In this embodiment, the sizes of the ground point cloud image and the local bird's-eye-view map are both set to 256 × 256.
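A minimal sketch of this cropping step, assuming the aerial map is a NumPy image with a known metric resolution and that the rough position estimate has already been expressed in the map frame; the helper name and the zero-padding strategy at the map border are illustrative only.

```python
import numpy as np

def crop_local_map(aerial_map, center_xy, map_resolution, patch=256):
    """Cut a patch-sized local bird's-eye-view map centred on the rough position estimate.

    aerial_map : (H, W, 3) bird's-eye-view map built from the UAV survey
    center_xy  : rough (x, y) position from GPS/odometry, in the map's metric frame
    map_resolution : metres per pixel of the aerial map (assumed known from reconstruction)
    """
    cx = int(round(center_xy[0] / map_resolution))
    cy = int(round(center_xy[1] / map_resolution))
    half = patch // 2
    # Pad so that a crop near the map border still returns a full 256 x 256 patch.
    padded = np.pad(aerial_map, ((half, half), (half, half), (0, 0)), mode="constant")
    return padded[cy:cy + patch, cx:cx + patch]
```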
S4: inputting the ground point cloud image and the local aerial view map into a depth phase correlation network, extracting robust features in the ground point cloud image and the local aerial view map through convolution operations, converting the extracted features into feature maps with the same size as the original images through deconvolution operations, removing the translation components of the feature maps of the ground point cloud image and the local aerial view map through a Fourier transform operation, converting the rotation components into translations through a log-polar transform operation, and finally obtaining a phase correlation map through a phase correlation operation.
Therefore, the core of the method is to construct a depth phase correlation network, so that the method can process the input ground point cloud image and the local aerial view map, realize heterogeneous matching of the ground point cloud image and the local aerial view map, and output a phase correlation map.
As shown in fig. 2, the core of the network framework of the deep phase correlation network constructed in a preferred embodiment of the present invention consists of 8 independent U-Net networks, together with a Fourier transform layer (FFT), a log-polar transform layer (LPT) and a phase correlation layer (DC). The input of the deep phase correlation network is a pair of heterogeneous images, namely the aforementioned local bird's-eye-view map Sample1 and ground point cloud image Sample2, and its final output is the three pose transformation relationships, i.e. translation, rotation and scaling, required to register the local bird's-eye-view map and the ground point cloud image. The local bird's-eye-view map serves as the matching template, and after the pose transformation the ground point cloud image can be matched and stitched onto it.
In order to solve the problem that heterogeneous images cannot be registered directly, a common approach is to extract features from the two images and use these features, instead of the raw sensor measurements, to estimate the relative pose. In the conventional phase correlation algorithm, a high-pass filter is used to suppress the random noise of the two inputs, and this process can be regarded as a feature extractor. For a pair of heterogeneous input images, however, the two images differ significantly, and a high-pass filter is far from sufficient. Considering that there are no common features with which to supervise the feature extractor directly, the present invention addresses this problem with end-to-end learning. In the invention, 8 independent trainable U-Net networks (denoted U-Net1 to U-Net8) are constructed for the local bird's-eye-view map and the source image in the rotation-scaling stage and the translation stage respectively. After the 8 U-Net networks are trained in advance under the supervision of the translation, rotation and scaling losses, they can extract isomorphic features, i.e. common features, from the heterogeneous images, converting the two heterogeneous images into two isomorphic feature maps. If only 4 U-Net networks were used, the solutions of the rotation and scaling transformations would be coupled, as would the solutions of the x-direction and y-direction translations, and the features extracted by the trained feature extractors would perform poorly. Rotation, scaling, x translation and y translation are therefore decoupled, and a separate pair of U-Net networks is trained for each, giving 8 U-Net networks in total and thereby improving accuracy.
In this embodiment, the input and output sizes of each of the 8 independent U-Net networks are 256 × 256. Each U-Net network extracts features of the same size as the input original image through 4 encoder layers that downsample with convolution operations and 4 decoder layers that upsample with deconvolution operations, with skip connections between the encoder and decoder layers; the specific U-Net structure belongs to the prior art and is not described further. As training progresses, the parameters of the 8 U-Nets are adjusted. Note that the network is lightweight, so it runs efficiently enough in real time to meet the requirements of the application scenario.
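A minimal PyTorch sketch of one such feature extractor is given below. The patent specifies only 4 convolutional down-sampling encoder levels, 4 deconvolutional up-sampling decoder levels, skip connections and an output of the same size as the 256 × 256 input; the channel widths and the exact layer composition used here are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class UNetExtractor(nn.Module):
    """Feature extractor with 4 strided-convolution encoder levels and
    4 transposed-convolution decoder levels; output size equals input size."""
    def __init__(self, c_in=1, c_out=1, base=16):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.encs, self.downs = nn.ModuleList(), nn.ModuleList()
        prev = c_in
        for c in chs:
            self.encs.append(conv_block(prev, c))
            self.downs.append(nn.Conv2d(c, c, 3, stride=2, padding=1))  # downsample by 2
            prev = c
        self.bottom = conv_block(chs[-1], chs[-1])
        self.ups, self.decs = nn.ModuleList(), nn.ModuleList()
        for c in reversed(chs):
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))   # upsample by 2
            self.decs.append(conv_block(c * 2, c))                      # *2 for the skip concat
            prev = c
        self.head = nn.Conv2d(prev, c_out, 1)

    def forward(self, x):
        skips = []
        for enc, down in zip(self.encs, self.downs):
            x = enc(x)
            skips.append(x)               # skip connection kept at this level's resolution
            x = down(x)
        x = self.bottom(x)
        for up, dec, skip in zip(self.ups, self.decs, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)               # (B, c_out, 256, 256) for a (B, c_in, 256, 256) input
```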
In addition, the Fourier transform layer (FFT) applies a Fourier transform to the feature maps extracted by the U-Net networks, removing the translation transformation relation between the images while preserving the rotation and scaling relations. According to the properties of the Fourier transform, only rotation and scale affect the magnitude of the spectrum, which is insensitive to translation. Introducing the FFT therefore yields a representation that is insensitive to translation but particularly sensitive to scaling and rotation, so that translation can be ignored when scaling and rotation are subsequently solved for.
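A sketch of this operation, assuming the feature maps are PyTorch tensors: the 2-D FFT is taken and only the (centre-shifted) magnitude is kept, which is what makes the representation insensitive to translation.

```python
import torch

def magnitude_spectrum(feature_map):
    """Translation-insensitive representation of a (B, 1, H, W) feature map:
    a spatial shift only changes the phase of the 2-D Fourier transform, so
    keeping the magnitude discards translation while preserving the effect
    of rotation and scale on the spectrum."""
    spec = torch.fft.fft2(feature_map)
    spec = torch.fft.fftshift(spec, dim=(-2, -1))  # move the zero frequency to the centre
    return spec.abs()
```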
The log-polar transform layer (LPT) applies a log-polar transformation to the FFT-transformed image, mapping it from a Cartesian coordinate system to a log-polar coordinate system. In this mapping, scaling and rotation in the Cartesian coordinate system are converted into translations in the log-polar coordinate system. This change of coordinates yields a cross-correlation form for scaling and rotation, eliminating any exhaustive evaluation in the deep phase correlation network.
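The following sketch resamples a centred magnitude spectrum onto a log-polar grid with grid_sample; the angular range, radial range and output size chosen here are assumptions, since the patent does not specify the exact parameterisation of the LPT layer.

```python
import math
import torch
import torch.nn.functional as F

def log_polar_transform(mag, out_h=256, out_w=256):
    """Resample a centred (B, 1, H, W) magnitude spectrum onto a log-polar grid.
    Rows correspond to angle and columns to log-radius, so a rotation of the
    Cartesian spectrum becomes a vertical (y) shift and a scale change becomes
    a horizontal (x) shift of the resampled image."""
    b, _, h, w = mag.shape
    theta = torch.linspace(0.0, 2 * math.pi, out_h, device=mag.device)          # angle per output row
    max_log_r = math.log(min(h, w) / 2.0)
    r = torch.exp(torch.linspace(0.0, max_log_r, out_w, device=mag.device))     # radius per output column
    # Cartesian sampling locations, normalised to [-1, 1] for grid_sample.
    xs = (r[None, :] * torch.cos(theta[:, None])) / (w / 2.0)
    ys = (r[None, :] * torch.sin(theta[:, None])) / (h / 2.0)
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    return F.grid_sample(mag, grid, align_corners=False)
```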
In addition, the role of the phase correlation layer (DC) is to perform the phase correlation solving, i.e. to calculate the cross-correlation between the two magnitude spectra. From the resulting correlation, the translation transformation relation between the two inputs can be obtained. The specific calculation of the cross-correlation belongs to the prior art and is not described in detail.
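A sketch of the phase correlation solving, i.e. the normalised cross power spectrum followed by an inverse FFT; the small constant added for numerical stability is an implementation detail rather than part of the disclosure.

```python
import torch

def phase_correlation(a, b):
    """Phase correlation surface between two (B, 1, H, W) maps; the offset of
    its peak from the centre gives the relative translation, and the full
    surface is what the network later turns into a probability distribution."""
    fa = torch.fft.fft2(a)
    fb = torch.fft.fft2(b)
    cross = fa * torch.conj(fb)
    cross = cross / (cross.abs() + 1e-8)            # keep phase information only
    corr = torch.fft.ifft2(cross).real
    return torch.fft.fftshift(corr, dim=(-2, -1))   # zero shift maps to the centre pixel
```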
The following describes in detail a specific calculation process of a phase correlation map between the local aerial view map and the ground point cloud image based on the depth-phase correlation network, and includes the following steps:
s401: a first U-Net network U-Net1 and a second U-Net network U-Net2 which are trained in advance are used as two feature extractors, heterogeneous local aerial view maps and ground point cloud images are used as original input images of the two feature extractors U-Net1 and U-Net2 respectively (namely, the local aerial view maps are input into the U-Net1, the ground point cloud images are input into the U-Net2, the same is carried out below), isomorphic features in the two original input images are extracted, and a first feature map and a second feature map which are isomorphic are obtained. At this time, the translation, rotation and scaling transformation relations between the original input pictures are simultaneously preserved in the first feature map and the second feature map.
S402: a first Fourier transform operation (denoted as FFT1) is performed on the first feature map and the second feature map obtained in S401, and their respective amplitude spectrums are taken; the rotation and scaling transformation relation between the original input pictures is retained between the two amplitude spectrums, while the translation transformation relation has been filtered out by FFT1.
S403: the two magnitude spectra obtained in S402 are respectively subjected to a first log-polar transformation operation (denoted as LPT1) and thereby converted from the Cartesian coordinate system into a log-polar coordinate system, so that a rotation transformation between the two magnitude spectra in the Cartesian coordinate system is mapped to a translation transformation in the Y direction (denoted as Y) in the log-polar coordinate system.
S404: phase correlation solving is performed in the phase correlation layer (DC) on the amplitude spectrums transformed in step S403 to form a phase correlation map A, and an argmax operation on the phase correlation map A gives the translation transformation relation between the two. Since LPT1 in S403 establishes a mapping between the rotation transformation in the Cartesian coordinate system and the translation transformation Y in the Y direction in the log-polar coordinate system, this translation can be converted back according to that mapping to obtain the rotation transformation relation between the local bird's-eye-view map and the ground point cloud image.
The rotation transformation relationship is essentially an angle theta at which the ground point cloud image needs to be rotated to realize the registration with the local aerial view map.
S405: similarly, a third U-Net network U-Net3 and a fourth U-Net network U-Net4 which are trained in advance are used as two feature extractors, a heterogeneous local bird's-eye view map and a ground point cloud image are used as original input pictures of the two feature extractors U-Net3 and U-Net4 respectively, and isomorphic features in the two original input pictures are extracted to obtain a third feature map and a fourth feature map which are isomorphic. At this time, the third feature map and the fourth feature map also simultaneously retain the translation, rotation and scaling transformation relations between the original input pictures.
S406: the third feature map and the fourth feature map obtained in S405 are subjected to a second Fourier transform operation (denoted as FFT2) respectively, and their respective amplitude spectra are obtained. Again, the rotation and scaling transformation relation between the original input pictures remains between the two magnitude spectra, while the translation transformation relation has been filtered out by FFT2.
S407: a second log-polar transformation operation (denoted as LPT2) is performed on the two magnitude spectra obtained in S406 to convert them from the Cartesian coordinate system into a log-polar coordinate system, so that the scaling transformation between the two magnitude spectra in the Cartesian coordinate system is mapped to a translation transformation X in the x direction in the log-polar coordinate system.
S408: phase correlation solving is performed in the phase correlation layer (DC) on the amplitude spectrums transformed in step S407 to form a phase correlation map B, and an argmax operation on the phase correlation map B gives the translation transformation relation between the two. Since LPT2 in S407 establishes a mapping between the scaling transformation in the Cartesian coordinate system and the translation transformation X in the x direction in the log-polar coordinate system, the scaling transformation relation between the local bird's-eye-view map and the ground point cloud image can be obtained by converting back according to that mapping.
The scaling transformation relationship is essentially a scale for scaling the ground point cloud image to realize the registration with the local aerial view map.
Thus, through the above steps, the rotation transformation relationship and the scaling transformation relationship between the local bird's eye view map and the ground point cloud image have been obtained.
S409: the ground point cloud image is rotated and scaled according to the rotation transformation relation and the scaling transformation relation obtained in S404 and S408, yielding a new ground point cloud image. After this rotation and scaling, the angle and scale difference between the local aerial view map and the ground point cloud image no longer exists, so the new ground point cloud image and the input local aerial view map now differ only by a translation, with no rotation or scaling transformation relation, and only the translation difference between them remains to be eliminated subsequently. The translation transformation relations in the x and y directions can be obtained directly by phase correlation solving.
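The sketch below illustrates S409 under the log-polar parameterisation assumed in the earlier sketches: the argmax offsets of phase correlation maps A and B are converted into an angle theta and a scale, and the ground point cloud image is warped with OpenCV accordingly. The shift-to-angle and shift-to-scale formulas depend on that assumed parameterisation and may differ in the actual network.

```python
import math
import cv2
import numpy as np

def apply_rotation_and_scale(ground_img, corr_a, corr_b, lpt_h=256, lpt_w=256):
    """Turn the peak offsets of phase correlation maps A (rotation) and B (scale)
    into an angle and a scale factor, then warp the ground point cloud image."""
    h, w = ground_img.shape[:2]
    # Offsets of the correlation peaks from the map centre.
    ay, _ = np.unravel_index(np.argmax(corr_a), corr_a.shape)
    _, bx = np.unravel_index(np.argmax(corr_b), corr_b.shape)
    dy = ay - lpt_h // 2
    dx = bx - lpt_w // 2
    theta = math.degrees(dy * 2 * math.pi / lpt_h)              # rotation angle in degrees
    scale = math.exp(dx * math.log(min(h, w) / 2.0) / lpt_w)    # isotropic scale factor
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), theta, scale)
    return cv2.warpAffine(ground_img, M, (w, h)), theta, scale
```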
S410: and taking a fifth U-Net network U-Net5 and a sixth U-Net network U-Net6 which are trained in advance as two feature extractors, respectively taking a local bird's-eye view map and a new ground point cloud image as original input pictures of the two feature extractors U-Net5 and U-Net6, extracting isomorphic features in the two original input pictures, and obtaining an isomorphic fifth feature map and a isomorphic sixth feature map. At this time, only the translation transformation relationship between the original input pictures is retained in the fifth feature map and the sixth feature map, and the rotation and scaling transformation relationship does not exist.
S411: and performing phase correlation solving on the fifth feature map and the sixth feature map obtained in the step S410 in a phase correlation layer (DC) to form a phase correlation map C, and performing argmax operation on the phase correlation map C to obtain a translation transformation relation in the x direction between the local aerial view map and the ground point cloud image.
S412: and taking a pre-trained seventh U-Net network U-Net7 and an eighth U-Net network U-Net8 as two feature extractors, respectively taking a local aerial view map and a new ground point cloud image as respective original input pictures of the two feature extractors U-Net7 and U-Net8, extracting isomorphic features in the two original input pictures, and obtaining an isomorphic seventh feature map and an isomorphic eighth feature map. At this time, only the translation transformation relationship between the original input pictures is retained in the seventh feature map and the eighth feature map, and the rotation and scaling transformation relationship does not exist.
S413: and performing phase correlation solving on the seventh feature map and the eighth feature map obtained in the step S412 in a phase correlation layer (DC) to form a phase correlation map D, and performing argmax operation on the phase correlation map D to obtain a translation transformation relation in the y direction between the local aerial view map and the ground point cloud image.
The translation transformation relation in the X direction and the translation transformation relation in the Y direction are essentially a distance X that the ground point cloud image needs to be translated in the X direction and a distance Y that the ground point cloud image needs to be translated in the Y direction to realize registration with the local aerial view map.
Thus, the pose estimation of the invention is carried out in two stages, yielding estimates of four degrees of freedom (X, Y, theta, scale). The estimation of the rotation and scaling transformation relations is first achieved through the rotation-scaling stage of S401 to S409, and the estimation of the translation transformation relation is then achieved through the translation stage of S410 to S413. Integrating the results of S404, S408, S411 and S413 gives the pose estimates of the three transformation relations, rotation, scaling and translation, between the heterogeneous local aerial view map and the ground point cloud image, completing the pose estimation of the bird's-eye-view map and the ground point cloud image.
It should be noted, however, that the final purpose of the deep phase correlation network is not the pose estimate itself, but the phase correlation map E that is ultimately used to compute the probability distribution map. The phase correlation map E is obtained from a network branch of the above pose estimation process by superimposing the phase correlation map C of step S411 and the phase correlation map D of step S413.
S414: the phase correlation diagram C output in the step S411 and the phase correlation diagram D output in the step S413 are superimposed, and the superimposition is performed by pixel-by-pixel summation, resulting in a phase correlation diagram E. Since the phase correlation diagram E is obtained by superimposing two phase correlation diagrams, a normalization operation needs to be performed, and then the normalized phase correlation diagram E is taken as a final output for performing subsequent probability distribution diagram calculation.
Accurate output of the phase correlation map E therefore still requires accurate phase correlation maps C and D, so the deep phase correlation network must be trained with the aim of maximizing the final pose estimation accuracy. During training, the 8 U-Net networks in the deep phase correlation network are trained in advance, and a reasonable loss function needs to be set so that each U-Net network can accurately extract isomorphic features. The total loss function is a weighted sum of the rotation transformation loss, the scaling transformation loss, the x-direction translation loss and the y-direction translation loss between the local aerial view map and the ground point cloud image, and the weights can be adjusted according to the actual situation.
In this embodiment, the weights of the four losses in the total loss function are all 1, and the L1 loss is used for each of them. The four loss functions are as follows:
The rotation relation theta predicted in S404 is denoted theta_predict, the scaling relation scale predicted in S408 is denoted scale_predict, the x-direction translation X predicted in S411 is denoted x_predict, and the y-direction translation Y predicted in S413 is denoted y_predict. Each training pass therefore yields the translation (x_predict, y_predict), rotation (theta_predict) and scaling (scale_predict) relations between the two heterogeneous pictures.
1) The 1-norm distance loss between theta_predict and its ground truth theta_gt is computed as L_theta = |theta_gt - theta_predict|, and L_theta is back-propagated to train U-Net1 and U-Net2, so that better features for solving theta_predict can be extracted.
2) The 1-norm distance loss between scale_predict and its ground truth scale_gt is computed as L_scale = |scale_gt - scale_predict|, and L_scale is back-propagated to train U-Net3 and U-Net4, so that better features for solving scale_predict can be extracted.
3) The 1-norm distance loss between x_predict and its ground truth x_gt is computed as L_x = |x_gt - x_predict|, and L_x is back-propagated to train U-Net5 and U-Net6, so that better features for solving x_predict can be extracted.
4) The 1-norm distance loss between y_predict and its ground truth y_gt is computed as L_y = |y_gt - y_predict|, and L_y is back-propagated to train U-Net7 and U-Net8, so that better features for solving y_predict can be extracted.
The total loss function is therefore L = L_x + L_y + L_theta + L_scale, and the model parameters of the 8 U-Net networks are optimized by gradient descent during training so as to minimize the total loss. The 8 trained U-Net networks form the deep phase correlation network used for pose estimation on actual heterogeneous images; the pose of two heterogeneous images can be estimated in this network according to the methods of S401 to S413, and in the process accurate phase correlation maps C and D are output.
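A sketch of the total training loss described above, assuming the predicted and ground-truth quantities are collected in dictionaries of PyTorch tensors (an organisational choice made for illustration only):

```python
import torch

def total_loss(pred, gt):
    """Sum of the four L1 losses used to train the eight U-Nets; all weights
    are 1, as stated above."""
    l_theta = torch.abs(gt["theta"] - pred["theta"]).mean()  # trains U-Net1/U-Net2
    l_scale = torch.abs(gt["scale"] - pred["scale"]).mean()  # trains U-Net3/U-Net4
    l_x = torch.abs(gt["x"] - pred["x"]).mean()              # trains U-Net5/U-Net6
    l_y = torch.abs(gt["y"] - pred["y"]).mean()              # trains U-Net7/U-Net8
    return l_x + l_y + l_theta + l_scale
```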
S5: a Softmax operation is performed on the normalized phase correlation map E to convert it into a 0-1 distribution, obtaining the probability distribution map.
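A sketch of this Softmax step, applied over the whole phase correlation map E so that every pixel becomes a value between 0 and 1 and the map sums to 1:

```python
import torch

def correlation_to_probability(corr_e):
    """Convert the (H, W) phase correlation map E into a probability
    distribution map by a Softmax over all of its pixels."""
    flat = torch.softmax(corr_e.flatten(), dim=0)
    return flat.view_as(corr_e)
```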
S6: and on the basis of the probability distribution map, positioning the accurate position of the mobile robot on the map based on a particle filter positioning method.
The particle filter positioning method belongs to the prior art; an implementation used in this embodiment is briefly described below:
the method for positioning the accurate position of the mobile robot on the map by the particle filter positioning method comprises the following steps:
S61: first, particle swarm initialization is carried out, and a preset number of points are scattered near the current position of the mobile robot in the bird's eye view map, wherein each point represents an assumed position of the mobile robot.
S62: and then acquiring a probability distribution map, and mapping the points into the probability distribution map, wherein the probability value of a point in the probability distribution map represents the weight of the point, and the higher the weight is, the higher the possibility that the mobile robot is at the position is.
S63: After the weights of the particles are obtained, a resampling operation is carried out according to the weights using the roulette-wheel method, so that particles with large weights survive while particles with small weights are gradually filtered out.
S64: the mobile robot moves all the particles according to the motion estimated based on the odometer, and the particles perform updating calculation of the weight again according to the current probability distribution map.
S65: and continuously iterating and repeating the steps S63 and S64 to enable the particles to gradually gather near the real position, and determining the accurate position of the mobile robot on the map by using the position center of the final gathered particles after iteration is finished.
Based on the particle filter positioning algorithm, positioning can gradually converge to a more accurate degree along with the movement of the robot.
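The following NumPy sketch shows one iteration of the resampling, motion update and re-weighting described in S63 and S64; the noise model and the simplified mapping of particle positions into the probability distribution map (which in the method is centred on the rough position estimate) are assumptions made for illustration.

```python
import numpy as np

def particle_filter_step(particles, weights, motion, prob_map, map_resolution, noise=0.05):
    """One iteration: resample by weight, propagate with the odometry-estimated
    motion, then re-weight each particle from the probability distribution map.

    particles : (N, 2) hypothesised (x, y) positions in the map frame (metres)
    motion    : (dx, dy) displacement estimated by the odometer since the last step
    prob_map  : (H, W) probability distribution map produced by the network
    """
    n = len(particles)
    # Resampling: particles with large weights survive, small ones are filtered out.
    idx = np.random.choice(n, size=n, p=weights / weights.sum())
    particles = particles[idx]
    # Motion update, with a little noise so the particle cloud does not collapse.
    particles = particles + np.asarray(motion) + np.random.normal(0.0, noise, particles.shape)
    # Weight update: map each particle into the probability distribution map.
    h, w = prob_map.shape
    cols = np.clip((particles[:, 0] / map_resolution).astype(int), 0, w - 1)
    rows = np.clip((particles[:, 1] / map_resolution).astype(int), 0, h - 1)
    weights = prob_map[rows, cols] + 1e-12
    return particles, weights / weights.sum()

# After convergence, the position estimate is the weighted centre of the particles:
# estimate = np.average(particles, axis=0, weights=weights)
```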
In one example, fig. 3 shows the map constructed by the drone in this embodiment, within which the robot moves. Fig. 4 shows a pair of input images for the deep phase correlation network model when the robot is at a certain position: the left image is the ground point cloud image obtained from the binocular camera data, and the right image is the local bird's-eye-view map cropped from the bird's-eye-view map around the rough position determined from the odometry data. After the two images are fed into the deep phase correlation network, the phase correlation map is output and converted into a probability distribution map, and the particle filter positioning algorithm then yields the positioning trajectory of the robot during its movement. The errors of the different methods were further quantified. The odometer of the ground mobile robot accumulates error while moving; the indexes in Table 1 show, for three different road sections on which the robot moves 200 m, the positioning error estimated directly by the odometer without any correction and the positioning error after correction by the method of the invention, in meters.
TABLE 1 Errors before and after correction, and correction time of the method of the invention

Road section      Error without correction    Error corrected by the method    Correction time using the method
Road section 1    23.1 m                      1.02 m                           30 ms
Road section 2    19.6 m                      0.91 m                           28 ms
Road section 3    26.7 m                      0.87 m                           33 ms
Therefore, the method can correct the rough position estimate determined by on-board sensors such as GPS and the odometer, eliminates the adverse effect of external factors such as illumination and occlusions on the positioning result, and greatly improves the robustness of autonomous positioning of the mobile robot.
The above-described embodiments are merely preferred embodiments of the present invention, which should not be construed as limiting the invention. Various changes and modifications may be made by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present invention. Therefore, the technical scheme obtained by adopting the mode of equivalent replacement or equivalent transformation is within the protection scope of the invention.

Claims (8)

1. A mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information is characterized by comprising the following steps:
s1: utilizing an unmanned aerial vehicle to carry out full coverage detection on a robot moving area, obtaining an image sequence of a downward-looking camera and parameters in a flight process, and recovering a bird's-eye view map of the robot moving area through a sparse point cloud map;
S2: detecting an area in front of the position by using a binocular camera carried on the mobile robot to form a ground point cloud rich in texture information, and observing the ground point cloud at a bird's-eye view angle to obtain a ground point cloud image;
s3: the mobile robot estimates its own position according to a sensor carried by the mobile robot, and a local aerial view map with the same size as the ground point cloud image is cropped from the aerial view map, centered on the position of the mobile robot;
s4: inputting the ground point cloud image and the local aerial view map into a depth phase correlation network, extracting robust features in the ground point cloud image and the local aerial view map through convolution operations, converting the extracted features into feature maps with the same size as the original images through deconvolution operations, removing the translation components of the feature maps of the ground point cloud image and the local aerial view map through a Fourier transform operation, converting the rotation components into translations through a log-polar transform operation, and finally obtaining a phase correlation map through a phase correlation operation;
s5: performing Softmax operation on the phase correlation diagram to convert the phase correlation diagram into 0-1 distribution to obtain a probability distribution diagram;
S6: and on the basis of the probability distribution map, positioning the accurate position of the mobile robot on the map based on a particle filter positioning method.
2. The method as claimed in claim 1, wherein in S1, the drone first detects a robot moving area, flies over the area to be detected and returns after covering the area, and returns the image sequence of the downward-looking camera, the flight IMU data and the camera parameter information to the ground workstation, and the ground workstation first estimates the pose of each frame of image in the image sequence by SLAM technique, then constructs a sparse point cloud map of the ground by feature point matching, and finally interpolates the sparse point cloud by using the image and constructs a Mesh surface, and recovers a bird's-eye view map of the area to be detected.
3. The method of claim 1, wherein in step S2, the binocular camera mounted on the mobile robot is used to estimate depth information of an area in front of the location and form a point cloud, image information of the left eye camera is added to the formed point cloud in a texture form to form a ground point cloud rich in texture information, and the ground point cloud is observed at a bird's-eye view angle to obtain a ground point cloud image at the bird's-eye view angle.
4. The method as claimed in claim 1, wherein the mobile robot is located according to GPS or odometer in S3.
5. The method as claimed in claim 1, wherein the depth-phase correlation network in S4 includes 8 different U-Net networks, and wherein the method for outputting the phase correlation map for the input ground point cloud image and the local bird's-eye view map includes:
s401: taking a first U-Net network and a second U-Net network which are trained in advance as two feature extractors, respectively taking a local aerial view map and a ground point cloud image as respective original input images of the two feature extractors, and extracting isomorphic features in the two original input images to obtain an isomorphic first feature image and an isomorphic second feature image;
s402: respectively carrying out Fourier transform on the first characteristic diagram and the second characteristic diagram obtained in the S401, and then taking respective magnitude spectrums;
s403: respectively carrying out log-polar coordinate transformation on the two magnitude spectrums obtained in the S402 to convert the two magnitude spectrums from a Cartesian coordinate system to a log-polar coordinate system, so that the rotation transformation between the two magnitude spectrums under the Cartesian coordinate system is mapped to the translation transformation in the y direction in the log-polar coordinate system;
S404: performing phase correlation solving on the amplitude spectrums subjected to the coordinate transformation in the step S403 to obtain a translation transformation relation between the two amplitude spectrums, and performing retransformation according to a mapping relation between a Cartesian coordinate system and a logarithmic polar coordinate system in the step S403 to obtain a rotation transformation relation between the local aerial view map and the ground point cloud image;
S405: taking a third U-Net network and a fourth U-Net network which are trained in advance as two feature extractors, taking the local aerial view map and the ground point cloud image respectively as their original input images, and extracting isomorphic features from the two original input images to obtain an isomorphic third feature map and an isomorphic fourth feature map;
S406: performing Fourier transform on the third feature map and the fourth feature map obtained in S405 respectively, and then taking their respective magnitude spectra;
S407: performing log-polar coordinate transformation on the two magnitude spectra obtained in S406 respectively to convert them from the Cartesian coordinate system to the log-polar coordinate system, so that the scaling transformation between the two magnitude spectra in the Cartesian coordinate system is mapped to a translation transformation in the x direction in the log-polar coordinate system;
S408: performing phase correlation solving on the magnitude spectra after the coordinate transformation in S407 to obtain the translation transformation relation between the two magnitude spectra, and transforming it back according to the mapping between the Cartesian coordinate system and the log-polar coordinate system in S407 to obtain the scaling transformation relation between the local aerial view map and the ground point cloud image;
S409: performing corresponding rotation and scaling transformation on the ground point cloud image according to the rotation transformation relation and the scaling transformation relation obtained in S404 and S408 to obtain a new ground point cloud image;
S410: taking a fifth U-Net network and a sixth U-Net network which are trained in advance as two feature extractors, taking the local aerial view map and the new ground point cloud image respectively as their original input images, and extracting isomorphic features from the two original input images to obtain an isomorphic fifth feature map and an isomorphic sixth feature map;
S411: performing phase correlation solving on the fifth feature map and the sixth feature map obtained in S410 to obtain a first phase correlation map, which is used for further calculating the translation transformation relation in the x direction between the local aerial view map and the ground point cloud image;
S412: taking a seventh U-Net network and an eighth U-Net network which are trained in advance as two feature extractors, taking the local aerial view map and the new ground point cloud image respectively as their original input images, and extracting isomorphic features from the two original input images to obtain an isomorphic seventh feature map and an isomorphic eighth feature map that retain only the translation transformation relation between the original input images;
S413: performing phase correlation solving on the seventh feature map and the eighth feature map obtained in S412 to obtain a second phase correlation map, which is used for further calculating the translation transformation relation in the y direction between the local aerial view map and the ground point cloud image;
S414: superimposing and summing the first phase correlation map and the second phase correlation map, normalizing the result, and taking the normalized phase correlation map as the final output phase correlation map on which the Softmax operation is performed.
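A minimal numpy/scipy sketch of the classical Fourier-Mellin computation underlying S402-S408 (the U-Net feature extraction is omitted; the function names, grid sizes and axis convention are illustrative assumptions, not the claimed network):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(mag: np.ndarray, n_radii: int = 256, n_angles: int = 360) -> np.ndarray:
    """Resample a centred magnitude spectrum onto a log-polar grid
    (rows = log radius, columns = angle)."""
    h, w = mag.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = np.hypot(cy, cx)
    radii = np.exp(np.linspace(0.0, np.log(max_r), n_radii))[:, None]
    theta = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)[None, :]
    ys = cy + radii * np.sin(theta)
    xs = cx + radii * np.cos(theta)
    return map_coordinates(mag, [ys, xs], order=1, mode='constant')

def phase_correlation(a: np.ndarray, b: np.ndarray):
    """Return the phase correlation surface between a and b and its peak location."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-8)).real
    return corr, np.unravel_index(np.argmax(corr), corr.shape)

def rotation_and_scale(img_a: np.ndarray, img_b: np.ndarray):
    """Estimate rotation (degrees) and scale between two same-size images via
    FFT magnitude -> log-polar resampling -> phase correlation."""
    mag_a = np.fft.fftshift(np.abs(np.fft.fft2(img_a)))   # translation-invariant magnitude
    mag_b = np.fft.fftshift(np.abs(np.fft.fft2(img_b)))
    lp_a, lp_b = log_polar(mag_a), log_polar(mag_b)
    _, (dr, dtheta) = phase_correlation(lp_a, lp_b)

    n_radii, n_angles = lp_a.shape
    if dr > n_radii // 2:        # unwrap negative shifts
        dr -= n_radii
    if dtheta > n_angles // 2:
        dtheta -= n_angles

    max_r = np.hypot((img_a.shape[0] - 1) / 2.0, (img_a.shape[1] - 1) / 2.0)
    scale = np.exp(dr * np.log(max_r) / (n_radii - 1))    # log-radius shift -> scale
    rotation = 360.0 * dtheta / n_angles                  # angle shift -> rotation
    return rotation, scale
```

The translations in S411 and S413 can then be read off as the peak offset of the phase correlation surface computed on the rotation- and scale-corrected images.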
6. The method of claim 5, wherein the 8 U-Net networks in the depth phase correlation network are independent of each other, and each U-Net network extracts a feature map of the same size as the input original image through 4 encoder layers downsampled by convolution operations and 4 decoder layers upsampled by deconvolution operations; the 8 U-Net networks are trained in advance, and the total training loss function is the weighted sum of the rotation transformation relation loss, the scaling transformation relation loss, the translation transformation relation loss in the x direction and the translation transformation relation loss in the y direction between the local aerial view map and the ground point cloud image.
7. The method for positioning the mobile robot based on the unmanned aerial vehicle map and the ground binocular information according to claim 6, wherein the weights of the four losses in the total loss function are all 1, and all four losses are L1 losses.
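A minimal PyTorch-style sketch of the total training loss described in claims 6 and 7, assuming the network's four predictions and the corresponding ground-truth values are available as tensors; the dictionary keys are illustrative:

```python
import torch
import torch.nn.functional as F

def total_loss(pred: dict, gt: dict, weights=(1.0, 1.0, 1.0, 1.0)) -> torch.Tensor:
    """Weighted sum of four L1 losses: rotation, scale, x translation, y translation.
    With all weights equal to 1 (claim 7) this reduces to a plain sum."""
    w_rot, w_scale, w_x, w_y = weights
    return (w_rot * F.l1_loss(pred['rotation'], gt['rotation'])
            + w_scale * F.l1_loss(pred['scale'], gt['scale'])
            + w_x * F.l1_loss(pred['translation_x'], gt['translation_x'])
            + w_y * F.l1_loss(pred['translation_y'], gt['translation_y']))
```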
8. The method for positioning a mobile robot based on the unmanned aerial vehicle map and the ground binocular information of claim 1, wherein in S6, the accurate position of the mobile robot on the map is located by the particle filter positioning method as follows:
S61: first scattering a preset number of particles near the current position of the mobile robot, each particle representing a hypothesized position of the mobile robot;
S62: mapping the particles into the probability distribution map, wherein the probability value at a particle's position in the probability distribution map is taken as the weight of that particle, and the higher the weight, the more likely the mobile robot is at that position;
S63: after the particle weights are obtained, performing a resampling operation according to the weights, so that particles with large weights survive while particles with small weights are gradually filtered out;
S64: as the mobile robot moves, propagating all particles according to the estimated motion, and updating the particle weights according to the probability distribution map;
S65: iterating S63 and S64 continuously so that the particles gradually gather near the true position, and after the iteration ends, determining the accurate position of the mobile robot on the map from the center of the finally gathered particles.
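A minimal numpy sketch of the particle-filter loop in S61-S65, assuming the probability distribution map, a rough initial position and a sequence of motion estimates are available; the particle count, map resolution and noise levels are illustrative assumptions:

```python
import numpy as np

def locate_with_particle_filter(prob_map, init_xy, motions, n_particles=500,
                                resolution=0.1, spread=1.0, seed=0):
    """Estimate the robot position from a 2-D probability distribution map.

    prob_map : 2-D array giving the probability of the robot being at each map cell
    init_xy  : rough initial (x, y) position in metres
    motions  : iterable of per-step (dx, dy) motion estimates in metres
    """
    rng = np.random.default_rng(seed)
    # S61: scatter particles around the current rough position.
    particles = np.asarray(init_xy) + rng.normal(scale=spread, size=(n_particles, 2))

    def weights_of(p):
        # S62: the probability value at a particle's map cell is its weight.
        rows = (p[:, 1] / resolution).astype(int).clip(0, prob_map.shape[0] - 1)
        cols = (p[:, 0] / resolution).astype(int).clip(0, prob_map.shape[1] - 1)
        w = prob_map[rows, cols] + 1e-12
        return w / w.sum()

    w = weights_of(particles)
    for dx, dy in motions:
        # S63: resample so high-weight particles survive and low-weight ones vanish.
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
        # S64: propagate every particle by the estimated motion (plus noise)
        # and recompute the weights from the probability distribution map.
        particles += np.array([dx, dy]) + rng.normal(scale=0.05, size=(n_particles, 2))
        w = weights_of(particles)

    # S65: the weighted centre of the gathered particles is the final estimate.
    return (particles * w[:, None]).sum(axis=0)
```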
CN202110797651.8A 2021-07-14 2021-07-14 Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information Active CN113538579B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110797651.8A CN113538579B (en) 2021-07-14 2021-07-14 Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110797651.8A CN113538579B (en) 2021-07-14 2021-07-14 Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information

Publications (2)

Publication Number Publication Date
CN113538579A true CN113538579A (en) 2021-10-22
CN113538579B CN113538579B (en) 2023-09-22

Family

ID=78099192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110797651.8A Active CN113538579B (en) 2021-07-14 2021-07-14 Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information

Country Status (1)

Country Link
CN (1) CN113538579B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017091008A1 (en) * 2015-11-26 2017-06-01 Samsung Electronics Co., Ltd. Mobile robot and control method therefor
CN107941217A (en) * 2017-09-30 2018-04-20 杭州迦智科技有限公司 A kind of robot localization method, electronic equipment, storage medium, device
US20200306983A1 (en) * 2019-03-27 2020-10-01 Lg Electronics Inc. Mobile robot and method of controlling the same
CN111578958A (en) * 2020-05-19 2020-08-25 山东金惠新达智能制造科技有限公司 Mobile robot navigation real-time positioning method, system, medium and electronic device
CN112200869A (en) * 2020-10-09 2021-01-08 浙江大学 Robot global optimal visual positioning method and device based on point-line characteristics
CN113084798A (en) * 2021-03-16 2021-07-09 浙江大学湖州研究院 Robot calibration device based on multi-station measurement

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PUENTE I et al.: "Review of mobile mapping and surveying technologies", Measurement, vol. 46, no. 7, pages 2127-2145, XP055631662, DOI: 10.1016/j.measurement.2013.03.006 *
WANG L et al.: "Map-based localization method for autonomous vehicles using 3D-LIDAR", IFAC-PapersOnLine, vol. 50, no. 1, pages 276-281, XP055958775, DOI: 10.1016/j.ifacol.2017.08.046 *
ZHANG Weiwei; CHEN Chao; XU Jun: "Localization and mapping method fusing laser and visual point cloud information", Computer Applications and Software, no. 07 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797425A (en) * 2023-01-19 2023-03-14 中国科学技术大学 Laser global positioning method based on point cloud aerial view and rough-to-fine strategy

Also Published As

Publication number Publication date
CN113538579B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
EP3520076B1 (en) Computer vision systems and methods for detecting and modeling features of structures in images
CN108152831B (en) Laser radar obstacle identification method and system
CN107808407A (en) Unmanned plane vision SLAM methods, unmanned plane and storage medium based on binocular camera
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
CN107796384B (en) 2D vehicle positioning using geographic arcs
CN111768489B (en) Indoor navigation map construction method and system
EP3291178B1 (en) 3d vehicle localizing using geoarcs
CN112967392A (en) Large-scale park mapping and positioning method based on multi-sensor contact
CN113552585B (en) Mobile robot positioning method based on satellite map and laser radar information
Lippiello et al. Closed-form solution for absolute scale velocity estimation using visual and inertial data with a sliding least-squares estimation
CN115574816A (en) Bionic vision multi-source information intelligent perception unmanned platform
CN114077249B (en) Operation method, operation equipment, device and storage medium
CN114529585A (en) Mobile equipment autonomous positioning method based on depth vision and inertial measurement
CN113538579B (en) Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information
CN113313824A (en) Three-dimensional semantic map construction method
Hanlon et al. Active visual localization for multi-agent collaboration: A data-driven approach
CN116704029A (en) Dense object semantic map construction method and device, storage medium and electronic equipment
CN113960614A (en) Elevation map construction method based on frame-map matching
Warren Long-range stereo visual odometry for unmanned aerial vehicles
CN112747752A (en) Vehicle positioning method, device, equipment and storage medium based on laser odometer
Park et al. Localization of an unmanned ground vehicle based on hybrid 3D registration of 360 degree range data and DSM
Shen et al. Feature extraction from vehicle-borne laser scanning data
Cao et al. Autonomous Landing Spot Detection for Unmanned Aerial Vehicles Based on Monocular Vision
Hwang et al. Surface estimation ICP algorithm for building a 3D map by a scanning LRF
Annaiyan Collision-Free Navigation of Small UAVs in Complex Urban Environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant