CN113538579B - Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information - Google Patents

Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information

Info

Publication number
CN113538579B
Authority
CN
China
Prior art keywords
map
point cloud
mobile robot
phase correlation
ground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110797651.8A
Other languages
Chinese (zh)
Other versions
CN113538579A (en)
Inventor
Wang Yue (王越)
Xu Xuecheng (许学成)
Chen Zexi (陈泽希)
Xiong Rong (熊蓉)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110797651.8A priority Critical patent/CN113538579B/en
Publication of CN113538579A publication Critical patent/CN113538579A/en
Application granted granted Critical
Publication of CN113538579B publication Critical patent/CN113538579B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a mobile robot positioning method based on an unmanned aerial vehicle map and ground binocular information, and belongs to the field of mobile robot positioning. In the method, a ground point cloud image of the mobile robot's current location is obtained with the robot's binocular camera, while a local image is cropped from the unmanned aerial vehicle's bird's-eye-view map around a coarse position estimate provided by the on-board sensors. The two images are passed through a deep phase correlation network to obtain a phase correlation map, which is converted into a probability distribution map, so that accurate positioning of the robot can be achieved with a particle filter localization algorithm. The method can correct the coarse position estimate given by on-board sensors such as GPS and odometry, eliminates the adverse effect of external factors such as illumination and occlusion on the positioning result, and greatly improves the robustness of autonomous positioning of the mobile robot.

Description

Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information
Technical Field
The invention belongs to the field of mobile robot positioning, and particularly relates to a mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information.
Background
Self-positioning technology is a very important part of mobile robotic systems, especially in rescue scenarios such as earthquakes and landslides. A ground mobile robot is heavily constrained while moving because of its large payload and its locomotion mode, and it must detour around the obstacles it encounters. In a rescue site with many obstacles, however, positioning and navigation are difficult: the obstacles block the robot's view, so it is hard to perceive the environment over a large range and navigation planning cannot be carried out. Cooperation with an unmanned aerial vehicle is therefore very important. A UAV is agile and has a wide field of view, but a small payload. In ground-air robot cooperation, the unmanned aerial vehicle can play the role of a path scout.
In view of the above, designing a method for positioning a mobile robot by means of a map constructed by an unmanned aerial vehicle together with an on-board binocular camera is a technical problem that the prior art needs to solve.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provide a mobile robot positioning method based on unmanned aerial vehicle map and binocular camera information.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information comprises the following steps:
s1: performing full coverage detection on a robot moving area by using an unmanned aerial vehicle, obtaining a downward-looking camera image sequence and parameters in the flight process, and recovering a bird's-eye view map of the robot moving area through a sparse point cloud map;
s2: detecting a region in front of a position of the mobile robot by using a binocular camera carried on the mobile robot to form a ground point cloud rich in texture information, and observing the ground point cloud by using a bird's eye view to obtain a ground point cloud image;
s3: the mobile robot estimates the position of the mobile robot according to the self-contained sensor, and intercepts a local aerial view map with the same size as the ground point cloud image from the aerial view map by taking the position of the mobile robot as the center;
s4: inputting the ground point cloud image and the local aerial view map into a depth-phase correlation network, extracting robust features in the ground point cloud image and the local aerial view map through convolution operation, converting the extracted features into feature images with the same size as an original image through deconvolution operation, removing translation components of the feature images of the ground point cloud image and the feature images of the local aerial view map through Fourier transform operation, converting rotation components into translation through logarithmic polarity transformation operation, and finally obtaining a phase correlation image through phase correlation operation;
S5: performing Softmax operation on the phase correlation map to convert the phase correlation map into 0-1 distribution, so as to obtain a probability distribution map;
s6: and positioning the accurate position of the mobile robot on the map based on the particle filter positioning method on the basis of the probability distribution map.
In the S1, the unmanned aerial vehicle detects a moving area of the robot, flies a distance above a required detection area and returns after covering the area, the obtained sequence of images of the looking-down camera, the flying IMU and the camera parameter information are transmitted back to the ground workstation, the ground workstation firstly estimates the pose of each frame of image in the sequence of images through the SLAM technology, then a sparse point cloud map of the ground is constructed through feature point matching, finally the sparse point cloud is interpolated and a Mesh surface is constructed by utilizing the images, and a bird' S eye view map of the required detection area is recovered.
Preferably, in the step S2, the depth information of the area in front of the position is estimated by a binocular camera mounted on the mobile robot, a point cloud is formed, the image information of the left-eye camera is given to the formed point cloud in the form of a texture, a ground point cloud rich in the texture information is formed, and the ground point cloud is observed at a bird 'S eye view angle, so that a ground point cloud image at the bird' S eye view angle is obtained.
Preferably, in the step S3, the mobile robot estimates its own position according to a GPS or an odometer.
Preferably, in the step S4, the depth phase correlation network includes 8 different U-Net networks, and the specific method for outputting the phase correlation map for the input ground point cloud image and the local bird' S eye map is as follows:
s401: taking a first U-Net network and a second U-Net network which are trained in advance as two feature extractors, taking a local aerial view map and a ground point cloud image as respective original input pictures of the two feature extractors, and extracting isomorphic features in the two original input pictures to obtain a first isomorphic feature picture and a second isomorphic feature picture;
s402: performing Fourier transform on the first characteristic diagram and the second characteristic diagram obtained in the step S401 respectively, and then taking respective amplitude spectrums;
s403: respectively carrying out logarithmic polar coordinate transformation on the two amplitude spectrums obtained in the step S402, so that the two amplitude spectrums are transformed into the logarithmic polar coordinate system from a Cartesian coordinate system, and therefore, the rotation transformation under the Cartesian coordinate system between the two amplitude spectrums is mapped into translation transformation in the y direction in the logarithmic polar coordinate system;
s404: carrying out phase correlation solving on the amplitude spectrum after the two coordinate transformations in the S403 to obtain a translation transformation relation between the two, and reconverting according to the mapping relation between the Cartesian coordinate system and the logarithmic polar coordinate system in the S403 to obtain a rotation transformation relation between the local aerial view map and the ground point cloud image;
S405: taking a third U-Net network and a fourth U-Net network which are trained in advance as two feature extractors, taking a local aerial view map and a ground point cloud image as original input pictures of the two feature extractors respectively, and extracting isomorphic features in the two original input pictures to obtain a third isomorphic feature picture and a fourth isomorphic feature picture;
s406: performing Fourier transform on the third characteristic diagram and the fourth characteristic diagram obtained in the step S405 respectively, and then taking respective amplitude spectrums;
s407: respectively carrying out logarithmic polar coordinate transformation on the two amplitude spectrums obtained in the step S406, so that the two amplitude spectrums are transformed into a logarithmic polar coordinate system from a Cartesian coordinate system, and scaling transformation under the Cartesian coordinate system between the two amplitude spectrums is mapped into translational transformation in the x direction in the logarithmic polar coordinate system;
s408: carrying out phase correlation solving on the amplitude spectrum after the two coordinate transformations in the S407 to obtain a translation transformation relation between the two, and reconverting according to the mapping relation between the Cartesian coordinate system and the logarithmic polar coordinate system in the S407 to obtain a scaling transformation relation between the local aerial view map and the ground point cloud image;
s409: performing corresponding rotation and scaling transformation on the ground point cloud image according to the rotation transformation relation and the scaling transformation relation obtained in S404 and S408 to obtain a new ground point cloud image;
S410: taking a fifth U-Net network and a sixth U-Net network which are trained in advance as two feature extractors, taking a local aerial view map and a new ground point cloud image as original input pictures of the two feature extractors respectively, and extracting isomorphic features in the two original input pictures to obtain an isomorphic fifth feature picture and an isomorphic sixth feature picture;
s411: carrying out phase correlation solving on the fifth characteristic diagram and the sixth characteristic diagram obtained in the step S410 to obtain a first phase correlation diagram, wherein the first phase correlation diagram is used for further calculating a translation transformation relation in the x direction between the local aerial view map and the ground point cloud image;
s412: taking a seventh U-Net network and an eighth U-Net network which are trained in advance as two feature extractors, taking a local aerial view map and a new ground point cloud image as original input pictures of the two feature extractors respectively, and extracting isomorphic features in the two original input pictures to obtain a seventh feature picture and an eighth feature picture which are isomorphic and only retain a translation transformation relation between the original input pictures;
s413: carrying out phase correlation solving on the seventh feature map and the eighth feature map obtained in the step S412 to obtain a second phase correlation map, wherein the second phase correlation map is used for further calculating a translation transformation relationship in the y direction between the local aerial view map and the ground point cloud image;
S414: and carrying out superposition summation on the first phase correlation diagram and the second phase correlation diagram, and then normalizing the first phase correlation diagram and the second phase correlation diagram to obtain a phase correlation diagram which is used as a final output for carrying out Softmax operation.
Preferably, in the depth phase correlation network, 8U-Net networks are independent, and each U-Net network extracts a feature map with the same size as an input original map by respectively using 4 encoder layers for downsampling through convolution operation and 4 decoder layers for upsampling through deconvolution operation; the 8U-Net networks are trained in advance, and the total loss function of the training is a weighted sum of the rotation transformation relation loss, the scaling transformation relation loss, the translation transformation relation loss in the x direction and the translation transformation relation loss in the y direction between the local aerial view map and the ground point cloud image.
Preferably, the weighting weights of the four losses in the total loss function are all 1, and all the four losses adopt L1 loss.
Preferably, in S6, the method for positioning the accurate position of the mobile robot on the map based on the particle filter positioning method is as follows:
s61: firstly, scattering a preset number of points near the current position of the mobile robot, wherein each point represents an assumed position of the mobile robot;
S62: mapping the points into the probability distribution diagram, wherein the probability value of a point in the probability distribution diagram represents the weight of the point, and the greater the weight is, the greater the probability of the mobile robot at the position is;
s63: after the weight of the particles is obtained, resampling operation is carried out according to the weight, so that the particles with large weight continuously exist, and the particles with small weight are filtered gradually;
s64: the mobile robot moves all particles according to the estimated motion, and the particles update and calculate weights again according to the probability distribution map;
s65: and (3) repeating the steps (S63) and (S64) continuously and iteratively to gradually gather the particles near the real position, and determining the accurate position of the mobile robot on the map by using the position center of the final gathered particles after the iteration is finished.
Compared with the prior art, the invention has the following beneficial effects:
According to the invention, a ground point cloud image of the mobile robot's current location is obtained with the robot's binocular camera, while a local image is cropped from the unmanned aerial vehicle's bird's-eye-view map around the coarse position estimate given by the on-board sensors; the two images are passed through a deep phase correlation network to obtain a phase correlation map, which is converted into a probability distribution map, so that accurate positioning of the robot is achieved with a particle filter localization algorithm. The method can correct the coarse position estimate given by on-board sensors such as GPS and odometry, eliminates the adverse effect of external factors such as illumination and occlusion on the positioning result, and greatly improves the robustness of autonomous positioning of the mobile robot.
Drawings
FIG. 1 is a flow chart of the steps of the mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information;
fig. 2 is a model framework diagram of a deep phase correlation network.
Fig. 3 is a map constructed by the unmanned aerial vehicle in one example.
FIG. 4 shows, in one example, a ground point cloud image captured by the binocular camera and the local bird's-eye-view map cropped at the corresponding location.
Detailed Description
The invention is further illustrated and described below with reference to the drawings and detailed description. The technical features of the embodiments of the invention can be combined correspondingly on the premise of no mutual conflict.
The invention designs a method by which a mobile robot positions itself with the help of unmanned aerial vehicle detection and a binocular camera. The idea is as follows: an unmanned aerial vehicle first surveys the robot's moving area to build a corresponding bird's-eye-view map; the mobile robot then constructs a ground point cloud image from its binocular camera observations, a local bird's-eye-view map around the robot's position is cropped from the full map, and an end-to-end matching model is trained on pairs of local bird's-eye-view maps and ground point cloud images so that the model can match the two kinds of images for positioning. The model has a certain generalization capability: in practical use, a ground point cloud image built on the ground and a local bird's-eye-view map of the current detection area are simply fed into the previously trained model to generate a phase correlation map, from which a probability distribution map for positioning is generated, and accurate positioning of the robot is then achieved with a particle filter localization algorithm.
Specific embodiments of the above positioning method are described in detail below.
As shown in fig. 1, in a preferred embodiment of the present invention, a mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information is provided, which specifically includes the following steps:
s1: and carrying out full coverage detection on the moving area of the robot by using the unmanned aerial vehicle, and recovering a bird's-eye view map of the moving area of the robot through the sparse point cloud map by using the obtained downward-looking camera image sequence and parameters in the flight process.
The specific practice of converting unmanned aerial vehicle detection data into a bird's eye view map exists in the prior art. In this embodiment, the moving area of the robot may be detected by the unmanned aerial vehicle, the unmanned aerial vehicle flies for a distance above the required detection area and returns after covering the area, the obtained image sequence of the downward-looking camera, the flight IMU and the internal and external parameter information of the camera are returned to the ground workstation, the ground workstation firstly estimates the pose of each frame of image in the image sequence by the SLAM technology, then a sparse point cloud map of the ground is constructed by feature point matching, and finally the image is utilized to interpolate the sparse point cloud and construct a Mesh surface, so as to recover the bird's eye view map of the required detection area.
Acquiring the bird's-eye-view map with an unmanned aerial vehicle greatly widens the applicability of the method: an accurate map of the robot's moving area can be acquired on demand, avoiding the incomplete coverage and insufficient resolution of satellite maps, and offering much higher flexibility.
S2: and detecting the area in front of the position by using a binocular camera carried on the mobile robot to form a ground point cloud rich in texture information, and observing the ground point cloud with a bird's eye view angle to obtain a ground point cloud image.
In this embodiment, the depth information of the area in front of the position can be estimated by using a binocular camera mounted on the mobile robot, and the point cloud is formed, the image information of the left-eye camera is given to the formed point cloud in the form of texture, so as to form a ground point cloud rich in texture information, and the ground point cloud is observed with a bird's eye view angle, so as to obtain a ground point cloud image under the bird's eye view angle.
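As an illustration of this projection step, the sketch below rasterizes a textured point cloud into a top-down image. It is written in Python with NumPy only; the 0.05 m grid resolution, the 256-pixel image size, and the robot-frame axis convention (x forward, y left) are assumptions for illustration, not values stated in the patent.

```python
import numpy as np

def point_cloud_to_bev(points, colors, resolution=0.05, size=256):
    """Rasterise a textured point cloud (robot frame) into a bird's-eye-view image.

    points: (N, 3) array, x forward and y left, in metres (assumed convention).
    colors: (N, 3) uint8 RGB values taken from the left-eye camera image.
    """
    bev = np.zeros((size, size, 3), dtype=np.uint8)
    cols = (points[:, 1] / resolution + size / 2).astype(int)   # lateral offset -> column
    rows = (size - 1 - points[:, 0] / resolution).astype(int)   # forward distance -> row (top of image)
    keep = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
    bev[rows[keep], cols[keep]] = colors[keep]                  # texture colour of each projected point
    return bev
```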
S3: the mobile robot estimates the position of the mobile robot according to the self-contained sensor, and intercepts a local aerial view map with the same size as the ground point cloud image obtained in S2 from the aerial view map obtained in S1 by taking the position of the mobile robot as the center.
The estimate of the mobile robot's position can be obtained from its on-board sensors; for example, a GPS receiver or an odometer can localize the robot approximately. However, GPS accuracy is limited by the device itself and by the environment the robot is in, and large errors often appear under external interference, while an odometer only provides a rough position estimate. The robot's own estimate can therefore only serve as a coarse position and must be corrected by the subsequent steps of the invention.
In the present invention, this correction is realized by image matching between the ground point cloud image and a local bird's-eye-view map. The ground point cloud image comes from the scene around the mobile robot's position, and the local bird's-eye-view map comes from the map built after the unmanned aerial vehicle survey; if the ground point cloud image can be registered to the local bird's-eye-view map, the robot can be accurately positioned using the map information. Registering directly against the full bird's-eye-view map would be inefficient because the map is too large, so the preliminary estimate of the robot's position is used to narrow the search range. Considering the requirements of the subsequent registration, an image of the same size as the ground point cloud image is cropped from the map, centered on the estimated position, for registration with the ground point cloud image. In this embodiment, both the ground point cloud image and the local bird's-eye-view map are 256×256.
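A minimal sketch of this cropping step is given below, assuming the bird's-eye-view map is stored as a NumPy image whose pixel grid is aligned with the map frame at a known resolution; the 0.05 m/pixel resolution and the row/column convention are assumptions made for illustration.

```python
import numpy as np

def crop_local_map(aerial_map, est_xy, resolution=0.05, size=256):
    """Crop a size x size patch of the bird's-eye-view map centred on a coarse estimate.

    est_xy: coarse (x, y) position in metres from GPS/odometry, in the map frame (assumed).
    """
    row = int(est_xy[1] / resolution)   # map y -> image row (assumed convention)
    col = int(est_xy[0] / resolution)   # map x -> image column
    half = size // 2
    # Pad so that crops near the map border stay size x size.
    padded = np.pad(aerial_map, ((half, half), (half, half), (0, 0)), mode="constant")
    return padded[row:row + size, col:col + size]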
S4: and inputting the ground point cloud image and the local aerial view map into a depth phase correlation network, extracting robust features in the ground point cloud image and the local aerial view map through convolution operation, converting the extracted features into feature images with the same size as the original image through deconvolution operation, removing translation components of the feature images of the ground point cloud image and the feature images of the local aerial view map through Fourier transformation operation, converting rotation components into translation through logarithmic polarity transformation operation, and finally obtaining a phase correlation image through phase correlation operation.
Therefore, the core of the invention also needs to construct a depth phase correlation network, so that the depth phase correlation network can process the input ground point cloud image and the local aerial view map, realize heterogeneous matching of the ground point cloud image and the local aerial view map and output a phase correlation map.
As shown in fig. 2, the network framework of the deep phase correlation network constructed in a preferred embodiment of the invention consists mainly of 8 independent U-Net networks together with Fourier transform layers (FFT), log-polar transform layers (LPT) and phase correlation layers (DC). The input of the deep phase correlation network is a pair of heterogeneous images, namely the aforementioned local bird's-eye-view map Sample1 and ground point cloud image Sample2, and the final output is the three pose transformation relations, translation, rotation and scaling, required to register the two. The local bird's-eye-view map serves as the matching template, and after the pose transformation the ground point cloud image can be matched and stitched onto the local bird's-eye-view map.
To deal with the fact that heterogeneous images cannot be registered directly, a common approach is to extract features from both images and estimate the relative pose from these features instead of from the raw sensor measurements. In the conventional phase correlation algorithm, a high-pass filter is used to suppress random noise in the two inputs, and this step can be regarded as a feature extractor. For a pair of heterogeneous input images, however, the differences between the two are substantial and a high-pass filter is far from sufficient. Since there are no common features with which to supervise the feature extractor directly, the invention solves this problem with end-to-end learning. In the invention, 8 independent trainable U-Net networks (denoted U-Net1 to U-Net8) are constructed for the local bird's-eye-view map and the ground point cloud image in the rotation-scaling stage and the translation stage respectively. After the 8 U-Net networks have been trained under the supervision of the translation, rotation and scaling losses, they can extract isomorphic features, i.e. common features, from the heterogeneous images, turning two heterogeneous images into two isomorphic feature maps. If only 4 U-Net networks were used, the solutions for rotation and scaling would be coupled, and so would the solutions for x-translation and y-translation, and the features learned by such extractors perform poorly; therefore rotation, scaling, x-translation and y-translation are decoupled and a separate U-Net is trained for each, giving 8 U-Net networks in total and improving accuracy.
In this embodiment, the input and output sizes of the 8 independent U-Net networks are all 256×256. Each U-Net uses 4 encoder layers that downsample by convolution and 4 decoder layers that upsample by deconvolution to extract features of the same size as the input image, with skip connections between the encoder and decoder layers; the specific U-Net structure is prior art and is not repeated here. As training proceeds, the parameters of the 8 U-Nets are adjusted. Note that the network is lightweight, so its real-time performance is sufficient for the intended application scenarios.
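The sketch below shows one such feature extractor in PyTorch, with 4 down-sampling encoder stages, 4 up-sampling decoder stages and skip connections, returning a feature map of the same size as the 256×256 input. The channel widths, single-channel input and ReLU activations are assumptions, since the patent only specifies the overall structure.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with padding 1 keep the spatial size unchanged.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNetExtractor(nn.Module):
    """4 convolutional encoder stages, 4 deconvolutional decoder stages, skip connections;
    the output feature map has the same spatial size as the 256x256 input."""

    def __init__(self, c_in=1, c_out=1, base=16):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.encs = nn.ModuleList()
        prev = c_in
        for c in chs:
            self.encs.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(chs[-1], chs[-1] * 2)
        self.ups, self.decs = nn.ModuleList(), nn.ModuleList()
        prev = chs[-1] * 2
        for c in reversed(chs):
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))  # deconvolution upsampling
            self.decs.append(conv_block(c * 2, c))                     # after skip concatenation
            prev = c
        self.head = nn.Conv2d(chs[0], c_out, 1)

    def forward(self, x):
        skips = []
        for enc in self.encs:
            x = enc(x)
            skips.append(x)        # kept for the skip connection
            x = self.pool(x)       # downsampling stage
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decs, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)        # same spatial size as the input
```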
The Fourier transform layer (FFT) applies a Fourier transform to the feature maps extracted by the U-Net networks, removing the translation relation between the images while retaining the rotation and scaling relations. By the properties of the Fourier transform, translation affects only the phase of the spectrum and not its magnitude, whereas rotation and scaling do change the magnitude. Introducing the FFT therefore yields a representation that is insensitive to translation but sensitive to scaling and rotation, so translation can be ignored when scaling and rotation are solved for subsequently.
The log-polar transform layer (LPT) applies a log-polar transform to the FFT output, mapping the image from the Cartesian coordinate system to the log-polar coordinate system. Under this mapping, scaling and rotation in the Cartesian coordinate system become translations in the log-polar coordinate system. This change of coordinates yields a cross-correlation formulation for scaling and rotation, eliminating any exhaustive search in the overall deep phase correlation network.
The phase correlation layer (DC) performs the phase correlation solution, i.e. it computes the cross-correlation between the two magnitude spectra, and the translation relation between them is obtained from the resulting correlation. The detailed computation of the cross-correlation is prior art and is not repeated here.
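To make the three layers concrete, the NumPy sketch below computes a translation-invariant magnitude spectrum, resamples it onto a log-polar grid (rotation maps to a shift along the angle/y axis, scaling to a shift along the log-radius/x axis), and solves a standard phase correlation. The 256-bin grid sizes and nearest-neighbour resampling are assumptions made for brevity, not details from the patent.

```python
import numpy as np

def magnitude_spectrum(feat):
    # FFT magnitude: invariant to translation of the feature map, centred for log-polar sampling.
    return np.abs(np.fft.fftshift(np.fft.fft2(feat)))

def log_polar(mag, n_angles=256, n_radii=256):
    # Resample a centred magnitude spectrum onto a log-polar grid: rotation becomes a shift
    # along the angle (row/y) axis, scaling a shift along the log-radius (column/x) axis.
    h, w = mag.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.exp(np.linspace(0.0, np.log(np.hypot(cy, cx)), n_radii))
    ys = np.clip(np.round(cy + radii[None, :] * np.sin(angles[:, None])).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + radii[None, :] * np.cos(angles[:, None])).astype(int), 0, w - 1)
    return mag[ys, xs]

def phase_correlation(a, b):
    # Normalised cross-power spectrum; the peak of its inverse FFT gives the shift between a and b.
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-8)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, (dy, dx)
```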
The following describes the specific calculation process of the phase correlation diagram between the local aerial view map and the ground point cloud image based on the depth phase correlation network in detail, and the steps are as follows:
s401: taking a first U-Net network U-Net1 and a second U-Net network U-Net2 which are trained in advance as two feature extractors, taking a heterogeneous local aerial view map and a ground point cloud image as original input pictures of the two feature extractors U-Net1 and U-Net2 respectively (namely, the local aerial view map is input into the U-Net1, the ground point cloud image is input into the U-Net2, and the same is carried out below), and extracting isomorphic features in the two original input pictures to obtain a isomorphic first feature picture and a isomorphic second feature picture. At this time, the translation, rotation and scaling transformation relations between the original input pictures are simultaneously maintained in the first feature map and the second feature map.
S402: and (3) respectively carrying out first Fourier transform operation (marked as FFT 1) on the first characteristic diagram and the second characteristic diagram obtained in the S401, and then taking respective amplitude spectrums, wherein a rotation and scaling transformation relation between original input pictures is reserved between the two amplitude spectrums, and the translation transformation relation is filtered out in the FFT 1.
S403: the two amplitude spectrums obtained in S402 are respectively subjected to a first logarithmic polar coordinate transformation operation (denoted as LPT 1) to be transformed from a cartesian coordinate system into a logarithmic polar coordinate system, so that a rotational transformation in the cartesian coordinate system between the two amplitude spectrums is mapped to a translational transformation (denoted as Y) in the Y-direction in the logarithmic polar coordinate system.
S404: the two coordinate-transformed magnitude spectra from S403 are phase-correlated in a phase correlation layer (DC) to form phase correlation map A, and the translation relation between the two magnitude spectra is obtained by an argmax operation on it. Note that in LPT1 of S403 there is a mapping between rotation in the Cartesian coordinate system and translation Y along the y direction in the log-polar coordinate system, so this translation relation can be converted back through that mapping to obtain the rotation transformation relation between the local bird's-eye-view map and the ground point cloud image.
The above rotational transformation relationship is essentially the angle theta that the ground point cloud image needs to be rotated to achieve registration with the local aerial map.
S405: similarly, a third U-Net network U-Net3 and a fourth U-Net network U-Net4 which are trained in advance are used as two feature extractors, a heterogeneous local aerial view map and a ground point cloud image are respectively used as original input pictures of the two feature extractors U-Net3 and U-Net4, isomorphic features in the two original input pictures are extracted, and a third isomorphic feature picture and a fourth isomorphic feature picture are obtained. At this time, the translation, rotation and scaling transformation relation between the original input pictures is also reserved in the third feature map and the fourth feature map.
S406: the third feature map and the fourth feature map obtained in S405 are subjected to a second fourier transform operation (referred to as FFT 2), respectively, and then the respective amplitude spectrums are obtained. Also, the rotation and scaling transformation relation between the original input pictures is preserved between the two amplitude spectra while the translation transformation relation has been filtered out in the FFT 2.
S407: the two amplitude spectrums obtained in S406 are respectively subjected to a second logarithmic polar coordinate transformation operation (denoted as LPT 2) to be transformed from a cartesian coordinate system into a logarithmic polar coordinate system, so that scaling transformation under the cartesian coordinate system between the two amplitude spectrums is mapped to translational transformation X in the X direction in the logarithmic polar coordinate system.
S408: the two coordinate-transformed magnitude spectra from S407 are phase-correlated in a phase correlation layer (DC) to form phase correlation map B, and the translation relation between the two magnitude spectra is obtained by an argmax operation on it. Similarly, in LPT2 of S407 there is a mapping between scaling in the Cartesian coordinate system and translation X along the x direction in the log-polar coordinate system, so this translation relation can be converted back through that mapping to obtain the scaling transformation relation between the local bird's-eye-view map and the ground point cloud image.
The scaling transformation relation is essentially the scale factor by which the ground point cloud image must be scaled to be registered with the local bird's-eye-view map.
Thus, through the above steps, a rotation transformation relationship and a scaling transformation relationship between the local bird's eye map and the ground point cloud image have been obtained.
S409: the ground point cloud image is rotated and scaled according to the rotation and scaling relations obtained in S404 and S408, giving a new ground point cloud image. After this rotation and scaling, the angular and scale differences between the local bird's-eye-view map and the ground point cloud image no longer exist, so only a translation relation remains between the new ground point cloud image and the input local bird's-eye-view map, and this remaining translation difference is what the translation transformation must eliminate. The translation relations in the x and y directions can then be obtained simply by phase correlation solving.
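A minimal sketch of this resampling step is given below using OpenCV; treating the image centre as the rotation centre is an assumption, since the patent does not state the reference point.

```python
import cv2

def apply_rotation_scale(ground_img, theta_deg, scale):
    """Rotate and scale the ground point cloud image so that only a translation
    remains between it and the local bird's-eye-view map."""
    h, w = ground_img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), theta_deg, scale)
    return cv2.warpAffine(ground_img, M, (w, h))
```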
S410: taking a fifth U-Net network U-Net5 and a sixth U-Net network U-Net6 which are trained in advance as two feature extractors, taking a local aerial view map and a new ground point cloud image as original input pictures of the two feature extractors U-Net5 and U-Net6 respectively, and extracting isomorphic features in the two original input pictures to obtain an isomorphic fifth feature picture and an isomorphic sixth feature picture. At this time, only the translational transformation relationship between the original input pictures is retained in the fifth and sixth feature maps, and the rotational and scaling transformation relationship does not exist.
S411: the fifth and sixth feature maps obtained in S410 are phase-correlated in a phase correlation layer (DC) to form phase correlation map C, and an argmax operation on map C gives the translation relation in the x direction between the local bird's-eye-view map and the ground point cloud image.
S412: taking a seventh U-Net network U-Net7 and an eighth U-Net network U-Net8 which are trained in advance as two feature extractors, taking a local aerial view map and a new ground point cloud image as original input pictures of the two feature extractors U-Net7 and U-Net8 respectively, and extracting isomorphic features in the two original input pictures to obtain an isomorphic seventh feature picture and an isomorphic eighth feature picture. At this time, only the translational transformation relationship between the original input pictures is retained in the seventh feature map and the eighth feature map, and the rotational and scaling transformation relationship does not exist.
S413: the seventh and eighth feature maps obtained in S412 are phase-correlated in a phase correlation layer (DC) to form phase correlation map D, and an argmax operation on map D gives the translation relation in the y direction between the local bird's-eye-view map and the ground point cloud image.
The translation relation in the x direction and the translation relation in the y direction are essentially the distances X and Y by which the ground point cloud image must be translated along x and y, respectively, to be registered with the local bird's-eye-view map.
It can be seen that the pose estimation of the invention is realized in two stages, and an estimated value of four degrees of freedom (X, Y, theta, scale) is obtained in total. Firstly, the rotation and scaling transformation relation is estimated through the rotation scaling stage of S401 to S409, and then the translation transformation relation is estimated through the translation stage of S410 to S413. By integrating the results of S404, S408, S411 and S413, pose estimation values of three transformation relations of rotation, scaling and translation between the heterogeneous local aerial view map and the ground point cloud image can be obtained, so that the pose estimation process of the two can be completed.
It should be noted, however, that the final purpose of the deep phase correlation network described above is not to obtain pose estimates, but rather to obtain a phase correlation map E that is ultimately used to calculate the probability distribution map. The phase correlation map E is obtained by superimposing the phase correlation maps C in step S411 and D in step S413 through one network branch in the above-described pose estimation process.
S414: the phase correlation diagram C output in step S411 and the phase correlation diagram D output in step S413 are superimposed by pixel-by-pixel summation, resulting in a phase correlation diagram E. Since the phase correlation diagram E is obtained by superposing two phase correlation diagrams, normalization operation is required, and the normalized phase correlation diagram E is used as a final output for subsequent probability distribution diagram calculation.
Therefore, accurate output of the phase correlation diagram E still needs to achieve accurate acquisition of the phase correlation diagram C and the phase correlation diagram D, so the deep phase correlation network still needs to train with the aim of improving the final pose estimation accuracy. In the training process, in the deep phase correlation network, 8U-Net networks are trained in advance, and in order to ensure that each U-Net network can accurately extract isomorphic characteristics, a reasonable loss function needs to be set. The total loss function for training should be a weighted sum of the rotation transformation relation loss, the scaling transformation relation loss, the translation transformation relation loss in the x-direction and the translation transformation relation loss in the y-direction between the local aerial view map and the ground point cloud image, and the specific weighted value can be adjusted according to the actual situation.
In this embodiment, the weighting weights of the four losses in the total loss function are all 1, and the four losses all adopt L1 loss, and the four loss functions are as follows:
let the rotation relation theta predicted in S404 be theta_prediction, the scale relation scale predicted in S408 be scale_prediction, the translation transform X in the X direction predicted in S411 be x_prediction, and the translation transform Y in the Y direction predicted in S413 be y_prediction. Thus, a translation (x_prediction, y_prediction), rotation (theta_prediction) and scaling (scale_prediction) relationship between two heterogeneous pictures is obtained during each round of training.
1) A 1-norm distance loss is taken between the computed theta_prediction and the ground truth theta_gt, L_theta = |theta_gt - theta_prediction|, and L_theta is back-propagated to train U-Net1 and U-Net2 so that they extract better features for computing theta_prediction.
2) A 1-norm distance loss is taken between the computed scale_prediction and the ground truth scale_gt, L_scale = |scale_gt - scale_prediction|, and L_scale is back-propagated to train U-Net3 and U-Net4 so that they extract better features for computing scale_prediction.
3) A 1-norm distance loss is taken between the computed x_prediction and the ground truth x_gt, L_x = |x_gt - x_prediction|, and L_x is back-propagated to train U-Net5 and U-Net6 so that they extract better features for solving x_prediction.
4) A 1-norm distance loss is taken between the computed y_prediction and the ground truth y_gt, L_y = |y_gt - y_prediction|, and L_y is back-propagated to train U-Net7 and U-Net8 so that they extract better features for solving y_prediction.
The total loss function is therefore L = L_x + L_y + L_theta + L_scale, and the model parameters of the 8 U-Net networks are optimized by gradient descent during training so that the total loss is minimized. The 8 trained U-Net networks form the deep phase correlation network used for pose estimation on actual heterogeneous images; within this network, pose estimation of two heterogeneous images is carried out following S401-S413, and in the process accurate phase correlation maps C and D are output.
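A sketch of the total training loss follows, assuming the four predicted quantities and their ground truths are available as PyTorch tensors; how the predictions are extracted differentiably from the correlation maps is not detailed here.

```python
import torch.nn.functional as F

def total_loss(pred, gt):
    """Sum (all weights 1) of the four L1 losses used to train U-Net1..U-Net8."""
    l_theta = F.l1_loss(pred["theta"], gt["theta"])   # supervises U-Net1/U-Net2
    l_scale = F.l1_loss(pred["scale"], gt["scale"])   # supervises U-Net3/U-Net4
    l_x = F.l1_loss(pred["x"], gt["x"])               # supervises U-Net5/U-Net6
    l_y = F.l1_loss(pred["y"], gt["y"])               # supervises U-Net7/U-Net8
    return l_x + l_y + l_theta + l_scale
```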
S5: and carrying out Softmax operation on the normalized phase correlation diagram E to convert the normalized phase correlation diagram E into 0-1 distribution, so as to obtain a probability distribution diagram.
S6: and positioning the accurate position of the mobile robot on the map based on the particle filter positioning method on the basis of the probability distribution map.
The particle filter positioning method belongs to the prior art, and the implementation manner of the particle filter positioning method in the embodiment is briefly described as follows:
the method for positioning the accurate position of the mobile robot on the map by the particle filter positioning method comprises the following steps:
S61: firstly, initializing a particle swarm, and scattering a preset number of points near the current position of the mobile robot in a bird's eye view map, wherein each point represents an assumed position of the mobile robot.
S62: then, a probability distribution map is obtained, the points are mapped into the probability distribution map, the probability value of one point in the probability distribution map represents the weight of the point, and the greater the weight is, the greater the probability of the mobile robot at the position is.
S63: after the weight of the particles is obtained, resampling operation is carried out through a roulette method according to the weight, so that the particles with large weight continuously exist, and the particles with small weight are filtered out gradually.
S64: the mobile robot moves all particles according to the motion estimated based on the odometer, and the particles perform updating calculation of the weight again according to the current probability distribution map.
S65: and (3) repeating the steps (S63) and (S64) continuously and iteratively to gradually gather the particles near the real position, and determining the accurate position of the mobile robot on the map by using the position center of the final gathered particles after the iteration is finished.
Based on the particle filter positioning algorithm step, the positioning can gradually converge to a more accurate degree along with the movement of the robot.
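A minimal sketch of one particle filter iteration (steps S62-S64) is given below in NumPy; the map resolution, the motion-noise level and the use of multinomial resampling in place of the roulette method are assumptions made for illustration.

```python
import numpy as np

def particle_filter_step(particles, weights, prob_map, motion, map_res=0.1, noise=0.05):
    """One predict/update/resample cycle against the probability distribution map.

    particles: (N, 2) hypothesised (x, y) positions in metres; motion: odometry delta (2,).
    """
    # 1) motion update: move every particle by the estimated motion plus noise
    particles = particles + motion + np.random.normal(0.0, noise, particles.shape)
    # 2) weight update: look up each particle in the probability distribution map
    cols = np.clip((particles[:, 0] / map_res).astype(int), 0, prob_map.shape[1] - 1)
    rows = np.clip((particles[:, 1] / map_res).astype(int), 0, prob_map.shape[0] - 1)
    weights = weights * prob_map[rows, cols]
    weights = weights / (weights.sum() + 1e-12)
    # 3) resampling: high-weight particles survive, low-weight ones are gradually filtered out
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights   # the pose estimate is the mean of the surviving particles
```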
In a specific example, fig. 3 shows the map constructed by the unmanned aerial vehicle in this embodiment, within which the robot moves. Fig. 4 shows an input image pair for the deep phase correlation network when the robot is at a certain position: the left image is the ground point cloud image obtained from the binocular camera data, and the right image is the local bird's-eye-view map cropped from the full map, centered on the position roughly estimated from odometry data. The two images are fed into the deep phase correlation network, which outputs a phase correlation map; this is converted into a probability distribution map, and the particle filter localization algorithm then yields the path of the robot as it moves. To further quantify the errors of the different methods: as the ground mobile robot moves, the odometry accumulates heading error. Table 1 reports, for three different road sections along which the robot travels 200 m, the positioning error estimated directly from the odometer without any correction and the positioning error after correction by the method of the invention, both in meters, together with the time taken by the correction.
TABLE 1 Errors before and after correction by the method of the invention, and correction time

                 Error without correction   Error after correction   Correction time
Road section 1           23.1 m                    1.02 m                 30 ms
Road section 2           19.6 m                    0.91 m                 28 ms
Road section 3           26.7 m                    0.87 m                 33 ms
Therefore, the method can correct the rough position estimation value determined by the vehicle-mounted sensors such as the GPS, the odometer and the like, eliminates the adverse effect of external factors such as illumination, shielding and the like on the positioning result, and greatly improves the robustness of autonomous positioning of the mobile robot.
The above embodiment is only a preferred embodiment of the present invention, but it is not intended to limit the present invention. Various changes and modifications may be made by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present invention. Therefore, all the technical schemes obtained by adopting the equivalent substitution or equivalent transformation are within the protection scope of the invention.

Claims (7)

1. The mobile robot positioning method based on the unmanned aerial vehicle map and the ground binocular information is characterized by comprising the following steps:
s1: performing full coverage detection on a robot moving area by using an unmanned aerial vehicle, obtaining a downward-looking camera image sequence and parameters in the flight process, and recovering a bird's-eye view map of the robot moving area through a sparse point cloud map;
S2: detecting a region in front of a position of the mobile robot by using a binocular camera carried on the mobile robot to form a ground point cloud rich in texture information, and observing the ground point cloud by using a bird's eye view to obtain a ground point cloud image;
s3: the mobile robot estimates the position of the mobile robot according to the self-contained sensor, and intercepts a local aerial view map with the same size as the ground point cloud image from the aerial view map by taking the position of the mobile robot as the center;
s4: inputting the ground point cloud image and the local aerial view map into a depth-phase correlation network comprising 8 different U-Net networks, extracting robust features in the ground point cloud image and the local aerial view map through convolution operation, converting the extracted features into feature images with the same size as an original image through deconvolution operation, removing translation components of the feature images of the ground point cloud image and the feature images of the local aerial view map through Fourier transform operation, converting rotation components into translation through logarithmic polarity transform operation, and finally obtaining a phase correlation image through phase correlation operation, wherein the method comprises the following steps:
s401: taking a first U-Net network and a second U-Net network which are trained in advance as two feature extractors, taking a local aerial view map and a ground point cloud image as respective original input pictures of the two feature extractors, and extracting isomorphic features in the two original input pictures to obtain a first isomorphic feature picture and a second isomorphic feature picture;
S402: performing Fourier transform on the first characteristic diagram and the second characteristic diagram obtained in the step S401 respectively, and then taking respective amplitude spectrums;
s403: respectively carrying out logarithmic polar coordinate transformation on the two amplitude spectrums obtained in the step S402, so that the two amplitude spectrums are transformed into the logarithmic polar coordinate system from a Cartesian coordinate system, and therefore, the rotation transformation under the Cartesian coordinate system between the two amplitude spectrums is mapped into translation transformation in the y direction in the logarithmic polar coordinate system;
s404: carrying out phase correlation solving on the amplitude spectrum after the two coordinate transformations in the S403 to obtain a translation transformation relation between the two, and reconverting according to the mapping relation between the Cartesian coordinate system and the logarithmic polar coordinate system in the S403 to obtain a rotation transformation relation between the local aerial view map and the ground point cloud image;
s405: taking a third U-Net network and a fourth U-Net network which are trained in advance as two feature extractors, taking a local aerial view map and a ground point cloud image as original input pictures of the two feature extractors respectively, and extracting isomorphic features in the two original input pictures to obtain a third isomorphic feature picture and a fourth isomorphic feature picture;
s406: performing Fourier transform on the third characteristic diagram and the fourth characteristic diagram obtained in the step S405 respectively, and then taking respective amplitude spectrums;
S407: respectively carrying out logarithmic polar coordinate transformation on the two amplitude spectrums obtained in the step S406, so that the two amplitude spectrums are transformed into a logarithmic polar coordinate system from a Cartesian coordinate system, and scaling transformation under the Cartesian coordinate system between the two amplitude spectrums is mapped into translational transformation in the x direction in the logarithmic polar coordinate system;
s408: carrying out phase correlation solving on the amplitude spectrum after the two coordinate transformations in the S407 to obtain a translation transformation relation between the two, and reconverting according to the mapping relation between the Cartesian coordinate system and the logarithmic polar coordinate system in the S407 to obtain a scaling transformation relation between the local aerial view map and the ground point cloud image;
S409: performing the corresponding rotation and scaling on the ground point cloud image according to the rotation transformation relation and the scaling transformation relation obtained in S404 and S408, to obtain a new ground point cloud image;
S410: taking a fifth U-Net network and a sixth U-Net network that have been trained in advance as two feature extractors, taking the local aerial view map and the new ground point cloud image as their respective original input images, and extracting isomorphic features from the two original input images to obtain a fifth feature map and a sixth feature map that are isomorphic;
S411: performing phase correlation on the fifth feature map and the sixth feature map obtained in S410 to obtain a first phase correlation map, which is used to further calculate the x-direction translation between the local aerial view map and the ground point cloud image;
S412: taking a seventh U-Net network and an eighth U-Net network that have been trained in advance as two feature extractors, taking the local aerial view map and the new ground point cloud image as their respective original input images, and extracting isomorphic features from the two original input images to obtain a seventh feature map and an eighth feature map that are isomorphic and retain only the translation transformation relation between the original input images;
S413: performing phase correlation on the seventh feature map and the eighth feature map obtained in S412 to obtain a second phase correlation map, which is used to further calculate the y-direction translation between the local aerial view map and the ground point cloud image;
S414: superposing and summing the first phase correlation map and the second phase correlation map and then normalizing, the normalized result serving as the final phase correlation map on which the Softmax operation is performed;
S5: performing a Softmax operation on the phase correlation map to convert it into a 0-1 distribution, thereby obtaining a probability distribution map;
S6: locating the accurate position of the mobile robot on the map by a particle-filter-based positioning method on the basis of the probability distribution map.
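Sub-steps S401-S408 are, at their core, the classical Fourier-Mellin decoupling of rotation and scale, with hand-crafted spectra replaced by learned U-Net feature maps. The sketch below shows only the non-learned part of that pipeline (FFT magnitude, log-polar resampling, phase correlation) under the assumption that the two feature maps are single-channel arrays of equal size; the function names, the log-polar sampling density and the log base are illustrative and not taken from the patent.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def phase_correlation(a, b):
    """Cross-power-spectrum phase correlation: returns the correlation surface
    and the (dy, dx) shift of b relative to a at the surface's peak."""
    R = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    R /= np.abs(R) + 1e-8
    corr = np.abs(np.fft.ifft2(R))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    wrap = peak > np.array(corr.shape) / 2            # wrap large shifts to negative values
    peak[wrap] -= np.array(corr.shape, dtype=float)[wrap]
    return corr, peak

def log_polar(spectrum):
    """Resample a centred magnitude spectrum onto a log-polar grid:
    rows <-> angle (rotation), columns <-> log-radius (scale), as in S403/S407."""
    h, w = spectrum.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.linspace(0.0, np.pi, h, endpoint=False)   # half plane suffices (spectral symmetry)
    base = np.exp(np.log(min(cy, cx)) / w)               # log base chosen so the grid spans the spectrum
    rho = base ** np.arange(w)
    ys = cy + rho[None, :] * np.sin(theta)[:, None]
    xs = cx + rho[None, :] * np.cos(theta)[:, None]
    return map_coordinates(spectrum, [ys, xs], order=1, mode='constant'), base

def estimate_rotation_scale(feat_a, feat_b):
    """S402-S408 without the learning: rotation (degrees) and scale between two feature maps."""
    mag_a = np.abs(np.fft.fftshift(np.fft.fft2(feat_a)))  # S402/S406: magnitude spectra
    mag_b = np.abs(np.fft.fftshift(np.fft.fft2(feat_b)))
    lp_a, base = log_polar(mag_a)                         # S403/S407: log-polar transform
    lp_b, _ = log_polar(mag_b)
    _, (dy, dx) = phase_correlation(lp_a, lp_b)           # S404/S408: phase correlation
    rotation_deg = 180.0 * dy / lp_a.shape[0]             # y shift -> rotation
    scale = base ** dx                                    # x shift -> scale
    return rotation_deg, scale
```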
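S409-S414 and S5 then recover the translation by phase correlation on the re-aligned feature maps and turn the fused correlation surface into a probability distribution map. The following minimal sketch reuses phase_correlation from the previous block; the summation of the two surfaces and the softmax follow S414 and S5, while the argument names are illustrative assumptions.

```python
def translation_probability(bev_feat_x, cloud_feat_x, bev_feat_y, cloud_feat_y):
    """S410-S414 + S5: two phase-correlation surfaces, superposed and passed through a softmax."""
    corr_x, _ = phase_correlation(bev_feat_x, cloud_feat_x)   # S411: surface for the x translation
    corr_y, _ = phase_correlation(bev_feat_y, cloud_feat_y)   # S413: surface for the y translation
    fused = corr_x + corr_y                                   # S414: superposition of the two surfaces
    logits = fused.ravel()
    probs = np.exp(logits - logits.max())                     # S5: softmax -> 0-1 distribution
    return (probs / probs.sum()).reshape(fused.shape)         # probability distribution map
```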
2. The mobile robot positioning method based on an unmanned aerial vehicle map and ground binocular information according to claim 1, wherein in S1, the unmanned aerial vehicle first surveys the robot's operating area: it flies over the area to be surveyed and returns after covering it, and the resulting downward-looking camera image sequence, flight IMU data and camera parameters are sent back to the ground workstation; the ground workstation first estimates the pose of each frame in the image sequence using SLAM, then builds a sparse point cloud map of the ground through feature point matching, and finally interpolates the sparse point cloud using the images, constructs a mesh surface, and recovers the bird's-eye-view map of the surveyed area.
3. The mobile robot positioning method based on an unmanned aerial vehicle map and ground binocular information according to claim 1, wherein in S2, the depth of the region in front of the current position is estimated with the binocular camera mounted on the mobile robot and converted into a point cloud, the image of the left camera is applied to the point cloud as texture to form a ground point cloud rich in texture information, and the ground point cloud is observed from a bird's-eye view to obtain the ground point cloud image under the bird's-eye view.
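Claim 3's stereo-to-bird's-eye-view step can be illustrated with standard OpenCV stereo matching. This is a minimal sketch rather than the patent's implementation: the SGBM parameters, the 0.05 m cell size, the 256-cell grid and the 12.8 m range are illustrative assumptions, the left image is assumed to be 3-channel BGR, and the reprojection matrix Q is assumed to come from cv2.stereoRectify.

```python
import cv2
import numpy as np

def ground_bev_image(left_img, right_img, Q, bev_size=256, cell=0.05, max_range=12.8):
    """Dense stereo depth -> textured point cloud -> bird's-eye-view image (claim 3)."""
    gray_l = cv2.cvtColor(left_img, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right_img, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0  # SGBM outputs fixed-point values
    points = cv2.reprojectImageTo3D(disparity, Q)      # H x W x 3, camera frame (x right, y down, z forward)
    valid = disparity > 0
    xyz = points[valid]
    color = left_img[valid]                            # texture taken from the left camera image
    # Project onto a top-down grid: rows <- forward distance z, columns <- lateral offset x.
    bev = np.zeros((bev_size, bev_size, 3), dtype=np.uint8)
    cols = np.int32(xyz[:, 0] / cell) + bev_size // 2
    rows = bev_size - 1 - np.int32(xyz[:, 2] / cell)
    keep = (rows >= 0) & (rows < bev_size) & (cols >= 0) & (cols < bev_size) & (xyz[:, 2] < max_range)
    bev[rows[keep], cols[keep]] = color[keep]
    return bev
```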
4. The mobile robot positioning method based on an unmanned aerial vehicle map and ground binocular information according to claim 1, wherein in S3, the mobile robot estimates its own position from GPS or odometry.
5. The mobile robot positioning method based on an unmanned aerial vehicle map and ground binocular information according to claim 1, wherein in the deep phase correlation network, the 8 U-Net networks are independent of each other, and each U-Net network extracts a feature map of the same size as the input image through 4 encoder layers that down-sample by convolution and 4 decoder layers that up-sample by deconvolution; the 8 U-Net networks are trained in advance, and the total training loss is the weighted sum of the rotation loss, the scaling loss, the x-direction translation loss and the y-direction translation loss between the local aerial view map and the ground point cloud image.
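Claim 5 fixes the U-Net topology: 4 convolutional down-sampling encoder layers and 4 deconvolutional up-sampling decoder layers, with the output feature map matching the input size. Below is a minimal PyTorch sketch under those constraints; the channel widths, kernel sizes and skip connections are assumptions, not values from the patent. Eight such extractors with independent weights would play the roles of the first through eighth U-Net networks of claim 1.

```python
import torch
import torch.nn as nn

def down(cin, cout):
    # one encoder layer: stride-2 convolution halves H and W
    return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

def up(cin, cout):
    # one decoder layer: stride-2 deconvolution doubles H and W
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class UNetFeatureExtractor(nn.Module):
    """4 encoder layers + 4 decoder layers; the output feature map has the
    same spatial size as the single-channel input (claim 5)."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.e1, self.e2, self.e3, self.e4 = down(in_ch, 16), down(16, 32), down(32, 64), down(64, 128)
        self.d1, self.d2, self.d3 = up(128, 64), up(128, 32), up(64, 16)
        self.d4 = nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1)   # final single-channel feature map

    def forward(self, x):                           # x: (B, in_ch, H, W) with H, W divisible by 16
        e1 = self.e1(x); e2 = self.e2(e1); e3 = self.e3(e2); e4 = self.e4(e3)
        d1 = self.d1(e4)                            # H/8
        d2 = self.d2(torch.cat([d1, e3], dim=1))    # H/4, skip connection from e3
        d3 = self.d3(torch.cat([d2, e2], dim=1))    # H/2, skip connection from e2
        return self.d4(torch.cat([d3, e1], dim=1))  # H x W
```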
6. The mobile robot positioning method based on an unmanned aerial vehicle map and ground binocular information according to claim 5, wherein the weights of the four losses in the total loss function are all 1, and all four losses are L1 losses.
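A one-line rendering of the total loss of claims 5 and 6, assuming the four predicted quantities and their ground truths are stored in dictionaries keyed 'rot', 'scale', 'tx' and 'ty' (the keys are illustrative):

```python
import torch.nn.functional as F

def total_loss(pred, gt):
    """Claims 5-6: total loss = sum of the four L1 losses, each with weight 1."""
    return sum(F.l1_loss(pred[k], gt[k]) for k in ('rot', 'scale', 'tx', 'ty'))
```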
7. The mobile robot positioning method based on an unmanned aerial vehicle map and ground binocular information according to claim 1, wherein in S6, the accurate position of the mobile robot on the map is located by the particle-filter-based positioning method as follows:
S61: first, scattering a preset number of particles near the current position of the mobile robot, each particle representing a hypothesized position of the mobile robot;
S62: mapping the particles into the probability distribution map, the probability value of a particle in the probability distribution map being its weight; the larger the weight, the higher the probability that the mobile robot is at that position;
S63: after the particle weights are obtained, performing resampling according to the weights, so that particles with large weights persist while particles with small weights are gradually filtered out;
S64: the mobile robot moves all particles according to its estimated motion, and the particle weights are recomputed from the probability distribution map;
S65: iterating S63 and S64 so that the particles gradually gather near the true position; after the iterations end, the center of the final particle cluster determines the accurate position of the mobile robot on the map (a sketch of one filter cycle follows this claim).
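The S61-S65 loop can be condensed into one predict/weight/resample cycle over the probability distribution map of S5. This is a minimal NumPy sketch; the Gaussian motion noise, the metre-to-cell conversion via resolution/origin, and all parameter names are illustrative assumptions.

```python
import numpy as np

def particle_filter_step(particles, motion, prob_map, resolution, origin, noise=0.05):
    """One S62-S64 cycle: propagate the particles, weight them against the probability map, resample."""
    # S64: move every particle by the estimated motion, plus noise to preserve diversity
    particles = particles + np.asarray(motion) + np.random.normal(0.0, noise, particles.shape)
    # S62: the weight of a particle is the probability-map value at its grid cell
    cells = np.int32((particles - origin) / resolution)
    cells = np.clip(cells, 0, np.array(prob_map.shape)[::-1] - 1)
    weights = prob_map[cells[:, 1], cells[:, 0]] + 1e-12
    weights /= weights.sum()
    # S63: importance resampling keeps high-weight particles and discards low-weight ones
    particles = particles[np.random.choice(len(particles), size=len(particles), p=weights)]
    # S65: once the particles converge, their center is the position estimate
    return particles, particles.mean(axis=0)
```

For S61, the particle set could be initialized by scattering around the rough position from S3, e.g. `particles = position + np.random.normal(0.0, sigma, (N, 2))`, and the step above is then iterated as described in S65.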
CN202110797651.8A 2021-07-14 2021-07-14 Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information Active CN113538579B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110797651.8A CN113538579B (en) 2021-07-14 2021-07-14 Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110797651.8A CN113538579B (en) 2021-07-14 2021-07-14 Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information

Publications (2)

Publication Number Publication Date
CN113538579A CN113538579A (en) 2021-10-22
CN113538579B true CN113538579B (en) 2023-09-22

Family

ID=78099192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110797651.8A Active CN113538579B (en) 2021-07-14 2021-07-14 Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information

Country Status (1)

Country Link
CN (1) CN113538579B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797425B (en) * 2023-01-19 2023-06-16 中国科学技术大学 Laser global positioning method based on point cloud aerial view and coarse-to-fine strategy

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102243179B1 (en) * 2019-03-27 2021-04-21 LG Electronics Inc. Moving robot and control method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017091008A1 (en) * 2015-11-26 2017-06-01 Samsung Electronics Co., Ltd. Mobile robot and control method therefor
CN107941217A (en) * 2017-09-30 2018-04-20 杭州迦智科技有限公司 A kind of robot localization method, electronic equipment, storage medium, device
CN111578958A (en) * 2020-05-19 2020-08-25 山东金惠新达智能制造科技有限公司 Mobile robot navigation real-time positioning method, system, medium and electronic device
CN112200869A (en) * 2020-10-09 2021-01-08 浙江大学 Robot global optimal visual positioning method and device based on point-line characteristics
CN113084798A (en) * 2021-03-16 2021-07-09 浙江大学湖州研究院 Robot calibration device based on multi-station measurement

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PUENTE I et al. Review of mobile mapping and surveying technologies. Measurement. 2013, Vol. 46, No. 7, pp. 2127-2145. *
WANG L et al. Map-based localization method for autonomous vehicles using 3D-LIDAR. IFAC-PapersOnLine. 2017, Vol. 50, No. 1, pp. 276-281. *
ZHANG Weiwei; CHEN Chao; XU Jun. Localization and mapping method fusing laser and visual point cloud information. Computer Applications and Software. 2020, No. 07, full text. *

Also Published As

Publication number Publication date
CN113538579A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
CN111862126B (en) Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN110189399B (en) Indoor three-dimensional layout reconstruction method and system
CN108898676B (en) Method and system for detecting collision and shielding between virtual and real objects
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
CN108917753B (en) Aircraft position determination method based on motion recovery structure
AU2021200832A1 (en) Collaborative 3d mapping and surface registration
CN111060924A (en) SLAM and target tracking method
Sanfourche et al. Perception for UAV: Vision-Based Navigation and Environment Modeling.
CN115421158B (en) Self-supervision learning solid-state laser radar three-dimensional semantic mapping method and device
CN107978017A (en) Doors structure fast modeling method based on wire extraction
CN113552585B (en) Mobile robot positioning method based on satellite map and laser radar information
Dani et al. Image moments for higher-level feature based navigation
CN116485854A (en) 3D vehicle positioning using geographic arcs
Zhao et al. Review of slam techniques for autonomous underwater vehicles
Andert Drawing stereo disparity images into occupancy grids: Measurement model and fast implementation
CN115451964A (en) Ship scene simultaneous mapping and positioning method based on multi-mode mixed features
CN113538579B (en) Mobile robot positioning method based on unmanned aerial vehicle map and ground binocular information
CN109341685B (en) Fixed wing aircraft vision auxiliary landing navigation method based on homography transformation
CN115049794A (en) Method and system for generating dense global point cloud picture through deep completion
CN113313824B (en) Three-dimensional semantic map construction method
CN112017259B (en) Indoor positioning and image building method based on depth camera and thermal imager
CN117132737A (en) Three-dimensional building model construction method, system and equipment
CN117115063A (en) Multi-source data fusion application method
Wang et al. Automated mosaicking of UAV images based on SFM method
CN116704029A (en) Dense object semantic map construction method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant