WO2020211655A1 - Laser coarse registration method, device, mobile terminal and storage medium (激光粗配准方法、装置、移动终端及存储介质) - Google Patents
Laser coarse registration method, device, mobile terminal and storage medium
- Publication number: WO2020211655A1
- Application number: PCT/CN2020/082981
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- obstacle
- map
- image
- probability
- robot
- Prior art date
Classifications
- G05D1/0274 — Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means, using mapping information stored in a memory device
- G06T7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G01S17/06 — Systems using the reflection or reradiation of electromagnetic waves other than radio waves (e.g. lidar systems) determining position data of a target
- G01S17/89 — Lidar systems specially adapted for mapping or imaging
- G05D1/0221 — Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory involving a learning process
- G05D1/024 — Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means with obstacle or wall sensors in combination with a laser
- G05D1/0248 — Control of position or course in two dimensions specially adapted to land vehicles, using a video camera in combination with image processing means and a laser
- G06N3/045 — Neural network architectures: combinations of networks
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74 — Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V10/443 — Local feature extraction by matching or filtering
- G06V10/74 — Image or video pattern matching; proximity measures in feature spaces
- G06V10/751 — Matching processes: comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06V20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06T2207/20076 — Probabilistic image processing
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30244 — Camera pose
- G06T2207/30252 — Vehicle exterior; vicinity of vehicle
- G06T2207/30261 — Obstacle
Definitions
- the embodiments of the present disclosure relate to a laser coarse registration method, device, mobile terminal, and storage medium.
- Lidar-based simultaneous localization and mapping (SLAM) provides real-time positioning and map construction.
- the map can also be updated from the laser information, that is, an incremental map can be built to realize autonomous positioning and navigation of mobile robots.
- as the robot moves, the map is theoretically updated and enriched continuously, and the positioning of the mobile robot becomes more and more accurate.
- the technical problem to be solved by the embodiments of the present invention is to provide a laser coarse registration method, device, mobile terminal, and storage medium, so as to solve the technical problem in the known technology that brute-force matching during coarse registration increases the computational complexity and reduces the accuracy of the coarse registration pose.
- At least one embodiment of the present invention also provides a laser coarse registration device, a mobile terminal, and a storage medium to ensure the implementation and application of the above method.
- a first aspect provides a laser coarse registration method, the method including: determining position information of obstacles around a robot; converting the position information of the obstacles into an obstacle image; obtaining map data of a pre-established probability map; converting the map data into a map image; and
- matching the obstacle image and the map image through a convolutional neural network to obtain the coarse registration pose of the robot on the map image.
- converting the position information of the obstacles into an obstacle image includes converting the position information to obtain pixel values of the obstacle image, and determining and marking obstacle points and non-obstacle points from those pixel values;
- converting the map data into a map image includes converting the map data according to a preset probability threshold, and likewise determining and marking obstacle points and non-obstacle points.
- matching the obstacle image and the map image through a convolutional neural network to obtain the coarse registration pose of the robot on the map image includes:
- the obstacle image and the map image are respectively input into the laser convolutional neural network (LaserCNN) layer of the convolutional neural network for convolution matching, to obtain a first feature matrix and a second feature matrix;
- the first feature matrix and the second feature matrix are input into the pose layer of the convolutional neural network for concatenation and flattening, to obtain a one-dimensional feature vector;
- the one-dimensional feature vector is passed through fully connected layers in three successive dimensions, and the resulting vector is input into a final fully connected layer for processing, to obtain a three-dimensional feature vector, which is the coarse registration pose of the robot on the map image.
- the method further includes: performing precise registration on the coarse registration pose of the robot on the map image using a nonlinear-optimization-based registration method, to obtain a precise pose of the robot in the probability map;
- the probability map is then updated according to the precise pose.
- updating the probability map according to the precise pose includes calculating the positions of the obstacles around the robot on the probability map and updating the probability values of the corresponding points with a preset hit or miss probability value.
- a second aspect provides a laser coarse registration device, the device comprising:
- the first determining module is used to determine the location information of obstacles around the robot
- the first conversion module is used to convert the position information of the obstacle into an obstacle image
- the acquisition module is used to acquire the map data of the probability map established in advance
- the second conversion module is used to convert the map data into a map image
- the matching processing module is used to perform matching processing between the obstacle image and the map image through a convolutional neural network to obtain the coarse registration pose of the robot on the map image.
- the first conversion module includes:
- the first conversion processing module is configured to perform conversion processing on the position information of the obstacle to obtain the pixel value of the obstacle image
- the second determination module is configured to determine the point whose pixel value is the first pixel value as an obstacle point on the obstacle image, and determine the point whose pixel value is the second pixel value as a non-obstacle point on the obstacle image
- the first marking module is used to mark obstacle points and non-obstacle points on the obstacle image.
- the second conversion module includes:
- the second conversion processing module is configured to perform conversion processing on the map data according to a preset probability threshold to obtain the pixel value of the map image;
- the third determining module is configured to determine the point whose pixel value is the first pixel value as an obstacle point on the map image, and determine the point whose pixel value is the second pixel value as a non-obstacle point on the map image;
- the second marking module is used to mark the obstacle points and non-obstacle points on the map image.
- the matching processing module includes:
- the first convolution processing module is configured to input the obstacle image and the map image respectively into the laser convolutional neural network (LaserCNN) layer of the convolutional neural network for convolution matching, to obtain the first feature matrix and the second feature matrix;
- the second convolution processing module is configured to input the first feature matrix and the second feature matrix into the pose layer of the convolutional neural network for concatenation and flattening, to obtain a one-dimensional feature vector;
- the fully connected processing module is used to pass the one-dimensional feature vector through fully connected layers in three successive dimensions and input the resulting vector into a final fully connected layer for processing, to obtain a three-dimensional feature vector, which is the coarse registration pose of the robot on the map image.
- the device further includes:
- the fine registration processing module is used to precisely register the coarse registration pose of the robot on the map image using a nonlinear-optimization-based registration method, to obtain the precise pose of the robot in the probability map;
- the update module is used to update the probability map according to the precise pose.
- the update module includes:
- a position calculation module configured to calculate the positions of obstacles around the robot on the probability map according to the precise pose
- the judgment module is used to judge whether the corresponding point on the probability map is an obstacle point according to the position;
- the first location update module is configured to update the probability value of the corresponding point on the probability map with a preset hit probability value when the judgment module determines that the corresponding point on the probability map is an obstacle point;
- the second location update module is used to update the probability value of the corresponding point on the probability map with a preset miss probability value when the judgment module determines that the corresponding point on the probability map is a non-obstacle point.
- a third aspect provides a mobile terminal including: a memory, a processor, and a computer program stored on the memory and capable of running on the processor;
- when executed by the processor, the computer program implements the steps of the laser coarse registration method described above.
- a fourth aspect provides a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the laser coarse registration method as described above are realized.
- At least one embodiment of the present invention includes the following advantages:
- after the position information of the obstacles around the robot is determined, the position information of the obstacles is converted into an obstacle image; the map data of the probability map is obtained and converted into a map image; then the obstacle image and the map image are matched through a convolutional neural network to obtain the coarse registration pose of the robot on the map image.
- in other words, the obstacle image and the map image obtained by the robot undergo convolution matching (that is, deep learning) through the convolutional neural network, which realizes the coarse registration of the laser data collected by the robot with the probability map; that is, coarse registration through the convolutional neural network avoids the computational complexity of brute-force matching, enlarges the registration range, and improves the accuracy of the coarse registration.
- FIG. 1 is a flowchart of a laser coarse registration method according to an embodiment of the present invention
- FIG. 2 is a schematic structural diagram of a LaserCNN layer in a convolutional neural network provided by an embodiment of the present invention
- FIG. 3 is a schematic structural diagram of a pose layer in a convolutional neural network provided by an embodiment of the present invention.
- FIG. 4 is another flowchart of a laser coarse registration method provided by an embodiment of the present invention.
- FIG. 5 is a schematic structural diagram of a laser coarse registration device provided by an embodiment of the present invention.
- FIG. 6 is a schematic structural diagram of a first conversion module provided by an embodiment of the present invention.
- FIG. 7 is a schematic structural diagram of a second conversion module provided by an embodiment of the present invention.
- FIG. 8 is a schematic structural diagram of a matching processing module provided by an embodiment of the present invention.
- FIG. 9 is another schematic structural diagram of a laser coarse registration device provided by an embodiment of the present invention.
- FIG. 10 is a schematic structural diagram of an update module provided by an embodiment of the present invention.
- referring to FIG. 1, a flowchart of a laser coarse registration method according to an embodiment of the present invention:
- the method is applied to a robot and may specifically include the following steps:
- Step 101 Determine location information of obstacles around the robot
- in this step, the robot first collects multiple beams of laser data through its own lidar, and then determines the position information of the obstacles from the laser data.
- suppose the robot collects N beams of laser data; the angle of each beam is θ_i and its depth is d_i, where i is a natural number from 1 to N.
- in this embodiment, the laser data are used to determine the positions of the obstacles relative to the robot; each obstacle position is represented by a coordinate point (x_i, y_i), where x_i = cos(θ_i)*d_i and y_i = sin(θ_i)*d_i.
- Step 102 Convert the position information of the obstacle into an obstacle image
- in this step, the position information of the obstacles is first converted to obtain the pixel values of the obstacle image; that is, the position information of the obstacles is converted into pixel values of the obstacle image.
- the conversion formula is: Image(scale*x_i + x_0, scale*y_i + y_0) = 255,
- where 255 is the first pixel value, Image denotes the obstacle image, and scale is a scaling factor for the obstacle image, which can be set according to the measurement range of the laser data;
- the values of x_0 and y_0 are determined by the size of the obstacle image: x_0 is half the width of the obstacle image, and y_0 is half the height of the obstacle image;
- the value of all other pixels is 0, which is the second pixel value.
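As a concrete illustration of this conversion, the following minimal sketch rasterizes a laser scan into a binary obstacle image. It assumes NumPy arrays of beam angles θ_i and depths d_i; the 512×512 image size and scale value are illustrative placeholders, not values taken from the patent.

```python
import numpy as np

def scan_to_obstacle_image(theta, d, size=512, scale=10.0):
    """Rasterize a laser scan into a binary obstacle image, following
    Image(scale*x_i + x_0, scale*y_i + y_0) = 255.  In practice, scale
    would be chosen from the lidar's measurement range."""
    x = np.cos(theta) * d                         # x_i = cos(theta_i) * d_i
    y = np.sin(theta) * d                         # y_i = sin(theta_i) * d_i
    x0, y0 = size / 2, size / 2                   # half the image width/height
    img = np.zeros((size, size), dtype=np.uint8)  # second pixel value: 0
    u = np.round(scale * x + x0).astype(int)
    v = np.round(scale * y + y0).astype(int)
    inside = (u >= 0) & (u < size) & (v >= 0) & (v < size)
    img[v[inside], u[inside]] = 255               # first pixel value: obstacle points
    return img
```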
- Step 103 Obtain map data of a pre-established probability map
- the probability map in this step is a pre-established probability map in the world coordinate system, and the probability map is built in the robot in advance.
- Step 104 Convert the map data into a map image
- in this step, the map data is first converted as follows according to a preset probability threshold, to obtain the pixel values of the corresponding pixels on the map image:
- if the probability value in the probability map exceeds the preset threshold, M(x, y) > 0.5, the conversion Image'(scale'*x, scale'*y) = 255 (that is, the first pixel value) is applied; that is, the map data is converted into the corresponding pixel values according to this formula,
- where Image' denotes the map image,
- and scale' is a scaling factor for the map image, which can be set according to the measurement range of the laser data;
- the pixel value of all other points is 0 (that is, the second pixel value).
- next, each pixel value is examined: a point whose pixel value is the first pixel value is determined as an obstacle point on the map image, and a point whose pixel value is the second pixel value is determined as a non-obstacle point on the map image.
- finally, the obstacle points and non-obstacle points on the map image are marked, to obtain a map image including the obstacle points and the non-obstacle points.
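A matching sketch for this step, under the same illustrative assumptions as above and taking scale' = 1 (an assumption, so that probability cells map one-to-one onto image pixels):

```python
import numpy as np

def map_to_map_image(M, threshold=0.5):
    """Threshold a probability grid M(x, y) into a binary map image,
    following Image'(scale'*x, scale'*y) = 255 for M(x, y) > 0.5,
    with scale' = 1 assumed for simplicity."""
    img = np.zeros(M.shape, dtype=np.uint8)  # non-obstacle points: 0
    img[M > threshold] = 255                 # obstacle points: 255
    return img
```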
- Step 105 Use a convolutional neural network to perform matching processing on the obstacle image and the map image to obtain the coarse registration pose of the robot on the map image.
- the matching process in this step includes:
- first, the obstacle image and the map image are respectively input into the LaserCNN layer of the convolutional neural network for convolution matching, to obtain the first feature matrix and the second feature matrix;
- next, the first feature matrix and the second feature matrix are input into the Pose layer of the convolutional neural network for concatenation and flattening, to obtain a one-dimensional feature vector;
- finally, the one-dimensional feature vector is passed through fully connected layers in three successive dimensions, and
- the resulting vector is input into a final fully connected layer for processing, to obtain a three-dimensional feature vector, and this three-dimensional feature vector is the coarse registration pose of the robot on the map image.
- in this embodiment, the convolutional neural network is exemplified by a scan-to-map (Scan2Map) network, which aligns a laser frame to the map.
- the Scan2Map network includes a laser convolutional neural network (LaserCNN) layer and a pose layer.
- the schematic diagram of the LaserCNN layer is shown in FIG. 2, which is a schematic diagram of the structure of the LaserCNN layer in a convolutional neural network provided by an embodiment of the present invention.
- a schematic diagram of the pose layer is shown in FIG. 3, which is a schematic structural diagram of a pose layer in a convolutional neural network provided by an embodiment of the present invention.
- the LaserCNN layer is divided into L1, L2, L3, L4, L5, and L6.
- L1 (conv1) is a convolution layer consisting of 32 3×3 convolution kernels with a stride of 1;
- L2 (pool1) is an average pooling layer of size 2;
- L3 (conv2) is a convolution layer consisting of 64 3×3 convolution kernels with a stride of 1;
- L4 (pool2) is an average pooling layer of size 2;
- L5 (conv3) is a convolution layer consisting of 128 3×3 convolution kernels with a stride of 1;
- L6 (pool3) is an average pooling layer of size 2.
- the pose layer is likewise divided into L1, L2, L3, L4, L5, and L6.
- the L1 layer concatenates the outputs of the two LaserCNN branches (laser1 and laser2);
- the L2 layer flattens the L1 output into a one-dimensional layer (Flatten);
- L3 is a fully connected layer with a dimension of 128 (fc1: Dense);
- L4 is a fully connected layer with a dimension of 64 (fc2: Dense);
- L5 is a fully connected layer with a dimension of 16 (fc3: Dense);
- L6 (pose: Dense) outputs a 3-vector representing the transformation (x, y, theta) between the two inputs laser1 and laser2.
- as an application example, suppose the two lidar images laser1 and laser2 input into the network have a resolution of 512×512, so each input matrix is 512×512×1 (single channel); after passing through the same LaserCNN layer, each input yields a 64×64×128 feature matrix, which the pose layer concatenates into 64×64×256 and flattens into a one-dimensional feature vector of length 1,048,576 before the fully connected layers.
- taking the coarse pose (x, y, theta) output by the convolutional neural network to be (x_c, y_c, θ_c), post-processing is still required:
- the coarse registration pose is (x_c/scale − x_0, y_c/scale − y_0, θ_c); that is, by however many times a coordinate point was magnified in the earlier stage, the output coordinate must be correspondingly reduced afterwards.
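The following Keras sketch reconstructs the Scan2Map architecture from FIGS. 2 and 3 under the 512×512 input example above. The padding mode, activations, and the shared-weights reading of the two LaserCNN branches are assumptions not fixed by the patent; 'same' padding is chosen so that a 512×512×1 input yields the 64×64×128 feature matrix described in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_laser_cnn(input_shape=(512, 512, 1)):
    """LaserCNN branch (FIG. 2): three conv(3x3, stride 1) + average-pool(2) stages."""
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, strides=1, padding="same", activation="relu")(inp)  # conv1
    x = layers.AveragePooling2D(2)(x)                                            # pool1
    x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)    # conv2
    x = layers.AveragePooling2D(2)(x)                                            # pool2
    x = layers.Conv2D(128, 3, strides=1, padding="same", activation="relu")(x)   # conv3
    x = layers.AveragePooling2D(2)(x)                                            # pool3 -> 64x64x128
    return models.Model(inp, x, name="LaserCNN")

def build_scan2map(input_shape=(512, 512, 1)):
    """Pose layer (FIG. 3): concatenate both branches, flatten, fc1/fc2/fc3, pose."""
    branch = build_laser_cnn(input_shape)                     # applied twice -> shared weights
    laser1 = layers.Input(shape=input_shape, name="laser1")   # obstacle image
    laser2 = layers.Input(shape=input_shape, name="laser2")   # map image
    x = layers.Concatenate()([branch(laser1), branch(laser2)])  # 64x64x256
    x = layers.Flatten()(x)                                   # 1,048,576-dim vector
    x = layers.Dense(128, activation="relu")(x)               # fc1
    x = layers.Dense(64, activation="relu")(x)                # fc2
    x = layers.Dense(16, activation="relu")(x)                # fc3
    pose = layers.Dense(3, name="pose")(x)                    # (x, y, theta)
    return models.Model([laser1, laser2], pose, name="Scan2Map")

# The post-processing described above: x and y are divided by scale and
# shifted back by (x_0, y_0); theta is left unchanged.
def postprocess(coarse, scale, x0, y0):
    x_c, y_c, theta_c = coarse
    return (x_c / scale - x0, y_c / scale - y0, theta_c)
```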
- the location information of the obstacles is converted into an obstacle image; and the map data of the probability map is obtained, and the map data is converted into a map image; , Performing matching processing on the obstacle image and the map image through a convolutional neural network to obtain the coarse registration pose of the robot on the map image.
- the obstacle image and map image obtained by the robot are subjected to convolution matching processing (that is, deep learning) through the convolutional neural network, so as to realize the rough registration of the laser data collected by the robot and the probability map , That is, the computational complexity of violent matching can be avoided by the coarse registration of the convolutional neural network, the scope of the registration is enlarged, and the accuracy of the coarse registration is improved.
- convolution matching processing that is, deep learning
- optionally, in another embodiment based on the foregoing embodiment:
- FIG. 4 is another flowchart of a laser coarse registration method provided by an embodiment of the present invention.
- building on the embodiment of FIG. 1 above, this embodiment may further perform nonlinear-optimization registration on the coarse registration pose to obtain a precise pose, and automatically update the probability map according to the precise pose.
- the method specifically includes:
- Step 401 Determine location information of obstacles around the robot
- Step 402 Convert the position information of the obstacle into an obstacle image
- Step 403 Obtain map data of a pre-established probability map
- Step 404 Convert the map data into a map image
- Step 405 Perform matching processing on the obstacle image and the map image through a convolutional neural network to obtain a coarse registration pose of the robot on the map image;
- steps 401 to 405 are the same as steps 101 to 105; see the details above, which are not repeated here.
- Step 406 Perform precise registration on the coarse registration pose of the robot on the map image by using a non-linear optimization registration method to obtain the precise pose of the robot in the probability map;
- the process of precisely registering the obtained coarse registration pose using a registration method based on nonlinear optimization includes:
- first, computing the sum of the probability-map values at the obstacle positions detected from the laser data; when the obstacles in the laser data exactly coincide with the obstacles in the probability map, this probability sum is maximal, so the coarse registration pose is iteratively optimized until the probability sum is maximized, at which point the laser data and the probability map are successfully registered.
- in lidar-based SLAM, the probability map is represented by a conventional probability grid: the map is modeled as discrete cells,
- and each cell is assigned a probability value between 0 and 1; the closer the value is to 1, the more likely the cell is an obstacle, and the closer it is to 0, the more likely it is not an obstacle.
- the lidar sensor can indirectly obtain the distance from the robot to an obstacle by recording the time difference between emitting a laser pulse and receiving its return after it hits an obstacle.
- precisely registering the obtained coarse registration pose with a nonlinear-optimization-based method is otherwise a well-known technique and is not described further here.
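A minimal sketch of this fine-registration idea, maximizing the probability sum over the transformed laser points. Since the patent treats the concrete method as well known, the derivative-free Nelder-Mead optimizer, the standard 2D rigid transform, and the nearest-cell lookup used here are all assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def fine_registration(coarse_pose, points, prob_map):
    """Refine a coarse pose (x, y, theta) so that the sum of probability-map
    values at the transformed laser points (x_i, y_i) is maximized."""
    def negative_score(pose):
        x, y, theta = pose
        c, s = np.cos(theta), np.sin(theta)
        px = np.round(c * points[:, 0] - s * points[:, 1] + x).astype(int)
        py = np.round(s * points[:, 0] + c * points[:, 1] + y).astype(int)
        h, w = prob_map.shape
        inside = (px >= 0) & (px < w) & (py >= 0) & (py < h)
        return -prob_map[py[inside], px[inside]].sum()  # minimize the negative sum
    # A derivative-free method is used because the grid lookup is not smooth.
    result = minimize(negative_score, np.asarray(coarse_pose), method="Nelder-Mead")
    return result.x  # precise pose (x, y, theta)
```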
- Step 407 Update the probability map according to the precise pose
- in this step, the positions of the obstacles around the robot on the probability map are first calculated according to the precise pose; then it is judged according to those positions whether the corresponding points on the probability map are obstacle points, that is, whether each corresponding point contains an obstacle scanned by the robot this time; if so, the probability value of the corresponding point on the probability map is updated with the preset hit probability value;
- if not, the probability value of the corresponding point on the probability map is updated with the preset miss probability value.
- for the update of obstacles (which can be understood as obstacle points): an obstacle point is the probability cell corresponding to (cos(θ_f)*x_i + x_f, sin(θ_f)*y_i + y_f).
- in this embodiment, the obstacle points on the map can be updated by the following formula: odds(M_new) = odds(M_old)*odds(P_hit);
- that is, the original occupancy odds of the obstacle, odds(M_old), are multiplied by the occupancy odds of the laser data hitting an obstacle, odds(P_hit), to obtain the occupancy odds of the new obstacle, odds(M_new).
- P_hit in the formula is the preset hit probability, that is, the preset probability of the laser data hitting an obstacle; odds(P_hit) denotes the obstacle occupancy odds, abbreviated odds(P), which is the ratio of the probability of being an obstacle to the probability of not being an obstacle, i.e., odds(P) = P/(1 − P),
- where P is the probability of an obstacle.
- for the update of non-obstacle points: a non-obstacle point lies on the line segment from the origin of the laser data to (cos(θ_f)*x_i + x_f, sin(θ_f)*y_i + y_f), and can be updated by the following formula:
- odds(M_new) = odds(M_old)*odds(P_miss),
- where P_miss is the preset miss probability, that is, the preset probability of the laser data not hitting an obstacle; odds(P_miss) denotes the non-obstacle occupancy odds, computed from the probability of not being an obstacle relative to the probability of being one.
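In code, the per-cell odds update looks as follows (a minimal sketch; the P_hit and P_miss values are illustrative assumptions, chosen only so that a hit raises a cell's probability and a miss lowers it):

```python
def update_cell(p_old, p_meas):
    """odds(M_new) = odds(M_old) * odds(P), with odds(P) = P / (1 - P).
    p_meas is P_hit for a cell the beam hit, P_miss for a cell it crossed."""
    odds = lambda p: p / (1.0 - p)
    new_odds = odds(p_old) * odds(p_meas)
    return new_odds / (1.0 + new_odds)  # convert odds back to a probability

P_HIT, P_MISS = 0.55, 0.49  # illustrative: P_hit > 0.5, P_miss < 0.5
assert update_cell(0.5, P_HIT) > 0.5 and update_cell(0.5, P_MISS) < 0.5
```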
- in the embodiment of the present invention, after the position information of the obstacles around the robot is determined, the position information is converted into an obstacle image; the map data of the probability map is obtained and converted into a map image; then the obstacle image and the map image are matched through a convolutional neural network to obtain the coarse registration pose of the robot on the map image.
- in other words, the obstacle image and the map image obtained by the robot undergo convolution matching (that is, deep learning) through the convolutional neural network, which realizes the coarse registration of the laser data collected by the robot with the probability map; coarse registration through the convolutional neural network thus avoids the computational complexity of brute-force matching, enlarges the registration range, and improves the accuracy of the coarse registration.
- FIG. 5 is a schematic structural diagram of a laser coarse registration device provided by an embodiment of the present invention.
- the device specifically includes: a first determining module 501, a first conversion module 502, an acquisition module 503, a second conversion module 504, and a matching processing module 505, where:
- the first determining module 501 is used to determine the position information of obstacles around the robot;
- the first conversion module 502 is configured to convert the position information of the obstacle into an obstacle image
- the obtaining module 503 is used to obtain map data of a pre-established probability map
- the second conversion module 504 is configured to convert the map data into a map image
- the matching processing module 505 is configured to perform matching processing between the obstacle image and the map image through a convolutional neural network to obtain the coarse registration pose of the robot on the map image.
- in another embodiment based on the foregoing embodiment, the first conversion module 502 includes: a first conversion processing module 601, a second determination module 602, and a first marking module 603; its structural diagram is shown in FIG. 6, where:
- the first conversion processing module 601 is configured to perform conversion processing on the position information of the obstacle to obtain the corresponding pixel value
- the second determining module 602 is configured to determine the point whose pixel value is the first pixel value as an obstacle point on the obstacle image, and determine the point whose pixel value is the second pixel value as a non-obstacle point on the obstacle image;
- the first marking module 603 is used to mark obstacle points and non-obstacle points on the obstacle image.
- in another embodiment based on the foregoing embodiment, the second conversion module 504 includes: a second conversion processing module 701, a third determination module 702, and a second marking module 703; its structural diagram is shown in FIG. 7, where:
- the second conversion processing module 701 is configured to perform conversion processing on the map data according to a preset probability threshold to obtain the pixel value of the map image;
- the third determining module 702 is configured to determine the point whose pixel value is the first pixel value as an obstacle point on the map image, and determine the point whose pixel value is the second pixel value as a non-obstacle point on the map image;
- the second marking module 703 is used to mark the obstacle points and non-obstacle points on the map image.
- in another embodiment based on the foregoing embodiment, the matching processing module 505 includes: a first convolution processing module 801, a second convolution processing module 802, and a fully connected processing module 803; its structural diagram is shown in FIG. 8, where:
- the first convolution processing module 801 is configured to input the obstacle image and the map image into the laser convolution neural network LaserCNN layer of the convolution neural network to perform convolution matching processing to obtain a first feature matrix and a second feature matrix;
- the second convolution processing module 802 is configured to input the first feature matrix and the second feature matrix into the pose layer of the convolutional neural network for concatenation and flattening, to obtain a one-dimensional feature vector;
- the fully connected processing module 803 is configured to pass the one-dimensional feature vector through fully connected layers in three successive dimensions and input the resulting vector into a final fully connected layer for processing, to obtain a three-dimensional feature vector, which is the coarse registration pose of the robot on the map image.
- in another embodiment based on the foregoing embodiments, the device may further include: a fine registration processing module 901 and an update module 902;
- the structural diagram is shown in FIG. 9, which builds on the embodiment described in FIG. 5, where:
- the fine registration processing module 901 is configured to precisely register the coarse registration pose of the robot on the map image using a nonlinear-optimization-based registration method, to obtain the precise pose of the robot in the probability map;
- the update module 902 is configured to update the probability map according to the precise pose.
- in another embodiment based on the foregoing embodiment, the update module 902 includes: a position calculation module 1001, a judgment module 1002, a first position update module 1003, and a second position update module 1004; its structural diagram is shown in FIG. 10, where:
- the position calculation module 1001 is configured to calculate the positions of obstacles around the robot on the probability map according to the precise pose
- the judgment module 1002 is used to judge whether the corresponding point on the probability map is an obstacle point according to the position;
- the first location update module 1003 is configured to update the probability value of the corresponding point on the probability map with the preset hit probability value when the judgment module determines that the corresponding point on the probability map is an obstacle point;
- the second location update module 1004 is configured to update the probability value of the corresponding point on the probability map with the preset miss probability value when the judgment module determines that the corresponding point on the probability map is a non-obstacle point.
- in the embodiments of the present invention, a deep learning method uses the corresponding images converted from the obstacle positions and the probability map to perform coarse matching and compute the pose of the robot, which solves the computational complexity problem caused by brute-force matching in previous coarse matching,
- enlarges the registration range, and improves the accuracy of coarse registration.
- an embodiment of the present invention further provides a mobile terminal, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor;
- when executed by the processor, the computer program implements each process of the laser coarse registration method embodiments above and achieves the same technical effect, which is not repeated here to avoid redundancy.
- the embodiment of the present invention also provides a non-transitory computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, each process of the foregoing laser coarse registration method embodiments is implemented with the same technical effect, which is not repeated here to avoid redundancy.
- the computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a device, or a computer program product; therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware; moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
- These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing terminal equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device.
- the instruction device implements the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- Electromagnetism (AREA)
- Medical Informatics (AREA)
- Automation & Control Theory (AREA)
- Aviation & Aerospace Engineering (AREA)
- Databases & Information Systems (AREA)
- Optics & Photonics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
A laser coarse registration method, device, mobile terminal and storage medium. The method includes: determining position information of obstacles around a robot; converting the position information of the obstacles into an obstacle image; obtaining map data of a pre-established probability map; converting the map data into a map image; and matching the obstacle image and the map image through a convolutional neural network to obtain a coarse registration pose of the robot on the map image. The laser coarse registration method subjects the obstacle image and the map image obtained by the robot to convolution matching through a convolutional neural network, thereby realizing coarse registration between the laser data collected by the robot and the probability map, enlarging the registration range and improving the accuracy of coarse registration.
Description
This application claims priority to Chinese Patent Application No. 201910310449.0, filed on April 17, 2019, the entire contents of which are incorporated herein by reference as a part of this application.
The embodiments of the present disclosure relate to a laser coarse registration method, device, mobile terminal, and storage medium.
With the development of mobile robots and the continuous improvement of their performance, demand for them has gradually increased; for example, drones, sweeping robots, 3D-printing robots and surveillance robots have greatly facilitated people's lives. To achieve autonomous path planning and navigation, a mobile robot needs to perceive and model its surrounding environment. The most commonly used approach is the lidar-based simultaneous localization and mapping (SLAM) algorithm. In a lidar-based SLAM algorithm, as the robot moves, the data obtained from laser beams reflected back from obstacles is matched against the map to compute the pose of the mobile robot on the map. In turn, the laser information can also be used to update the map, that is, to build an incremental map, realizing autonomous positioning and navigation of the mobile robot. In this way, as the robot moves, the map is in theory continuously updated and enriched, and the positioning of the mobile robot becomes more and more accurate.
Summary of the Invention
The technical problem to be solved by the embodiments of the present invention is to provide a laser coarse registration method, device, mobile terminal and storage medium, so as to solve the technical problem in the known technology that brute-force matching during coarse registration increases the computational complexity and reduces the accuracy of the coarse registration pose.
Correspondingly, at least one embodiment of the present invention further provides a laser coarse registration device, a mobile terminal and a storage medium, so as to ensure the implementation and application of the above method.
To solve the above problem, at least one embodiment of the present disclosure is realized through the following technical solutions:
A first aspect provides a laser coarse registration method, the method including:
determining position information of obstacles around a robot;
converting the position information of the obstacles into an obstacle image;
obtaining map data of a pre-established probability map;
converting the map data into a map image;
matching the obstacle image and the map image through a convolutional neural network to obtain a coarse registration pose of the robot on the map image.
Optionally, converting the position information of the obstacles into an obstacle image includes:
converting the position information of the obstacles to obtain pixel values of the obstacle image;
determining points whose pixel value is a first pixel value as obstacle points on the obstacle image, and determining points whose pixel value is a second pixel value as non-obstacle points on the obstacle image;
marking the obstacle points and non-obstacle points on the obstacle image.
Optionally, converting the map data into a map image includes:
converting the map data according to a preset probability threshold to obtain pixel values of the map image;
determining points whose pixel value is the first pixel value as obstacle points on the map image, and determining points whose pixel value is the second pixel value as non-obstacle points on the map image;
marking the obstacle points and non-obstacle points on the map image.
Optionally, matching the obstacle image and the map image through a convolutional neural network to obtain the coarse registration pose of the robot on the map image includes:
inputting the obstacle image and the map image respectively into a laser convolutional neural network (LaserCNN) layer of the convolutional neural network for convolution matching, to obtain a first feature matrix and a second feature matrix;
inputting the first feature matrix and the second feature matrix into a pose layer of the convolutional neural network for concatenation and flattening, to obtain a one-dimensional feature vector;
passing the one-dimensional feature vector through fully connected layers in three successive dimensions and inputting the resulting vector into a final fully connected layer for processing, to obtain a three-dimensional feature vector, the three-dimensional feature vector being the coarse registration pose of the robot on the map image.
Optionally, the method further includes:
performing precise registration on the coarse registration pose of the robot on the map image using a nonlinear-optimization-based registration method, to obtain a precise pose of the robot in the probability map;
updating the probability map according to the precise pose.
Optionally, updating the probability map according to the precise pose includes:
calculating positions of the obstacles around the robot on the probability map according to the precise pose;
judging, according to the positions, whether corresponding points on the probability map are obstacle points; if so, updating the probability values of the corresponding points on the probability map with a preset hit probability value; if not, updating the probability values of the corresponding points on the probability map with a preset miss probability value.
A second aspect provides a laser coarse registration device, the device including:
a first determining module, configured to determine position information of obstacles around a robot;
a first conversion module, configured to convert the position information of the obstacles into an obstacle image;
an acquisition module, configured to obtain map data of a pre-established probability map;
a second conversion module, configured to convert the map data into a map image;
a matching processing module, configured to match the obstacle image and the map image through a convolutional neural network to obtain a coarse registration pose of the robot on the map image.
Optionally, the first conversion module includes:
a first conversion processing module, configured to convert the position information of the obstacles to obtain pixel values of the obstacle image;
a second determination module, configured to determine points whose pixel value is the first pixel value as obstacle points on the obstacle image, and determine points whose pixel value is the second pixel value as non-obstacle points on the obstacle image;
a first marking module, configured to mark the obstacle points and non-obstacle points on the obstacle image.
Optionally, the second conversion module includes:
a second conversion processing module, configured to convert the map data according to a preset probability threshold to obtain pixel values of the map image;
a third determination module, configured to determine points whose pixel value is the first pixel value as obstacle points on the map image, and determine points whose pixel value is the second pixel value as non-obstacle points on the map image;
a second marking module, configured to mark the obstacle points and non-obstacle points on the map image.
Optionally, the matching processing module includes:
a first convolution processing module, configured to input the obstacle image and the map image respectively into the laser convolutional neural network (LaserCNN) layer of the convolutional neural network for convolution matching, to obtain a first feature matrix and a second feature matrix;
a second convolution processing module, configured to input the first feature matrix and the second feature matrix into the pose layer of the convolutional neural network for concatenation and flattening, to obtain a one-dimensional feature vector;
a fully connected processing module, configured to pass the one-dimensional feature vector through fully connected layers in three successive dimensions and input the resulting vector into a final fully connected layer for processing, to obtain a three-dimensional feature vector, the three-dimensional feature vector being the coarse registration pose of the robot on the map image.
Optionally, the device further includes:
a fine registration processing module, configured to precisely register the coarse registration pose of the robot on the map image using a nonlinear-optimization-based registration method, to obtain a precise pose of the robot in the probability map;
an update module, configured to update the probability map according to the precise pose.
Optionally, the update module includes:
a position calculation module, configured to calculate positions of the obstacles around the robot on the probability map according to the precise pose;
a judgment module, configured to judge, according to the positions, whether corresponding points on the probability map are obstacle points;
a first position update module, configured to update the probability value of the corresponding point on the probability map with a preset hit probability value when the judgment module determines that the corresponding point on the probability map is an obstacle point;
a second position update module, configured to update the probability value of the corresponding point on the probability map with a preset miss probability value when the judgment module determines that the corresponding point on the probability map is a non-obstacle point.
A third aspect provides a mobile terminal, including: a memory, a processor, and a computer program stored on the memory and capable of running on the processor, the computer program, when executed by the processor, implementing the steps of the laser coarse registration method described above.
A fourth aspect provides a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the laser coarse registration method described above.
Compared with the known technology, at least one embodiment of the present invention includes the following advantages:
In the embodiments of the present invention, after the position information of the obstacles around the robot is determined, the position information of the obstacles is converted into an obstacle image; the map data of the probability map is obtained and converted into a map image; then the obstacle image and the map image are matched through a convolutional neural network to obtain the coarse registration pose of the robot on the map image. In other words, in the embodiments of the present invention, the obstacle image and the map image obtained by the robot undergo convolution matching (that is, deep learning) through the convolutional neural network, which realizes the coarse registration of the laser data collected by the robot with the probability map; that is, coarse registration through the convolutional neural network avoids the computational complexity of brute-force matching, enlarges the registration range, and improves the accuracy of coarse registration.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a laser coarse registration method provided by an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a LaserCNN layer in a convolutional neural network provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a pose layer in a convolutional neural network provided by an embodiment of the present invention;
FIG. 4 is another flowchart of a laser coarse registration method provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a laser coarse registration device provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a first conversion module provided by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a second conversion module provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a matching processing module provided by an embodiment of the present invention;
FIG. 9 is another schematic structural diagram of a laser coarse registration device provided by an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an update module provided by an embodiment of the present invention.
To make the above objects, features and advantages of the present disclosure more comprehensible, the present disclosure is described in further detail below with reference to the drawings and specific embodiments.
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
In practical applications, due to the limitations of the map representation and the nonlinear solver, registration succeeds only when the initial value of the nonlinear optimization is close to the true value, and the nonlinear-optimization-based registration method is therefore demanding on the initial value. Coarse matching with discretized poses is usually used: the possible poses are first discretized, brute-force matching is then performed, and finally the pose is computed by nonlinear optimization based on the brute-force matching result. Because the brute-force matching in coarse matching increases the computational complexity, the matching range is small, which reduces the accuracy of the computed pose. The inventors therefore noticed that how to improve the accuracy of coarse registration between the laser data and the map while reducing the computational complexity of matching is a technical problem to be solved.
Referring to FIG. 1, a flowchart of a laser coarse registration method provided by an embodiment of the present invention; the method is applied to a robot and may specifically include the following steps:
Step 101: determining position information of obstacles around the robot;
In this step, the robot first collects multiple beams of laser data through its own lidar, and then determines the position information of the obstacles from the laser data.
Suppose the robot collects N beams of laser data; the angle of each beam is θ_i and the depth of each beam is d_i, where i is a natural number from 1 to N.
In this embodiment, the laser data are used to determine the positions of the obstacles relative to the robot. The position information of an obstacle is represented by coordinate points (x_i, y_i), where the values of x_i and y_i are determined by the formulas x_i = cos(θ_i)*d_i and y_i = sin(θ_i)*d_i.
Step 102: converting the position information of the obstacles into an obstacle image;
In this step, the position information of the obstacles is first converted to obtain the pixel values of the obstacle image; that is, the position information of the obstacles is converted into pixel values of the obstacle image. The conversion formula is:
Image(scale*x_i + x_0, scale*y_i + y_0) = 255
In this formula, 255 is the first pixel value, Image denotes the obstacle image, and scale is a scaling factor for the obstacle image, which can be set according to the measurement range of the laser data. The values of x_0 and y_0 are determined by the size of the obstacle image: x_0 is half the width of the obstacle image, and y_0 is half the height of the obstacle image. The value of all other pixels is 0, that is, the second pixel value.
Next, points whose pixel value is the first pixel value are determined as obstacle points on the obstacle image, and points whose pixel value is the second pixel value are determined as non-obstacle points on the obstacle image;
Finally, the obstacle points and non-obstacle points on the obstacle image are marked.
Step 103: obtaining map data of a pre-established probability map;
The probability map in this step is a pre-established probability map in the world coordinate system, built into the robot in advance.
Step 104: converting the map data into a map image;
In this step, the map data is first converted as follows according to a preset probability threshold, to obtain the pixel values of the corresponding pixels on the map image:
If the probability value in the probability map exceeds the preset threshold, M(x, y) > 0.5, the coordinate is converted according to the formula Image'(scale'*x, scale'*y) = 255 (that is, the first pixel value); in other words, the map data is converted into the corresponding pixel values according to this formula. Here Image' denotes the map image, and scale' is a scaling factor, which can be set according to the measurement range of the laser data; the pixel value of all other points is 0 (that is, the second pixel value).
It should be noted that Image' and scale' are used in this formula only to distinguish them from Image and scale in the preceding formula.
Next, points whose pixel value is the first pixel value are determined as obstacle points on the map image, and points whose pixel value is the second pixel value are determined as non-obstacle points on the map image;
That is, in this embodiment, after the map data is converted into the corresponding pixel values on the map image, each pixel value is examined, points whose pixel value is the first pixel value are determined as obstacle points on the map image, and points whose pixel value is the second pixel value are determined as non-obstacle points on the map image.
Finally, the obstacle points and non-obstacle points on the map image are marked, to obtain a map image including the obstacle points and non-obstacle points.
Step 105: matching the obstacle image and the map image using a convolutional neural network to obtain the coarse registration pose of the robot on the map image.
The matching process in this step includes:
First, the obstacle image and the map image are respectively input into the LaserCNN layer of the convolutional neural network for convolution matching, to obtain a first feature matrix and a second feature matrix; next, the first feature matrix and the second feature matrix are input into the Pose layer of the convolutional neural network for concatenation and flattening, to obtain a one-dimensional feature vector; finally, the one-dimensional feature vector is passed through fully connected layers in three successive dimensions, and the result is input into a final fully connected layer for processing, to obtain a three-dimensional feature vector, which is the coarse registration pose of the robot on the map image.
In this embodiment, the convolutional neural network is exemplified by a scan-to-map (Scan2Map) network, which aligns a laser frame to the map; the Scan2Map network includes a laser convolutional neural network (LaserCNN) layer and a pose layer. A schematic of the LaserCNN layer is shown in FIG. 2, which is a schematic structural diagram of a LaserCNN layer in a convolutional neural network provided by an embodiment of the present invention; a schematic of the pose layer is shown in FIG. 3, which is a schematic structural diagram of a pose layer in a convolutional neural network provided by an embodiment of the present invention.
As shown in FIG. 2, the LaserCNN layer is divided into L1, L2, L3, L4, L5 and L6: L1 (conv1) is a convolution layer consisting of 32 3×3 convolution kernels with a stride of 1; L2 (pool1) is an average pooling layer of size 2; L3 (conv2) is a convolution layer consisting of 64 3×3 convolution kernels with a stride of 1; L4 (pool2) is an average pooling layer of size 2; L5 (conv3) is a convolution layer consisting of 128 3×3 convolution kernels with a stride of 1; L6 (pool3) is an average pooling layer of size 2.
As shown in FIG. 3, the pose layer is divided into L1, L2, L3, L4, L5 and L6: the L1 layer concatenates the outputs of the two LaserCNN branches (laser1 and laser2); the L2 layer flattens the L1 output into a one-dimensional layer (Flatten); L3 is a fully connected layer of dimension 128 (fc1: Dense); L4 is a fully connected layer of dimension 64 (fc2: Dense); L5 is a fully connected layer of dimension 16 (fc3: Dense); L6 (pose: Dense) outputs a 3-vector representing the transformation (x, y, theta) between the two inputs laser1 and laser2.
For ease of understanding, an application example is described below; see FIG. 3 for details.
Suppose the two lidar images laser1 and laser2 currently input into the convolutional neural network have a resolution of 512×512; the matrix fed into the network is then 512×512×1 (×1 because a lidar image has a single channel). After the two inputs pass through the same LaserCNN layer, each yields a 64×64×128 feature matrix. It should be noted that FIG. 3 takes a single LaserCNN layer (Model) as an example.
The Pose layer first concatenates the two feature matrices into 64×64×256, then flattens them into a one-dimensional feature vector of length 1,048,576 (64×64×256 = 1,048,576). After three fully connected layers of dimensions 128, 64 and 16 (fc1: Dense, fc2: Dense and fc3: Dense), the final fully connected layer (pose: Dense) directly produces a 3-dimensional vector; this vector represents the coarse pose, denoted (x, y, theta), into which the two inputs laser1 and laser2 are transformed by the convolutional neural network.
Of course, taking the coarse pose (x, y, theta) output by the convolutional neural network to be (x_c, y_c, θ_c), post-processing is still required after the coarse pose is obtained: the coarse registration pose is (x_c/scale − x_0, y_c/scale − y_0, θ_c). That is, by however many times a coordinate point was magnified in the earlier stage, the coordinate point must subsequently be reduced by the corresponding factor.
In the embodiment of the present invention, after the position information of the obstacles around the robot is determined, the position information of the obstacles is converted into an obstacle image; the map data of the probability map is obtained and converted into a map image; then the obstacle image and the map image are matched through a convolutional neural network to obtain the coarse registration pose of the robot on the map image. In other words, the obstacle image and the map image obtained by the robot undergo convolution matching (that is, deep learning) through the convolutional neural network, which realizes the coarse registration of the laser data collected by the robot with the probability map; that is, coarse registration through the convolutional neural network avoids the computational complexity of brute-force matching, enlarges the registration range, and improves the accuracy of coarse registration.
Optionally, in another embodiment based on the above embodiment:
Referring to FIG. 4, another flowchart of a laser coarse registration method provided by an embodiment of the present invention. Building on the embodiment of FIG. 1 above, this embodiment may further perform nonlinear-optimization registration on the coarse registration pose to obtain a precise pose, and automatically update the probability map according to the precise pose. The method specifically includes:
Step 401: determining position information of obstacles around the robot;
Step 402: converting the position information of the obstacles into an obstacle image;
Step 403: obtaining map data of a pre-established probability map;
Step 404: converting the map data into a map image;
Step 405: matching the obstacle image and the map image through a convolutional neural network to obtain the coarse registration pose of the robot on the map image;
Steps 401 to 405 are the same as steps 101 to 105; see the details above, which are not repeated here.
Step 406: performing precise registration on the coarse registration pose of the robot on the map image using a nonlinear-optimization-based registration method, to obtain the precise pose of the robot in the probability map;
In this step, the process of precisely registering the obtained coarse registration pose using the nonlinear-optimization-based registration method includes:
First, the sum of the probabilities in the probability map corresponding to the obstacles detected from the laser data is computed; when the obstacles in the laser data exactly coincide with the obstacles in the probability map, this probability sum is maximal.
Next, the coarse registration pose is continuously optimized; when the maximal probability sum is reached, the laser data and the probability map have been successfully registered.
Further, in lidar-based SLAM the probability map is represented by a conventional probability grid. The map is modeled as discrete cells, and each cell is assigned a probability value between 0 and 1: the closer the value is to 1, the more likely the cell is an obstacle; the closer it is to 0, the more likely it is not an obstacle.
Then, the lidar sensor can indirectly obtain the distance from the robot to an obstacle by recording the time difference between emitting a laser pulse and receiving its return after it hits an obstacle.
Finally, by registering the obstacles measured from the laser data against the probability map, the pose of the robot on the probability map can be obtained.
It should be noted that, in this embodiment, precisely registering the obtained coarse registration pose with a nonlinear-optimization-based registration method is a well-known technique and is not described again.
Step 407: updating the probability map according to the precise pose;
In this step, the positions of the obstacles around the robot on the probability map are first calculated according to the precise pose;
Then, according to these positions, it is judged whether the corresponding points on the probability map are obstacles, that is, whether the corresponding points on the probability map contain obstacles scanned by the robot this time. If so, the probability values of the corresponding points on the probability map are updated with the preset hit probability value; if not, they are updated with the preset miss probability value.
That is, in this embodiment, for the update of obstacles (which may be understood as obstacle points), an obstacle point is the probability cell corresponding to (cos(θ_f)*x_i + x_f, sin(θ_f)*y_i + y_f). In this embodiment, the obstacle points on the map can be updated by the following formula:
odds(M_new) = odds(M_old)*odds(P_hit)
As the above formula shows, the original occupancy odds of the obstacle, odds(M_old), are multiplied by the occupancy odds of the laser data hitting an obstacle, odds(P_hit), to obtain the occupancy odds of the new obstacle, odds(M_new).
In this formula, P_hit is the preset hit probability, that is, the preset probability of the laser data hitting an obstacle. Here odds(P_hit) denotes the obstacle occupancy odds, abbreviated odds(P), that is, the ratio of the probability of being an obstacle to the probability of not being an obstacle, which can be set by the formula
odds(P) = P/(1 − P)
where P is the probability of an obstacle.
For the update of non-obstacle points: a non-obstacle point lies on the line segment from the origin of the laser data to (cos(θ_f)*x_i + x_f, sin(θ_f)*y_i + y_f), and can specifically be updated by the following formula:
odds(M_new) = odds(M_old)*odds(P_miss)
where P_miss is the preset miss probability, that is, the preset probability of the laser data not hitting an obstacle; odds(P_miss) denotes the non-obstacle occupancy odds, computed from the probability of not being an obstacle relative to the probability of being one.
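Building on the single-cell odds update sketched earlier, the following sketch applies the whole update for one registered scan: the cell a beam ends in is updated with P_hit, and the cells along the segment from the laser origin to that endpoint are updated with P_miss. The full 2D rigid transform, the simple line-stepping traversal, and the P_hit/P_miss values are illustrative assumptions:

```python
import numpy as np

def update_map_with_scan(M, pose, points, p_hit=0.55, p_miss=0.49):
    """Update the probability grid M (values in (0, 1)) with one scan.
    pose = (x_f, y_f, theta_f) is the precise pose in map cells; points
    holds the laser endpoints (x_i, y_i) in the robot frame."""
    x_f, y_f, th = pose
    c, s = np.cos(th), np.sin(th)
    odds = lambda p: p / (1.0 - p)

    def apply(ix, iy, p):
        if 0 <= iy < M.shape[0] and 0 <= ix < M.shape[1]:
            o = odds(M[iy, ix]) * odds(p)  # odds(M_new) = odds(M_old)*odds(P)
            M[iy, ix] = o / (1.0 + o)

    for x_i, y_i in points:
        ex = c * x_i - s * y_i + x_f       # beam endpoint in map coordinates
        ey = s * x_i + c * y_i + y_f
        n = int(max(abs(ex - x_f), abs(ey - y_f))) + 1
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            apply(int(round(x_f + t * (ex - x_f))),
                  int(round(y_f + t * (ey - y_f))), p_miss)  # crossed (free) cells
        apply(int(round(ex)), int(round(ey)), p_hit)         # hit (obstacle) cell
    return M
```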
In the embodiment of the present invention, after the position information of the obstacles around the robot is determined, the position information of the obstacles is converted into an obstacle image; the map data of the probability map is obtained and converted into a map image; then the obstacle image and the map image are matched through a convolutional neural network to obtain the coarse registration pose of the robot on the map image. In other words, the obstacle image and the map image obtained by the robot undergo convolution matching (that is, deep learning) through the convolutional neural network, which realizes the coarse registration of the laser data collected by the robot with the probability map; that is, coarse registration through the convolutional neural network avoids the computational complexity of brute-force matching, enlarges the registration range, and improves the accuracy of coarse registration.
It should be noted that, for simplicity of description, the method embodiments are all expressed as combinations of a series of actions, but those skilled in the art should know that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all optional embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to FIG. 5, a schematic structural diagram of a laser coarse registration device provided by an embodiment of the present invention; the device specifically includes: a first determining module 501, a first conversion module 502, an acquisition module 503, a second conversion module 504 and a matching processing module 505, where:
the first determining module 501 is configured to determine position information of obstacles around a robot;
the first conversion module 502 is configured to convert the position information of the obstacles into an obstacle image;
the acquisition module 503 is configured to obtain map data of a pre-established probability map;
the second conversion module 504 is configured to convert the map data into a map image;
the matching processing module 505 is configured to match the obstacle image and the map image through a convolutional neural network to obtain the coarse registration pose of the robot on the map image.
Optionally, in another embodiment based on the above embodiment, the first conversion module 502 includes: a first conversion processing module 601, a second determination module 602 and a first marking module 603; its schematic structural diagram is shown in FIG. 6, where:
the first conversion processing module 601 is configured to convert the position information of the obstacles to obtain the corresponding pixel values;
the second determination module 602 is configured to determine points whose pixel value is the first pixel value as obstacle points on the obstacle image, and determine points whose pixel value is the second pixel value as non-obstacle points on the obstacle image;
the first marking module 603 is configured to mark the obstacle points and non-obstacle points on the obstacle image.
Optionally, in another embodiment based on the above embodiment, the second conversion module 504 includes: a second conversion processing module 701, a third determination module 702 and a second marking module 703; its schematic structural diagram is shown in FIG. 7, where:
the second conversion processing module 701 is configured to convert the map data according to a preset probability threshold to obtain the pixel values of the map image;
the third determination module 702 is configured to determine points whose pixel value is the first pixel value as obstacle points on the map image, and determine points whose pixel value is the second pixel value as non-obstacle points on the map image;
the second marking module 703 is configured to mark the obstacle points and non-obstacle points on the map image.
Optionally, in another embodiment based on the above embodiment, the matching processing module 505 includes: a first convolution processing module 801, a second convolution processing module 802 and a fully connected processing module 803; its schematic structural diagram is shown in FIG. 8, where:
the first convolution processing module 801 is configured to input the obstacle image and the map image respectively into the laser convolutional neural network (LaserCNN) layer of the convolutional neural network for convolution matching, to obtain a first feature matrix and a second feature matrix;
the second convolution processing module 802 is configured to input the first feature matrix and the second feature matrix into the pose layer of the convolutional neural network for concatenation and flattening, to obtain a one-dimensional feature vector;
the fully connected processing module 803 is configured to pass the one-dimensional feature vector through fully connected layers in three successive dimensions and input the resulting vector into a final fully connected layer for processing, to obtain a three-dimensional feature vector, which is the coarse registration pose of the robot on the map image.
Optionally, in another embodiment based on the above embodiments, the device may further include: a fine registration processing module 901 and an update module 902; the schematic structural diagram is shown in FIG. 9, which builds on the embodiment described in FIG. 5, where:
the fine registration processing module 901 is configured to precisely register the coarse registration pose of the robot on the map image using a nonlinear-optimization-based registration method, to obtain the precise pose of the robot in the probability map;
the update module 902 is configured to update the probability map according to the precise pose.
Optionally, in another embodiment based on the above embodiment, the update module 902 includes: a position calculation module 1001, a judgment module 1002, a first position update module 1003 and a second position update module 1004; its schematic structural diagram is shown in FIG. 10, where:
the position calculation module 1001 is configured to calculate the positions of the obstacles around the robot on the probability map according to the precise pose;
the judgment module 1002 is configured to judge, according to the positions, whether corresponding points on the probability map are obstacle points;
the first position update module 1003 is configured to update the probability value of the corresponding point on the probability map with the preset hit probability value when the judgment module determines that the corresponding point on the probability map is an obstacle point;
the second position update module 1004 is configured to update the probability value of the corresponding point on the probability map with the preset miss probability value when the judgment module determines that the corresponding point on the probability map is a non-obstacle point.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts of the embodiments, reference may be made to each other.
In the embodiments of the present invention, a deep learning method uses the corresponding images converted from the obstacle positions and the probability map to perform coarse matching and compute the pose of the robot, which solves the computational complexity problem caused by brute-force matching in previous coarse matching, enlarges the registration range, and improves the accuracy of coarse registration.
Optionally, an embodiment of the present invention further provides a mobile terminal, including: a memory, a processor, and a computer program stored on the memory and capable of running on the processor; when executed by the processor, the computer program implements each process of the laser coarse registration method embodiments above and can achieve the same technical effect, which is not repeated here to avoid redundancy.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, each process of the laser coarse registration method embodiments above is implemented with the same technical effect, which is not repeated here to avoid redundancy. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a device, or a computer program product. Therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing terminal device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although optional embodiments of the embodiments of the present invention have been described, those skilled in the art, once apprised of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the optional embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should also be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or terminal device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or terminal device that includes the element.
The laser coarse registration method, device, mobile terminal and storage medium provided by the embodiments of the present disclosure have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present disclosure, and the descriptions of the above embodiments are only intended to help understand the method of the present disclosure and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope based on the idea of the present disclosure. In summary, the contents of this specification should not be construed as limiting the present disclosure.
Claims (10)
- A laser coarse registration method, comprising: determining position information of obstacles around a robot; converting the position information of the obstacles into an obstacle image; obtaining map data of a pre-established probability map; converting the map data into a map image; and performing matching processing on the obstacle image and the map image through a convolutional neural network to obtain a coarse registration pose of the robot on the map image.
- The method according to claim 1, wherein converting the position information of the obstacles into the obstacle image comprises: performing conversion processing on the position information of the obstacles to obtain pixel values of the obstacle image; determining points whose pixel value is a first pixel value as obstacle points on the obstacle image, and determining points whose pixel value is a second pixel value as non-obstacle points on the obstacle image; and marking the obstacle points and non-obstacle points on the obstacle image.
- The method according to claim 1, wherein converting the map data into the map image comprises: performing conversion processing on the map data according to a preset probability threshold to obtain pixel values of the map image; determining points whose pixel value is the first pixel value as obstacle points on the map image, and determining points whose pixel value is the second pixel value as non-obstacle points on the map image; and marking the obstacle points and non-obstacle points on the map image.
- The method according to any one of claims 1 to 3, wherein performing matching processing on the obstacle image and the map image through the convolutional neural network to obtain the coarse registration pose of the robot on the map image comprises: inputting the obstacle image and the map image respectively into laser convolutional neural network LaserCNN layers of the convolutional neural network for convolution matching processing to obtain a first feature matrix and a second feature matrix; inputting the first feature matrix and the second feature matrix into pose Pose layers of the convolutional neural network for stacking and flattening processing to obtain a one-dimensional feature vector; and performing fully connected processing on the one-dimensional feature vector in three dimensions, and inputting the resulting vector into a final fully connected layer for processing to obtain a three-dimensional feature vector, the three-dimensional feature vector being the coarse registration pose of the robot on the map image.
- The method according to any one of claims 1 to 3, further comprising: performing fine registration on the coarse registration pose of the robot on the map image by using a nonlinear-optimization-based registration method to obtain a precise pose of the robot in the probability map; and updating the probability map according to the precise pose.
- The method according to claim 5, wherein updating the probability map according to the precise pose comprises: calculating positions of the obstacles around the robot on the probability map according to the precise pose; and judging, according to the positions, whether the corresponding points on the probability map are obstacle points; if so, updating the probability values of the corresponding points on the probability map with a preset hit probability value, and if not, updating the probability values of the corresponding points on the probability map with a preset miss probability value.
- A laser coarse registration apparatus, comprising: a first determining module configured to determine position information of obstacles around a robot; a first conversion module configured to convert the position information of the obstacles into an obstacle image; an obtaining module configured to obtain map data of a pre-established probability map; a second conversion module configured to convert the map data into a map image; and a matching processing module configured to perform matching processing on the obstacle image and the map image through a convolutional neural network to obtain a coarse registration pose of the robot on the map image.
- The apparatus according to claim 7, further comprising: a fine registration processing module configured to perform fine registration on the coarse registration pose of the robot on the map image by using a nonlinear-optimization-based registration method to obtain a precise pose of the robot in the probability map; and an updating module configured to update the probability map according to the precise pose.
- A mobile terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the laser coarse registration method according to any one of claims 1 to 6.
- A non-transitory computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the laser coarse registration method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/603,838 US20220198688A1 (en) | 2019-04-17 | 2020-04-02 | Laser coarse registration method, device, mobile terminal and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910310449.0A CN110189366B (zh) | 2019-04-17 | 2019-04-17 | Laser coarse registration method, device, mobile terminal and storage medium
CN201910310449.0 | 2019-04-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020211655A1 (zh) | 2020-10-22
Family
ID=67714657
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/082981 WO2020211655A1 (zh) | 2020-04-02 | Laser coarse registration method, device, mobile terminal and storage medium
Country Status (3)
Country | Link |
---|---|
US (1) | US20220198688A1 (zh) |
CN (1) | CN110189366B (zh) |
WO (1) | WO2020211655A1 (zh) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110189366B (zh) * | 2019-04-17 | 2021-07-06 | 北京迈格威科技有限公司 | Laser coarse registration method, device, mobile terminal and storage medium
CN111578946A (zh) * | 2020-05-27 | 2020-08-25 | 杭州蓝芯科技有限公司 | Deep-learning-based relocalization method and device for laser-navigated AGVs
CN114067555B (zh) * | 2020-08-05 | 2023-02-17 | 北京万集科技股份有限公司 | Registration method and device for multi-base-station data, server and readable storage medium
US20220097690A1 (en) * | 2020-09-30 | 2022-03-31 | Toyota Motor Engineering & Manufacturing North America, Inc. | Optical sense-compute solution for real-time navigation involving multiple vehicles
CN112150491B (zh) * | 2020-09-30 | 2023-08-18 | 北京小狗吸尘器集团股份有限公司 | Image detection method, device, electronic equipment and computer-readable medium
CN112612034B (zh) * | 2020-12-24 | 2023-10-13 | 长三角哈特机器人产业技术研究院 | Pose matching method based on laser frames and probability map scanning
CN113375683A (zh) * | 2021-06-10 | 2021-09-10 | 亿嘉和科技股份有限公司 | Real-time updating method for a robot environment map
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5215740B2 (ja) * | 2008-06-09 | 2013-06-19 | 株式会社日立製作所 | Mobile robot system
US20140323148A1 (en) * | 2013-04-30 | 2014-10-30 | Qualcomm Incorporated | Wide area localization from slam maps
CN104897161B (zh) * | 2015-06-02 | 2018-12-14 | 武汉大学 | Indoor planar map drawing method based on laser ranging
KR101775114B1 (ko) * | 2016-01-25 | 2017-09-05 | 충북대학교 산학협력단 | System and method for position recognition and map building of a mobile robot
CN105892461B (zh) * | 2016-04-13 | 2018-12-04 | 上海物景智能科技有限公司 | Method and system for matching and recognizing the environment where a robot is located against a map
CN106325270B (zh) * | 2016-08-12 | 2019-05-31 | 天津大学 | Intelligent vehicle navigation method based on perception and autonomously computed positioning and navigation
KR20180023608A (ko) * | 2016-08-26 | 2018-03-07 | 비전세미콘 주식회사 | Operation system for automated guided vehicles
WO2018053175A1 (en) * | 2016-09-14 | 2018-03-22 | Nauto Global Limited | Systems and methods for near-crash determination
US10545029B2 (en) * | 2016-12-30 | 2020-01-28 | DeepMap Inc. | Lane network construction using high definition maps for autonomous vehicles
CN106780484A (zh) * | 2017-01-11 | 2017-05-31 | 山东大学 | Inter-frame robot pose estimation method based on convolutional neural network feature descriptors
CN107390681B (zh) * | 2017-06-21 | 2019-08-20 | 华南理工大学 | Real-time mobile robot positioning method based on lidar and map matching
CN107462892B (zh) * | 2017-07-28 | 2021-11-30 | 深圳市远弗科技有限公司 | Simultaneous localization and mapping method for mobile robots based on multiple ultrasonic sensors
US11427218B2 (en) * | 2017-08-04 | 2022-08-30 | Sony Corporation | Control apparatus, control method, program, and moving body
CN107908185A (zh) * | 2017-10-14 | 2018-04-13 | 北醒(北京)光子科技有限公司 | Autonomous global relocalization method for a robot, and robot
CN108242079B (zh) * | 2017-12-30 | 2021-06-25 | 北京工业大学 | VSLAM method based on multi-feature visual odometry and a graph optimization model
CN108256574B (zh) * | 2018-01-16 | 2020-08-11 | 广东省智能制造研究所 | Robot positioning method and device
CN108896994A (zh) * | 2018-05-11 | 2018-11-27 | 武汉环宇智行科技有限公司 | Positioning method and equipment for driverless vehicles
CN109285220B (zh) * | 2018-08-30 | 2022-11-15 | 阿波罗智能技术(北京)有限公司 | Method, device, equipment and storage medium for generating a three-dimensional scene map
CN108873001B (zh) * | 2018-09-17 | 2022-09-09 | 江苏金智科技股份有限公司 | Method for accurately evaluating robot positioning accuracy
- 2019
- 2019-04-17 CN CN201910310449.0A patent/CN110189366B/zh active Active
- 2020
- 2020-04-02 US US17/603,838 patent/US20220198688A1/en not_active Abandoned
- 2020-04-02 WO PCT/CN2020/082981 patent/WO2020211655A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170169313A1 (en) * | 2015-12-14 | 2017-06-15 | Samsung Electronics Co., Ltd. | Image processing apparatus and method based on deep learning and neural network learning |
CN108332758A (zh) * | 2018-01-26 | 2018-07-27 | 上海思岚科技有限公司 | Corridor recognition method and device for a mobile robot
CN109579825A (zh) * | 2018-11-26 | 2019-04-05 | 江苏科技大学 | Robot positioning system and method based on binocular vision and convolutional neural networks
CN109459039A (zh) * | 2019-01-08 | 2019-03-12 | 湖南大学 | Laser positioning and navigation system and method for a pharmaceutical transport robot
CN110189366A (zh) * | 2019-04-17 | 2019-08-30 | 北京迈格威科技有限公司 | Laser coarse registration method, device, mobile terminal and storage medium
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113432533A (zh) * | 2021-06-18 | 2021-09-24 | 北京盈迪曼德科技有限公司 | Robot positioning method and device, robot, and storage medium
CN113432533B (zh) * | 2021-06-18 | 2023-08-15 | 北京盈迪曼德科技有限公司 | Robot positioning method and device, robot, and storage medium
CN114903384A (zh) * | 2022-06-13 | 2022-08-16 | 苏州澜途科技有限公司 | Working-scene map region segmentation method and device for a cleaning robot
CN116660916A (zh) * | 2023-05-26 | 2023-08-29 | 广东省农业科学院设施农业研究所 | Positioning method, mapping method and electronic device for an orchard mobile robot
CN116660916B (zh) * | 2023-05-26 | 2024-02-02 | 广东省农业科学院设施农业研究所 | Positioning method, mapping method and electronic device for an orchard mobile robot
Also Published As
Publication number | Publication date |
---|---|
CN110189366A (zh) | 2019-08-30 |
CN110189366B (zh) | 2021-07-06 |
US20220198688A1 (en) | 2022-06-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020211655A1 (zh) | Laser coarse registration method, device, mobile terminal and storage medium | |
CN112859859B (zh) | Dynamic grid map updating method based on three-dimensional obstacle voxel object mapping | |
KR102143108B1 (ko) | Lane recognition modeling method, apparatus, storage medium and device, and recognition method, apparatus, storage medium and device | |
WO2022017131A1 (zh) | Point cloud data processing method, and intelligent driving control method and device | |
JP7300550B2 (ja) | Method and device for building a landmark map based on visual landmarks | |
CN110276826A (zh) | Construction method and system for a power grid operation environment map | |
JP7440005B2 (ja) | High-definition map creation method, apparatus, device and computer program | |
WO2021203807A1 (zh) | Three-dimensional object detection framework based on multi-source data knowledge transfer | |
CN109325979B (zh) | Robot loop closure detection method based on deep learning | |
CN109934857B (zh) | Loop closure detection method based on a convolutional neural network and ORB features | |
CN104615880B (zh) | Fast ICP method for matching three-dimensional lidar point clouds | |
CN110570449A (zh) | Positioning and mapping method based on millimeter-wave radar and visual SLAM | |
Ye et al. | Keypoint-based LiDAR-camera online calibration with robust geometric network | |
CN116188893A (zh) | BEV-based image detection model training and target detection method and device | |
CN116246119A (zh) | 3D target detection method, electronic device and storage medium | |
CN116222577B (zh) | Loop closure detection method, training method, system, electronic device and storage medium | |
CN115457492A (зh) | Target detection method and device, computer equipment and storage medium | |
CN114998276A (zh) | Real-time robot dynamic obstacle detection method based on three-dimensional point clouds | |
CN113724387A (zh) | Map construction method fusing laser and camera | |
CN109636897B (зh) | Octomap optimization method based on improved RGB-D SLAM | |
Wen et al. | CAE-RLSM: Consistent and efficient redundant line segment merging for online feature map building | |
Kawanishi et al. | Parallel line-based structure from motion by using omnidirectional camera in textureless scene | |
CN117193278B (зh) | Method, device, computer equipment and storage medium for dynamic edge-following path generation | |
CN113836251B (зh) | Cognitive map construction method, device, equipment and medium | |
WO2022017129A1 (зh) | Target object detection method and device, electronic device, and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20791032 Country of ref document: EP Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20791032 Country of ref document: EP Kind code of ref document: A1 |