CN111486855A - Indoor two-dimensional semantic grid map construction method with object navigation points - Google Patents

Indoor two-dimensional semantic grid map construction method with object navigation points

Info

Publication number
CN111486855A
Authority
CN
China
Prior art keywords
grid map
map
coordinate system
environment
semantic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010348297.6A
Other languages
Chinese (zh)
Other versions
CN111486855B (en)
Inventor
蒋林
聂文康
向超
马先重
朱建阳
雷斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Wuhan University of Science and Technology WHUST
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN202010348297.6A priority Critical patent/CN111486855B/en
Publication of CN111486855A publication Critical patent/CN111486855A/en
Application granted granted Critical
Publication of CN111486855B publication Critical patent/CN111486855B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • G01C21/32Structuring or formatting of map data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The invention provides a method for constructing an indoor two-dimensional semantic grid map with object navigation points, which comprises the following steps of: s1, constructing an environment grid map and acquiring the real-time position of the robot; step S2, completing the identification of the object in the environment; s3, acquiring a main body part of the target object to obtain the coordinates of the target object in a camera coordinate system; step S4, coordinate conversion is carried out to obtain the coordinates of the identified object under a grid map coordinate system, and an object grid map is established; step S5, merging the object grid map and the environment grid map to obtain a semantic grid map reflecting environment semantic information; and step S6, extracting the navigation points of the target object in the semantic grid map to obtain the indoor two-dimensional semantic grid map with the navigation points. The method of the invention enables the original grid map to contain semantic information and object navigation points, and the semantic grid map provides richer features for robot positioning and navigation and can complete more advanced and complex tasks.

Description

Indoor two-dimensional semantic grid map construction method with object navigation points
Technical Field
The invention belongs to the field of mobile robot positioning and navigation, and particularly relates to a method for constructing an indoor two-dimensional semantic grid map with object navigation points.
Background
Although the traditional SLAM method can meet the requirements of robot positioning, mapping and navigation, the output topological map or grid map can only express topological and geometric information about the environment; because these maps lack the extraction and description of environment semantic information, the robot cannot truly understand the environment and cannot complete high-level complex tasks. In the intelligent development of robots, deep learning has become an indispensable tool: target recognition and detection can provide more accurate and stable semantic information for the robot, allowing it to perceive and understand the environment as people do, supplementing information that a grid map cannot express and paving the way for robot positioning and navigation. On the basis of the grid map and positioning information output by a simultaneous localization and mapping algorithm, the invention provides an indoor two-dimensional semantic grid map construction method with object navigation points, so that the original grid map contains semantic information while the object navigation points are extracted at the same time, laying a foundation for the robot to complete advanced and complex tasks and providing richer information for robot positioning and navigation.
Disclosure of Invention
The invention provides an indoor two-dimensional semantic grid map construction method with object navigation points, aiming at the defect that a map built by the traditional SLAM method lacks environment semantic information, so that the original grid map contains semantic information as well as object navigation points. The semantic grid map provides richer features for robot positioning and navigation, so that higher-level and more complex tasks can be completed.
The invention adopts the following specific technical scheme for solving the problems in the prior art:
an indoor two-dimensional semantic grid map construction method with object navigation points is characterized by comprising the following steps:
s1, constructing an environment grid map by means of a robot experiment platform and a Gmapping algorithm, and acquiring the real-time position of the robot;
step S2, acquiring color image and depth image data by using a depth camera, inputting the color image into an SSD detection and identification method, acquiring an object detection frame and a category in the image, and completing the identification of an object in the environment;
s3, removing the background and other objects except the target object in the color image detection frame by using a Grab-cut algorithm, acquiring the main body part of the target object, and acquiring the coordinates of the target object in a camera coordinate system;
step S4, after the object point cloud coordinates under the sensor coordinates are obtained, coordinate conversion is carried out to obtain the coordinates of the identified object under the grid map coordinate system, and an object grid map is established according to the obtained coordinates of each target object;
step S5, after the robot finishes traversing the environment, obtaining an object grid map and an environment grid map, and then combining the two grid maps to obtain a semantic grid map reflecting environment semantic information;
and step S6, after the semantic grid map is created, extracting the navigation points of the target object in the map to obtain the indoor two-dimensional semantic grid map with the navigation points.
The specific steps of step S1 are as follows:
step S101: firstly, a mobile robot experiment platform loaded with a laser radar is well built, and environment configuration required by a Gmapping algorithm map building is well configured;
step S102: and establishing an environment grid map according to the laser radar data, traversing the robot in the environment, simultaneously acquiring odometer data, establishing the map by using a Gmapping algorithm, simultaneously acquiring the real-time position of the robot, and obtaining the environment grid map after the robot traverses.
The target detection method in step S2 uses the SSD method for object detection to obtain the semantic information of objects, and the sensor used for visual recognition is Kinect v2.
On the premises that the calibration of the Kinect v2 sensor and the registration of the RGB image and the depth image have been completed in advance, that the required data set has been made and the training of the SSD target detection and recognition model has been completed, and that the objects to be recognized in the environment are chosen as objects that do not move in the environment, the step S2 includes the following specific steps:
step S201: acquiring color image and depth image data by using Kinect v2;
step S202: and inputting the color image into an SSD detection and identification method, and acquiring an object detection frame and a class in the image.
The specific steps of the step S3 of acquiring point cloud coordinates in the object camera coordinate system are as follows:
step S301: removing the background and other objects except the target object in the color image detection frame obtained in the step S2 by using a Grab-cut algorithm, and segmenting a main body part of the identification object;
step S302: obtaining the depth information of the main body part of the divided identification object through the depth picture obtained by the Kinect v2 under the current frame;
step S303: combining camera parameters obtained by camera calibration, converting coordinates of the identification object in the current detection frame picture into a camera coordinate system, and obtaining point cloud coordinates of the identification object at the current position, wherein the conversion process is as follows:
the point cloud (X_c, Y_c, Z_c) under the Kinect v2 camera coordinate system is obtained by equation (1):

    Z_c = d / s,   X_c = (u - c_x) · Z_c / f_x,   Y_c = (v - c_y) · Z_c / f_y        (1)

where d is the depth value at pixel coordinates (u, v) in the depth image, s is the depth map scaling factor, f_x and f_y are the camera focal lengths along the two image axes, c_x and c_y are the coordinates of the optical (aperture) center, and g denotes the number of pixels per unit physical size.
The specific steps of the coordinate system transformation of the identified object in step S4 are as follows:
step S401: converting the object point cloud coordinate into a robot coordinate system and then converting the object point cloud coordinate into a global map according to a transformation matrix between a camera coordinate system and a robot coordinate system and a transformation matrix between the robot coordinate system and the global coordinate system;
    (X_R, Y_R, Z_R, 1)^T = [ R_2  T_2 ; 0^T  1 ] · (X_C, Y_C, Z_C, 1)^T        (2)

in the formula (2), (X_C, Y_C, Z_C) are the coordinates of the recognized object in the camera coordinate system, (X_R, Y_R, Z_R) are the coordinates of the recognized object in the robot coordinate system, R_2 and T_2 are respectively the 3 x 3 rotation matrix and the 3 x 1 translation matrix between the Kinect v2 depth camera coordinate system and the mobile robot coordinate system, and 0^T is (0, 0, 0);
the conversion relation between the mobile robot coordinate system and the world coordinate system is formula (3):
    (X_w, Y_w, Z_w, 1)^T = [ R_3  T_3 ; 0^T  1 ] · (X_R, Y_R, Z_R, 1)^T        (3)

in the formula (3), (X_w, Y_w, Z_w) are the coordinates of the recognized object in the world coordinate system, R_3 and T_3 are respectively the 3 x 3 rotation matrix and the 3 x 1 translation matrix between the mobile robot coordinate system and the world coordinate system, and 0^T is (0, 0, 0);
step S402: converting the finally obtained point cloud coordinates of the identified object in the global coordinate system into the grid map coordinate system to obtain the coordinates (x_g, y_g) of the object in the grid map:

    x_g = ceil(x / r),   y_g = ceil(y / r)        (4)

In the formula (4), (x, y) are the coordinate values of a point in the real world coordinate system, (x_g, y_g) are the corresponding coordinate values of the point in the grid map coordinate system, r is the real-world side length corresponding to one grid cell, and ceil is the round-up (ceiling) operator.
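As an illustration of formula (4), a minimal Python sketch of the world-to-grid conversion is given below; the function name is illustrative, and the default resolution of 0.05 m corresponds to the grid resolution used in the embodiment:

import math

def world_to_grid(x, y, r=0.05):
    # (x, y): point in the real world coordinate system, in meters
    # r: real-world side length of one grid cell (0.05 m in the embodiment)
    # ceil matches the round-up operator of formula (4)
    xg = math.ceil(x / r)
    yg = math.ceil(y / r)
    return xg, yg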
The specific steps of step S5 are as follows:
step S501: before the object grid map is created, a blank object grid map of sufficient size is created; the grid resolutions adopted by the environment grid map and the object grid map are kept consistent, and the center of the blank map is taken as the origin of map building;
step S502: marking the coordinates of the identification object under the finally obtained grid map coordinate system in a blank grid map, and representing different objects by using different gray values to obtain an object grid map;
step S503: after traversing the environment, synthesizing the environment grid map and the object grid map; because the object grid map and the environment grid map are established synchronously and the grid resolutions of the two maps are kept consistent, the same object has the same size and direction in the two generated maps, which makes it convenient to merge them by aligning the poses of the two maps; the specific merging process is as follows:
Firstly, the position of the map center (the map-building origin) is obtained from the map file generated by the environment grid map built with Gmapping. The generated object grid map is in picture format and its center is the picture center; aligning the center of the environment grid map with the center of the object grid map while keeping the direction unchanged aligns the centers of the two maps. The maps are then synthesized: using OpenCV, the target object area of the object grid map (i.e., the non-white area) is traversed and read, and the color of that area is added at the corresponding position in the environment grid map, keeping the centers and poses of the two maps consistent throughout. After all pixel values in the object grid map have been traversed, the addition of the colored areas to the environment grid map is complete, and the finally generated environment grid map is the synthesized semantic grid map.
The specific steps of obtaining the semantic grid map with navigation points in the step S6 are as follows:
step S601: firstly, performing dilation and erosion processing on the generated object grid map to remove discrete single semantic points in the map, and connecting semantic points belonging to the same object so as to extract single objects from the semantic grid map;
step S602: selecting a position with few other objects around the target object as the navigation point, to prevent the robot from becoming trapped in an area with many obstacles;
step S603: obtaining the area occupied by a single identified object with the minimum enclosing rectangle method, and obtaining the main direction of the free area around the object with the image moment method;
step S604: on the connecting line between the center of the area occupied by the recognized object and the center of the free area, a suitable distance value is set according to the size of the robot, and a point on the connecting line close to the free-area end is selected as the navigation point position, with the direction of the navigation point facing the target object; the coordinates of these points are obtained, giving the semantic grid map with object navigation points.
The invention has the following advantages:
the invention enables the original grid map to contain semantic information, and the semantic grid map provides richer features for the positioning and navigation of the robot, thereby completing more advanced and complex tasks. The established two-dimensional semantic grid map is used for the repositioning process of the particle filtering method, and the matching degree of the semantic imitation laser data and the semantic grid map and the matching degree of the laser data and the barrier grid map are used for calculating the weight of the particles, so that the convergence of the particles is faster, and the repositioning process is more stable. The repositioning is to determine the possible area of the robot through semantic observation and reduce the number of particles required by repositioning. The method for extracting the navigation points selects the position where few other objects exist around the object as the navigation points, prevents the robot from being trapped in a region with more obstacles, and is more scientific compared with the method for directly taking the target object as the navigation points, so that the robot is safer and more reliable in the navigation process, and the planning efficiency is higher.
Drawings
FIG. 1 is a diagram of an SSD network model architecture;
FIG. 2 is a schematic diagram of a robot platform model and sensor coordinate system;
FIG. 3 is a frame of semantic simulated laser data generated from the identified environment;
FIG. 4 shows only the cluster of simulated laser data in FIG. 3 that is closest to the robot;
FIG. 5 is the constructed environment grid map;
FIG. 6 is the constructed object grid map;
FIG. 7 is the synthesized semantic grid map;
FIG. 8 is the object grid map after dilation and erosion;
FIG. 9 is a schematic diagram of navigation point extraction;
fig. 10 is a schematic diagram of the extraction result of the navigation pose of each object.
Detailed Description
In the map-building process, the Gmapping positioning information is synchronized with the color image and depth image data acquired by the Kinect v2. The color image is input to the SSD detection and identification method to obtain the object detection frames and categories in the image; the registration data are used to calculate, for each object in the color image, its corresponding position in the depth image; the acquired Kinect v2 depth data are converted into simulated laser data; the simulated laser data with semantic information in the camera frame are converted into the global map coordinate system; the category state of each grid cell in the semantic grid map is updated through Bayesian estimation; and the construction of the semantic grid map is finally completed. After the map is created, the navigation points are extracted to obtain the semantic grid map with navigation points.
Step one: by means of an LMS111 laser radar sensor and a mobile robot experiment platform, the simultaneous localization of the robot and the creation of the grid map are completed on the ROS platform using the Gmapping algorithm, and the position of the robot is acquired in real time;
in order to complete the construction of the semantic map, in addition to the requirement that the robot can sense semantic information in the environment, the robot is also required to clarify the current pose of the robot so as to convert data in a sensor coordinate system into a world coordinate system. And (3) the robot finishes the estimation of the self pose and the construction of an environment model through one or more sensors under the condition of no environment prior information by using a slam method. The Gmapping open source algorithm with input data of milemeter data and laser radar data is a typical laser radar slam method, the method is based on a RBPF particle filter algorithm, the positioning and graph building process is separated, the proposed distribution in the robot state estimation process is improved, the proposed distribution is more accurate, the number of particles is reduced, selective resampling is performed, and resampling is performed only when the weight change of the particles exceeds a threshold value through setting a certain threshold value, so that particle degradation is prevented. The Gmapping effectively utilizes the odometer data and the latest frame of laser radar observation to complete the prediction of the pose of the robot, matches the current observation with the partial map of the environment and resamples the current observation, completes the posterior state estimation of the robot, has higher positioning precision and realizes the pose acquisition of the semantic map construction of the robot.
The basic steps of establishing the grid map with the laser radar are as follows: the grid resolution of the constructed map is 0.05 meter; after the global coordinate conversion of all distance measurements of one frame of laser radar data is completed, the grid cells on the grid map corresponding to all detected obstacle position points are set to 1 to represent the occupied state, and the set of grid cells in the free state between each detected position point and the mobile robot is calculated with the Bresenham algorithm and set to 0 to represent the free state. Data are repeatedly acquired according to the pose states of the mobile robot at different moments and the sensor information, the state of each grid cell in the grid map coordinate system is updated according to the Bayesian estimation principle, and after the robot has traversed the working environment, whether an obstacle exists in a grid cell is judged by whether its probability value is greater than a set threshold value, completing the creation of the grid map.
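A minimal Python sketch of this occupancy update for one frame of obstacle points is given below; it assumes the points have already been converted to world coordinates, and the log-odds increments l_occ and l_free as well as the helper names are illustrative rather than values taken from the embodiment:

import math

def bresenham(x0, y0, x1, y1):
    # integer grid cells on the segment from (x0, y0) to (x1, y1), inclusive
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if x == x1 and y == y1:
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

def integrate_scan(grid, robot_xy, hit_points, r=0.05, l_occ=0.85, l_free=-0.4):
    # grid: 2D array of accumulated log-odds values (thresholded afterwards)
    # robot_xy: robot position in world coordinates; hit_points: obstacle points in world coordinates
    rx, ry = math.ceil(robot_xy[0] / r), math.ceil(robot_xy[1] / r)
    for px, py in hit_points:
        gx, gy = math.ceil(px / r), math.ceil(py / r)
        for fx, fy in bresenham(rx, ry, gx, gy)[:-1]:
            grid[fy, fx] += l_free   # cells between robot and obstacle: free evidence
        grid[gy, gx] += l_occ        # obstacle endpoint: occupied evidence
    return grid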
Step two: acquiring color image and depth image data, inputting the color image into an SSD detection and identification method, acquiring an object detection frame and a class in the image, and completing identification of an object in the environment;
the method comprises the steps of completing construction of a semantic map, firstly, enabling a robot to know semantic information in an environment, enabling the robot to detect an object existing in the environment and obtain the position of the object, wherein a network model for SSD target detection is utilized, a network structure diagram is shown in figure 1, and the object in the environment is detected.
The data set directly determines the detection and recognition performance of the network model produced by the final training. The object detection data set is made following the principles that the scene data should be moderate in size and broad in coverage, and that the object categories and quantities should be balanced. Scene pictures under multiple viewpoints, multiple distances and multiple brightness levels are collected, and pictures of the robot's actual working scenes are added, forming the data set for the object detection and recognition task. After the data set is made, the labeled data set is divided into a training set and a test set at a ratio of 7:3 for the subsequent model training and testing. To further enrich the data set and improve the generalization ability of the model, data enhancement operations such as color change, scale transformation and random cropping are performed on the data set. During training, the weights of the SSD network model can be initialized with network weights pre-trained on the ImageNet data set, and the indoor object detection data set is trained in GPU mode. The main relevant configuration parameters in the training process are: the number of pictures processed in each batch is 8; the optimization algorithm is stochastic gradient descent (SGD); the base learning rate is set to 0.001 and is dynamically adjusted during training; the learning momentum is set to 0.9; the weight decay value is set to 0.0005; the maximum number of iterations is 60000. The training parameters may also vary according to the network structure and the data set.
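For reference, the training hyperparameters listed above can be collected into a single configuration object; the sketch below is illustrative only, and the key names are hypothetical since they depend on the SSD training framework actually used:

train_config = {
    "batch_size": 8,                         # pictures processed per batch
    "optimizer": "SGD",                      # stochastic gradient descent
    "base_lr": 0.001,                        # dynamically adjusted during training
    "momentum": 0.9,
    "weight_decay": 0.0005,
    "max_iter": 60000,
    "init_weights": "imagenet_pretrained",   # weights pre-trained on ImageNet
    "train_test_split": (7, 3),              # labeled data split ratio
    "augmentations": ["color_change", "scale_transform", "random_crop"],
}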
The method comprises the steps of detecting an object in the environment by using a network model for SSD target detection, obtaining the position of the identified object in an image acquired by a camera, and then converting the position into a world coordinate system through coordinate conversion to construct a semantic map, thereby completing semantic perception of the object in the working environment of the mobile robot, and the method is also an important step for establishing the semantic map.
Step three: with the detection frame of the target object obtained in step two, the background and other objects except the target object in the color image detection frame are removed using the Grab-cut algorithm, the main body part of the target object is obtained, and the target object area is segmented and extracted from a single viewpoint. Combining the depth information of the same-frame depth map, the point cloud information of the detected object is extracted, i.e., the coordinates of the target object in the camera coordinate system are acquired;
and step three, the premise that the calibration of the camera and the registration of the RGB image and the depth image are completed is that the registration between the color image and the depth image is completed by calling a correlation function in an L ibfreenect2 library.
The main body part within the detection frame is acquired with the Grab-cut algorithm, and the target area is segmented and extracted from a single viewpoint. Grab-cut uses a Gaussian mixture model to compute the region term, measures the similarity of adjacent pixels over the three RGB channels using the Euclidean distance as the pixel difference, partitions the weight graph with a max-flow algorithm, and solves for the minimum-energy cut to obtain the segmented image. After the detection frame is obtained, most of the background area at the edges of the detection frame can be removed with the Grab-cut method; however, for target objects with irregular outlines, or when the background color is similar to the object color, some background residue still remains.
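A minimal sketch of this segmentation step with OpenCV's GrabCut is given below, assuming the SSD detection box is supplied as (x, y, w, h); the function name and iteration count are illustrative:

import cv2
import numpy as np

def segment_object(bgr_image, box, iters=5):
    # box: (x, y, w, h) detection frame from the SSD detector
    mask = np.zeros(bgr_image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr_image, mask, box, bgd_model, fgd_model, iters, cv2.GC_INIT_WITH_RECT)
    # definite and probable foreground pixels form the main body of the target object
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)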
After the color camera of the Kinect v2 sensor and the SSD object detection network model have completed the detection and recognition of indoor objects and the main body part has been segmented by Grab-cut, the coordinates of the detected object in the image coordinate system and the object category are successfully obtained for each frame. However, the above process only obtains two-dimensional information about the target; to construct a semantic grid map, the point cloud coordinates of the detected object are extracted using the Kinect v2 camera parameters obtained by calibration, giving the spatial three-dimensional information of the detected object in the camera coordinate system.
The point cloud (X_c, Y_c, Z_c) under the Kinect v2 camera coordinate system is obtained by equation (1):

    Z_c = d / s,   X_c = (u - c_x) · Z_c / f_x,   Y_c = (v - c_y) · Z_c / f_y        (1)

where d is the depth value at pixel coordinates (u, v) in the depth image, s is the depth map scaling factor, f_x and f_y are the camera focal lengths along the two image axes, c_x and c_y are the coordinates of the optical (aperture) center, and g denotes the number of pixels per unit physical size.
The background and redundant objects of the target object in the detection frame are removed by using the Grab-cut method, and the point cloud coordinates of the main body part of the segmented target object in the camera coordinate system can be calculated by the formula (1).
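A minimal sketch of this back-projection is given below, assuming the depth image is registered to the color image and that the raw depth is in millimetres (so the scaling factor s = 1000 converts to metres); the mask is the foreground mask from the Grab-cut step and the function name is illustrative:

import numpy as np

def backproject(depth, mask, fx, fy, cx, cy, s=1000.0):
    # depth: registered depth image; mask: binary foreground mask of the target object
    vs, us = np.nonzero(mask)
    d = depth[vs, us].astype(np.float64)
    valid = d > 0                            # drop invalid (zero) depth readings
    us, vs, d = us[valid], vs[valid], d[valid]
    zc = d / s                               # Z_c = d / s
    xc = (us - cx) * zc / fx                 # X_c = (u - c_x) * Z_c / f_x
    yc = (vs - cy) * zc / fy                 # Y_c = (v - c_y) * Z_c / f_y
    return np.stack([xc, yc, zc], axis=1)    # N x 3 point cloud (X_c, Y_c, Z_c)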
Step four: after the point cloud coordinates of the objects under the sensor coordinates are obtained, converting the local coordinates into the global coordinate system to obtain the coordinates of the identified objects under the world coordinate system, and obtaining the coordinates of each target object according to the obtained coordinates;
and after the object point cloud coordinates in the camera coordinate system are acquired, converting the local coordinates into the global coordinate system. The representation diagram of each coordinate system of the mobile robot platform model is shown in fig. 2:
the laser radar coordinate system in FIG. 2 is OL-XLYLZLKinect v2 depth camera coordinate system OC-XCYCZCThe coordinate system of the mobile robot is OR-XRYRZR. In addition, the real world coordinate system is OW-XWYWZWUsually, the origin of the robot environment map is used as the origin of coordinates, and the grid map coordinate system is OG-XGYGZGA grid discretized coordinate system of the world coordinate system. And (3) converting the coordinate between a Kinect v2 depth camera coordinate system and a mobile robot coordinate system, between the mobile robot coordinate system and a real world coordinate system, and between the real world coordinate system and a grid map coordinate system.
The conversion relation between the Kinect v2 depth camera coordinate system and the mobile robot coordinate system is formula (2):
    (X_R, Y_R, Z_R, 1)^T = [ R_2  T_2 ; 0^T  1 ] · (X_C, Y_C, Z_C, 1)^T        (2)
In the formula (2), R_2 and T_2 are respectively the 3 x 3 rotation matrix and the 3 x 1 translation matrix between the Kinect v2 depth camera coordinate system and the mobile robot coordinate system, and 0^T is (0, 0, 0). As can be seen from the robot platform in FIG. 2, the X_R axis and the Z_C axis point in the same direction, the Y_R axis and the X_C axis point in the same direction, and the Z_R axis and the Y_C axis point in the same direction. According to the measured values, the origin of the Kinect v2 depth camera coordinate system is 135 mm in front of, 330 mm above and 95 mm to the right of the origin of the mobile robot coordinate system. Thus formulas (5) and (6) can be obtained:

    R_2 = [ 0  0  1 ; 1  0  0 ; 0  1  0 ]        (5)

    T_2 = (0.135, -0.095, 0.330)^T (in meters)        (6)
the conversion relation between the mobile robot coordinate system and the world coordinate system is formula (3):
    (X_w, Y_w, Z_w, 1)^T = [ R_3  T_3 ; 0^T  1 ] · (X_R, Y_R, Z_R, 1)^T        (3)
In the formula (3), R_3 and T_3 are respectively the 3 x 3 rotation matrix and the 3 x 1 translation matrix between the mobile robot coordinate system and the world coordinate system, and 0^T is (0, 0, 0). Since the mobile robot is moving, R_3 and T_3 change at every moment and are determined by the pose of the mobile robot at the current moment. Let the pose of the mobile robot at a certain moment be (x_0, y_0, θ), calculated by the Gmapping localization algorithm mentioned in step one. Then formulas (7) and (8) follow:
    R_3 = [ cos θ  -sin θ  0 ; sin θ  cos θ  0 ; 0  0  1 ]        (7)

    T_3 = (x_0, y_0, 0)^T        (8)
and finally, converting the coordinate points in the real world coordinate system into a grid map coordinate system by an equation (4).
    x_g = ceil(x / r),   y_g = ceil(y / r)        (4)
In the formula (4), (x, y) are the coordinate values of a point in the real world coordinate system, (x_g, y_g) are the corresponding coordinate values of the point in the grid map coordinate system, r is the real-world side length corresponding to one grid cell, and ceil is the round-up (ceiling) operator.
Through these conversions among the coordinate systems, the image coordinates of an indoor object perceived, detected and recognized by the mobile robot can be converted from the camera coordinate system into the two-dimensional grid map coordinate system, giving the coordinates of the detected target object in the grid map coordinate system.
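A minimal sketch of this chain of conversions for a single camera-frame point is given below; R2 and T2 are the fixed camera-to-robot extrinsics of formulas (5) and (6), robot_pose = (x_0, y_0, θ) comes from Gmapping, and the function name is illustrative:

import math
import numpy as np

def camera_to_grid(p_c, R2, T2, robot_pose, r=0.05):
    # camera -> robot, formula (2)
    p_r = np.asarray(R2) @ np.asarray(p_c) + np.asarray(T2)
    # robot -> world, formula (3), with R_3 and T_3 built from the current pose (formulas (7) and (8))
    x0, y0, theta = robot_pose
    R3 = np.array([[math.cos(theta), -math.sin(theta), 0.0],
                   [math.sin(theta),  math.cos(theta), 0.0],
                   [0.0,              0.0,             1.0]])
    T3 = np.array([x0, y0, 0.0])
    p_w = R3 @ p_r + T3
    # world -> grid, formula (4)
    return math.ceil(p_w[0] / r), math.ceil(p_w[1] / r)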
Step five: while the robot traverses the environment, the first four steps are carried out in sequence and in a loop. The coordinates of the detected target objects in the two-dimensional grid map coordinate system are obtained through the above steps. Before the object grid map is created, a blank object grid map of sufficient size is created; the grid resolutions adopted by the environment grid map and the object grid map are kept consistent, and the center of the blank map is taken as the map-building origin. As the robot traverses the environment, the positions of the target objects in the environment are drawn onto this map using the Bayesian principle until the traversal is complete, so that the creation of the environment grid map and the object grid map is finished at the same time; the two grid maps are then merged to obtain the semantic grid map reflecting the environment semantic information.
Introduction to the Bayesian principle: because the sensors used by the mobile robot and its localization both have certain errors, the coordinates detected at different moments for an actual position area the size of one grid cell are not equal. Therefore, when the grid map is built, the true state of each grid cell is estimated through the Bayesian principle: the posterior occupancy probability of each grid cell is calculated and used to update the state of that cell. A grid cell has two states: occupied (denoted by 1) and free (denoted by 0). The occupancy probability is denoted p(s = 1) and the free probability p(s = 0), and their ratio, the odds

    O(s) = p(s = 1) / p(s = 0),

is used as the basis for determining the state of the grid cell. Assuming that the state of a certain grid cell before measurement is O(s), when the sensor outputs a measurement value z ∈ {0, 1}, the state is updated according to equation (9):
    O(s | z) = p(s = 1 | z) / p(s = 0 | z)        (9)
according to the Bayesian formula, the formula (10) is shown:
    p(s | z) = p(z | s) · p(s) / p(z)        (10)
Substituting formula (10) into formula (9) gives formula (11):
    O(s | z) = [ p(z | s = 1) / p(z | s = 0) ] · O(s)        (11)
taking the logarithm of formula (11) to obtain formula (12):
    log O(s | z) = log [ p(z | s = 1) / p(z | s = 0) ] + log O(s)        (12)
the derivation of the above formula shows that the state of a certain grid cell at the next moment can be updated according to a calculated value obtained by adding the state value of the grid cell at the previous moment and the current measurement model value, the larger the data value of a certain grid state is, the more certain grid cell is determined to be in an occupied state, and the smaller the opposite value is, the more certain grid cell is determined to be in an idle state.
In the map-building process, the grid resolutions of the environment grid map and the object grid map are the same, one grid cell (pixel) being 0.05 m, so the two constructed maps have the same size. As the robot platform traverses the environment, the environment grid map is built step by step through step one, and the point cloud coordinates of the identified objects in the world coordinate system are obtained through steps two to four. The three-dimensional point cloud obtained from the Kinect v2 is reduced in dimension: the closest point of each column in the depth map is extracted and redundant occupancy of the same two-dimensional planar grid cell is removed, generating simulated laser data. For example, when an object in the environment is recognized, the generated data are as shown in FIG. 3; at this moment only the cluster of simulated laser data closest to the robot is extracted as the target object part, and the corresponding cells of the semantic grid map are considered to be in the semantic occupied state; the generated result is the light gray part in FIG. 4, giving semantic simulated laser data. Because the Kinect v2 resolution is high and the data redundant, in order to reduce the data volume while guaranteeing a certain resolution, the nearest-neighbor sampling method is used to reduce the RGB image and the depth image to 1/4 of the original size, which reduces the computation while still meeting the requirement of the 0.05 m grid resolution.
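A minimal sketch of this dimension reduction from a registered depth image to pseudo-laser ranges is given below; keeping every fourth row and column stands in for the nearest-neighbour downsampling, and the exact factor as well as the function name are assumptions:

import numpy as np

def depth_to_pseudo_laser(depth, step=4):
    # keep every step-th row/column (nearest-neighbour style downsampling)
    d = depth[::step, ::step].astype(np.float64)
    d[d <= 0] = np.inf                   # ignore invalid (zero) depth readings
    ranges = d.min(axis=0)               # closest return in each column -> one pseudo-laser ray
    ranges[np.isinf(ranges)] = np.nan    # columns with no valid return
    return ranges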
After the simulated laser data are generated and objects in the environment are perceived by the camera, the detection range is divided into two areas along the horizontal view angle of the camera. For example, after the target detection algorithm detects two chairs, the part within the camera's view angle but outside the detection frames is considered not to belong to the chairs, and no chair is considered to exist between the robot's position and the obstacles inside the detection frames; that is, in the semantic grid map for the chair category, the grid cells scanned by the simulated laser data within the frames are considered to be in the chair-free state.
If, during environment perception, the robot does not detect an object in a certain frame or the confidence is lower than a certain value, it is considered that there is no target object from the semantic library within the current Kinect v2 field of view; the grid cells traversed by the simulated laser data in that frame are considered to contain no object, and the grid cells in the current set are in the free state for all objects in the semantic library.
After grid points scanned in each frame of data and states of the grid points are clarified, the data are repeatedly acquired according to pose states of the mobile robot at different moments and sensor information, simultaneously, states of each grid unit in a grid map coordinate system are updated according to a Bayesian estimation principle, after traversal of the use environment of the robot is completed, whether a certain type of object exists in each grid is judged according to whether the object probability value of each grid is larger than a certain set threshold value, and creation of an indoor object grid map is completed.
After traversing the environment, the created environment grid map is as shown in fig. 5 and the object grid map is as shown in fig. 6, and after the two maps are merged, the merged map as shown in fig. 7 is obtained.
The specific process of merging is as follows: because the object grid map and the environment grid map are established synchronously and the grid resolution of both maps is 0.05 m, the same object has the same size and direction in the two generated maps, which makes it convenient to align the poses of the two maps and merge them. Firstly, the position of the map center (the map-building origin) is obtained from the map file generated by the environment grid map built with Gmapping. The generated object grid map is in picture format and its center is the picture center; the centers of the two maps are aligned simply by aligning the center of the environment grid map with the center of the object grid map while keeping the direction unchanged. The maps are then synthesized, which is in fact an image operation: using OpenCV, the non-white area of the object grid map (i.e., the objects) is traversed and read, and the color of that area is added at the corresponding position of the environment grid map, keeping the centers and poses of the two maps consistent throughout; after all pixel values in the object grid map have been traversed, the addition of the colored areas to the environment grid map is complete, and the finally generated environment grid map is the synthesized semantic grid map. The objects in the semantic map essentially coincide with the corresponding obstacles in the grid map, the positions of the objects are expressed accurately, and the environment semantic information is basically reflected, completing the establishment of the semantic grid map.
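A minimal sketch of this merging step is given below, assuming both maps are loaded as images of the same resolution (e.g. with cv2.imread) and that their centers are supplied as pixel coordinates; the near-white threshold of 250 and the function name are assumptions:

import numpy as np

def merge_maps(env_map, obj_map, env_center, obj_center):
    # env_center / obj_center: (column, row) pixel coordinates of the two map centers
    merged = env_map.copy()
    dx = env_center[0] - obj_center[0]
    dy = env_center[1] - obj_center[1]
    rows, cols = obj_map.shape[:2]
    for v in range(rows):
        for u in range(cols):
            if not np.all(obj_map[v, u] >= 250):         # non-white pixel = semantic object area
                eu, ev = u + dx, v + dy                   # aligned position in the environment map
                if 0 <= ev < merged.shape[0] and 0 <= eu < merged.shape[1]:
                    merged[ev, eu] = obj_map[v, u]        # copy the object's color over
    return merged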
And step six, after the semantic grid map is established, extracting navigation points of the map target object, and obtaining the indoor two-dimensional semantic grid map with the navigation points.
Before the navigation points are extracted, the object map is first subjected to dilation and erosion. Because the established object map represents the position and category of each object, one object usually occupies several grid cells in the map, so the semantic points of the same object may be scattered, or single isolated semantic points may exist in the map, and such data cannot be used to represent a single object. Processing the map with image dilation and erosion removes the discrete single semantic points and connects the semantic points belonging to the same object, so that single objects can be extracted from the semantic grid map. The processing result is shown in FIG. 8: the single semantic points that existed in the initially established semantic grid map are dilated and eliminated, while the discrete points of the same object in the semantic grid map are connected.
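A minimal sketch of this morphological clean-up with OpenCV is given below; the kernel size and the closing-then-opening sequence are assumptions chosen to reproduce the two effects described above (connecting points of the same object, removing isolated single points):

import cv2
import numpy as np

def clean_object_map(obj_map, ksize=3):
    kernel = np.ones((ksize, ksize), np.uint8)
    # closing (dilate then erode) connects scattered semantic points of the same object
    connected = cv2.morphologyEx(obj_map, cv2.MORPH_CLOSE, kernel)
    # opening (erode then dilate) removes isolated single semantic points
    return cv2.morphologyEx(connected, cv2.MORPH_OPEN, kernel)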
To realize service-robot functions, such as completing a task like "throwing out the garbage", the robot needs to extract the position of each object in the semantic map. Because the object's position occupies occupied grid cells in the map, directly using the object position as the navigation point gives poor results or causes navigation failure. A position with few other objects around the target object is therefore selected as the navigation point, to prevent the robot from becoming trapped in an area with many obstacles. The extraction process is as follows: the grid values I_(u,v) within a certain range around the object are extracted, and the main direction of the free area around the object is obtained with the image moment method:
    m_pq = Σ_u Σ_v u^p · v^q · I_(u,v)        (p, q = 0, 1)

    X_c = m_10 / m_00,   Y_c = m_01 / m_00
to obtain the centroid (X_c, Y_c). As shown in FIG. 9, the centroid is computed from the pixel values belonging to a given target (the target has a specific pixel value that distinguishes it from non-target pixels), and the center point is the center of the minimum enclosing rectangle (width × height) around the target. Connecting the centroid and the center point gives the main direction θ_0 of the free area around the target, and a suitable free point Goal is searched for along this main direction. The direction θ_1 of the center point in a coordinate system with the point Goal as the origin is then obtained. To keep the robot navigation point at a proper distance from the object, according to the size (width, height) of the object, the horizontal view angle α of the Kinect v2 and the direction θ_0, the distance between the robot navigation point and the center of the target object is set to L/2/tan(α/2), so that after the robot navigates to the target point (u_goal, v_goal) the camera can observe the whole corresponding object, observe the target object again and determine its own position. When the robot is commanded to move to the vicinity of object A, the navigation pose of the robot is determined to be (u_goal, v_goal, θ_1), where the V direction in the figure is the initial map-building direction of the robot.
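A minimal sketch of this navigation-point extraction for one object is given below; obj_mask and free_mask are assumed binary images marking the object cells and the surrounding free cells, the Kinect v2 horizontal view angle alpha_deg and the choice L = max(width, height) are assumptions, and the free point Goal is approximated by stepping from the object center along the main direction:

import math
import cv2
import numpy as np

def extract_nav_point(obj_mask, free_mask, alpha_deg=70.0):
    x, y, w, h = cv2.boundingRect(obj_mask)            # minimum enclosing rectangle of the object
    center = np.array([x + w / 2.0, y + h / 2.0])      # object center point
    m = cv2.moments(free_mask, binaryImage=True)       # image moments of the (non-empty) free area
    centroid = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    direction = centroid - center
    direction = direction / np.linalg.norm(direction)  # main direction theta_0
    L = max(w, h)                                      # object size used in the distance rule
    dist = L / 2.0 / math.tan(math.radians(alpha_deg) / 2.0)
    goal = center + direction * dist                   # navigation point on the free-area side
    theta1 = math.atan2(center[1] - goal[1], center[0] - goal[0])  # face the target object
    return goal[0], goal[1], theta1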
The navigation poses of all objects in the established semantic grid map are extracted according to the above method; the extraction result is shown in FIG. 10. In the map, the center of the circle farther from each target object is the position of the extracted navigation point, and the center of the circle near the target object represents the centroid of the free area around the object; the navigation point corresponding to each object is essentially located away from obstacles. The navigation poses of all objects are thus obtained, completing the construction of the indoor two-dimensional semantic map with object navigation points.
The protective scope of the present invention is not limited to the above-described embodiments, and it is apparent that various modifications and variations can be made to the present invention by those skilled in the art without departing from the scope and spirit of the present invention. It is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

Claims (9)

1. An indoor two-dimensional semantic grid map construction method with object navigation points is characterized by comprising the following steps:
s1, constructing an environment grid map by means of a robot experiment platform and a Gmapping algorithm, and acquiring the real-time position of the robot;
step S2, acquiring color image and depth image data by using a depth camera, inputting the color image into an SSD detection and identification method, acquiring an object detection frame and a category in the image, and completing the identification of an object in the environment;
s3, removing the background and other objects except the target object in the color image detection frame by using a Grab-cut algorithm, acquiring the main body part of the target object, and acquiring the coordinates of the target object in a camera coordinate system;
step S4, after the object point cloud coordinates under the sensor coordinates are obtained, coordinate conversion is carried out to obtain the coordinates of the identified object under the grid map coordinate system, and an object grid map is established according to the obtained coordinates of each target object;
step S5, after the robot finishes traversing the environment, obtaining an object grid map and an environment grid map, and then combining the two grid maps to obtain a semantic grid map reflecting environment semantic information;
and step S6, after the semantic grid map is created, extracting the navigation points of the target object in the map to obtain the indoor two-dimensional semantic grid map with the navigation points.
2. The method for constructing an indoor two-dimensional semantic grid map with object navigation points according to claim 1, wherein the step S1 comprises the following steps:
step S101: firstly, a mobile robot experiment platform loaded with a laser radar is well built, and environment configuration required by a Gmapping algorithm map building is well configured;
step S102: and establishing an environment grid map according to the laser radar data, traversing the robot in the environment, simultaneously acquiring odometer data, establishing the map by using a Gmapping algorithm, simultaneously acquiring the real-time position of the robot, and obtaining the environment grid map after the robot traverses.
3. The method for constructing an indoor two-dimensional semantic grid map with object navigation points according to claim 1, characterized in that: the target detection method in step S2 uses the SSD method for object detection to obtain the semantic information of objects, and the sensor used for visual recognition is Kinect v2.
4. The method as claimed in claim 1, wherein step S2 is based on the premises that calibration of the Kinect v2 sensor and registration of the RGB image and the depth image are completed in advance, that a required data set is prepared and training of the SSD object detection and recognition model is completed, and that the objects to be recognized in the environment are selected as objects that do not move in the environment, and step S2 includes the following specific steps:
step S201: acquiring color image and depth image data by using Kinect v2;
step S202: and inputting the color image into an SSD detection and identification method, and acquiring an object detection frame and a class in the image.
5. The method for constructing an indoor two-dimensional semantic grid map with object navigation points according to claim 1, wherein the step S3 includes the following steps:
step S301: removing the background and other objects except the target object in the color image detection frame obtained in the step S2 by using a Grab-cut algorithm, and segmenting a main body part of the identification object;
step S302: obtaining the depth information of the main body part of the divided identification object through the depth picture obtained by the Kinect v2 under the current frame;
step S303: combining camera parameters obtained by camera calibration, converting coordinates of the identification object in the current detection frame picture into a camera coordinate system, and obtaining point cloud coordinates of the identification object at the current position, wherein the conversion process is as follows:
the point cloud (X_c, Y_c, Z_c) under the Kinect v2 camera coordinate system is obtained by equation (1):

    Z_c = d / s,   X_c = (u - c_x) · Z_c / f_x,   Y_c = (v - c_y) · Z_c / f_y        (1)

in the formula, d is the depth value at pixel coordinates (u, v) in the depth image, s is the depth map scaling factor, f_x and f_y are the camera focal lengths along the two image axes, c_x and c_y are the coordinates of the optical (aperture) center, and g denotes the number of pixels per unit physical size.
6. The method as claimed in claim 1, wherein the coordinate system transformation of the identified object in step S4 comprises the following steps:
step S401: converting the object point cloud coordinate into a robot coordinate system and then converting the object point cloud coordinate into a global map according to a transformation matrix between a camera coordinate system and a robot coordinate system and a transformation matrix between the robot coordinate system and the global coordinate system;
    (X_R, Y_R, Z_R, 1)^T = [ R_2  T_2 ; 0^T  1 ] · (X_C, Y_C, Z_C, 1)^T        (2)

in the formula (2), (X_C, Y_C, Z_C) are the coordinates of the recognized object in the camera coordinate system, (X_R, Y_R, Z_R) are the coordinates of the recognized object in the robot coordinate system, R_2 and T_2 are respectively the 3 x 3 rotation matrix and the 3 x 1 translation matrix between the Kinect v2 depth camera coordinate system and the mobile robot coordinate system, and 0^T is (0, 0, 0);
the conversion relation between the mobile robot coordinate system and the world coordinate system is formula (3):
    (X_w, Y_w, Z_w, 1)^T = [ R_3  T_3 ; 0^T  1 ] · (X_R, Y_R, Z_R, 1)^T        (3)

in the formula (3), (X_w, Y_w, Z_w) are the coordinates of the recognized object in the world coordinate system, R_3 and T_3 are respectively the 3 x 3 rotation matrix and the 3 x 1 translation matrix between the mobile robot coordinate system and the world coordinate system, and 0^T is (0, 0, 0);
step S402: converting the point cloud coordinates of the identified object in the global coordinate system into the grid map coordinate system to obtain the coordinates (x_g, y_g) of the object in the grid map:

    x_g = ceil(x / r),   y_g = ceil(y / r)        (4)

In the formula (4), (x, y) are the coordinate values of a point in the real world coordinate system, (x_g, y_g) are the corresponding coordinate values of the point in the grid map coordinate system, r is the real-world side length corresponding to one grid cell, and ceil is the round-up (ceiling) operator.
7. The method for constructing an indoor two-dimensional semantic grid map with object navigation points according to claim 1, wherein the step S5 comprises the following steps:
step S501: before the object grid map is created, a blank object grid map of sufficient size is created; the grid resolutions adopted by the environment grid map and the object grid map are kept consistent, and the center of the blank map is taken as the origin of map building;
step S502: marking the coordinates of the identification object under the finally obtained grid map coordinate system in a blank grid map, and representing different objects by using different gray values to obtain an object grid map;
step S503: after traversing the environment, synthesizing the environment grid map and the object grid map, wherein the object grid map and the environment grid map are established synchronously and the grid resolutions of the two maps are kept consistent, so that the same object has the same size and direction in the two generated maps, and the two maps are merged by aligning their poses.
8. The method according to claim 7, wherein the step S503 of merging comprises the following steps:
Firstly, the position of the map center (the map-building origin) is obtained from the map file generated by the environment grid map built with Gmapping. The generated object grid map is in picture format and its center is the picture center; aligning the center of the environment grid map with the center of the object grid map while keeping the direction unchanged aligns the centers of the two maps. The maps are then synthesized: using OpenCV, the target object area of the object grid map (i.e., the non-white area) is traversed and read, and the color of that area is added at the corresponding position in the environment grid map, keeping the centers and poses of the two maps consistent throughout. After all pixel values in the object grid map have been traversed, the addition of the colored areas to the environment grid map is complete, and the finally generated environment grid map is the synthesized semantic grid map.
9. The method for constructing an indoor two-dimensional semantic grid map with object navigation points according to claim 1, wherein the step S6 of obtaining the semantic grid map with navigation points comprises the following steps:
step S601: firstly, performing dilation and erosion processing on the generated object grid map to remove discrete single semantic points in the map, and connecting semantic points belonging to the same object so as to extract single objects from the semantic grid map;
step S602: selecting a position with few other objects around the target object as the navigation point, to prevent the robot from becoming trapped in an area with many obstacles;
step S603: obtaining the area occupied by a single identified object with the minimum enclosing rectangle method, and obtaining the main direction of the free area around the object with the image moment method;
step S604: on the connecting line between the center of the area occupied by the recognized object and the center of the free area, setting a suitable distance value according to the size of the robot, selecting a point on the connecting line close to the free-area end as the navigation point position, with the direction of the navigation point facing the target object, and obtaining the coordinates of these points, thereby obtaining the semantic grid map with object navigation points.
CN202010348297.6A 2020-04-28 2020-04-28 Indoor two-dimensional semantic grid map construction method with object navigation points Active CN111486855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010348297.6A CN111486855B (en) 2020-04-28 2020-04-28 Indoor two-dimensional semantic grid map construction method with object navigation points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010348297.6A CN111486855B (en) 2020-04-28 2020-04-28 Indoor two-dimensional semantic grid map construction method with object navigation points

Publications (2)

Publication Number Publication Date
CN111486855A true CN111486855A (en) 2020-08-04
CN111486855B CN111486855B (en) 2021-09-14

Family

ID=71795745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010348297.6A Active CN111486855B (en) 2020-04-28 2020-04-28 Indoor two-dimensional semantic grid map construction method with object navigation points

Country Status (1)

Country Link
CN (1) CN111486855B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120091937A (en) * 2011-02-10 2012-08-20 고려대학교 산학협력단 Method for building simantic grid map and method for exploring an environment using simantic grid map
CN107967473A (en) * 2016-10-20 2018-04-27 南京万云信息技术有限公司 Based on picture and text identification and semantic robot autonomous localization and navigation
CN107063258A (en) * 2017-03-07 2017-08-18 重庆邮电大学 A kind of mobile robot indoor navigation method based on semantic information
WO2019026832A1 (en) * 2017-08-02 2019-02-07 Sony Corporation Generating an environment map around a mobile object with high accuracy
US20190339705A1 (en) * 2018-05-04 2019-11-07 Honda Motor Co., Ltd. Transition map between lidar and high-definition map
CN110455306A (en) * 2018-05-07 2019-11-15 南京图易科技有限责任公司 A kind of robot scene identification and semantic navigation map label method based on deep learning
CN109163731A (en) * 2018-09-18 2019-01-08 北京云迹科技有限公司 A kind of semanteme map constructing method and system
CN109272554A (en) * 2018-09-18 2019-01-25 北京云迹科技有限公司 A kind of method and system of the coordinate system positioning for identifying target and semantic map structuring
CN109916393A * 2019-03-29 2019-06-21 电子科技大学 A kind of multi-grid-point-value navigation method based on robot pose and its application
CN110728751A (en) * 2019-06-19 2020-01-24 武汉科技大学 Construction method of indoor 3D point cloud semantic map
CN110363816A (en) * 2019-06-25 2019-10-22 广东工业大学 A kind of mobile robot environment semanteme based on deep learning builds drawing method
CN110220517A * 2019-07-08 2019-09-10 紫光云技术有限公司 A kind of indoor robot robust SLAM method combining environmental semantics

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHAO, CHENG et al.: "Building a grid-point cloud-semantic map based on graph for the navigation of intelligent wheelchair", 2015 21st International Conference on Automation and Computing (ICAC) *
WU, HAO et al.: "Object instance recognition and semantic map construction based on visual SLAM", Journal of Huazhong University of Science and Technology (Natural Science Edition) *
WU, HAO et al.: "Construction of 3D grid semantic maps for indoor unstructured environments", Pattern Recognition and Artificial Intelligence *
ZHOU, LIANG et al.: "FastSLAM algorithm incorporating adaptive neighborhood differential evolution", Machinery Design & Manufacture *

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111781936A (en) * 2020-08-07 2020-10-16 深圳中智永浩机器人有限公司 Robot path planning method and device, robot and computer readable storage medium
CN111986553A (en) * 2020-08-19 2020-11-24 炬星科技(深圳)有限公司 Method, device and storage medium for map association based on semantic label
CN111986553B (en) * 2020-08-19 2022-07-26 炬星科技(深圳)有限公司 Method, device and storage medium for map association based on semantic label
WO2022052660A1 (en) * 2020-09-14 2022-03-17 灵动科技(北京)有限公司 Warehousing robot localization and mapping methods, robot, and storage medium
CN112161618A (en) * 2020-09-14 2021-01-01 灵动科技(北京)有限公司 Storage robot positioning and map construction method, robot and storage medium
CN112363158A (en) * 2020-10-23 2021-02-12 浙江华睿科技有限公司 Pose estimation method for robot, and computer storage medium
CN112363158B (en) * 2020-10-23 2024-03-12 浙江华睿科技股份有限公司 Pose estimation method for robot, robot and computer storage medium
CN112099513A (en) * 2020-11-09 2020-12-18 天津联汇智造科技有限公司 Method and system for accurately taking materials by mobile robot
CN112668585A (en) * 2020-11-26 2021-04-16 厦门大学 Object identification and positioning method in dynamic environment
CN112668585B (en) * 2020-11-26 2024-04-09 厦门大学 Object identification and positioning method in dynamic environment
CN112836698A (en) * 2020-12-31 2021-05-25 北京纵目安驰智能科技有限公司 Positioning method, positioning device, storage medium and electronic equipment
CN112950565A (en) * 2021-02-25 2021-06-11 山东英信计算机技术有限公司 Method and device for detecting and positioning water leakage of data center and data center
CN113238554A (en) * 2021-05-08 2021-08-10 武汉科技大学 Indoor navigation method and system based on SLAM technology integrating laser and vision
CN113447012A (en) * 2021-05-10 2021-09-28 天津大学 Service robot 2D semantic map generation method and device based on deep learning
CN115379194A (en) * 2021-05-19 2022-11-22 北京小米移动软件有限公司 Depth image quantization method and device, terminal device and storage medium
CN113240682A (en) * 2021-05-31 2021-08-10 华中科技大学 Overturn-preventing construction driving map generation method and system for crawler crane
CN113240682B (en) * 2021-05-31 2022-06-21 华中科技大学 Overturn-preventing construction driving map generation method and system for crawler crane
CN113129379A (en) * 2021-06-17 2021-07-16 同方威视技术股份有限公司 Global relocation method and device for automatic mobile equipment
CN113284240A (en) * 2021-06-18 2021-08-20 深圳市商汤科技有限公司 Map construction method and device, electronic equipment and storage medium
CN113483747A * 2021-06-25 2021-10-08 武汉科技大学 Improved AMCL (adaptive Monte Carlo localization) positioning method based on semantic map with corner information and robot
CN113689459B (en) * 2021-07-30 2023-07-18 南京信息工程大学 Real-time tracking and mapping method based on GMM and YOLO under dynamic environment
CN113624223A (en) * 2021-07-30 2021-11-09 中汽创智科技有限公司 Indoor parking lot map construction method and device
CN113689459A (en) * 2021-07-30 2021-11-23 南京信息工程大学 GMM (Gaussian mixture model) combined with YOLO (YOLO) based real-time tracking and graph building method in dynamic environment
CN113759904A (en) * 2021-08-24 2021-12-07 桂林电子科技大学 Mobile robot autonomous navigation method based on fusion algorithm
CN113778096A (en) * 2021-09-15 2021-12-10 上海景吾智能科技有限公司 Positioning and model building method and system for indoor robot
CN113804182A (en) * 2021-09-16 2021-12-17 重庆邮电大学 Grid map creating method based on information fusion
CN113804182B (en) * 2021-09-16 2023-09-29 重庆邮电大学 Grid map creation method based on information fusion
CN113984037A (en) * 2021-09-30 2022-01-28 电子科技大学长三角研究院(湖州) Semantic map construction method based on target candidate box in any direction
CN113984037B (en) * 2021-09-30 2023-09-12 电子科技大学长三角研究院(湖州) Semantic map construction method based on target candidate frame in any direction
CN114356078A (en) * 2021-12-15 2022-04-15 之江实验室 Method and device for detecting human intention based on gazing target and electronic equipment
CN114356078B (en) * 2021-12-15 2024-03-19 之江实验室 Person intention detection method and device based on fixation target and electronic equipment
CN114459483A (en) * 2021-12-30 2022-05-10 上海交通大学 Landmark navigation map construction and application method and system based on robot navigation
CN114459483B (en) * 2021-12-30 2023-11-07 上海交通大学 Landmark navigation map construction and application method and system based on robot navigation
WO2023173243A1 (en) * 2022-03-14 2023-09-21 罗伯特·博世有限公司 Semantic label generation for two-dimensional lidar scanning graph
WO2023219058A1 (en) * 2022-05-13 2023-11-16 ソニーグループ株式会社 Information processing method, information processing device, and information processing system
CN115855030B (en) * 2023-02-28 2023-06-27 麦岩智能科技(北京)有限公司 Barrier retaining method, storage medium and device
CN115855030A (en) * 2023-02-28 2023-03-28 麦岩智能科技(北京)有限公司 Obstacle retention method, storage medium and equipment
CN117289294B (en) * 2023-11-27 2024-03-15 睿羿科技(山东)有限公司 Fusion positioning method based on multi-resolution Bayesian grid
CN117289294A (en) * 2023-11-27 2023-12-26 睿羿科技(山东)有限公司 Fusion positioning method based on multi-resolution Bayesian grid
CN117437654B (en) * 2023-12-19 2024-03-08 中国地质大学(武汉) Semantic recognition-based grid map analysis method, device and medium
CN117437654A (en) * 2023-12-19 2024-01-23 中国地质大学(武汉) Semantic recognition-based grid map analysis method, device and medium

Also Published As

Publication number Publication date
CN111486855B (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN111486855B (en) Indoor two-dimensional semantic grid map construction method with object navigation points
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
US20220028163A1 (en) Computer Vision Systems and Methods for Detecting and Modeling Features of Structures in Images
WO2021233029A1 (en) Simultaneous localization and mapping method, device, system and storage medium
US9330504B2 (en) 3D building model construction tools
US9430871B2 (en) Method of generating three-dimensional (3D) models using ground based oblique imagery
US9245345B2 (en) Device for generating three dimensional feature data, method for generating three-dimensional feature data, and recording medium on which program for generating three-dimensional feature data is recorded
CN112070770B (en) High-precision three-dimensional map and two-dimensional grid map synchronous construction method
EP2120009A1 (en) Road/feature measuring device, feature identifying device, road/feature measuring method, road/feature measuring program, measuring device, measuring method, measuring program, measured position data, measuring terminal, measuring server device, drawing device, drawing method, drawing program, and drawing data
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
Yue et al. Fast 3D modeling in complex environments using a single Kinect sensor
CN103295239A (en) Laser-point cloud data automatic registration method based on plane base images
CN110223351B (en) Depth camera positioning method based on convolutional neural network
CN112799096B (en) Map construction method based on low-cost vehicle-mounted two-dimensional laser radar
Cosido et al. Hybridization of convergent photogrammetry, computer vision, and artificial intelligence for digital documentation of cultural heritage-a case study: the magdalena palace
CN111998862A (en) Dense binocular SLAM method based on BNN
CN113096183A (en) Obstacle detection and measurement method based on laser radar and monocular camera
CN111709988B (en) Method and device for determining characteristic information of object, electronic equipment and storage medium
CN113920263A (en) Map construction method, map construction device, map construction equipment and storage medium
CN114140539A (en) Method and device for acquiring position of indoor object
EP3825804A1 (en) Map construction method, apparatus, storage medium and electronic device
CN114137564A (en) Automatic indoor object identification and positioning method and device
CN105339981B (en) Method for using one group of primitive registration data
CN113255779A (en) Multi-source perception data fusion identification method and system and computer readable storage medium
Cantzler Improving architectural 3D reconstruction by constrained modelling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant