CN111539994A - Particle filter repositioning method based on semantic likelihood estimation - Google Patents

Particle filter repositioning method based on semantic likelihood estimation

Info

Publication number
CN111539994A
Authority
CN
China
Prior art keywords
semantic
map
robot
laser
information
Prior art date
Legal status
Granted
Application number
CN202010348294.2A
Other languages
Chinese (zh)
Other versions
CN111539994B (en)
Inventor
蒋林
刘奇
向超
方东君
朱建阳
雷斌
Current Assignee
Wuhan University of Science and Engineering WUSE
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE
Priority to CN202010348294.2A
Publication of CN111539994A
Application granted
Publication of CN111539994B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/77 Determining position or orientation of objects or cameras using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20076 Probabilistic image processing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a particle filter repositioning method based on semantic likelihood estimation, which comprises the following steps: step S1: constructing a grid map; step S2: identifying objects with a convolutional neural network target detection method and obtaining their semantic information; step S3: simulating a laser radar with a visual sensor to obtain semantic laser data; step S4: constructing a semantic map; step S5: obtaining an obstacle likelihood domain from the laser grid map and an object semantic likelihood domain from the semantic map; step S6: determining the accurate position of the robot. The method fully combines obstacle grid information and object semantic information for repositioning, avoiding the positioning failures that occur in similar-looking environments when laser radar information is used alone. It overcomes the defect that the original particle filter method matches only against environmental structure information, can effectively solve the problem of global repositioning errors of the robot, and at the same time improves the convergence rate of repositioning.

Description

Particle filter repositioning method based on semantic likelihood estimation
Technical Field
The invention belongs to the field of robot navigation relocation, and particularly relates to a particle filter relocation method based on semantic likelihood estimation.
Background
In the field of robotics, user demands are constantly rising, so robot navigation must become increasingly accurate and the repositioning problem of the robot becomes all the more important.
Relocation is the most important link in SLAM (simultaneous localization and mapping): it is defined as finding the robot's own position from its current sensor observation data, given a known environment map.
Researchers at home and abroad have combined various kinds of information to help the robot complete relocation: laying out two-dimensional codes in the environment adds extra information that constrains particle sampling; monitoring the environment with multiple cameras and extracting ORB features from the collected images allows the robot to be detected and its position in a sub-map to be determined, which improves the time efficiency of relocation after the robot is kidnapped; and landmark information in the environment, combined with the semantic laser obtained at the robot's actual pose, corrects the robot's localization and improves positioning accuracy and precision.
Whether relying on manual markers such as two-dimensional code layouts or on video-monitoring-assisted positioning, these methods are not direct or autonomous enough. Correcting and repositioning the robot pose through a single object landmark depends heavily on how accurately that single landmark is detected, and the landmark must be as flat as possible so that the laser point at the center of the laser cluster can easily be extracted as the object's position coordinate; as a result, large target-recognition errors during repositioning lead to low repositioning accuracy.
Based on the problems in the prior art, the invention proposes a particle filter repositioning method based on semantic likelihood estimation, which adds semantic information matching of objects, overcomes the defect of matching only with environmental structure information, can effectively solve the problem of global repositioning errors of the robot, and enhances the convergence rate of repositioning.
Disclosure of Invention
The technical scheme adopted for solving the problems in the prior art is as follows:
a particle filter repositioning method based on semantic likelihood estimation is characterized by comprising the following steps:
step S1: constructing a grid map, in which prominent obstacle features in the environment are used to identify obstacle information;
step S2: identifying an object by a target detection method of a convolutional neural network, and obtaining semantic information of the object;
step S3: simulating a laser radar by using a visual sensor to obtain semantic laser data;
step S4: constructing a semantic map according to the position relationship between the robot and the visual sensor, the position relationship between the global coordinate system and the robot, and semantic laser data of the object;
step S5: when deviation occurs in positioning, the robot recognizes the environment by using a laser radar and a visual sensor, obtains an obstacle likelihood domain through a laser grid map, and obtains an object semantic likelihood domain through a semantic map;
step S6: predicting the initial pose of the robot, updating the particle weights through simultaneous likelihood matching of the laser data and the object semantics, resampling, and iterating step by step; after the filter converges, the estimation of the robot state is complete and the accurate position of the robot is determined.
The specific steps of step S1 are as follows:
step S101: selecting obstacles, i.e., selecting objects that are highly recognizable and cannot be moved as the obstacles to be identified;
step S102: building a grid map from the laser radar data: the robot moves one full loop through the environment, the laser radar collects laser information about the environment along with the robot's odometry information, and the grid map is built using a mapping algorithm.
The target detection method in step S2 uses the SSD method for object detection to obtain the semantic information of the object, and the sensor used for visual recognition is a Kinect v2.
The specific steps of the step S3 of acquiring the object semantic laser data are as follows:
step S301: registering the KinectV2 color camera and the depth camera, and calibrating the relative position relationship between the laser radar and the color camera;
step S302: after parameter calibration and registration, the point cloud data within the object's semantic information is converted into simulated laser radar hit points by removing the ground and keeping only the closest point of each row inside the detection box, so as to obtain the semantic laser data of the object;
step S303: the obtained semantic laser data coordinates are as follows:
z_k = d / s        (1)
x_k = (u - c_x) * z_k / f_x        (2)
y_k = (v - c_y) * z_k / f_y        (3)
where d is the depth value, in the corresponding depth map, of the pixel at coordinate (u, v) in a frame of the color-camera image; s is the scaling factor of the depth map; c_x, c_y and f_x, f_y are respectively the aperture center and the focal length of the RGB camera on the two axes; and x_k, y_k, z_k are the three-dimensional coordinates in the Kinect v2 coordinate system.
The specific steps of the step S4 of constructing the semantic map are as follows:
step S401: according to the transformation matrix T_rk between the camera coordinate system and the robot coordinate system and the transformation matrix T_wr between the robot coordinate system and the global coordinate system, the semantic laser data coordinates are converted into the robot coordinate system and then into the global map:
[x_wi, y_wi, z_wi, 1]^T = T_wr * T_rk * [x_ki, y_ki, z_ki, 1]^T
where (x_ki, y_ki, z_ki) are the coordinates of a point in the semantic laser data and (x_wi, y_wi, z_wi) are the coordinates of the corresponding point in the global coordinate system;
step S402: the point cloud coordinates finally obtained in the global coordinate system are transformed into the grid map coordinate system to obtain the coordinates of the object in the grid map, and all coordinate points are connected to obtain the semantic map:
x_cell = int(x_w / r),  y_cell = int(y_w / r)
where r is the ratio of the length of the real map to the length of the grid map, int denotes rounding, (x_w, y_w) are coordinate values in the global plane coordinate system, and (x_cell, y_cell) are the coordinate values in the semantic grid map.
The formula for obtaining the likelihood domain in step S5 is as follows:
p_t^[k] = p(z_t | x_t^[k], m) = exp(-dist^2 / (2 * σ^2))
where dist is the Euclidean distance between a point in the map and the nearest obstacle, σ is the maximum distance between the sensor hit point and the obstacle point in the map, p_t^[k] is the probability of the object measured by the sensor at time t, z_t is the sensor information at time t, x_t^[k] is the pose information of the kth particle at time t, and m is the environment map;
The obstacle probability is represented by the likelihood domain model: the higher the probability, the more likely an obstacle is present. In the semantic map the obstacles carry semantic category information, so a sensor hit point searches the map for the nearest obstacle point of the same category, from which dist is obtained:
dist = min_{(x', y') ∈ m, class(x', y') = class(z_t)} sqrt((x - x')^2 + (y - y')^2)
where (x, y) is the map coordinate of the sensor hit point and (x', y') ranges over the map obstacle points of the same semantic class.
the specific steps of determining the accurate position of the robot in the step S6 are as follows:
step S601, pose prediction: using the robot's state estimate x_{t-1} at time t-1 and the control input u_t applied from time t-1 to time t, the motion model gives the state transition distribution p(x_t | x_{t-1}, u_t); randomly sampling from this distribution yields the hypothesized state x_t^[k] at time t, which predicts the pose of the robot;
step S602, matching update: at time t, an observation model function returns, for each particle, the degree of likelihood matching between the laser information and camera detection information carried by the particle and the environment map, i.e., the particle weight w_t^[k]; the formula is as follows:
w_t^[k] = w_{t-1}^[k] * p(l_t | x_t^[k], m_l) * p(k_t | x_t^[k], m_k)
where the weight w_t^[k] of the kth particle at time t represents the accumulation of the measurement probability over time and satisfies Σ_k w_t^[k] = 1 after normalization; p(l_t | x_t^[k], m_l) and p(k_t | x_t^[k], m_k) are the likelihood probabilities of the laser and camera measurements, respectively; l_t and k_t are the laser radar data and semantic point cloud data at time t; and m_l, m_k are the laser grid map and the semantic map;
step S603, resampling: when the robot's motion exceeds a certain threshold, new particles x_t^[k] are reproduced by the roulette-wheel method according to the weight w_t^[k] of each particle and added to the particle set, and the particle weights are normalized; this ensures that the particles generated by resampling move toward the poses of high-weight particles, so that the particle swarm approximates the posterior probability distribution and approaches the real pose of the robot;
after the robot has moved sufficiently and all particles have converged, the center of gravity of the particles, i.e., the real position of the robot, is obtained as the weighted sum of the particles with their normalized weights; the center-of-gravity formula is:
x̄_t = Σ_k w_t^[k] * x_t^[k]
the invention has the following advantages:
The particle filter repositioning method based on semantic likelihood estimation fully combines obstacle grid information and object semantic information for repositioning, avoiding the positioning failures that occur in similar-looking environments when laser radar information is used alone. It overcomes the defect that the original particle filter method matches only against environmental structure information, can effectively solve the problem of global repositioning errors of the robot, and at the same time improves the convergence rate of repositioning.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a grid map;
FIG. 3 is a semantic map;
FIG. 4 is a graph of semantic obstacle and map likelihood matching;
FIG. 5 is a flow chart of particle filter positioning according to the present invention.
Detailed Description
The technical scheme of the invention is further described below through specific embodiments and with reference to the accompanying drawings. The particle filter repositioning method based on semantic likelihood estimation comprises the following steps:
step S1: constructing a grid map, in which prominent obstacle features in the environment are used to identify obstacle information;
step S2: identifying an object by a target detection method of a convolutional neural network, and obtaining semantic information of the object;
step S3: simulating a laser radar by using a visual sensor to obtain semantic laser data;
step S4: constructing a semantic map according to the position relationship between the robot and the visual sensor, the position relationship between the global coordinate system and the robot, and semantic laser data of the object;
step S5: when deviation occurs in positioning, the robot recognizes the environment by using a laser radar and a visual sensor, obtains an obstacle likelihood domain through a laser grid map, and obtains an object semantic likelihood domain through a semantic map;
step S6: predicting the initial pose of the robot, updating the particle weights through simultaneous likelihood matching of the laser data and the object semantics, resampling, and iterating step by step; after the filter converges, the estimation of the robot state is complete and the accurate position of the robot is determined.
The specific steps of step S1 are as follows:
step S101: selecting obstacles, i.e., selecting objects that are highly recognizable and cannot be moved as the obstacles to be identified;
step S102: building a grid map from the laser radar data: the robot moves one full loop through the environment, the laser radar collects laser information about the environment along with the robot's odometry information, and the grid map is built using a mapping algorithm, as shown in fig. 2.
The target detection method in step S2 uses the SSD method for object detection to obtain the semantic information of the object, and the sensor used for visual recognition is a Kinect v2.
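For readers who want to reproduce the detection step, the short Python sketch below shows how an SSD detector can be queried for boxes, class labels and scores. It uses torchvision's COCO-pretrained ssd300_vgg16 as a stand-in, since the network weights and object classes actually trained for this method are not given here, and the 0.5 score threshold is an illustrative choice; the returned labels are the semantic information attached to each detection box in the later steps.

```python
import torch
from torchvision.models.detection import ssd300_vgg16

# Illustrative only: a COCO-pretrained SSD300 stands in for the detector that
# the method trains on its own object classes.
model = ssd300_vgg16(weights="DEFAULT").eval()

def detect(rgb_image_tensor, score_threshold=0.5):
    """rgb_image_tensor: float tensor of shape (3, H, W), values in [0, 1]."""
    with torch.no_grad():
        out = model([rgb_image_tensor])[0]
    keep = out["scores"] > score_threshold
    # boxes: (u_min, v_min, u_max, v_max); labels: semantic class indices
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```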
The specific steps of the step S3 of acquiring the object semantic laser data are as follows:
step S301: registering the KinectV2 color camera and the depth camera, and calibrating the relative position relationship between the laser radar and the color camera;
step S302: after parameter calibration and registration, the point cloud data within the object's semantic information is converted into simulated laser radar hit points by removing the ground and keeping only the closest point of each row inside the detection box, so as to obtain the semantic laser data of the object;
step S303: the obtained semantic laser data coordinates are as follows:
z_k = d / s        (1)
x_k = (u - c_x) * z_k / f_x        (2)
y_k = (v - c_y) * z_k / f_y        (3)
where d is the depth value, in the corresponding depth map, of the pixel at coordinate (u, v) in a frame of the color-camera image; s is the scaling factor of the depth map; c_x, c_y and f_x, f_y are respectively the aperture center and the focal length of the RGB camera on the two axes; and x_k, y_k, z_k are the three-dimensional coordinates in the Kinect v2 coordinate system.
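The following sketch turns a detection box in a registered depth image into semantic laser hits in the spirit of steps S302 and S303: it keeps the closest valid depth point of each row inside the box and back-projects it with equations (1)-(3). The intrinsics FX, FY, CX, CY and the depth scale S are placeholder values, not the calibrated Kinect v2 parameters.

```python
import numpy as np

# Placeholder intrinsics and depth scale (not the calibrated values).
FX, FY, CX, CY, S = 365.0, 365.0, 256.0, 212.0, 1000.0

def box_to_semantic_laser(depth, box, label):
    """depth: HxW raw depth image; box: (u_min, v_min, u_max, v_max) pixel box."""
    u_min, v_min, u_max, v_max = box
    hits = []
    for v in range(v_min, v_max):
        row = depth[v, u_min:u_max].astype(np.float64)
        valid = row > 0                            # 0 means no depth return
        if not valid.any():
            continue
        u = u_min + int(np.argmin(np.where(valid, row, np.inf)))
        d = depth[v, u]                            # closest point of this row
        z_k = d / S                                # equation (1)
        x_k = (u - CX) * z_k / FX                  # equation (2)
        y_k = (v - CY) * z_k / FY                  # equation (3)
        hits.append((x_k, y_k, z_k, label))        # one semantic laser point
    return hits
```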
The specific steps of the step S4 of constructing the semantic map are as follows:
step S401: according to the transformation matrix T_rk between the camera coordinate system and the robot coordinate system and the transformation matrix T_wr between the robot coordinate system and the global coordinate system, the semantic laser data coordinates are converted into the robot coordinate system and then into the global map:
[x_wi, y_wi, z_wi, 1]^T = T_wr * T_rk * [x_ki, y_ki, z_ki, 1]^T
where (x_ki, y_ki, z_ki) are the coordinates of a point in the semantic laser data and (x_wi, y_wi, z_wi) are the coordinates of the corresponding point in the global coordinate system;
step S402: the point cloud coordinates finally obtained in the global coordinate system are transformed into the grid map coordinate system to obtain the coordinates of the object in the grid map, and all coordinate points are connected to obtain the semantic map, as shown in fig. 3; the map contains several different obstacle categories, such as A (chair), B (door), C (cabinet) and D (garbage can):
x_cell = int(x_w / r),  y_cell = int(y_w / r)
where r is the ratio of the length of the real map to the length of the grid map, int denotes rounding, (x_w, y_w) are coordinate values in the global plane coordinate system, and (x_cell, y_cell) are the coordinate values in the semantic grid map.
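As an illustration of the coordinate chain in steps S401 and S402, the sketch below applies the two homogeneous transforms and then quantizes the global plane coordinates into semantic grid cells. The matrices T_rk and T_wr are placeholder identities here (in practice they come from extrinsic calibration and the current robot pose), and the resolution r = 0.05 m per cell is an assumed value.

```python
import numpy as np

T_rk = np.eye(4)   # camera -> robot transform (from extrinsic calibration)
T_wr = np.eye(4)   # robot -> global transform (from the particle / odometry pose)

def camera_to_global(p_k, T_wr, T_rk):
    """p_k = (x_k, y_k, z_k) in the camera frame; returns (x_w, y_w, z_w)."""
    p_h = np.array([p_k[0], p_k[1], p_k[2], 1.0])  # homogeneous coordinates
    x_w, y_w, z_w, _ = T_wr @ T_rk @ p_h           # camera -> robot -> global
    return x_w, y_w, z_w

def global_to_cell(x_w, y_w, r=0.05):
    """r: length of real map per grid cell (map resolution, assumed 0.05 m)."""
    return int(x_w / r), int(y_w / r)              # grid coordinates (x_cell, y_cell)
```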
The formula for obtaining the likelihood domain in step S5 is as follows:
p_t^[k] = p(z_t | x_t^[k], m) = exp(-dist^2 / (2 * σ^2))
where dist is the Euclidean distance between a point in the map and the nearest obstacle, σ is the maximum distance between the sensor hit point and the obstacle point in the map, p_t^[k] is the probability of the object measured by the sensor at time t, z_t is the sensor information at time t, x_t^[k] is the pose information of the kth particle at time t, and m is the environment map;
The obstacle probability is represented by the likelihood domain model: the higher the probability, the more likely an obstacle is present. Fig. 4 shows the likelihood matching between the vision-measured data and the original map data. In the figure, line 1 is the semantic data measured by the sensor and line 2 is the map data; the figure is divided into four semantic categories a, b, c and d, and each point on line 1 is connected to the nearest point of the same category on line 2. That is, because the semantic map carries the semantic category information of the obstacles, each sensor hit point searches the map for the nearest obstacle point of the same category, from which dist is obtained:
dist = min_{(x', y') ∈ m, class(x', y') = class(z_t)} sqrt((x - x')^2 + (y - y')^2)
where (x, y) is the map coordinate of the sensor hit point and (x', y') ranges over the map obstacle points of the same semantic class.
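A minimal sketch of this semantic likelihood-domain idea follows: for each semantic hit point, only map obstacle points of the same class are searched for the nearest neighbour, and the distance is scored with a Gaussian likelihood-field function. The toy semantic_map, σ = 0.5 m, and the floor value returned for unknown classes are illustrative choices, not parameters of the embodiment.

```python
import numpy as np

# Toy semantic map: class name -> array of obstacle cell centres (x, y) in metres.
semantic_map = {
    "chair":   np.array([[1.0, 2.0], [1.2, 2.1]]),
    "cabinet": np.array([[4.0, 0.5]]),
}

def semantic_likelihood(hit_xy, hit_class, sigma=0.5):
    """Score a semantic hit point against same-class obstacles in the map."""
    points = semantic_map.get(hit_class)
    if points is None or len(points) == 0:
        return 1e-6                                   # no same-class obstacle: near-zero support
    dist = np.min(np.linalg.norm(points - np.asarray(hit_xy), axis=1))
    return float(np.exp(-dist**2 / (2.0 * sigma**2)))  # Gaussian of the nearest distance
```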
the specific steps of determining the accurate position of the robot in step S6 are as follows:
step S601, pose prediction: using the robot's state estimate x_{t-1} at time t-1 and the control input u_t applied from time t-1 to time t, the motion model gives the state transition distribution p(x_t | x_{t-1}, u_t); randomly sampling from this distribution yields the hypothesized state x_t^[k] at time t, which predicts the pose of the robot;
step S602, matching update: at time t, an observation model function returns, for each particle, the degree of likelihood matching between the laser information and camera detection information carried by the particle and the environment map, i.e., the particle weight w_t^[k]; the formula is as follows:
w_t^[k] = w_{t-1}^[k] * p(l_t | x_t^[k], m_l) * p(k_t | x_t^[k], m_k)
where the weight w_t^[k] of the kth particle at time t represents the accumulation of the measurement probability over time and satisfies Σ_k w_t^[k] = 1 after normalization; p(l_t | x_t^[k], m_l) and p(k_t | x_t^[k], m_k) are the likelihood probabilities of the laser and camera measurements, respectively; l_t and k_t are the laser radar data and semantic point cloud data at time t; and m_l, m_k are the laser grid map and the semantic map.
step S603, resampling: when the robot's motion exceeds a certain threshold, new particles x_t^[k] are reproduced by the roulette-wheel method according to the weight w_t^[k] of each particle and added to the particle set, and the particle weights are normalized; this ensures that the particles generated by resampling move toward the poses of high-weight particles, so that the particle swarm approximates the posterior probability distribution and approaches the real pose of the robot;
after the robot has moved sufficiently and all particles have converged, the center of gravity of the particles, i.e., the real position of the robot, is obtained as the weighted sum of the particles with their normalized weights; the center-of-gravity formula is:
x̄_t = Σ_k w_t^[k] * x_t^[k]
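To tie steps S601 to S603 together, the sketch below runs one cycle of the particle filter: sample a simple motion model, reweight each particle by the product of its laser and semantic likelihoods, resample by the roulette-wheel method, and report the weighted centre of gravity as the pose estimate. The Gaussian motion noise, the two likelihood callables and the naive averaging of the heading angle are simplifications, not the models of the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500
particles = rng.uniform(low=[-5, -5, -np.pi], high=[5, 5, np.pi], size=(N, 3))  # (x, y, theta)
weights = np.full(N, 1.0 / N)

def predict(particles, u, noise=(0.02, 0.02, 0.01)):
    """Step S601: propagate particles through a noisy motion model."""
    dx, dy, dtheta = u
    particles[:, 0] += dx + rng.normal(0, noise[0], len(particles))
    particles[:, 1] += dy + rng.normal(0, noise[1], len(particles))
    particles[:, 2] += dtheta + rng.normal(0, noise[2], len(particles))
    return particles

def update_weights(particles, weights, laser_lik, semantic_lik):
    """Step S602: joint likelihood matching of laser and semantic observations."""
    for k, x in enumerate(particles):
        weights[k] *= laser_lik(x) * semantic_lik(x)
    weights /= weights.sum()
    return weights

def roulette_resample(particles, weights):
    """Step S603: roulette-wheel resampling toward high-weight poses."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx].copy(), np.full(len(particles), 1.0 / len(particles))

def centre_of_gravity(particles, weights):
    """Weighted centre of gravity of the particle set (heading averaged naively)."""
    return np.average(particles, axis=0, weights=weights)

# One filter cycle (laser_lik and semantic_lik are user-supplied stand-ins):
# particles = predict(particles, u=(0.1, 0.0, 0.02))
# weights = update_weights(particles, weights, laser_lik, semantic_lik)
# particles, weights = roulette_resample(particles, weights)
# pose = centre_of_gravity(particles, weights)
```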
the protective scope of the present invention is not limited to the above-described embodiments, and it is apparent that various modifications and variations can be made to the present invention by those skilled in the art without departing from the scope and spirit of the present invention. It is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

Claims (7)

1. A particle filter repositioning method based on semantic likelihood estimation is characterized by comprising the following steps:
step S1: constructing a grid map, in which prominent obstacle features in the environment are used to identify obstacle information;
step S2: identifying an object by a target detection method of a convolutional neural network, and obtaining semantic information of the object;
step S3: simulating a laser radar by using a visual sensor to obtain semantic laser data;
step S4: constructing a semantic map according to the position relationship between the robot and the visual sensor, the position relationship between the global coordinate system and the robot, and semantic laser data of the object;
step S5: when deviation occurs in positioning, the robot recognizes the environment by using a laser radar and a visual sensor, obtains an obstacle likelihood domain through a laser grid map, and obtains an object semantic likelihood domain through a semantic map;
step S6: predicting the initial pose of the robot, updating the particle weights through simultaneous likelihood matching of the laser data and the object semantics, resampling, and iterating step by step; after the filter converges, the estimation of the robot state is complete and the accurate position of the robot is determined.
2. The particle filter relocation method based on semantic likelihood estimation as claimed in claim 1, wherein the specific steps of said step S1 are as follows:
step S101: selecting obstacles, i.e., selecting objects that are highly recognizable and cannot be moved as the obstacles to be identified;
step S102: building a grid map from the laser radar data: the robot moves one full loop through the environment, the laser radar collects laser information about the environment along with the robot's odometry information, and the grid map is built using a mapping algorithm.
3. The particle filter repositioning method based on semantic likelihood estimation of claim 1, wherein the target detection method in step S2 uses the SSD method for object detection to obtain semantic information of the object, and the sensor used for visual recognition is a Kinect v2.
4. The particle filter repositioning method based on semantic likelihood estimation as claimed in claim 1, wherein the step S3 of acquiring semantic laser data of the object comprises the following specific steps:
step S301: registering the KinectV2 color camera and the depth camera, and calibrating the relative position relationship between the laser radar and the color camera;
step S302: after parameter calibration and registration, the point cloud data within the object's semantic information is converted into simulated laser radar hit points by removing the ground and keeping only the closest point of each row inside the detection box, so as to obtain the semantic laser data of the object;
step S303: the obtained semantic laser data coordinates are as follows:
z_k = d / s        (1)
x_k = (u - c_x) * z_k / f_x        (2)
y_k = (v - c_y) * z_k / f_y        (3)
where d is the depth value, in the corresponding depth map, of the pixel at coordinate (u, v) in a frame of the color-camera image; s is the scaling factor of the depth map; c_x, c_y and f_x, f_y are respectively the aperture center and the focal length of the RGB camera on the two axes; and x_k, y_k, z_k are the three-dimensional coordinates in the Kinect v2 coordinate system.
5. The particle filter repositioning method based on semantic likelihood estimation as claimed in claim 1, wherein the step S4 is to construct a semantic map by the following steps:
step S401: according to the transformation matrix T_rk between the camera coordinate system and the robot coordinate system and the transformation matrix T_wr between the robot coordinate system and the global coordinate system, the semantic laser data coordinates are converted into the robot coordinate system and then into the global map:
[x_wi, y_wi, z_wi, 1]^T = T_wr * T_rk * [x_ki, y_ki, z_ki, 1]^T
where (x_ki, y_ki, z_ki) are the coordinates of a point in the semantic laser data and (x_wi, y_wi, z_wi) are the coordinates of the corresponding point in the global coordinate system;
step S402: the point cloud coordinates finally obtained in the global coordinate system are transformed into the grid map coordinate system to obtain the coordinates of the object in the grid map, and all coordinate points are connected to obtain the semantic map:
x_cell = int(x_w / r),  y_cell = int(y_w / r)
where r is the ratio of the length of the real map to the length of the grid map, int denotes rounding, (x_w, y_w) are coordinate values in the global plane coordinate system, and (x_cell, y_cell) are the coordinate values in the semantic grid map.
6. The particle filter relocation method based on semantic likelihood estimation as claimed in claim 1, wherein the formula for obtaining the likelihood domain in step S5 is as follows:
p_t^[k] = p(z_t | x_t^[k], m) = exp(-dist^2 / (2 * σ^2))
where dist is the Euclidean distance between a point in the map and the nearest obstacle, σ is the maximum distance between the sensor hit point and the obstacle point in the map, p_t^[k] is the probability of the object measured by the sensor at time t, z_t is the sensor information at time t, x_t^[k] is the pose information of the kth particle at time t, and m is the environment map;
The obstacle probability is represented by the likelihood domain model: the higher the probability, the more likely an obstacle is present. In the semantic map the obstacles carry semantic category information, so a sensor hit point searches the map for the nearest obstacle point of the same category, from which dist is obtained:
dist = min_{(x', y') ∈ m, class(x', y') = class(z_t)} sqrt((x - x')^2 + (y - y')^2)
where (x, y) is the map coordinate of the sensor hit point and (x', y') ranges over the map obstacle points of the same semantic class.
7. The particle filter repositioning method based on semantic likelihood estimation as claimed in claim 1, wherein the specific steps of determining the accurate position of the robot in step S6 are as follows:
step S601, pose prediction: using the robot's state estimate x_{t-1} at time t-1 and the control input u_t applied from time t-1 to time t, the motion model gives the state transition distribution p(x_t | x_{t-1}, u_t); randomly sampling from this distribution yields the hypothesized state x_t^[k] at time t, which predicts the pose of the robot;
step S602, matching update: at time t, an observation model function returns, for each particle, the degree of likelihood matching between the laser information and camera detection information carried by the particle and the environment map, i.e., the particle weight w_t^[k]; the formula is as follows:
w_t^[k] = w_{t-1}^[k] * p(l_t | x_t^[k], m_l) * p(k_t | x_t^[k], m_k)
where the weight w_t^[k] of the kth particle at time t represents the accumulation of the measurement probability over time and satisfies Σ_k w_t^[k] = 1 after normalization; p(l_t | x_t^[k], m_l) and p(k_t | x_t^[k], m_k) are the likelihood probabilities of the laser and camera measurements, respectively; l_t and k_t are the laser radar data and semantic point cloud data at time t; and m_l, m_k are the laser grid map and the semantic map;
step S603, resampling: when the robot's motion exceeds a certain threshold, new particles x_t^[k] are reproduced by the roulette-wheel method according to the weight w_t^[k] of each particle and added to the particle set, and the particle weights are normalized; this ensures that the particles generated by resampling move toward the poses of high-weight particles, so that the particle swarm approximates the posterior probability distribution and approaches the real pose of the robot;
after the robot has moved sufficiently and all particles have converged, the center of gravity of the particles, i.e., the real position of the robot, is obtained as the weighted sum of the particles with their normalized weights; the center-of-gravity formula is:
x̄_t = Σ_k w_t^[k] * x_t^[k]
CN202010348294.2A 2020-04-28 2020-04-28 Particle filter repositioning method based on semantic likelihood estimation Active CN111539994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010348294.2A CN111539994B (en) 2020-04-28 2020-04-28 Particle filter repositioning method based on semantic likelihood estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010348294.2A CN111539994B (en) 2020-04-28 2020-04-28 Particle filter repositioning method based on semantic likelihood estimation

Publications (2)

Publication Number Publication Date
CN111539994A true CN111539994A (en) 2020-08-14
CN111539994B CN111539994B (en) 2023-04-18

Family

ID=71975812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010348294.2A Active CN111539994B (en) 2020-04-28 2020-04-28 Particle filter repositioning method based on semantic likelihood estimation

Country Status (1)

Country Link
CN (1) CN111539994B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112433528A (en) * 2020-11-27 2021-03-02 深圳优地科技有限公司 Method and device for robot to take advantage of elevator, robot and storage medium
CN112612862A (en) * 2020-12-24 2021-04-06 哈尔滨工业大学芜湖机器人产业技术研究院 Grid map positioning method based on point cloud registration
CN112767477A (en) * 2020-12-31 2021-05-07 北京纵目安驰智能科技有限公司 Positioning method, positioning device, storage medium and electronic equipment
CN112833892A (en) * 2020-12-31 2021-05-25 杭州普锐视科技有限公司 Semantic mapping method based on track alignment
CN112836698A (en) * 2020-12-31 2021-05-25 北京纵目安驰智能科技有限公司 Positioning method, positioning device, storage medium and electronic equipment
CN113238251A (en) * 2021-04-02 2021-08-10 华南理工大学 Target-level semantic positioning method based on vehicle-mounted laser radar
CN113465620A (en) * 2021-06-02 2021-10-01 上海追势科技有限公司 Parking lot particle filter positioning method based on semantic information
CN113483747A (en) * 2021-06-25 2021-10-08 武汉科技大学 Improved AMCL (advanced metering library) positioning method based on semantic map with corner information and robot
CN113551677A (en) * 2021-08-16 2021-10-26 河南牧原智能科技有限公司 Method for relocating a robot and related product
CN113657205A (en) * 2021-07-29 2021-11-16 北京中科慧眼科技有限公司 Obstacle detection method and system based on motion grid and intelligent terminal
CN114200481A (en) * 2020-08-28 2022-03-18 华为技术有限公司 Positioning method, positioning system and vehicle
CN116300480A (en) * 2023-05-23 2023-06-23 西南科技大学 Radioactive source searching method based on improved particle filtering and biological heuristic neural network
CN116399328A (en) * 2023-04-17 2023-07-07 石家庄铁道大学 Map construction and positioning method of indoor mobile robot based on BIM

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107908185A (en) * 2017-10-14 2018-04-13 北醒(北京)光子科技有限公司 A kind of robot autonomous global method for relocating and robot
CN108897836A (en) * 2018-06-25 2018-11-27 广州视源电子科技股份有限公司 Method and device for robot to map based on semantics
CN109724603A (en) * 2019-01-08 2019-05-07 北京航空航天大学 A kind of Indoor Robot air navigation aid based on environmental characteristic detection
US20190163982A1 (en) * 2017-11-28 2019-05-30 Visual Semantics, Inc. Method and apparatus for integration of detected object identifiers and semantic scene graph networks for captured visual scene behavior estimation
CN111061276A (en) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 Mobile robot repositioning method based on dynamic area division
CN111061287A (en) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 Mobile robot repositioning method based on particle self-convergence

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107908185A (en) * 2017-10-14 2018-04-13 北醒(北京)光子科技有限公司 A kind of robot autonomous global method for relocating and robot
US20190163982A1 (en) * 2017-11-28 2019-05-30 Visual Semantics, Inc. Method and apparatus for integration of detected object identifiers and semantic scene graph networks for captured visual scene behavior estimation
CN108897836A (en) * 2018-06-25 2018-11-27 广州视源电子科技股份有限公司 Method and device for robot to map based on semantics
CN109724603A (en) * 2019-01-08 2019-05-07 北京航空航天大学 A kind of Indoor Robot air navigation aid based on environmental characteristic detection
CN111061276A (en) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 Mobile robot repositioning method based on dynamic area division
CN111061287A (en) * 2019-12-31 2020-04-24 芜湖哈特机器人产业技术研究院有限公司 Mobile robot repositioning method based on particle self-convergence

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114200481A (en) * 2020-08-28 2022-03-18 华为技术有限公司 Positioning method, positioning system and vehicle
CN112433528A (en) * 2020-11-27 2021-03-02 深圳优地科技有限公司 Method and device for robot to take advantage of elevator, robot and storage medium
CN112612862A (en) * 2020-12-24 2021-04-06 哈尔滨工业大学芜湖机器人产业技术研究院 Grid map positioning method based on point cloud registration
CN112767477A (en) * 2020-12-31 2021-05-07 北京纵目安驰智能科技有限公司 Positioning method, positioning device, storage medium and electronic equipment
CN112833892A (en) * 2020-12-31 2021-05-25 杭州普锐视科技有限公司 Semantic mapping method based on track alignment
CN112836698A (en) * 2020-12-31 2021-05-25 北京纵目安驰智能科技有限公司 Positioning method, positioning device, storage medium and electronic equipment
CN113238251A (en) * 2021-04-02 2021-08-10 华南理工大学 Target-level semantic positioning method based on vehicle-mounted laser radar
CN113238251B (en) * 2021-04-02 2023-09-29 华南理工大学 Target level semantic positioning method based on vehicle-mounted laser radar
CN113465620A (en) * 2021-06-02 2021-10-01 上海追势科技有限公司 Parking lot particle filter positioning method based on semantic information
CN113483747A (en) * 2021-06-25 2021-10-08 武汉科技大学 Improved AMCL (advanced metering library) positioning method based on semantic map with corner information and robot
CN113657205A (en) * 2021-07-29 2021-11-16 北京中科慧眼科技有限公司 Obstacle detection method and system based on motion grid and intelligent terminal
CN113657205B (en) * 2021-07-29 2024-04-19 北京中科慧眼科技有限公司 Obstacle detection method and system based on motion grid and intelligent terminal
CN113551677A (en) * 2021-08-16 2021-10-26 河南牧原智能科技有限公司 Method for relocating a robot and related product
CN116399328A (en) * 2023-04-17 2023-07-07 石家庄铁道大学 Map construction and positioning method of indoor mobile robot based on BIM
CN116300480A (en) * 2023-05-23 2023-06-23 西南科技大学 Radioactive source searching method based on improved particle filtering and biological heuristic neural network

Also Published As

Publication number Publication date
CN111539994B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN111539994B (en) Particle filter repositioning method based on semantic likelihood estimation
US11579623B2 (en) Mobile robot system and method for generating map data using straight lines extracted from visual images
JP2019527832A (en) System and method for accurate localization and mapping
CN107741234A (en) The offline map structuring and localization method of a kind of view-based access control model
WO2020224305A1 (en) Method and apparatus for device positioning, and device
CN107544501A (en) A kind of intelligent robot wisdom traveling control system and its method
US20230236280A1 (en) Method and system for positioning indoor autonomous mobile robot
CN110132284B (en) Global positioning method based on depth information
CN113108773A (en) Grid map construction method integrating laser and visual sensor
CN111880191B (en) Map generation method based on multi-agent laser radar and visual information fusion
CN110487286B (en) Robot pose judgment method based on point feature projection and laser point cloud fusion
WO2019136613A1 (en) Indoor locating method and device for robot
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
WO2022247045A1 (en) Laser radar information-based mobile robot location re-identification method
CN113096183A (en) Obstacle detection and measurement method based on laser radar and monocular camera
CN111429515A (en) Learning method of robot obstacle avoidance behavior based on deep learning
CN112150448A (en) Image processing method, device and equipment and storage medium
Marangoz et al. Fruit mapping with shape completion for autonomous crop monitoring
US20220284623A1 (en) Framework For 3D Object Detection And Depth Prediction From 2D Images
CN116912786A (en) Intelligent network-connected automobile multi-mode fusion detection method based on vehicle-road cooperation
CN114529585A (en) Mobile equipment autonomous positioning method based on depth vision and inertial measurement
CN112405526A (en) Robot positioning method and device, equipment and storage medium
CN111862146B (en) Target object positioning method and device
Kohlbrecher et al. Grid-based occupancy mapping and automatic gaze control for soccer playing humanoid robots
Ballardini et al. Ego-lane estimation by modeling lanes and sensor failures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant