WO2022036792A1 - Multi-data-source SLAM method, device, and computer-readable storage medium - Google Patents
Multi-data-source SLAM method, device, and computer-readable storage medium
- Publication number
- WO2022036792A1 (PCT/CN2020/115672; priority CN2020115672W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- map
- sweeping robot
- data
- state information
- Prior art date
Classifications
- G05D1/0242—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
- G01C21/20—Instruments for performing navigational calculations
- G05D1/02—Control of position or course in two dimensions
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
- G05D1/0257—Control of position or course in two dimensions specially adapted to land vehicles using a radar
- G05D1/028—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal
- G05D1/0285—Control of position or course in two dimensions specially adapted to land vehicles using signals transmitted via a public communication network, e.g. GSM network
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
Definitions
- the present application relates to the field of robotics, and in particular, to a multi-data source SLAM method, device, and computer-readable storage medium.
- Although the current sweeping robot can use sensor data to construct a map, it still lacks the ability to perceive the real external environment, especially the real ground environment.
- The current sweeping robot can only obtain discrete two-dimensional sampling information at a specific height (usually the height of the sensor), or sparse feature point-cloud information from the space above the robot, but it lacks information about the ground and the objects in the low space near the ground.
- As a result, the geometric structure and semantic information of the ground and the low space near the ground cannot be reconstructed in the map, so the current cleaning robot cannot solve the problem of colliding with low obstacles.
- The main purpose of this application is to provide a multi-data source SLAM method, a sweeping robot, a device, and a readable storage medium, aiming to solve the sweeping robot's poor handling of low obstacles, a problem caused by the missing low-space environment information in the sweeping robot's current environment-map modeling process.
- the present application provides a multi-data source SLAM method
- the multi-data source SLAM method includes the following steps:
- the motion equation and the observation equation are fused by using the Bayesian recursive estimation algorithm to obtain the fused state information of the sweeping robot;
- the depth map obtained by the sweeping robot is used to update the probability information of voxels in the map through a static Bayesian filtering algorithm to construct an environment map.
- the step of establishing an observation equation for each sensor includes:
- a first type of observation equation and a second type of observation equation are established, respectively.
- before the step of fusing the motion equation and the observation equation using a Bayesian recursive estimation algorithm, the method further includes:
- Bayesian inference is performed on the fusion result and the registration result to obtain preliminary state information of the cleaning robot.
- the step of fusing the motion equation and the observation equation using a Bayesian recursive estimation algorithm to obtain the fused state information of the sweeping robot includes:
- the Bayesian recursive estimation algorithm is used for fusion to obtain the fused state information of the sweeping robot at the current moment.
- before the step of updating the probability information of voxels in the map through a static Bayesian filtering algorithm to construct a map using the depth map obtained by the sweeping robot according to the fused state information, the method further includes:
- the step of updating the probability information of voxels in the map through a static Bayesian filtering algorithm to construct the map includes:
- the probability information of voxels in the map is updated based on a static Bayesian filtering algorithm to construct a map.
- after the step of updating the probability information of voxels in the map through a static Bayesian filtering algorithm to construct a map using the depth map obtained by the sweeping robot according to the fused state information, the method further includes:
- Semantic segmentation and edge extraction are performed on the objects in the infrared image to obtain object information
- Bayesian inference is performed according to the object information to obtain an inference result, and the inference result is updated into the environment map.
- after the step of acquiring the infrared image, the method further includes:
- the method further includes:
- Bayesian inference is performed on the object information based on the reprojection error.
- the multi-data source SLAM method further includes:
- the present application also provides a SLAM device with multiple data sources, and the SLAM device with multiple data sources includes:
- the first establishment module is used to establish the motion equation of the sweeping robot according to the motion constraints
- the second establishment module is used to establish an observation equation for each sensor according to the acquired measurement data of each sensor of the sweeping robot;
- a first fusion module configured to use a Bayesian recursive estimation algorithm to fuse the motion equation and the observation equation to obtain the fused state information of the sweeping robot;
- the building module is used to update the probability information of voxels in the map by using the depth map obtained by the sweeping robot according to the fused state information to construct an environment map.
- the first establishment module includes:
- a first acquisition unit used for acquiring measurement data of each sensor of the sweeping robot
- a dividing unit configured to divide the measurement data into fast-changing data and slow-changing data according to the sensor type corresponding to the measurement data
- an establishing unit configured to establish a first type of observation equation and a second type of observation equation according to the measurement data belonging to the fast-changing data type and the slow-changing data type, respectively.
- the multi-data source SLAM device further includes:
- a second fusion module configured to fuse the observation equations of the first type to obtain a fusion result
- an initial value module for using the fusion result as the initial value of the point cloud registration algorithm of the second type of observation equation
- a registration module configured to obtain a registration result according to the point cloud registration algorithm and the initial value
- the first inference module is configured to perform Bayesian inference on the fusion result and the registration result to obtain preliminary state information of the cleaning robot.
- the first fusion module includes:
- a second obtaining unit configured to obtain the preliminary state information obtained by the observation equation at the current moment
- the fusion unit is configured to use the Bayesian recursive estimation algorithm to perform fusion according to the preliminary state information and the motion equation to obtain the fusion state information of the sweeping robot at the current moment.
- the building blocks include:
- a determining unit configured to determine the location information of the sweeping robot in the map according to the fused state information
- a third obtaining unit configured to obtain the distance information between each object and the position information by using the point cloud information projected by the depth map
- the updating unit is configured to update the probability information of voxels in the map based on the static Bayesian filtering algorithm according to the distance information to construct the map.
- the multi-data source SLAM device further includes:
- a first acquisition module used for acquiring an infrared image
- An update module configured to perform Bayesian inference according to the object information to obtain an inference result, and update the inference result to the environment map.
- the multi-data source SLAM device further includes:
- a second acquiring module configured to acquire the confidence level of each point in the infrared image and display the confidence level through a confidence level histogram
- a deletion module is used to delete the noise points whose confidence level does not meet the preset condition in the infrared image.
- the multi-data source SLAM device further includes:
- a reprojection module configured to reproject the inference result to obtain a reprojection error
- a second inference module configured to perform Bayesian inference on the object information based on the reprojection error.
- the present application also provides a multi-data source SLAM device, the multi-data source SLAM device comprising: a memory, a processor, and a multi-data source SLAM program stored in the memory and executable on the processor; when the multi-data source SLAM program is executed by the processor, the steps of the above-mentioned multi-data source SLAM method are implemented.
- the present application further provides a readable storage medium, where a computer program is stored on the readable storage medium, and when the computer program is executed by a processor, the steps of the above-mentioned multi-data source SLAM method are implemented.
- The motion equation of the sweeping robot is established according to motion constraints; an observation equation is established for each sensor according to the acquired measurement data of each sensor of the sweeping robot; a Bayesian recursive estimation algorithm is used to fuse the motion equation and the observation equations to obtain the fused state information of the sweeping robot; and, according to the fused state information, the depth map obtained by the sweeping robot is used to update the probability information of voxels in the map through a static Bayesian filtering algorithm to construct an environment map.
- The observation equations are established using data from a variety of different sensors, so more accurate position information of the sweeping robot can be obtained, while the motion equation ensures the accuracy of that position information. This makes the information in the final map richer and improves the sweeping robot's ability to perceive and handle surrounding obstacles.
- FIG. 1 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the present application;
- FIG. 2 is a schematic flowchart of the first embodiment of the SLAM method for multiple data sources of the present application
- FIG. 3 is a schematic flowchart of the steps before step S30 in FIG. 2 in the third embodiment of the multi-data source SLAM method of the present application;
- FIG. 4 is a schematic flowchart of the steps after step S40 in FIG. 2 in the sixth embodiment of the multi-data source SLAM method of the present application;
- FIG. 5 is a schematic diagram of a system structure of an embodiment of a SLAM device with multiple data sources of the present application.
- FIG. 1 is a schematic structural diagram of a terminal of a hardware operating environment involved in the solution of the embodiment of the present application.
- the terminal in this embodiment of the present application is a SLAM device with multiple data sources.
- the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
- the communication bus 1002 is used to realize the connection and communication between these components.
- the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
- the network interface 1004 may include a standard wired interface and a wireless interface (eg, a WI-FI interface).
- the memory 1005 may be high-speed RAM memory, or may be non-volatile memory, such as disk memory.
- the memory 1005 may also be a storage device independent of the aforementioned processor 1001 .
- the terminal may further include a camera, RF (Radio Frequency) circuits, sensors, audio circuits, a WiFi module, and the like.
- sensors such as light sensors, motion sensors and other sensors.
- the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display screen according to the brightness of the ambient light, and the proximity sensor may turn off the display screen and/or backlight when the terminal device moves close to the ear.
- the terminal device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be repeated here.
- the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, combine some components, or use a different arrangement of components.
- the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module, and a SLAM program with multiple data sources.
- the network interface 1004 is mainly used to connect to the background server and perform data communication with the background server;
- the user interface 1003 is mainly used to connect to the client (client) and perform data communication with the client;
- the processor 1001 can be used to invoke the multi-data source SLAM program stored in the memory 1005 and perform the following operations:
- the motion equation and the observation equation are fused by using the Bayesian recursive estimation algorithm to obtain the fused state information of the sweeping robot;
- the depth map obtained by the sweeping robot is used to update the probability information of voxels in the map through a static Bayesian filtering algorithm to construct an environment map.
- the present application provides a multi-data source SLAM method.
- the method includes:
- Step S10 establishing the motion equation of the sweeping robot according to the motion constraints
- the SLAM (Simultaneous Localization And Mapping) method is commonly used in the field of sweeping robots; it enables the sweeping robot to build a map of the surrounding environment while moving, so that the robot can localize and navigate autonomously.
- the sweeping robot needs to obtain the relevant data of the surrounding environment through sensors such as radar for its own positioning and the construction of the surrounding map.
- the motion equation of the sweeping robot can be established through motion constraints such as Newton's second law.
- This equation expresses the conditional relationship that must hold between two different states of the sweeping robot when the motion constraints are satisfied; one independent variable in the motion equation is the time interval between the two states. For each sweeping robot, its motion equation is uniquely determined. The motion equation is the most basic condition the sweeping robot must satisfy, and it also serves as the prior information for judging the robot's current state.
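For illustration, a motion equation derived from a Newton's-second-law-style constraint can be sketched as discrete-time constant-acceleration kinematics, with the time interval dt as the independent variable; the planar state layout below is an assumption for the sketch, not the application's actual formulation:

```python
import numpy as np

def predict_state(x, v, a, dt):
    """Propagate a planar state under constant-acceleration kinematics.

    x  : position (2,)   v : velocity (2,)   a : acceleration (2,)
    dt : time interval between the two states (the independent
         variable of the motion equation).
    Returns the predicted position and velocity at t + dt.
    """
    x_next = x + v * dt + 0.5 * a * dt**2   # kinematic position update
    v_next = v + a * dt                      # velocity update
    return x_next, v_next

# Example: robot at the origin moving at 0.5 m/s, no acceleration, dt = 0.1 s
x1, v1 = predict_state(np.zeros(2), np.array([0.5, 0.0]), np.zeros(2), 0.1)
```

Given a current state, this relation predicts the next one, which is exactly the prior information role the motion equation plays below.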
- Step S20 establishing an observation equation for each sensor according to the acquired measurement data of each sensor of the sweeping robot;
- The sensors used mainly include an IMU (Inertial Measurement Unit), wheel encoders, radar, and a depth camera. The IMU and wheel encoders are mainly used to obtain motion data of the sweeping robot, such as acceleration, angular velocity, and rotational speed, while the radar and depth camera are used to obtain information about the surrounding environment or objects, such as the location and size of obstacles.
- Step S30 using a Bayesian recursive estimation algorithm to fuse the motion equation and the observation equation to obtain the fused state information of the sweeping robot;
- The state judgment of the sweeping robot at the current moment is an estimated value.
- This estimate may coincide exactly with the actual state, or it may contain some error; even when an error exists, it remains within a controllable range.
- The motion equation serves as the prior information: it ensures that the error between the final state estimate and the true state is small, while the observation equations provide the measurements at the current moment.
- The Bayesian recursive estimation algorithm obtains the posterior distribution of a parameter from its prior distribution and a series of observed values, and then takes the expected value of the parameter as its final value.
- A variance of the parameter is also defined to evaluate the accuracy, or confidence, of the parameter estimate.
- Here the motion equation provides the prior distribution, and the observation equations provide a series of observations at different times, so the state of the sweeping robot at the current moment and the corresponding confidence level can be obtained.
- The confidence level represents the probability that the estimated value and the true state parameters are within a certain allowable error range.
- The Bayesian recursion continually uses the state obtained at the previous time to calculate the next state, forming a recursive process.
- the reliability of the motion equation and the observation equation will affect the fusion state information of the sweeping robot finally obtained by the Bayesian recursive estimation algorithm.
- The state information includes the position, attitude, linear velocity, angular velocity, and acceleration of the sweeping robot.
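A one-dimensional Kalman filter is one standard instance of Bayesian recursive estimation and illustrates the predict/update recursion just described: the motion equation supplies the prior, each observation updates it to a posterior, and the posterior variance plays the role of the confidence. The noise variances q and r below are illustrative assumptions:

```python
def bayes_recursive_fuse(x0, p0, observations, q=0.01, r=0.25):
    """1-D Bayesian recursive estimation (Kalman form).

    x0, p0       : prior mean and variance (from the motion equation)
    observations : sequence of measurements z_k (from the observation equations)
    q, r         : process and measurement noise variances (assumed values)
    Returns the posterior mean and variance after fusing all observations.
    """
    x, p = x0, p0
    for z in observations:
        p = p + q            # predict: the motion equation inflates uncertainty
        k = p / (p + r)      # gain: weight of the new observation vs. the prior
        x = x + k * (z - x)  # update: fuse the observation into the estimate
        p = (1.0 - k) * p    # posterior variance, i.e. the confidence
        # the posterior becomes the prior of the next step -> recursion
    return x, p

# Prior at 0.0 with unit variance; three observations all reading 1.0
x, p = bayes_recursive_fuse(0.0, 1.0, [1.0, 1.0, 1.0])
```

With each repeated observation the estimate moves toward the measured value and the variance shrinks, which is the recursive fusion behavior described above.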
- Step S40 according to the fused state information, using the depth map obtained by the sweeping robot, and updating the probability information of the voxels in the map through a static Bayesian filtering algorithm to construct an environment map;
- the current position and posture of the sweeping robot can be known.
- the depth map can be obtained by the depth camera, and the depth map can directly record the absolute distance between the object in the world space and the depth camera.
- the positional relationship between the objects around the sweeping robot and the sweeping robot can be known.
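This positional relationship can be recovered by back-projecting each depth-map pixel through the pinhole camera model into a camera-frame point cloud; the intrinsic parameters in this sketch are assumed values:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-frame 3-D points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                       # drop zero-depth pixels

# A 2x2 depth image, every pixel 1 m away, principal point at (0.5, 0.5)
pts = depth_to_points(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```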
- The state at the current moment is determined by the state at the previous moment and the action at the current moment, and the process going forward is only related to the current state (a Markov assumption).
- The probability information of the voxels in the map at the current moment can be obtained from the object positions provided by the current depth map and the corresponding voxel probability information from the previous moment, thereby constructing the environment map. That is, the probability of each voxel indicates whether an object occupies that voxel, and by combining the voxel information one can tell where objects exist in the map and where their edges are.
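The static Bayesian filtering update of a voxel's occupancy probability is commonly implemented in log-odds form, where each "hit" or "miss" observation adds a constant increment; the increments below are assumed sensor-model constants, not values from this application:

```python
import math

L_HIT, L_MISS = 0.85, -0.4   # assumed log-odds increments of the sensor model

def update_voxel(p_prior, hit):
    """One static Bayesian filter step for a single voxel: convert the
    prior occupancy probability to log-odds, add the observation's
    log-odds increment, and convert back to a probability."""
    l = math.log(p_prior / (1.0 - p_prior))
    l += L_HIT if hit else L_MISS
    return 1.0 / (1.0 + math.exp(-l))

p = 0.5                      # an unknown voxel starts at probability 0.5
for _ in range(3):           # three consecutive "occupied" observations
    p = update_voxel(p, hit=True)
```

Repeated consistent observations drive the voxel's probability toward 0 or 1, which is how the map comes to record where objects and their edges are.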
- In this embodiment, the motion equation of the sweeping robot is established according to motion constraints; an observation equation is established for each sensor according to the acquired measurement data of each sensor of the sweeping robot; the Bayesian recursive estimation algorithm is used to fuse the motion equation and the observation equations to obtain the fused state information of the sweeping robot; and, according to the fused state information, the depth map obtained by the sweeping robot is used to update the probability information of voxels in the map through a static Bayesian filtering algorithm to construct an environment map.
- The observation equations are established using data from a variety of different sensors, so more accurate position information of the sweeping robot can be obtained.
- Meanwhile, the motion equation ensures the accuracy of the obtained position information. This makes the information in the final map richer and improves the sweeping robot's ability to perceive and handle surrounding obstacles.
- a second embodiment of the multi-data source SLAM method is provided.
- Step S20 includes:
- Step A1 acquiring the measurement data of each sensor of the sweeping robot
- Sensors include IMU, wheel encoder, radar, depth camera, etc.
- the gyroscope and accelerometer in the IMU can obtain the angular velocities and accelerations of the sweeping robot along its three axes.
- the radar can obtain the point cloud information of the surrounding environment, and the depth map and infrared map of the surrounding environment can be obtained through the depth camera.
- Step A2 according to the sensor type corresponding to the measurement data, divide the measurement data into fast-changing data and slow-changing data;
- Each sensor has its own data-acquisition characteristics, an important one being the frequency at which the sensor acquires data.
- Accordingly, the measurement data are divided into fast-changing data and slow-changing data.
- Fast-changing data come from sensors with a higher scanning frequency, such as gyroscopes and accelerometers; slow-changing data come from sensors with a lower scanning frequency, such as radar and depth cameras.
- In other words, the acquisition period of fast-changing data is shorter, and that of slow-changing data is longer.
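Step A2 amounts to bucketing sensors by their acquisition frequency; the rates and the 50 Hz threshold in this sketch are illustrative assumptions, not values stated in the application:

```python
SENSOR_RATES_HZ = {            # assumed typical acquisition frequencies
    "imu": 200, "wheel_encoder": 100, "radar": 10, "depth_camera": 30,
}
FAST_THRESHOLD_HZ = 50         # assumed boundary between the two classes

def split_by_rate(rates, threshold=FAST_THRESHOLD_HZ):
    """Divide sensors into fast-changing and slow-changing groups
    according to their data-acquisition frequency."""
    fast = [s for s, hz in rates.items() if hz >= threshold]
    slow = [s for s, hz in rates.items() if hz < threshold]
    return fast, slow

fast, slow = split_by_rate(SENSOR_RATES_HZ)
```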
- Step A3 according to the measurement data belonging to the fast-changing data type and the slow-changing data type, respectively establishing a first type of observation equation and a second type of observation equation;
- After dividing the data into fast-changing data and slow-changing data, the established observation equations are correspondingly divided into a first type of observation equation and a second type of observation equation.
- The observation equation can be abstracted into the following expression:

  z_k,j = h(y_j, x_k) + v_k,j

  where z_k,j is the observation data, y_j is the landmark point, x_k is the position of the sweeping robot, v_k,j is the noise present during the observation, and h() is an abstract functional relationship.
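For illustration, one concrete instance of the abstract relationship z_k,j = h(y_j, x_k) + v_k,j is a planar range-bearing observation of a landmark; the noise level is an assumed value, not one from the application:

```python
import numpy as np

def h(x_k, y_j):
    """Noise-free observation model: range and bearing from the robot
    pose x_k = (px, py, theta) to the landmark y_j = (lx, ly)."""
    dx, dy = y_j[0] - x_k[0], y_j[1] - x_k[1]
    rng = np.hypot(dx, dy)                    # range to the landmark
    bearing = np.arctan2(dy, dx) - x_k[2]     # bearing relative to heading
    return np.array([rng, bearing])

def observe(x_k, y_j, noise_std=(0.05, 0.01), rng=None):
    """z_k,j = h(y_j, x_k) + v_k,j with Gaussian observation noise v."""
    rng = rng or np.random.default_rng(0)
    return h(x_k, y_j) + rng.normal(0.0, noise_std)

# Robot at the origin facing +x, landmark at (3, 4): range 5, bearing atan2(4, 3)
z = h(np.array([0.0, 0.0, 0.0]), np.array([3.0, 4.0]))
```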
- the time interval between time k and time k+1 of the first type of observation equation and the second type of observation equation is different.
- In this embodiment, the sensor data are classified by sensor type and the corresponding types of observation equations are established for each class; dividing the observation equations into types makes the result of the subsequent fusion process more accurate.
- a third embodiment of the multi-data source SLAM method is provided.
- Before step S30, the method includes:
- Step S31 fuse the observation equations of the first type to obtain a fusion result
- the first type of observation equation is constructed from fast-changing data. At the initial moment, the first-type observation equations are fused, for example the velocity, acceleration, and angular velocity observations obtained from the gyroscope and accelerometer, and the fusion result yields the initial state information of the sweeping robot.
- Step S32 using the fusion result as the initial value of the point cloud registration algorithm of the second type of observation equation
- the second type of observation equation is constructed from slow-changing data. The data-acquisition interval of slow-changing data is longer than that of fast-changing data, and when the sweeping robot has not failed, the state obtained from fast-changing data and from slow-changing data should agree. Therefore, the fusion result of the first type of observation equation can be used as the initial value for the second type of observation equation.
- Step S33 obtaining a registration result according to the point cloud registration algorithm and the initial value
- the point cloud registration algorithm obtains the pose transformation between two different frames. Because two different frames must be compared, the fusion result of the first type of observation equation is used as the initial value, that is, the initial frame; the point cloud registration algorithm then starts from the initial frame and obtains the pose transformation between adjacent frames.
- this pose transformation result is the registration result, from which the pose information of the sweeping robot at each moment can be obtained.
- Step S34 performing Bayesian inference on the fusion result and the registration result to obtain preliminary state information of the sweeping robot
- Bayesian inference uses existing knowledge, which in this application is the fusion result of the first-type observation equations and the registration result of the second-type observation equations, to infer the probability of a new event.
- What Bayesian inference yields is the best estimate of the preliminary state information of the sweeping robot at the current moment, together with the corresponding confidence.
- state information is obtained separately from the first type and the second type of observation equation, and Bayesian inference then fuses the two results to obtain the preliminary state information of the sweeping robot, so that the obtained preliminary state information is more accurate.
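A minimal sketch of such a fusion, under the simplifying assumption that both results are independent Gaussian estimates of the same 1-D state (the Bayesian inference in the patent is not limited to this case), is the precision-weighted average:

```python
def fuse_gaussian(mean_a, var_a, mean_b, var_b):
    """Bayesian fusion of two independent Gaussian estimates of one state,
    e.g. the IMU-derived fusion result and the registration result.
    The weight k favours the source with the smaller variance."""
    k = var_a / (var_a + var_b)
    mean = mean_a + k * (mean_b - mean_a)
    var = (1.0 - k) * var_a        # fused variance is smaller than either input
    return mean, var

# fusing x = 1.0 (variance 0.04) with x = 1.2 (variance 0.01)
# pulls the estimate toward the more confident second source
m, v = fuse_gaussian(1.0, 0.04, 1.2, 0.01)
```

The fused variance is always below both input variances, which is why combining the two observation-equation types sharpens the preliminary state estimate.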
- a fourth embodiment of the multi-data source SLAM method is provided.
- In the fourth embodiment, step S30 includes:
- Step B1 obtaining the preliminary state information obtained by the observation equation at the current moment
- the preliminary state information of the current moment is obtained through the methods of steps S31 to S34 in the third embodiment.
- Step B2 according to the preliminary state information, combined with the equation of motion, using the Bayesian recursive estimation algorithm to perform fusion, to obtain the fusion state information of the sweeping robot at the current moment;
- the preliminary state information is obtained from the observation equations alone, without yet being combined with the motion equation.
- the motion equation can also be expressed in terms of the time interval between two states.
- the motion equation encodes the motion constraints, which ensure that the final state information does not violate objective physical laws; that is, the plausibility of the state information is guaranteed.
- the Bayesian recursive estimation algorithm obtains the posterior distribution of the parameter based on the prior distribution of the parameter and a series of observed values, and then obtains the expected value of the parameter as its final value.
- the motion equation provides the prior distribution in the Bayesian recursive estimation algorithm, and the observation equations provide a series of observation values at different times, from which the state quantities of the sweeping robot at the current moment and the corresponding confidence can be obtained. Through the Bayesian recursive estimation algorithm, the sweeping robot uses the confidence of the previous moment to compute the predicted confidence of the current moment, then uses the current observation value together with the predicted confidence to obtain the posterior confidence, and uses this to update the current state information of the sweeping robot.
- the current state information and the corresponding confidence level of the sweeping robot are continuously updated through the Bayesian recursive estimation algorithm by using the motion equation and the observation equation.
- the Bayesian recursive estimation algorithm makes the obtained state information of the sweeping robot closest to the real state information, thereby improving the accuracy of the overall positioning.
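One concrete instance of this recursion, assuming a 1-D state with Gaussian noise (a Kalman-filter special case, used here only as a sketch; the noise variances q and r are illustrative), alternates a prediction from the motion equation with a correction from the observation equation:

```python
def recursive_estimate(belief, u, z, q, r):
    """One cycle of Bayesian recursive estimation over a 1-D state.
    belief = (mean, var) from the previous moment;
    u      = displacement predicted by the motion equation this step;
    z      = measurement of the state from the observation equation;
    q, r   = assumed motion and observation noise variances."""
    mean, var = belief
    # predict: push the previous confidence through the motion equation
    mean_pred, var_pred = mean + u, var + q
    # update: correct the prediction with the current observation
    k = var_pred / (var_pred + r)
    mean_new = mean_pred + k * (z - mean_pred)
    var_new = (1.0 - k) * var_pred
    return mean_new, var_new

belief = (0.0, 1.0)  # initial state with large uncertainty
for u, z in [(1.0, 1.1), (1.0, 2.05), (1.0, 2.9)]:
    belief = recursive_estimate(belief, u, z, q=0.01, r=0.04)
```

With each cycle the variance shrinks: the filter converges toward the true state while the confidence of the estimate improves.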
- a fifth embodiment of the multi-data source SLAM method is provided.
- In the fifth embodiment, before step S40, the method further includes:
- Step C1 performing down-sampling processing on the depth map obtained by the sweeping robot
- the depth map obtained by the depth camera is down-sampled, appropriately reducing its image resolution. This reduces the amount of data contained in the depth map while affecting the later voxel probability updates in the map as little as possible.
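A minimal sketch of such down-sampling, using block averaging so each output cell stays representative of its block (the factor of 2 is illustrative):

```python
def downsample(depth, factor):
    """Block-average down-sampling of a depth map given as a list of rows.
    Averaging within each factor x factor block reduces the resolution while
    keeping the block's typical depth, so later voxel updates are disturbed
    as little as possible."""
    h, w = len(depth), len(depth[0])
    out = []
    for i in range(0, h - h % factor, factor):
        row = []
        for j in range(0, w - w % factor, factor):
            block = [depth[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

depth = [[1.0, 1.0, 2.0, 2.0],
         [1.0, 1.0, 2.0, 2.0],
         [3.0, 3.0, 4.0, 4.0],
         [3.0, 3.0, 4.0, 4.0]]
small = downsample(depth, 2)  # 4x4 map reduced to 2x2
```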
- Step S40 includes:
- Step C2 determining the location information of the sweeping robot in the map according to the fused state information
- the state information includes the pose, linear velocity, angular velocity, acceleration, and other information of the sweeping robot. The optimal estimate of the state information obtained by the Bayesian recursive estimation algorithm is therefore used, and the position information is obtained from the estimated state information.
- the current pose of the sweeping robot is its position and attitude information.
- Step C3 using the point cloud information projected by the depth map to obtain the distance information of each object and the position information;
- Each pixel value in the depth map represents the distance from the object to the camera plane.
- the corresponding point cloud information can be projected from the depth map, that is, the position and edge information of each object in the surrounding environment.
- from this, the distance between each object and the current position of the cleaning robot can be determined.
- Step C4 according to the distance information, update the probability information of voxels in the map based on the static Bayesian filtering algorithm to construct the map;
- static Bayesian filtering judges the state at the current time from the state information at the previous time and the observation at the current time. In this application, the probability information of each voxel at the previous time is combined with the observation obtained from the depth map at the current time to obtain the new probability information of each voxel at the current moment, so that the sweeping robot can continuously update and construct the map.
- the depth map can accurately detect the location information of low objects in the environment, so that more abundant and complete map information can be obtained in the process of constructing the map.
- the state information of the sweeping robot can be determined by combining the motion equation and the observation equations, so that accurate positioning of the sweeping robot is achieved and the accuracy of the constructed map is further improved.
- the point cloud information of the depth map is used to determine the position information of each object in the environment, and the probability information of voxels in the map is continuously updated according to the static Bayesian filtering algorithm, so as to construct the map.
- the depth map can contain the information of low objects, which ensures the accuracy of the object information in the final constructed map.
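The voxel probability update can be sketched with the usual log-odds form of a static Bayesian filter; the sensor-model probabilities `p_hit` and `p_miss` below are assumptions for illustration, not values from the patent:

```python
import math

def update_voxel(logodds, measured_occupied, p_hit=0.7, p_miss=0.4):
    """Combine a voxel's prior occupancy (stored as log-odds) with one
    depth-map observation; a static Bayesian filtering sketch."""
    p = p_hit if measured_occupied else p_miss
    return logodds + math.log(p / (1.0 - p))

def probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

l = 0.0  # prior log-odds 0.0 corresponds to occupancy probability 0.5
for hit in [True, True, False, True]:  # depth-map observations over time
    l = update_voxel(l, hit)
p_occ = probability(l)  # mostly-hit history -> high occupancy probability
```

Storing log-odds makes each update a single addition, which is why occupancy maps typically keep log-odds per voxel rather than raw probabilities.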
- a sixth embodiment of the multi-data source SLAM method is provided.
- In the sixth embodiment, after step S40, the method further includes:
- Step S50 acquiring an infrared image
- interior decoration materials are complex and diverse, and some are difficult for optical sensors to measure, such as light-transmitting materials, reflective materials, and light-absorbing materials; the depth camera therefore has problems measuring objects made of these materials.
- failure to measure, or inaccurate measurement, leads to defects in the positioning accuracy of the final map and in its consistency with the actual environment. Therefore, the infrared image obtained by the depth camera is also used.
- Step S60 performing semantic segmentation and edge extraction on the objects in the infrared image to obtain object information
- the infrared image shows the spectral information of different objects, from which the position and contour information of light-transmitting, reflective, and light-absorbing materials can be obtained.
- semantic segmentation and edge extraction are performed on the objects in the infrared image: each object is judged according to previously trained semantic information (such as desks and stools), the corresponding objects are preliminarily segmented to obtain their preliminary position information, and the spectral information in the infrared image is then used to extract the edge of each object to obtain accurate object position information and edge information.
- Step S70 performing Bayesian inference according to the object information to obtain an inference result, and updating the inference result to the environment map;
- the object information includes the position information and edge information of the object, that is, where the object is in the room and the positions of the boundary points along the object's edge, so that the specific information of the object can be constructed from its position and edge information.
- the Bayesian inference here uses the existing probability information of each voxel in the map, combined with the object information extracted through semantic segmentation and edge extraction, to update the probability information of each voxel, thereby updating the map.
- in this way, information about objects that are difficult for the optical sensor to measure is added to the constructed map, ensuring the consistency of the final constructed map information with the actual environment information.
- more object information in the environment is provided through the infrared image, and the probability information of each voxel in the map is updated through the Bayesian inference method to obtain a map that is more consistent with the actual environment.
- a seventh embodiment of the multi-data source SLAM method is provided.
- In the seventh embodiment, after step S50, the method further includes:
- Step D1 obtaining the confidence level of each point in the infrared image and displaying the confidence level through a confidence level histogram
- the confidence histogram method counts the confidence of each point in the infrared image and displays the statistical result as a histogram, from which the distribution of confidence across the infrared image can be obtained.
- Step D2 delete the noise whose confidence level does not meet the preset condition in the infrared image
- from the histogram, the number of points falling in each confidence interval can be determined. Points whose confidence is lower than a preset value and whose interval contains fewer points than a preset number do not meet the preset condition; these are the noise points of the infrared image, and removing them improves the accuracy of the subsequent map construction.
- the infrared image can also be down-sampled to reduce the data pressure on the processor of the cleaning robot.
- the points in the infrared image are analyzed through the confidence histogram, and the noise points that do not meet the preset conditions are deleted, thereby improving the accuracy when constructing the map.
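Steps D1 and D2 could be sketched as follows; the bin count, confidence threshold, and minimum bin population are illustrative assumptions, not values from the patent:

```python
def filter_by_confidence(points, conf_threshold=0.3, min_bin_count=2, bins=10):
    """Histogram the per-point confidences, then drop points whose confidence
    is below the threshold AND whose histogram bin is sparsely populated;
    those are treated as noise points."""
    counts = [0] * bins
    for _, conf in points:
        counts[min(int(conf * bins), bins - 1)] += 1
    kept = []
    for pt, conf in points:
        bin_idx = min(int(conf * bins), bins - 1)
        if conf < conf_threshold and counts[bin_idx] < min_bin_count:
            continue  # noise point: low confidence and isolated in the histogram
        kept.append((pt, conf))
    return kept

points = [((0, 0), 0.95), ((1, 0), 0.90), ((2, 0), 0.05), ((3, 0), 0.92)]
clean = filter_by_confidence(points)  # the lone low-confidence point is dropped
```

Requiring both a low confidence and a sparse bin avoids discarding a large, genuinely low-confidence region wholesale.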
- an eighth embodiment of the multi-data source SLAM method is provided.
- In the eighth embodiment, after step S60, the method further includes:
- Step E1 based on the environment map, reproject the inference result to obtain a reprojection error
- the point cloud information obtained from the infrared image of the depth camera is reprojected into the point cloud with the radar as the reference plane, and the error between the two point clouds is taken as the reprojection error.
- Step E2 performing Bayesian inference on the object information based on the reprojection error
- reprojection is performed according to different coordinate systems and the reprojection error is obtained, and Bayesian inference is performed by using the reprojection error to ensure the consistency of the information of each voxel in the constructed environment map.
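A minimal sketch of the reprojection-error computation, assuming a known 2-D extrinsic (theta, tx, ty) between the camera and radar frames and already-matched points (both assumptions for illustration):

```python
import math

def reprojection_error(cam_points, radar_points, transform):
    """Reproject camera-frame points into the radar reference frame using the
    extrinsic (theta, tx, ty) and report the mean Euclidean error against the
    matched radar points."""
    theta, tx, ty = transform
    c, s = math.cos(theta), math.sin(theta)
    total = 0.0
    for (x, y), (rx, ry) in zip(cam_points, radar_points):
        px, py = c * x - s * y + tx, s * x + c * y + ty
        total += math.hypot(px - rx, py - ry)
    return total / len(cam_points)

cam = [(0.0, 0.0), (1.0, 0.0)]
radar = [(0.5, 0.0), (1.5, 0.0)]
err_good = reprojection_error(cam, radar, (0.0, 0.5, 0.0))  # correct extrinsic
err_bad = reprojection_error(cam, radar, (0.0, 0.0, 0.0))   # miscalibrated
```

A large error signals an inconsistency between the two sensor frames, which the subsequent Bayesian inference can account for when updating voxels.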
- a ninth embodiment of the multi-data source SLAM method is provided.
- In the ninth embodiment, the multi-data-source SLAM method further includes:
- Step F1 compare the result error of the observation equation and the motion equation
- Step F2 if the result error is greater than a preset threshold, increase the corresponding error times;
- Step F3 if the number of errors is greater than the preset number of times, output prompt information
- the motion equation is constructed from motion constraints to ensure the plausibility of the finally obtained state information of the sweeping robot.
- the observation equations are established from the sensors' observation data. Therefore, if the error between the observation equation and the motion equation exceeds the preset threshold, the corresponding error count is increased.
- in this way, the accuracy of the sensors is checked by comparing the error between the motion equation and the observation equations.
- if the error count exceeds the preset number, prompt information is output to alert the user.
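Steps F1 to F3 amount to a simple fault monitor; the error threshold, maximum error count, and prompt text below are illustrative assumptions:

```python
class SensorMonitor:
    """Count how often the observation equation and the motion equation
    disagree by more than a threshold, and produce prompt information once
    the error count exceeds a preset number (steps F1-F3)."""
    def __init__(self, error_threshold=0.5, max_errors=3):
        self.error_threshold = error_threshold
        self.max_errors = max_errors
        self.error_count = 0

    def check(self, observed_state, predicted_state):
        # F1/F2: compare the two results and count large disagreements
        if abs(observed_state - predicted_state) > self.error_threshold:
            self.error_count += 1
        # F3: prompt the user once errors accumulate past the preset number
        if self.error_count > self.max_errors:
            return "sensor may be faulty: please check the device"
        return None

monitor = SensorMonitor()
prompt = None
for obs, pred in [(1.0, 1.8), (2.0, 2.9), (3.0, 4.1), (4.0, 5.2)]:
    prompt = monitor.check(obs, pred) or prompt
```

Counting repeated disagreements rather than reacting to a single one avoids false alarms from transient noise.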
- an embodiment of the present application further proposes a SLAM device with multiple data sources, and the SLAM device with multiple data sources includes:
- the first establishment module is used to establish the motion equation of the sweeping robot according to the motion constraints
- the second establishment module is used to establish an observation equation for each sensor according to the acquired measurement data of each sensor of the sweeping robot;
- a first fusion module configured to use a Bayesian recursive estimation algorithm to fuse the motion equation and the observation equation to obtain the fused state information of the sweeping robot;
- the building module is used to update the probability information of voxels in the map by using the depth map obtained by the sweeping robot according to the fused state information to construct an environment map.
- the first establishment module includes:
- a first acquisition unit used for acquiring measurement data of each sensor of the sweeping robot
- a dividing unit configured to divide the measurement data into fast-changing data and slow-changing data according to the sensor type corresponding to the measurement data
- an establishing unit configured to establish a first type of observation equation and a second type of observation equation according to the measurement data belonging to the fast-changing data type and the slow-changing data type, respectively.
- the multi-data source SLAM device further includes:
- a second fusion module configured to fuse the observation equations of the first type to obtain a fusion result
- an initial value module for using the fusion result as the initial value of the point cloud registration algorithm of the second type of observation equation
- a registration module configured to obtain a registration result according to the point cloud registration algorithm and the initial value
- the first inference module is configured to perform Bayesian inference on the fusion result and the registration result to obtain preliminary state information of the cleaning robot.
- the first fusion module includes:
- a second obtaining unit configured to obtain the preliminary state information obtained by the observation equation at the current moment
- the fusion unit is configured to use the Bayesian recursive estimation algorithm to perform fusion according to the preliminary state information and the motion equation to obtain the fusion state information of the sweeping robot at the current moment.
- the multi-data source SLAM device further includes:
- the downsampling module is used to downsample the depth map obtained by the sweeping robot.
- the building module includes:
- a determining unit configured to determine the location information of the sweeping robot in the map according to the fused state information
- a third obtaining unit configured to obtain the distance information between each object and the position information by using the point cloud information projected by the depth map
- the updating unit is configured to update the probability information of voxels in the map based on the static Bayesian filtering algorithm according to the distance information to construct the map.
- the multi-data source SLAM device further includes:
- a first acquisition module used for acquiring an infrared image
- An update module configured to perform Bayesian inference according to the object information to obtain an inference result, and update the inference result to the environment map.
- the multi-data source SLAM device further includes:
- a second acquiring module configured to acquire the confidence level of each point in the infrared image and display the confidence level through a confidence level histogram
- a deletion module is used to delete the noise points whose confidence level does not meet the preset condition in the infrared image.
- the multi-data source SLAM device further includes:
- a reprojection module configured to reproject the inference result to obtain a reprojection error
- the second inference module is configured to perform Bayesian inference on the object information based on the reprojection error.
- the multi-data source SLAM device further includes:
- a comparison module for comparing the result error of the observation equation and the motion equation
- the output module is configured to output prompt information if the number of errors is greater than the preset number of times.
Claims (12)
- A multi-data-source SLAM method, wherein the multi-data-source SLAM method comprises the following steps: establishing a motion equation of a sweeping robot according to motion constraints; establishing an observation equation for each sensor according to acquired measurement data of each sensor of the sweeping robot; fusing the motion equation and the observation equations using a Bayesian recursive estimation algorithm to obtain fused state information of the sweeping robot; and, according to the fused state information, using the depth map acquired by the sweeping robot to update the probability information of voxels in the map through a static Bayesian filtering algorithm so as to construct an environment map.
- The multi-data-source SLAM method according to claim 1, wherein the step of establishing an observation equation for each sensor according to the acquired measurement data of each sensor of the sweeping robot comprises: acquiring the measurement data of each sensor of the sweeping robot; dividing the measurement data into fast-changing data and slow-changing data according to the sensor type corresponding to the measurement data; and establishing a first type of observation equation and a second type of observation equation according to the measurement data belonging to the fast-changing data type and the slow-changing data type, respectively.
- The multi-data-source SLAM method according to claim 2, wherein before the step of fusing the motion equation and the observation equations using the Bayesian recursive estimation algorithm, the method further comprises: fusing the first type of observation equations to obtain a fusion result; using the fusion result as the initial value of a point cloud registration algorithm for the second type of observation equations; obtaining a registration result according to the point cloud registration algorithm and the initial value; and performing Bayesian inference on the fusion result and the registration result to obtain preliminary state information of the sweeping robot.
- The multi-data-source SLAM method according to claim 3, wherein the step of fusing the motion equation and the observation equations using the Bayesian recursive estimation algorithm to obtain the fused state information of the sweeping robot comprises: obtaining the preliminary state information obtained from the observation equations at the current moment; and performing fusion according to the preliminary state information, combined with the motion equation, using the Bayesian recursive estimation algorithm, to obtain the fused state information of the sweeping robot at the current moment.
- The multi-data-source SLAM method according to claim 4, wherein before the step of, according to the fused state information, using the depth map acquired by the sweeping robot to update the probability information of voxels in the map through the static Bayesian filtering algorithm so as to construct the map, the method further comprises: performing down-sampling processing on the depth map acquired by the sweeping robot.
- The multi-data-source SLAM method according to claim 4, wherein the step of, according to the fused state information, using the depth map acquired by the sweeping robot to update the probability information of voxels in the map through the static Bayesian filtering algorithm so as to construct the map comprises: determining the position information of the sweeping robot in the map according to the fused state information; using the point cloud information projected from the depth map to obtain the distance information between each object and the position information; and updating the probability information of voxels in the map based on the static Bayesian filtering algorithm according to the distance information so as to construct the map.
- The multi-data-source SLAM method according to claim 1, wherein after the step of, according to the fused state information, using the depth map acquired by the sweeping robot to update the probability information of voxels in the map through the static Bayesian filtering algorithm so as to construct the map, the method further comprises: acquiring an infrared image; performing semantic segmentation and edge extraction on objects in the infrared image to obtain object information; and performing Bayesian inference according to the object information to obtain an inference result, and updating the inference result into the environment map.
- The multi-data-source SLAM method according to claim 7, wherein after the step of acquiring the infrared image, the method further comprises: acquiring the confidence of each point in the infrared image and displaying the confidence through a confidence histogram; and deleting noise points in the infrared image whose confidence does not meet a preset condition.
- The multi-data-source SLAM method according to claim 7, wherein after the step of performing Bayesian inference according to the object information to obtain the inference result, the method further comprises: reprojecting the inference result to obtain a reprojection error; and performing Bayesian inference on the object information based on the reprojection error.
- The multi-data-source SLAM method according to claim 1, wherein the multi-data-source SLAM method further comprises: comparing the result error of the observation equation and the motion equation; if the result error is greater than a preset threshold, increasing the corresponding error count; and if the error count is greater than a preset number, outputting prompt information.
- A multi-data-source SLAM device, wherein the multi-data-source SLAM device comprises a processor, a memory, and a computer program stored in the memory and executable by the processor, and when the computer program is executed by the processor, the steps of the multi-data-source SLAM method according to any one of claims 1 to 10 are implemented.
- A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the multi-data-source SLAM method according to any one of claims 1 to 10 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010853891.0 | 2020-08-21 | ||
CN202010853891.0A (CN114077245A) | 2020-08-21 | 2020-08-21 | Multi-data-source SLAM method, apparatus, sweeping robot, and readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022036792A1 | 2022-02-24 |
Family
ID=80282769
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/115672 (WO2022036792A1) | Multi-data-source SLAM method, device and computer-readable storage medium | 2020-08-21 | 2020-09-16 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114077245A (zh) |
WO (1) | WO2022036792A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115601432B (zh) * | 2022-11-08 | 2023-05-30 | 肇庆学院 | 一种基于fpga的机器人位置最优估计方法及系统 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100094460A1 (en) * | 2008-10-09 | 2010-04-15 | Samsung Electronics Co., Ltd. | Method and apparatus for simultaneous localization and mapping of robot |
CN104793182A (zh) * | 2015-04-21 | 2015-07-22 | 东南大学 | 非高斯噪声条件下基于粒子滤波的室内定位方法 |
CN104807465A (zh) * | 2015-04-27 | 2015-07-29 | 安徽工程大学 | 机器人同步定位与地图创建方法及装置 |
CN108387236A (zh) * | 2018-02-08 | 2018-08-10 | 北方工业大学 | 一种基于扩展卡尔曼滤波的偏振光slam方法 |
CN109062230A (zh) * | 2018-08-06 | 2018-12-21 | 江苏科技大学 | 水下辅助采油机器人控制系统及动力定位方法 |
CN110260866A (zh) * | 2019-07-19 | 2019-09-20 | 闪电(昆山)智能科技有限公司 | 一种基于视觉传感器的机器人定位与避障方法 |
Non-Patent Citations (1)
Title |
---|
LI XIUZHI, JIA SONG-MIN: "3D Map Building for Mobile Robot Based on Multi-Sensor Fusion Aided SLAM", BEIJING LIGONG DAXUE XUEBAO = TRANSACTION OF BEIJING INSTITUTE OF TECHNOLOGY, BEIJING LIGONG DAXUE, BEIJING, CN, vol. 35, no. 3, 31 March 2015 (2015-03-31), CN , pages 262 - 267, XP055902205, ISSN: 1001-0645, DOI: 10.15918/j.tbit1001-0645.2015.03.009 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117292208A (zh) * | 2023-11-24 | 2023-12-26 | 广州中医药大学(广州中医药研究院) | 一种数据处理过程中错误图案的分类方法与系统 |
CN117292208B (zh) * | 2023-11-24 | 2024-02-23 | 广州中医药大学(广州中医药研究院) | 一种数据处理过程中错误图案的分类方法与系统 |
Also Published As
Publication number | Publication date |
---|---|
CN114077245A (zh) | 2022-02-22 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20949996; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | EP: PCT application non-entry in European phase | Ref document number: 20949996; Country of ref document: EP; Kind code of ref document: A1 |
| | 32PN | EP: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26-09-23) |