WO2020223974A1 - Method for updating a map and mobile robot
- Publication number: WO2020223974A1
- Application: PCT/CN2019/086281
- Authority: WIPO (PCT)
Classifications
- G05D1/0221: control of position or course in two dimensions specially adapted to land vehicles, trajectory definition involving a learning process
- G05D1/0223: trajectory definition involving speed control of the vehicle
- G05D1/0236: optical position detection using optical markers or beacons in combination with a laser
- G05D1/024: optical position detection using obstacle or wall sensors in combination with a laser
- G05D1/0242: optical position detection using non-visible light signals, e.g. IR or UV signals
- G05D1/0246: optical position detection using a video camera in combination with image processing means
- G05D1/0253: extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
- G05D1/0274: internal positioning means using mapping information stored in a memory device
- G05D1/0276: using signals provided by a source external to the vehicle
- G05D1/0278: using satellite positioning signals, e.g. GPS
- G01C21/005: navigation with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/20: instruments for performing navigational calculations
- G01C21/3804: electronic maps specially adapted for navigation; creation or updating of map data
- G06T7/74: determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06T2207/30252: vehicle exterior; vicinity of vehicle
Description
- This application relates to the field of navigation control technology, in particular to a method for updating a map, a server and a mobile robot.
- Autonomous mobile devices, typified by robots, navigate and move based on maps; when no map is available for navigation, the device must build or update one. At present, the maps constructed by autonomous mobile devices during navigation movement cannot be persisted across sessions, because the initial position, the initial attitude, and the physical space in which the navigation movement is performed cannot be guaranteed to be identical from one session to the next.
- The purpose of this application is to provide a method for updating a map and a mobile robot, so as to solve the problem of non-persistence of maps in the prior art.
- The first aspect of the present application provides a method for updating a map, including: acquiring a current map and a current positioning data set constructed by a first mobile device while performing a navigation movement operation in a physical space, wherein the first mobile device navigates using a pre-stored reference map corresponding to the physical space and its reference positioning data set; performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set; and using the fused map and positioning data set as the new reference map and reference positioning data set of the first mobile device.
- In some embodiments, the reference map and its reference positioning data set are constructed based on the first mobile device and/or at least one second mobile device each performing at least one navigation movement operation in the physical space.
- In some embodiments, the step of performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set includes: determining first positioning feature information and its first positioning coordinate information in the reference positioning data set that match second positioning feature information and its second positioning coordinate information in the current positioning data set; and fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information.
- In some embodiments, the step of determining the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information includes: matching each piece of first positioning feature information in the reference positioning data set against each piece of second positioning feature information in the current positioning data set; and determining, based on the obtained matching result, the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information.
- In some embodiments, the step of determining the matched information based on the obtained matching result includes: matching, based on the matched first and second positioning feature information, their respective first positioning coordinate information and second positioning coordinate information, so as to obtain matched first positioning coordinate information and second positioning coordinate information.
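A minimal sketch of this fusion flow in Python; the helper names (match_features, estimate_alignment, merge_maps, merge_feature_sets) are hypothetical stand-ins for the matching and merging steps detailed in the embodiments below, not functions named by the application:

```python
# Sketch of the described fusion flow; all helpers are hypothetical.
def update_reference(ref_map, ref_set, cur_map, cur_set):
    # Determine matched first/second positioning feature information and
    # their positioning coordinate information (step S121).
    pairs = match_features(ref_set, cur_set)
    # Estimate the coordinate deviation between the matched pairs and
    # correct the current map before merging.
    transform = estimate_alignment([p.ref_coord for p in pairs],
                                   [p.cur_coord for p in pairs])
    aligned_cur_map = transform.apply(cur_map)
    # Merge map and positioning data set; the result becomes the new
    # reference map and reference positioning data set.
    new_map = merge_maps(ref_map, aligned_cur_map)
    new_set = merge_feature_sets(ref_set, cur_set, pairs, transform)
    return new_map, new_set
```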
- In some embodiments, the first positioning feature information includes first measurement positioning feature information determined based on spatial features in the reference map, and the second positioning feature information includes second measurement positioning feature information determined based on spatial features in the current map; and/or the first positioning feature information includes first visual positioning feature information extracted from first key frame images in the reference positioning data set, and the second positioning feature information includes second visual positioning feature information extracted from second key frame images in the current positioning data set.
- In some embodiments, the first measurement positioning feature information includes at least one of the following: measurement data determined based on a combination of coordinate information of spatial features in the reference map, and measurement data determined based on a combination of depth information describing spatial features in the reference map. The second measurement positioning feature information includes at least one of the following: measurement data determined based on a combination of coordinate information of the corresponding spatial features in the current map, and measurement data determined based on a combination of depth information describing spatial features in the current map.
- In some embodiments, the step of matching each piece of first positioning feature information in the reference positioning data set with each piece of second positioning feature information in the current positioning data set includes: matching the second positioning feature information in each second key frame image of the current positioning data set with the first positioning feature information in each first key frame image of the reference positioning data set, so as to determine the first positioning feature information and second positioning feature information that match between the first and second key frame images.
- In some embodiments, the method further includes: analyzing the first key frame images in the reference positioning data set to determine a first relative orientation relationship between the image coordinate information of each first key frame image and a main direction of the physical space, and adjusting the pixel positions of the first positioning feature information in the first key frame images based on the first relative orientation relationship; and/or analyzing the second key frame images in the current positioning data set to determine a second relative orientation relationship between the image coordinate information of each second key frame image and the main direction of the physical space, and adjusting the pixel positions of the second positioning feature information in the second key frame images based on the second relative orientation relationship; the second positioning feature information in the adjusted second key frame images is then matched against the first positioning feature information in the adjusted first key frame images.
- In some embodiments, the method further includes: adjusting the reference map or the current map until the two adjusted maps meet a preset overlap condition, and then determining, based on the two adjusted maps, the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information in the reference positioning data set and the current positioning data set.
- In some embodiments, the step of fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information includes: correcting coordinate errors in the reference map and/or the current map based on the coordinate deviation information between the matched first and second positioning coordinate information; performing a merge operation based on at least one corrected map to obtain a new reference map; and marking at least the matched first and second positioning feature information from the reference positioning data set and the current positioning data set on the new reference map to obtain new positioning coordinate information.
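One standard way to realize the described coordinate correction is to estimate a 2D rigid transform (rotation plus translation) from the matched coordinate pairs by least squares, then apply it to the current map before the merge operation. The NumPy sketch below is an illustrative assumption, not the correction method mandated by the application:

```python
import numpy as np

def estimate_rigid_2d(ref_pts, cur_pts):
    """Least-squares R, t such that R @ cur + t approximates ref.

    ref_pts, cur_pts: (N, 2) arrays of matched first/second positioning
    coordinate information.
    """
    ref = np.asarray(ref_pts, dtype=float)
    cur = np.asarray(cur_pts, dtype=float)
    ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
    H = (cur - cur_c).T @ (ref - ref_c)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ref_c - R @ cur_c
    return R, t
```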
- In some embodiments, the step of fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matched information includes at least one of the following steps to obtain a new reference positioning data set: adjusting the reference positioning data set or the current positioning data set based on the positioning feature deviation information between the matched first and second positioning feature information; adding each piece of unmatched second positioning feature information in the current positioning data set to the reference positioning data set; or adding each piece of unmatched first positioning feature information in the reference positioning data set to the current positioning data set.
- In some embodiments, the method further includes: detecting the completeness of the current map and/or the amount of information in the current positioning data set, and executing the data fusion processing operation based on the obtained detection result.
- In some embodiments, the method further includes sending the new reference map and its reference positioning data set to the first mobile device located in the physical space.
- In some embodiments, the method further includes marking, on the reference map, the position of at least one second device equipped with a camera device located in the physical space.
- The second aspect of the present application provides a server, including: an interface device for communicating with a first mobile device located in a physical space; a storage device for storing the reference map and its reference positioning data set provided to the first mobile device, for storing the current map and its current positioning data set constructed by the first mobile device while performing navigation movement operations in the physical space, and for storing at least one program; and a processing device, connected to the storage device and the interface device, for calling and executing the at least one program so as to coordinate the storage device and the interface device to execute the method of any embodiment of the first aspect.
- The third aspect of the present application provides a mobile robot, including: a storage device for storing a reference map and its reference positioning data set, a current map and its current positioning data set, and at least one program, wherein the current map and its current positioning data set are constructed by the mobile robot while performing a navigation movement operation, and the reference map and its reference positioning data set are used by the mobile robot to perform the navigation movement operation; a mobile device for performing the movement operation along a navigation route determined based on the reference map; a positioning sensing device for collecting second positioning feature information during the navigation movement operation to form the current positioning data set; and a processing device, connected to the storage device, the positioning sensing device, and the mobile device, for calling and executing the at least one program so as to coordinate them to execute the method for updating a map described in the first aspect.
- In some embodiments, the stored reference map and its reference positioning data set are constructed based on the mobile robot itself and/or at least one second mobile device each performing at least one navigation movement operation in the same physical space.
- In some embodiments, the mobile robot further includes an interface device for data communication with at least one second mobile device; the processing device further obtains a third map and a third positioning data set provided by the second mobile device, so as to perform data fusion processing on the reference map and its reference positioning data set, the second map and its second positioning data set, and the third map and its third positioning data set.
- The fourth aspect of the present application provides a mobile robot, including: an interface device for data communication with a server; a storage device for storing the reference map and its reference positioning data set used to provide navigation services during navigation movement operations in a physical space, for storing the current map and its current positioning data set constructed during the execution of the navigation movement operation, and for storing at least one program; and a processing device, connected to the storage device and the interface device, for calling and executing the at least one program so as to coordinate the storage device and the interface device to execute the following method: sending the current map and its current positioning data set to the server, obtaining the new reference map and its reference positioning data set returned by the server, and updating the stored reference map and its reference positioning data set; wherein the new reference map and its reference positioning data set are obtained by the server through data fusion of the pre-update reference map and its reference positioning data set with the current map and its current positioning data set.
- In some embodiments, the new reference map and its reference positioning data set are further fused with a third map and a third positioning data set provided by at least one second mobile device.
- This application provides a solution for map persistence: when the mobile device restarts, it works in the same map and coordinate system as in its previous working session.
- On this basis, the user can annotate the map on a terminal device to set the area in which the mobile device works and how it works.
- Over time, the map in this application accumulates coverage of different scenes; it can therefore provide positioning information for mobile devices across different time periods and lighting conditions.
- FIG. 1 shows a schematic flowchart of one implementation of the method for updating a map of this application.
- FIG. 2 shows a flowchart of the steps of fusing the reference map and its reference positioning data set with the current map and its current positioning data set in this application.
- FIG. 3 shows a schematic structural diagram of an embodiment of the server of this application.
- FIG. 4 shows a schematic diagram of the module structure of a mobile robot of this application in one embodiment.
- FIG. 5 shows a schematic diagram of the workflow of a mobile robot of this application during one working session.
- FIG. 6 shows a schematic diagram of another embodiment of the mobile robot of this application.
- Although the terms first, second, etc. are used herein to describe various elements in some instances, these elements should not be limited by these terms; the terms are only used to distinguish one element from another.
- For example, the first mobile device may be referred to as the second mobile device, and similarly, the second mobile device may be referred to as the first mobile device, without departing from the scope of the various described embodiments.
- Both the first mobile device and the second mobile device describe a mobile device, but unless the context clearly indicates otherwise, they are not the same mobile device. Similar situations include the first positioning feature information and the second positioning feature information, the first key frame image and the second key frame image, and the first positioning coordinate information and the second positioning coordinate information.
- As used herein, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
- The autonomous mobile devices mentioned in the background extend to other mobile devices that need to navigate based on maps, such as cars and vehicle-mounted terminals configured on cars. In physical spaces where satellite positioning technology or other positioning technologies are unavailable and positioning information in map data must be used instead, a mobile device (such as a mobile robot) cannot otherwise obtain the corresponding positioning information and map for that physical space.
- To this end, this application provides a method for updating a map and a mobile robot, so as to provide a mobile device with a persistent reference map and reference positioning data set that can be used for navigation movement in a physical space.
- The physical space refers to a physical place provided for the mobile device to navigate and move in, and includes, but is not limited to, any of the following: indoor/outdoor spaces, road spaces, flight spaces, etc.
- For example, when the mobile device is an unmanned aerial vehicle, the physical space corresponds to a flight space; in other embodiments, if the mobile device is a vehicle with an autopilot function, the physical space corresponds to a tunnel road where positioning signals cannot be obtained, or a road space where the network signal is weak but navigation is needed; in still other embodiments, where the mobile device is a sweeping robot, the physical space corresponds to an indoor or outdoor space.
- In some embodiments, the reference map and its reference positioning data set are constructed based on the first mobile device and/or at least one second mobile device each performing at least one navigation movement operation in the physical space; that is, the reference map and its reference positioning data set fuse the maps and positioning data sets constructed by at least one mobile device over multiple navigation movements in the same physical space.
- The reference map and its reference positioning data set constitute the aforementioned map data.
- The mobile device is equipped with a camera device, a motion sensing device, and other sensing devices that provide navigation data for autonomous movement. Mobile devices include the first mobile device and/or at least one second mobile device, where the first mobile device and the second mobile device may be of the same type or of different types.
- For example, both the first mobile device and the second mobile device are trucks with autonomous navigation capabilities.
- For another example, the first mobile device is a cleaning robot and the second mobile device is a family companion robot.
- For another example, both the first mobile device and the second mobile device are vehicle-mounted terminals.
- The first mobile device or the second mobile device may also be a patrol robot or the like.
- The method for updating a map and the mobile robot provided by the present application store the map and positioning data set constructed during each working session in a storage medium and merge them with the reference map and reference positioning data set in that medium, so that the reference map and reference positioning data set are continuously optimized and come to cover more scenes.
- In this way, the first mobile device can move autonomously using the reference map and its reference positioning data set, and can accurately determine its position in the environment across different time periods and lighting conditions.
- Moreover, the first mobile device is positioned in the same coordinate system every time it navigates and moves, which provides a basis for persistent map applications.
- The map constructed by the first mobile device during navigation movement may be any of the following types: a grid map, a topological map, and so on.
- For example, a grid map is constructed for path planning.
- A topological map represents the physical space as a topological structure with nodes and connecting edges, where nodes represent important locations in the environment (corners, doors, elevators, stairs, etc.) and edges represent the connection relationships between nodes, such as corridors.
- Various maps can be marked with semantic tags so that the user can perform semantic-based navigation control of the first mobile device.
- The semantic tags may be the names of objects in the physical space, such as a desk or a notebook.
- For example, the user sends a voice command to the first mobile device referring to such a tag, such as a position 6 meters ahead and to the right of the bedroom; a map carrying such tags is therefore also called a semantic map.
- The navigation movement operation refers to a process in which the first mobile device moves using navigation data and updates the map data.
- For example, during navigation movement the first mobile device uses the constructed map data to navigate to a designated location and complete its work. For another example, a vehicle in a cruising state navigates based on map data on a road where positioning cannot be obtained, such as a tunnel.
- The first mobile device constructs a map of its working environment during navigation movement and stores it on a storage medium; for example, a sweeping robot or a vehicle constructs a map of its working environment while working and transfers it to a storage medium. The storage medium may be separate from the first mobile device, such as a storage device configured on the server side, or a storage device of a smart terminal that communicates with the first mobile device.
- Examples include storage media such as SD cards and flash memory configured in smart terminals, and storage media such as solid-state drives on the server side. The storage medium may also be a storage device configured on the first mobile device itself, for example an SD card or flash memory in the first mobile device.
- This application takes as an example the current map data constructed by the first mobile device while performing one navigation operation, that is, the current map data comprising the current map and the current positioning data set, to describe the execution process of updating so as to provide a persistent reference map and its reference positioning data set.
- Here, the first mobile device moves autonomously according to the reference map and its reference positioning data set during the current navigation movement, and at the same time constructs the current map and its current positioning data set corresponding to the current navigation movement.
- For example, a vehicle with a cruise function can drive autonomously based on the reference map and its reference positioning data set using VSLAM (Visual Simultaneous Localization and Mapping) or SLAM (Simultaneous Localization and Mapping), while constructing, according to the path taken, the current map of the physical space traveled this time and its current visual data set.
- For another example, when a sweeping robot performs a cleaning task, it can move and clean autonomously based on the reference map and its reference positioning data set using VSLAM or SLAM, while constructing, according to the cleaned path, the current map of the physical space covered this time and its current visual data set. For yet another example, a navigation robot in a hotel can, after receiving a semantic instruction from a customer, perform navigation services based on the reference map and its reference positioning data set using VSLAM or SLAM, while constructing, according to the path taken, the current map of the physical space traversed this time and its current visual data set.
- In some embodiments, the current map and the reference map are of the same type, for example both grid maps or both topological maps. In other embodiments, the current map and the reference map can be converted to the same type; for example, a grid map is converted into a topological map by using pixel data at a preset resolution.
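As one concrete illustration of such a conversion (an assumption for illustration, not the application's procedure), an occupancy grid can be downsampled at a preset resolution, with a topological node per free coarse cell and edges between 4-adjacent free cells:

```python
import numpy as np

def grid_to_topological(grid, cell=4):
    """Convert an occupancy grid (0 = free, 1 = obstacle) into a simple
    topological map by downsampling at a preset resolution."""
    g = np.asarray(grid)
    h, w = g.shape[0] // cell, g.shape[1] // cell
    # A coarse cell counts as free if under half its pixels are obstacles.
    free = np.array([[g[i*cell:(i+1)*cell, j*cell:(j+1)*cell].mean() < 0.5
                      for j in range(w)] for i in range(h)])
    nodes = {(i, j) for i in range(h) for j in range(w) if free[i, j]}
    edges = {((i, j), (i + di, j + dj))
             for (i, j) in nodes
             for di, dj in ((0, 1), (1, 0))
             if (i + di, j + dj) in nodes}    # 4-adjacency
    return nodes, edges
```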
- The current map depicts the geographic information of the physical space detected by the first mobile device along the route of one navigation movement; in other words, the current map is determined based on the positions and postures of the first mobile device during that movement.
- For example, the current map includes geographic information such as the coordinate information corresponding to the starting position of the first mobile device and the coordinate information corresponding to obstacles sensed during the movement.
- The reference map depicts the geographic information of the physical space obtained by integrating the routes of multiple navigation movements of the first mobile device and/or the at least one second mobile device in the same physical space.
- For example, the reference map includes geographic information such as the coordinate information corresponding to the starting position determined by the first mobile device through positioning matching, and the coordinate information corresponding to obstacles in the physical space determined after positioning matching.
- The current positioning data set includes each piece of second positioning feature information collected during the current navigation movement and its second positioning coordinate information in the current map.
- In some embodiments, the second positioning feature information includes second measurement positioning feature information determined based on the spatial features depicted in the current map.
- For example, the first mobile device includes measurement sensing devices such as a laser ranging sensor and an infrared ranging sensor, and includes sensors such as angle sensors arranged on its movement control system (such as a drive motor, rollers, etc.). The first mobile device extracts the second measurement positioning feature information according to the spatial features formed by the obstacles corresponding to the geographic information in the current map.
- Here, the spatial features include at least one of the obstacle contours, feature points, and feature lines depicted in the current map.
- The second measurement positioning feature information includes at least one of the following forms of description: measurement data determined based on a combination of coordinate information corresponding to the spatial features in the current map, and measurement data determined based on a combination of depth information describing the spatial features in the current map.
- For example, the coordinate information of a measurement point on an obstacle contour in the current map is used as a starting point to construct position offset vectors along the contour, so as to obtain second measurement positioning feature information that includes at least sequentially connected position offset vectors.
- For another example, second measurement positioning feature information is obtained that includes at least a depth offset vector describing the obstacle contour.
- The second measurement positioning feature information may also include both a depth offset vector and a position offset vector describing the same obstacle contour.
- For yet another example, the second measurement positioning feature information is obtained based on a combination of the coordinate information of an inflection point in the current map and the coordinate information of its surrounding measurement points.
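The position offset vectors described here can be pictured as the chain of displacements between successive contour measurement points, starting from a chosen point; the short sketch below is one hedged reading of that description:

```python
import numpy as np

def contour_offset_feature(contour_pts):
    """Measurement positioning feature as sequentially connected position
    offset vectors along an obstacle contour (first point is the start)."""
    pts = np.asarray(contour_pts, dtype=float)
    offsets = np.diff(pts, axis=0)    # head-to-tail offset vectors
    return {"start": pts[0], "offsets": offsets}
```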
- In some embodiments, the second positioning feature information includes second visual positioning feature information extracted from the second key frame images in the current positioning data set.
- The second key frame images are key frame images captured by the first mobile device during the navigation movement.
- The second visual positioning feature information is obtained from multiple second key frame images by image feature extraction and matching methods.
- The second visual positioning feature information includes, but is not limited to, feature points and feature lines in the second key frame images. One example of second visual positioning feature information is described by a descriptor.
- For example, positioning feature information is extracted from multiple second key frame images, a gray-value sequence describing the visual positioning feature information is obtained from the image blocks containing that information in those key frame images, and the gray-value sequence is used as the descriptor.
- For another example, the descriptor describes the second visual positioning feature information by encoding the brightness information around it: several points are sampled on circles centered on the second visual positioning feature information, where the number of sampling points is, for example but not limited to, 256 or 512; these sampling points are compared in pairs to obtain the brightness relationships between them, and the brightness relationships are converted into a binary string or another encoding format.
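This encoding resembles a BRIEF-style binary descriptor: sample point pairs on circles around the feature, compare their brightness, and pack the comparisons into a bit string. The sketch below assumes a grayscale image array and a fixed random sampling pattern shared by all features; the pattern itself is an illustrative assumption:

```python
import numpy as np

def binary_descriptor(image, center, n_pairs=256, radius=15, seed=0):
    """BRIEF-like descriptor: pairwise brightness comparisons of points
    sampled around the feature, packed into a boolean bit vector."""
    rng = np.random.default_rng(seed)   # fixed pattern for all features
    angles = rng.uniform(0.0, 2.0 * np.pi, (n_pairs, 2))
    radii = rng.uniform(1.0, radius, (n_pairs, 2))
    cy, cx = center
    ys = np.clip((cy + radii * np.sin(angles)).astype(int),
                 0, image.shape[0] - 1)
    xs = np.clip((cx + radii * np.cos(angles)).astype(int),
                 0, image.shape[1] - 1)
    a = image[ys[:, 0], xs[:, 0]]
    b = image[ys[:, 1], xs[:, 1]]
    return a < b                        # one bit per sampled pair
```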
- The reference positioning data set includes the set of first positioning feature information obtained by fusing the information collected by the first mobile device and/or at least one second mobile device during each of the previous navigation movements.
- In some embodiments, the first positioning feature information includes first measurement positioning feature information determined based on the spatial features in the reference map.
- The first measurement positioning feature information includes at least one of the following: measurement data determined based on a combination of coordinate information of spatial features in the reference map, and measurement data determined based on a combination of depth information describing spatial features in the reference map.
- In some embodiments, the first positioning feature information includes first visual positioning feature information extracted from the first key frame images in the reference positioning data set.
- The manner of obtaining the first key frame images is the same as or similar to that of obtaining the second key frame images and is not detailed here; likewise, the manner in which the first visual positioning feature information describes positioning features in an image is the same as or similar to that of the second visual positioning feature information and is not detailed here.
- A frame is the smallest unit of a single image in animation, and appears as a cell or a mark on the timeline of animation software.
- A key frame corresponds to the original drawing in two-dimensional animation and refers to the frame in which a key action of a moving or changing object occurs.
- During the movement of the mobile device, the vision sensor continuously captures surrounding images, and images of adjacent frames are highly similar; comparing adjacent frames may therefore not clearly reveal the movement of the mobile device, whereas comparing key frames makes the movement more evident.
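A common way to obtain key frames consistent with this description is to keep a new frame only when it differs enough from the previous key frame; the mean absolute gray-level difference used below is a stand-in dissimilarity measure, and the threshold is an illustrative assumption:

```python
import numpy as np

def select_keyframes(frames, diff_thresh=12.0):
    """Keep a frame as a key frame only when it differs sufficiently from
    the previous key frame (adjacent frames are too similar to compare)."""
    keyframes = [frames[0]]
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float)
                      - keyframes[-1].astype(float)).mean()
        if diff > diff_thresh:
            keyframes.append(frame)
    return keyframes
```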
- Each first key frame image acquired by the first mobile device during navigation movement corresponds to a different position and posture of the first mobile device in the physical space.
- Different first key frame images captured at different positions and postures can be used to determine matching positioning features in the images, which serve as the first positioning feature information.
- The first positioning coordinate information is the coordinate information of the first positioning feature information in the map.
- The method of matching positioning feature information using at least two key frame images taken before and after, and of determining the position and posture at which the first mobile device captured each key frame image, can be found in the patent application with publication number CN107907131A, which is incorporated herein by reference in its entirety.
- During navigation movement, the first mobile device determines its current position using the reference map and its reference positioning data set and controls its posture, movement direction, speed, and the like along the navigation route. At the same time, the first mobile device also constructs the current map and its current positioning data set starting from the navigation movement's starting position. In some examples, after the first mobile device completes the navigation movement, the current map and its current positioning data set are saved, and step S110 of the method for updating a map of the present application is started at an appropriate time; for example, the method is executed while charging or when the system resources of the first mobile device are abundant. In other examples, the method is executed during the navigation movement, based on the current map and current positioning data set constructed so far.
- In some embodiments, the method is mainly executed by the server in cooperation with the first mobile device. The server executes step S110 to start the construction of the reference map and the reference positioning data set.
- In other embodiments, the method is executed by the first mobile device itself. For example, the first mobile device reads the current map and its current positioning data set from the storage medium while charging or when its system resource occupancy is low, executes step S110, and starts the construction of the reference map and the reference positioning data set.
- In step S110, the server acquires the current map and its current positioning data set constructed by the first mobile device while performing a navigation movement operation in a physical space; or the first mobile device acquires the current map and current positioning data set it constructed itself.
- The following takes the server as the executing party as an example to describe the subsequent steps of the method; it should be noted that the first mobile device may also perform these subsequent steps.
- In step S120, data fusion processing is performed on the reference map and its reference positioning data set and the current map and its current positioning data set.
- Here, the fusion refers to integrating the current maps and current positioning data sets constructed at different times.
- The integration of the map includes any of the following: unifying the coordinate information of current maps constructed at different times into the coordinate information of a single reference map; or integrating the coordinate information of the current map into the reference map.
- For example, the current map and the reference map are differentially processed to obtain differential coordinate information in the current map, and each piece of coordinate information in the reference map is corrected based on this differential coordinate information.
- The integration of the map also includes removing geographic locations that have not recently appeared in the reference map, such as removing the coordinate information of the location of an obstacle that was only temporarily placed.
- The integration of the positioning data set includes any of the following: integrating the second positioning feature information of current positioning data sets constructed at different times into a single reference positioning data set corresponding to the reference map; or integrating the second positioning feature information of the current positioning data set into the saved reference positioning data set.
- For example, the matched positioning feature information in the current positioning data set and the reference positioning data set is differentially processed, and the corresponding first positioning feature information in the reference positioning data set is updated based on the differential positioning feature information.
- The integration of the positioning data set also includes: removing first positioning feature information that has not recently appeared in the reference positioning data set, for example removing first positioning feature information determined to reflect an obstacle that was only temporarily placed; and/or adding second positioning feature information from the current positioning data set to the reference positioning data set, as sketched below.
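Taken together, the described integration of a positioning data set amounts to three operations: update matched entries with the differential information, drop entries not observed recently, and add unmatched new entries. A minimal sketch, with hypothetical record fields (descriptor, coord, last_seen):

```python
def integrate_feature_sets(ref_set, cur_set, pairs, session, max_age=5):
    """Update matched entries, remove stale ones, add unmatched new ones.

    pairs: (ref_index, cur_index) tuples of matched feature information;
    the record fields used here are illustrative assumptions.
    """
    matched_cur = set()
    for ref_i, cur_i in pairs:
        matched_cur.add(cur_i)
        # Update the stored entry with the differential information.
        ref_set[ref_i]["coord"] = cur_set[cur_i]["coord"]
        ref_set[ref_i]["descriptor"] = cur_set[cur_i]["descriptor"]
        ref_set[ref_i]["last_seen"] = session
    # Remove first positioning feature information not included recently,
    # e.g. features of an obstacle that was only temporarily placed.
    ref_set = [r for r in ref_set if session - r["last_seen"] <= max_age]
    # Add each unmatched second positioning feature information.
    for i, rec in enumerate(cur_set):
        if i not in matched_cur:
            ref_set.append({**rec, "last_seen": session})
    return ref_set
```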
- In this way, the merged map and positioning data set integrate all the map data collected historically by the first mobile device and/or the second mobile device during navigation movement operations.
- For example, the first mobile device performs a first navigation movement operation Do1 under natural light during the day; the current map P1 and current positioning feature data set D1 it constructs reflect the state of the space under natural light and are used as the reference map and its reference positioning data set. At night, under indoor lighting, the brightness and angle of illumination change, so the state of the current map P2 and current positioning feature data set D2 constructed by the first mobile device in a second navigation movement operation Do2 also changes. The current map P2 and its current positioning data set D2 constructed at night are fused with the reference map P1 and its reference positioning data set D1 constructed during the day, so that the fused reference map and its reference positioning data set contain the maps and positioning data sets constructed in both the daytime and nighttime scenes of the physical space.
- In some embodiments, the first mobile device and/or at least one second mobile device has performed multiple navigation movement operations in a physical space, and the reference map and its reference positioning data set already integrate the maps and positioning data sets constructed during those operations. After a new navigation movement operation of the first mobile device and/or at least one second mobile device, the newly constructed current map and its current positioning data set are merged with the historically constructed reference map and its reference positioning data set; the reference map and its reference positioning data set are thus built up through continuous iteration.
- Since the second positioning feature information collected during a navigation movement of the first mobile device and/or the second mobile device is not completely consistent with the first positioning feature information in the reference positioning data set, matching checks are needed during the fusion.
- Figure 2 shows a flow chart of the steps for fusing the reference map and its reference positioning data set with the current map and its current positioning data set in this application.
- As shown in FIG. 2, the step of performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set includes: step S121, determining the first positioning feature information and its first positioning coordinate information in the reference positioning data set that match the second positioning feature information and its second positioning coordinate information in the current positioning data set.
- Here, the reference positioning data set contains first positioning feature information and its first positioning coordinate information in the reference map, and the current positioning data set contains second positioning feature information and its second positioning coordinate information in the current map. In step S121, the server determines the combinations of first positioning feature information and first positioning coordinate information that match the combinations of second positioning feature information and second positioning coordinate information.
- In some embodiments, step S121 includes step S1211: matching each piece of first positioning feature information in the reference positioning data set with each piece of second positioning feature information in the current positioning data set; and determining, based on the obtained matching result, the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information.
- Here, examples of the methods for matching positioning feature information include: checking whether at least one of the coordinate-vector deviation or the depth-vector deviation described by the measurement positioning feature information in the two data sets falls within a preset measurement matching error range; or checking whether at least one of the gray value, gray distribution, color value, color distribution, color difference, or gray-level step described by the visual positioning feature information in the two data sets falls within a preset image matching error range.
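For binary visual descriptors like the one sketched earlier, the image matching error range can be a maximum Hamming distance; for measurement features, the coordinate or depth vector deviation can be bounded by a norm threshold. Both checks are sketched below with illustrative thresholds:

```python
import numpy as np

def visual_match(desc1, desc2, max_hamming=40):
    """Binary descriptors match when their Hamming distance lies within
    the preset image matching error range."""
    return np.count_nonzero(desc1 != desc2) <= max_hamming

def measurement_match(vec1, vec2, max_dev=0.1):
    """Measurement features match when the coordinate/depth vector
    deviation lies within the preset measurement matching error range."""
    return np.linalg.norm(np.asarray(vec1) - np.asarray(vec2)) <= max_dev
```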
- In some embodiments, the server traverses the matching of first positioning feature information against second positioning feature information, and when at least one of the following matching conditions is met, the server determines, based on the matched first and second positioning feature information, that the first positioning coordinate information corresponding to the first positioning feature information matches the second positioning coordinate information corresponding to the second positioning feature information.
- For example, the matching conditions include: the ratio of matched first positioning feature information to all first positioning feature information is greater than a ratio threshold; the ratio of matched second positioning feature information to all second positioning feature information is greater than a ratio threshold; or the total amount of matched first positioning feature information is greater than a total threshold; and so on.
- For another example, the matching conditions include: the evaluation result obtained by weighting multiple matching conditions with preset weights falls within an evaluation threshold interval, and so on.
- In some embodiments, in order to prevent mismatches when the first mobile device moves in different physical spaces that contain similar positioning feature information, the server also matches the positioning coordinate information corresponding to each piece of positioning feature information during the fusion process. For example, when a single mobile robot, or different mobile robots, clean uniformly decorated rooms of different types, the collected second positioning feature information may match the first positioning feature information in the reference positioning data set to a high degree, while the corresponding positioning coordinate information differs considerably.
- Therefore, in some embodiments, step S1211 further includes: matching, based on the matched first and second positioning feature information, their respective corresponding first positioning coordinate information and second positioning coordinate information to obtain matched first positioning coordinate information and second positioning coordinate information.
- For example, for the matched first and second positioning feature information, it is calculated whether the positional relationship error between the corresponding first positioning coordinate information and second positioning coordinate information meets a preset positional relationship error condition; if so, it is determined that the first positioning coordinate information corresponding to the first positioning feature information matches the second positioning coordinate information corresponding to the second positioning feature information; otherwise, it is determined that the supposedly matched first and second positioning feature information do not in fact match.
- The above-mentioned positional relationship error conditions include at least one of the following or a combination thereof: the displacement error between the positioning coordinate information of the matched positioning feature information in the respective maps is less than a preset displacement error threshold; the deflection-angle error between the positioning coordinate information of the matched positioning feature information in the respective maps is less than a preset deflection error threshold; the ratio of the number of positioning coordinate information items that satisfy the preset displacement error threshold to the number of matched positioning feature information items exceeds a preset ratio threshold; or the number of positioning coordinate information items that satisfy the preset displacement error threshold exceeds a preset count threshold.
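- A minimal sketch of this positional-relationship check, assuming matched coordinate pairs of the form (x, y, heading) and illustrative thresholds:

```python
import math

# Hedged sketch of the positional-relationship error check. Each matched
# pair holds (x, y, heading) coordinates from the two maps; all
# thresholds are illustrative assumptions.

def position_relationship_ok(pairs, max_disp=0.2,
                             max_angle=math.radians(5), min_ratio=0.8):
    if not pairs:
        return False
    within = 0
    for (x1, y1, t1), (x2, y2, t2) in pairs:
        disp = math.hypot(x2 - x1, y2 - y1)
        # Wrap the heading difference into [-pi, pi] before comparing.
        angle = abs((t2 - t1 + math.pi) % (2 * math.pi) - math.pi)
        if disp < max_disp and angle < max_angle:
            within += 1
    # Ratio of coordinate pairs satisfying the displacement/angle thresholds.
    return within / len(pairs) >= min_ratio

pairs = [((0.0, 0.0, 0.00), (0.05, 0.02, 0.01)),
         ((1.0, 2.0, 0.50), (1.10, 2.05, 0.52))]
print(position_relationship_ok(pairs))
```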
- During the matching of positioning feature information, the server determines, according to the location distribution of the matched positioning feature information, whether the positional relationship error between the corresponding positioning coordinate information satisfies the preset positional relationship error condition. If so, it is determined that the first positioning coordinate information corresponding to the matched first positioning feature information matches the second positioning coordinate information corresponding to the matched second positioning feature information; otherwise, the matched first positioning feature information and second positioning feature information are determined not to match.
- The location distribution includes, but is not limited to, at least one of the following: 1) performing location clustering on the matched positioning feature information; correspondingly, the server screens out, from the matched positioning feature information and based on the positional relationship error between the clustered location distributions in the two maps, the matched first positioning feature information and its first positioning coordinate information, and the matched second positioning feature information and its second positioning coordinate information; 2) taking the shape traced in the respective maps by the positioning coordinate information of the matched positioning feature information as the location distribution; correspondingly, the server screens out, from the matched positioning feature information and based on the positional relationship error between the shapes in the two maps, the matched first positioning feature information and its first positioning coordinate information, and the matched second positioning feature information and its second positioning coordinate information.
- the server uses the positioning features contained in the key frame image to perform positioning feature matching.
- The step S121 includes step S1212: matching the second positioning feature information in each second key frame image of the current positioning data set with the first positioning feature information in each first key frame image of the reference positioning data set, to determine the matched first positioning feature information and second positioning feature information between the second key frame image and the first key frame image; and determining, based on the obtained matching result, the matched first positioning feature information and its first positioning coordinate information, and the matched second positioning feature information and its second positioning coordinate information.
- each key frame image contains visual positioning feature information.
- The descriptors describing the positioning feature information contained in the key frame images are used as matching indexes to extract the positioning feature information to be matched in the respective key frame images of the two data sets; then, based on the pixel position relationships of the multiple positioning feature information items contained in the two key frame images to be matched, the matched first positioning feature information and second positioning feature information between the second key frame image and the first key frame image are determined.
- The server performs a coarse first match using the descriptors (or summaries of descriptors) corresponding to the multiple positioning feature information items of each key frame image in the two data sets, and uses a preset first matching condition to filter out the second key frame image and its second positioning feature information, and the first key frame image and its first positioning feature information, that are to be further matched. The first matching condition includes, but is not limited to: the matching ratio between two descriptors is above a preset ratio, or the number of descriptors that meet the matching condition in the two key frame images is above a preset number, etc.
- Alternatively, the server uses a frequency-histogram similarity condition to perform a coarse first match on the key frame images in the two data sets, so as to filter out the second key frame image and its second positioning feature information, and the corresponding first key frame image and its first positioning feature information, that are to be further matched.
- The server then matches, one by one and based on image matching techniques, the second key frame image and its second positioning feature information against the first key frame image and its first positioning feature information. The image matching techniques include, but are not limited to, evaluating the image feature error between the shape formed by the multiple first positioning feature information items in the first key frame image P1 and the shape formed by the multiple second positioning feature information items in the second key frame image P2; if the image feature error meets a preset image feature error condition, the corresponding positioning feature information items are determined to match; otherwise, they do not match.
- The image feature error conditions include, but are not limited to, at least one of the following: whether the edges and corners of the two shapes meet the translation, rotation, and scale invariance matching conditions of the images; and whether the error between the descriptor of the first positioning feature information and the descriptor of the second positioning feature information is less than a preset error threshold.
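- The coarse descriptor-based first match could look like the following sketch, which assumes binary (ORB-like) descriptors stored as integers and a Hamming-distance criterion; all names and thresholds are illustrative:

```python
# Hedged sketch of the coarse descriptor-based first match. Binary
# descriptors are modeled as ints compared by Hamming distance; the
# distance and count thresholds are illustrative assumptions.

def hamming(d1, d2):
    return bin(d1 ^ d2).count("1")

def coarse_match(frame_a, frame_b, max_dist=2, min_matches=3):
    """frame_*: list of (descriptor, (u, v) pixel position).
    Returns matched index pairs if the coarse condition holds."""
    matches = []
    for i, (da, _) in enumerate(frame_a):
        best_j, best_d = min(
            ((j, hamming(da, db)) for j, (db, _) in enumerate(frame_b)),
            key=lambda t: t[1])
        if best_d <= max_dist:
            matches.append((i, best_j))
    # A finer second stage would then compare the pixel-position
    # relationships (shapes) of the matched features in both frames.
    return matches if len(matches) >= min_matches else []

fa = [(0b1011, (10, 20)), (0b0110, (40, 80)), (0b1111, (5, 9))]
fb = [(0b1010, (12, 22)), (0b0111, (41, 79)), (0b1110, (6, 10))]
print(coarse_match(fa, fb))  # [(0, 0), (1, 1), (2, 2)]
```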
- The server then matches the first positioning coordinate information and second positioning coordinate information corresponding to the matched first positioning feature information and second positioning feature information.
- For example, the server uses the coordinate matching methods mentioned in the previous examples to perform matching operations on the positioning coordinate information corresponding to the matched positioning feature information, and screens out the matched first positioning feature information and its first positioning coordinate information, and the matched second positioning feature information and its second positioning coordinate information.
- The current map depicts geographic information of the physical space detected by the first mobile device along the route of a navigation movement; in other words, the current map is determined based on the position and posture of the first mobile device during movement. It follows that there is a deflection-angle difference between the current map, constructed from the navigation movement and postures of the first mobile device, and the main direction of the physical space.
- The step S121 further includes step S1213: adjusting any or all of the positioning data sets based on the main direction of the physical space, so that the fused reference map and its reference positioning data set are substantially consistent with the main direction of the physical space. This facilitates interaction between multiple devices and the user, and helps the user identify the location of each device in the physical space.
- the physical space where the first mobile device is located usually has one or two main directions.
- the main direction is used to describe the placement direction of the partitions constituting the physical space, where the partitions include, for example, walls, windows, and screens.
- the first mobile device navigates and moves in a home interior, and the main direction of the physical space includes two intersecting directions determined along the wall of the room.
- In other examples, the main direction is used to describe the direction of the travelable roads formed by the partitions placed in the physical space, where the partitions are, for example, marking lines, curb stones set along the road, shelves, etc.
- the first mobile device navigates and moves in a tunnel, and the main direction of the physical space is a single direction determined along a road constructed based on the tunnel wall.
- the first mobile device navigates and moves in the warehouse, and the main directions of the physical space are two directions determined along the intersecting roads constructed based on the warehouse shelves.
- The step S1213 includes: analyzing the second key frame images in the second positioning feature information of the current positioning data set, determining a second relative orientation relationship of the second coordinate information corresponding to the second key frame image with respect to the main direction of the physical space, and adjusting the pixel positions of the second positioning feature information in the second key frame image based on the second relative orientation relationship, so that step S1212 can be performed for matching.
- the maps constructed each time have their own declination angle differences from the main direction of the physical space.
- the reference map and its reference visual data set are constructed based on the main direction of the physical space.
- The second relative orientation relationship of the second coordinate information with respect to the main direction of the physical space is determined; the pixel positions of the second positioning feature information in the second key frame image are then adjusted based on the determined second relative orientation relationship, and the adjusted second positioning feature information and the matched first positioning feature information are used to perform the aforementioned fusion operation.
- In this way, mismatches of positioning features caused by deflection-angle differences can be effectively reduced, and the amount of fusion computation can be lowered.
- For example, a straight line segment is selected from the second positioning feature information of the second key frame image, and the second relative orientation relationship between the first mobile device and a partition in the physical space is determined according to the identified feature line segment. The patent application with publication number CN109074084A provides a technical solution for determining such a relative orientation relationship between a robot and a partition in the physical space according to an identified characteristic line segment, and is hereby incorporated in its entirety: the second key frame image corresponds to an image taken during the movement of the robot in the cited document, the first mobile device corresponds to the robot in the cited document, and the second relative orientation relationship corresponds to the relative azimuth relationship in the cited document; the details are not repeated here.
- The pixel positions of the second positioning feature information in the second key frame image are adjusted based on the determined second relative orientation relationship. In one example, the correspondence between the pixel coordinates in the second key frame image and the map coordinates of the current map is preset by default: when the main optical axis of the camera device is substantially perpendicular to the movement plane of the first mobile device, a pixel coordinate system having a consistent angular relationship with the map coordinate system can be constructed, and the pixel positions of the second positioning feature information in the second key frame image are adjusted accordingly based on the second relative orientation relationship. In another example, the correspondence between the pixel coordinates in the second key frame image and the map coordinates of the current map is set based on the tilt angle and parameters of the camera installed on the first mobile device; this correspondence can be acquired together with the current positioning data set, and the pixel positions of the second positioning feature information in the second key frame image are adjusted accordingly.
- Similarly, a straight line segment can be extracted from the first positioning feature information of the first key frame image: for example, multiple first positioning feature information items are connected by an image dilation algorithm, the straight line segment is extracted based on preset straightness and/or length features, the first relative orientation relationship is then determined using the aforementioned specific example, and the pixel positions of the first positioning feature information in the first key frame image are adjusted.
- After adjustment, the second positioning feature information in the second key frame and the first positioning feature information in the matched first key frame either overlap, differ by 180°, or differ by ±90° in orientation.
- The above preprocessing is beneficial for optimizing the algorithm that matches positioning feature information and reduces multi-step calculations in the matching process.
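- A minimal sketch of such a pixel-position adjustment, assuming the camera's main optical axis is perpendicular to the movement plane so that the adjustment reduces to a rotation about the image center; the center value and angle are illustrative:

```python
import math

# Hedged sketch: rotating feature pixel positions so the key frame is
# expressed relative to the main direction of the physical space. Assumes
# pixel and map coordinates differ only by a rotation; the image center
# value is an illustrative assumption.

def align_to_main_direction(features, relative_angle, center=(320.0, 240.0)):
    """features: (u, v) pixel positions; relative_angle: orientation of
    the device heading relative to the main direction, in radians."""
    c, s = math.cos(-relative_angle), math.sin(-relative_angle)
    cu, cv = center
    adjusted = []
    for u, v in features:
        du, dv = u - cu, v - cv
        # Rotate each pixel position about the image center.
        adjusted.append((cu + c * du - s * dv, cv + s * du + c * dv))
    return adjusted

print(align_to_main_direction([(400.0, 240.0), (320.0, 100.0)],
                              math.radians(30)))
```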
- This step may also analyze the first key frame images in the first positioning feature information of the reference positioning data set to determine a first relative orientation relationship of the first coordinate information corresponding to the first key frame image with respect to the main direction of the physical space, and adjust the pixel positions of the first positioning feature information in the first key frame image based on the first relative orientation relationship.
- the method of determining the first relative orientation relationship may be the same or similar to the method of determining the second relative orientation relationship, and will not be described in detail here.
- the method of adjusting the pixel position of the first positioning feature information in the first key frame image may be the same or similar to the method of adjusting the pixel position of the second positioning feature information, and will not be described in detail here.
- This step can also combine the foregoing two examples: analyzing the first key frame images in the first positioning feature information to determine the first relative orientation relationship of the corresponding first coordinate information with respect to the main direction of the physical space, and analyzing the second key frame images in the second positioning feature information of the current positioning data set to determine the second relative orientation relationship of the corresponding second coordinate information with respect to the main direction of the physical space; and then adjusting the pixel positions of the first positioning feature information in the first key frame image based on the first relative orientation relationship, and adjusting the pixel positions of the second positioning feature information in the second key frame image based on the second relative orientation relationship.
- the pixel positions of each positioning feature in each key frame image that are determined based on the main direction of the physical space are obtained.
- step S1212 is performed based on the adjusted first positioning feature information and/or second positioning feature information to obtain the matching first positioning feature information and its first positioning coordinate information, and second positioning feature information and its second Positioning coordinate information.
- the server executes step S122.
- step S122 based on the matched first positioning feature information and its first positioning coordinate information, and second positioning feature information and its second positioning coordinate information, the reference map and its reference positioning data set are merged with the The current map and its current positioning dataset.
- For example, the server obtains the displacement and angle deviations between the current map and the reference map, obtains the feature deviations of at least the matched positioning feature information between the current positioning data set and the reference positioning data set, and uses the obtained deviation information to fuse the reference map and its reference positioning data set with the current map and its current positioning data set.
- the step S122 includes step S1221 of correcting the coordinate error in the reference map and/or the current map based on the coordinate deviation information between the matched first positioning coordinate information and the second positioning coordinate information.
- For example, the server computes statistics on the displacement deviation information and angle deviation information between the matched first positioning coordinate information and second positioning coordinate information to obtain average displacement deviation information and average angle deviation information, and corrects each coordinate information item in the reference map and/or the current map according to the obtained average displacement deviation information and average angle deviation information.
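- The averaging-and-correction step can be illustrated with a 2-D rigid alignment over matched coordinate pairs; this least-squares sketch is one possible realization, not the method prescribed by this application:

```python
import math

# Hedged sketch: a 2-D least-squares rigid fit over matched coordinate
# pairs, yielding an average angle and displacement deviation that is
# then applied as a correction.

def rigid_fit(ref, cur):
    """ref, cur: equal-length lists of (x, y). Returns (angle, tx, ty)
    mapping cur onto ref."""
    n = len(ref)
    rcx = sum(x for x, _ in ref) / n
    rcy = sum(y for _, y in ref) / n
    ccx = sum(x for x, _ in cur) / n
    ccy = sum(y for _, y in cur) / n
    sdot = scross = 0.0
    for (rx, ry), (cx, cy) in zip(ref, cur):
        ax, ay = cx - ccx, cy - ccy      # centered current point
        bx, by = rx - rcx, ry - rcy      # centered reference point
        sdot += ax * bx + ay * by
        scross += ax * by - ay * bx
    angle = math.atan2(scross, sdot)     # average angle deviation
    c, s = math.cos(angle), math.sin(angle)
    tx = rcx - (c * ccx - s * ccy)       # average displacement deviation
    ty = rcy - (s * ccx + c * ccy)
    return angle, tx, ty

def correct(points, angle, tx, ty):
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

ref = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
cur = [(0.1, 0.0), (1.1, 0.1), (0.0, 1.0)]
angle, tx, ty = rigid_fit(ref, cur)
print(correct(cur, angle, tx, ty))
```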
- the step S122 further includes a step S1222 of performing a merging operation based on at least one map after correction to obtain a new reference map.
- the server uses the revised map as the new reference map.
- The server determines the overlap area between the corrected reference map and the current map (before or after correction), and updates the reference map based on the overlap area to obtain a new reference map. For example, the server overlays the corrected reference map and the corrected current map to determine the overlapping area between the two maps, and updates the non-overlapping area in the corrected reference map according to the corrected current map to obtain a new reference map.
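- A minimal occupancy-grid sketch of such an overlap-based merge, assuming both grids are already aligned to the same size and using illustrative cell codes (-1 unknown, 0 free, 1 obstacle):

```python
# Hedged sketch: merging a corrected current occupancy grid into the
# reference grid. Grid alignment, sizes, and cell codes are assumptions.

def merge_maps(reference, current):
    merged = [row[:] for row in reference]
    for r, row in enumerate(current):
        for c, cell in enumerate(row):
            if cell == -1:
                continue  # nothing observed here in the current map
            if merged[r][c] == -1:
                # Non-overlapping (unknown) area of the reference map is
                # filled in from the current map.
                merged[r][c] = cell
            else:
                # Overlapping area: here the newer observation wins; a
                # weighted or log-odds update would also be reasonable.
                merged[r][c] = cell
    return merged

ref = [[-1, 0], [1, -1]]
cur = [[0, 0], [-1, 1]]
print(merge_maps(ref, cur))  # [[0, 0], [1, 1]]
```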
- the step S122 further includes a step S1223, marking the first positioning feature information and the second positioning feature information that at least match the reference positioning data set and the current positioning data set on a new reference map to obtain new positioning coordinate information.
- For example, the server corrects the positioning coordinate information corresponding to the positioning feature information according to the correction operation applied to each coordinate information item in the map, and marks it in the new reference map. For instance, according to the aforementioned average displacement deviation information and average angle deviation information, the positioning coordinates corresponding to all positioning feature information in the reference positioning data set and/or the current positioning data set are corrected to obtain new positioning coordinate information. As another instance, according to the displacement deviation information and angle deviation information between the matched first positioning coordinate information and second positioning coordinate information, the first positioning coordinate information or the second positioning coordinate information is corrected and marked in the new reference map.
- the step S122 also includes a step of fusing two positioning data sets to obtain a new reference positioning data set, and this step includes at least the following step S1224 and/or step S1225.
- In step S1224, the reference positioning data set or the current positioning data set is adjusted based on the positioning feature deviation information between the matched first positioning feature information and second positioning feature information.
- In some examples, the matched first positioning feature information and second positioning feature information are measured positioning feature information; the server uses the vector deviation information between them to adjust the corresponding first positioning feature information in the reference positioning data set or second positioning feature information in the current positioning data set, so as to obtain new first positioning feature information in the new reference positioning data set.
- For example, when the matched first positioning feature information and second positioning feature information are both measured positioning feature information described by a plurality of sequentially connected position offset vectors, the vector deviation information between them (including displacement deviation information and angle deviation information) is used to adjust the corresponding first positioning feature information or second positioning feature information.
- In other examples, the matched first positioning feature information and second positioning feature information are visual positioning feature information; the server adjusts, according to the feature deviation information between them, the corresponding first positioning feature information in the reference positioning data set or second positioning feature information in the current positioning data set, so as to obtain new first positioning feature information in the new reference positioning data set.
- For example, when the matched first positioning feature information and second positioning feature information are both visual positioning feature information described by descriptors, the feature deviation information between them (including grayscale deviation information and/or brightness deviation information) is used to adjust the corresponding first positioning feature information or second positioning feature information.
- In step S1225, each unmatched second positioning feature information in the current positioning data set is added to the reference positioning data set, or each unmatched first positioning feature information in the reference positioning data set is added to the current positioning data set.
- The server supplements the unmatched positioning feature information in the two positioning data sets by performing either addition operation, so that the new reference data set provides richer positioning feature information.
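- The data-set fusion of steps S1224 and S1225 might be sketched as follows, where matched features are nudged by their deviation and unmatched current features are appended; the dict layout and blend factor alpha are assumptions:

```python
# Hedged sketch of steps S1224/S1225. The data layout and the blend
# factor alpha are illustrative assumptions.

def fuse_datasets(reference, current, matches, alpha=0.5):
    """reference/current: dict id -> feature vector (list of floats);
    matches: list of (ref_id, cur_id) pairs."""
    fused = {k: v[:] for k, v in reference.items()}
    matched_cur = set()
    for ref_id, cur_id in matches:
        matched_cur.add(cur_id)
        # S1224: adjust the reference feature toward the current
        # observation using the deviation between the matched pair.
        fused[ref_id] = [r + alpha * (c - r)
                         for r, c in zip(reference[ref_id], current[cur_id])]
    for cur_id, vec in current.items():
        if cur_id not in matched_cur:
            # S1225: unmatched second positioning feature information
            # is added to the (new) reference positioning data set.
            fused["cur_" + cur_id] = vec[:]
    return fused

ref = {"a": [0.10, 0.20], "b": [0.50, 0.50]}
cur = {"x": [0.12, 0.18], "y": [0.90, 0.10]}
print(fuse_datasets(ref, cur, [("a", "x")]))
```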
- the first mobile device can perform the next navigation operation according to the new reference map and the new reference positioning data set.
- In some cases, the current map and current positioning data set provided by the first mobile device are not constructed based on the reference map and reference positioning data set used by the first mobile device. In such cases, the method for updating the map further includes: adjusting the reference map or the current map until the two adjusted maps satisfy a preset overlap condition, so that the matched positioning feature information and positioning coordinate information can be determined based on the two adjusted maps.
- The overlap condition includes, but is not limited to: the overall or edge coordinate error between the adjusted reference map and the coordinate information indicating obstacle locations in the current map is less than a preset coordinate error value; or the overall or edge pixel error between the two map image data formed by the adjusted reference map and the current map is smaller than a preset pixel error value.
- The method of adjusting the reference map or the current map may be a stepwise adjustment based on a preset unit angle and unit displacement, and/or translation and rotation operations based on statistics of the displacement and angle differences between the matched measured positioning feature information in the two maps. After the overlap condition is met, the adjusted displacement and angle are determined, the first positioning coordinate information in the reference positioning data set and the image coordinate information of the key frame images, etc., are adjusted accordingly, and the aforementioned matching and fusion operations are performed on this basis; the details are not repeated here.
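- A stepwise alignment search of this kind could be sketched as follows, using obstacle point sets, a mean nearest-neighbor error as the overlap criterion, and illustrative step sizes and ranges:

```python
import math

# Hedged sketch: stepping the current map's obstacle points through unit
# rotations and displacements until an overlap condition holds. Step
# sizes, ranges, and the error metric are assumptions.

def mean_nearest_error(ref_pts, pts):
    return sum(min(math.hypot(x - rx, y - ry) for rx, ry in ref_pts)
               for x, y in pts) / len(pts)

def search_alignment(ref_pts, cur_pts, max_err=0.05):
    angles = [math.radians(a) for a in range(-10, 11, 2)]   # unit angle 2 deg
    shifts = [s / 10.0 for s in range(-5, 6)]               # unit shift 0.1
    for a in angles:
        c, s = math.cos(a), math.sin(a)
        rotated = [(c * x - s * y, s * x + c * y) for x, y in cur_pts]
        for tx in shifts:
            for ty in shifts:
                moved = [(x + tx, y + ty) for x, y in rotated]
                if mean_nearest_error(ref_pts, moved) < max_err:
                    return a, tx, ty   # overlap condition satisfied
    return None

ref = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
cur = [(0.1, 0.0), (1.1, 0.0), (1.1, 1.0)]
print(search_alignment(ref, cur))
```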
- Different physical spaces may share some identical positioning feature information. To prevent the first mobile device from recognizing two different physical spaces that share some positioning feature information as the same physical space, in some examples the server can determine whether they are the same physical space by matching the boundary information of the maps.
- The method for updating the map further includes: detecting the completeness of the current map and/or detecting the amount of information of the current positioning data set; and performing the data fusion processing operation based on the obtained detection result.
- The method of detecting the completeness of the current map includes: detecting the time spent drawing the current map based on a preset duration condition, so as to determine the completeness of the current map; detecting the contour data in the current map based on a preset contour condition, so as to determine the completeness of the current map; and detecting the completeness of the current map based on the overlap condition between the current map and the reference map.
- The method of detecting the information amount of the current positioning data set includes: detecting the total number of distinct second positioning feature information items in the current positioning data set based on a preset total quantity condition; and/or detecting the number of second positioning feature information items that remain unmatched between the current positioning data set and the reference positioning data set based on a preset difference quantity condition, etc.
- The above detection methods are not mutually exclusive; one or more of them can be selected according to the actual situation, thereby reducing unnecessary fusion operations.
- The first mobile device does not necessarily perform a complete navigation and movement operation in the physical space every time. For example, in a home environment, the user may need to go out while the cleaning robot is working, and the cleaning robot has to stop its current operation. In this case, it is necessary to detect the completeness of the current map, the amount of information in the current positioning data set, or both, so as to determine whether to fuse the current map and its current positioning data set with the reference map and its reference positioning data set.
- For example, the time spent by the first mobile device performing the current navigation and movement operation task may be obtained and compared with the time historically spent by the first mobile device performing navigation and movement operations in the physical space, so as to determine, based on a preset condition, whether the fusion needs to be performed. The preset condition may be, for example, the ratio of the time taken for the current navigation movement operation task to the time historically taken for navigation movement operation tasks in the physical space.
- Alternatively, the operating data of the motor while the first mobile device performs the current navigation and movement operation task may be obtained and compared with the motor data recorded when the first mobile device historically performed navigation and movement operations in the physical space, so as to determine, based on a preset condition, whether the fusion needs to be performed. The preset condition may be, for example, the ratio of the current motor data to the motor data recorded when the first mobile device historically performed navigation and movement operation tasks in the physical space.
- Likewise, the distance moved by the first mobile device during the current navigation and movement operation task may be obtained and compared with the distance historically moved by the first mobile device during navigation and movement operations in the physical space. The preset condition may be, for example, the ratio of the distance moved for the current navigation movement operation task to the distance historically moved for navigation movement operation tasks in the physical space.
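- These ratio-based completeness checks might be combined as in the following sketch; the statistics keys and the threshold are assumptions:

```python
# Hedged sketch: deciding whether to run the fusion by comparing the
# current run's statistics against historical averages. The keys and
# the ratio threshold are illustrative assumptions.

def should_fuse(current, history, min_ratio=0.8):
    """current/history: dicts with per-run statistics such as duration,
    distance moved, and motor revolutions."""
    for key in ("duration_s", "distance_m", "motor_revolutions"):
        if history.get(key):
            if current.get(key, 0) / history[key] < min_ratio:
                # The run looks incomplete (e.g. the robot was stopped
                # early), so skip the fusion for this session.
                return False
    return True

history = {"duration_s": 1800, "distance_m": 120, "motor_revolutions": 50000}
current = {"duration_s": 1700, "distance_m": 115, "motor_revolutions": 48000}
print(should_fuse(current, history))  # True
```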
- The map and its positioning data set obtained after data fusion are used as the new reference map and its reference positioning data set in the first mobile device.
- the fused map and its positioning data set are stored in the storage medium.
- the reference map and its reference positioning data set may be actively pushed to the first mobile device, or may be downloaded based on a request of the first mobile device.
- After the server performs the fusion operation, it sends the new reference map and its reference positioning data set to the first mobile device located in the physical space, so that the first mobile device uses the new reference map and its reference positioning data set when performing the next navigation and movement operation.
- the server further executes the step of marking the position of at least one second device equipped with a camera device located in the physical space on the reference map.
- In addition to the first mobile device that performs navigation and movement operations in the physical space, the physical space also contains at least one second device configured with a camera device.
- the second device includes the aforementioned second mobile device, and/or an electronic device fixedly installed in the physical space and equipped with a camera device, such as a security camera.
- The server also obtains a third key frame image captured by the second device, determines the coordinate position of the second device on the reference map by matching the third positioning feature information in the third key frame image against the first positioning feature information in the first key frame images of the reference positioning data set, marks the location of the second device on the reference map, and sends the reference map marked with the location of the second device, together with its reference positioning data set, to the first mobile device.
- The user can interact, through a smart terminal, with the first mobile device and/or each second device marked on the reference map.
- the second device may execute the user's instruction based on the reference map and its reference positioning data set.
- the first mobile device interacts with the corresponding second device based on the used reference map.
- For example, the user makes a gesture instruction directly facing the camera of the first mobile device, and the first mobile device communicates with the second device so that the second device executes the user's instruction based on the gesture instruction and the reference map and its reference positioning data set.
- The map constructed in this application is a persistent map, that is, the map used after the mobile device is restarted is in the same coordinate system as the map used in the previous working session.
- the user can mark the map on the terminal device to set the working area and working mode of the mobile device. For example, a user may mark an area that needs to work multiple times a day or a restricted area or designate a certain area to work in a map on a terminal device.
- This application does not merely retain the record of a single working session; instead, it continuously collects and integrates information to enrich the positioning features.
- In this way, the map can provide positioning for mobile devices under different time periods and lighting conditions. The map construction method disclosed in this application yields a more stable map, which facilitates interaction between the user and the device, saves computing resources, and relieves the strain on computing resources caused in the prior art by rebuilding the map while localizing.
- The persistent map of this application can be used directly once positioning succeeds. Whereas it was originally necessary to create many positioning features per second, the map updating method of this application only needs to create the positioning features not already covered by the reference map and its reference positioning data set.
- FIG. 3 shows a schematic structural diagram of an embodiment of the server of this application.
- the reference map constructed by the first mobile device during the navigation and movement operation process and its reference positioning data set and the current map and its The current positioning data set is stored in the server.
- the server includes but is not limited to a single server, a server cluster, a distributed server cluster, a cloud server, etc.
- In some examples, the server is a cloud server provided by a cloud provider.
- The cloud server includes public cloud servers and private cloud servers, where the public or private cloud server provides Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), etc.
- Such cloud computing service platforms include, for example, the Facebook Cloud Computing Service Platform, Amazon Cloud Computing Service Platform, Baidu Cloud Computing Platform, Tencent Cloud Computing Platform, etc.
- the server is in communication connection with a first mobile device located in a physical space.
- the physical space refers to a physical space provided for navigation and movement of a mobile device, and the physical space includes, but is not limited to, any of the following: indoor/outdoor space, road space, flight space, etc.
- For example, when the mobile device is a drone, the physical space corresponds to flight space; when the mobile device is a vehicle with an autopilot function, the physical space corresponds to tunnel roads where positioning signals cannot be obtained, or road spaces where the network signal is weak but navigation is still required.
- the mobile device is a sweeping robot, and the physical space corresponds to an indoor or outdoor space.
- The mobile device is equipped with a camera device, a movement sensor device, and other sensing devices that provide navigation data for autonomous movement; it includes a first mobile device and/or at least one second mobile device, where the first mobile device and the second mobile device may be devices of the same type or of different types.
- both the first mobile device and the second mobile device are trucks with autonomous navigation capabilities.
- the first mobile device is a cleaning robot
- the second mobile device is a family companion robot.
- both the first mobile device and the second mobile device are vehicle-mounted terminals.
- the first mobile device or the second mobile device may also be a patrol robot or the like.
- the server includes an interface device 11, a storage device 12, and a processing device 13.
- the storage device 12 includes a non-volatile memory, a storage server, and the like.
- the non-volatile memory is, for example, a solid state hard disk or a U disk.
- the storage server is used to store various information related to power consumption and power supply.
- the interface device 11 includes a network interface, a data line interface, and the like.
- the network interface includes, but is not limited to: an Ethernet network interface device, a mobile network (3G, 4G, 5G, etc.)-based network interface device, a short-range communication (WiFi, Bluetooth, etc.)-based network interface device, etc.
- the data line interface includes but is not limited to: USB interface, RS232, etc.
- the interface device is data connected with the control system, third-party system, Internet, etc.
- The processing device 13 is connected to the interface device 11 and the storage device 12, and includes at least one of: a CPU, a chip integrating a CPU, a programmable logic device (FPGA), or a multi-core processor.
- the processing device 13 also includes a memory for temporarily storing data, such as a memory and a register.
- the interface device 11 is used for data communication with a first mobile device located in a physical space.
- The interface device 11 is, for example, an Ethernet-based network interface device, a mobile-network (3G, 4G, 5G, etc.) based network interface device, or a short-range-communication (WiFi, Bluetooth, etc.) based network interface device, through which it communicates with the first mobile device.
- the storage device 12 is used to store a reference map and its reference positioning data set provided to the first mobile device, and store current data from the first mobile device performing navigation and movement operations in the physical space. Map and its current positioning data set, and store at least one program.
- the storage device 12 includes, for example, a hard disk set on the server side and stores the at least one program.
- the server stores the reference map and its reference positioning data set in the storage device 12.
- the storage device 12 provides the reference map and its reference positioning data set to the interface device 11.
- the storage device 12 stores the current map from the interface device 11 and its current positioning data set.
- the storage device 12 provides the reference map and its reference positioning data set with the current map and its current positioning data set to the processing device 13 .
- the processing device 13 is configured to call the at least one program to coordinate the interface device and the storage device to execute the method for updating the map mentioned in any of the foregoing examples.
- the method for updating the map is shown in FIG. 1 and the corresponding description, which will not be repeated here.
- the step of updating the map can also be completed by a mobile robot.
- a mobile robot is provided.
- FIG. 4 shows a schematic diagram of an embodiment of a module structure of a mobile robot.
- the mobile robot 2 includes a storage device 24, a mobile device 23, a positioning sensing device 21 and a processing device 22.
- the storage device is used to store a reference map describing a physical space and its reference positioning data set, a current map and a current positioning data set constructed by performing navigation and movement operations in the physical space, and at least one program;
- the mobile device Is used to perform a movement operation based on the navigation route determined on the reference map;
- the positioning sensing device is used to collect second positioning feature information during the execution of the navigation movement operation to form a current positioning data set;
- The processing device is connected to the storage device, the camera device, and the mobile device, and is used to call and execute the at least one program to coordinate the storage device, the camera device, and the mobile device.
- the positioning sensing device includes, but is not limited to, at least one of the following: a camera device, an infrared distance measuring device, a laser distance measuring device, an angle sensor, a displacement sensor, a counter, and the like.
- measurement and sensing devices such as laser ranging sensors and infrared ranging sensors are arranged on the side of the mobile robot body.
- Measuring and sensing devices such as angle sensors, displacement sensors, and counters are arranged on the movement control system (such as drive motors, rollers, etc.) of the mobile robot.
- Visual sensing devices such as 2D camera devices and 3D camera devices are arranged on the side or top of the mobile robot.
- During operation, the processing device 22 navigates the mobile device 23 based on the reference map and its reference positioning data set; key frame images captured by the camera device in the positioning sensing device 21 are provided to the processing device 22, and the processing device 22 constructs the current map and current positioning data set based on the key frame images provided by the camera device and provides them to the storage device 24 for storage.
- The processing device 22 reads the current map and the current positioning data set from the storage device 24, and starts constructing the new reference map and its reference positioning data set.
- The stored reference map and its reference positioning data set are constructed based on the mobile robot itself and/or at least one second mobile device each performing at least one navigation movement operation in the physical space; that is, the reference map and its reference positioning data set are obtained by merging the maps and positioning data sets separately constructed after at least one mobile device performs multiple navigation movements in the same physical space.
- the reference map and its reference positioning data set constitute the aforementioned map data.
- The mobile device is equipped with a camera device, a movement sensor device, and other sensing devices that provide navigation data for autonomous movement; it includes a mobile robot and/or at least one second mobile device, where the mobile robot and the second mobile device may be devices of the same type or of different types.
- the mobile robot and the second mobile device are both trucks with autonomous navigation capabilities.
- For example, in an indoor space, the mobile robot is a cleaning robot and the second mobile device is a family companion robot.
- the mobile robot and the second mobile device are both vehicle-mounted terminals.
- the mobile robot or the second mobile device may also be a patrol robot or the like.
- The mobile robot further includes an interface device for data communication with at least one second mobile device. In some embodiments, the processing device also performs the operation of acquiring a third map and its third positioning data set provided by the second mobile device, so as to perform data fusion processing on the reference map and its reference positioning data set, the second map and its second positioning data set, and the third map and its third positioning data set.
- For example, a physical space contains a mobile robot and a second mobile device; the mobile robot constructs the current map and its current positioning data set during its navigation and movement operation, and the second mobile device constructs a third map and its third positioning data set during its navigation and movement operation.
- The mobile robot receives data from the second mobile device through the interface device, and performs data fusion processing on the reference map and its reference positioning data set, the current map and its current positioning data set, and the third map and its third positioning data set.
- In other examples, the physical space contains multiple second mobile devices in addition to the mobile robot, so multiple third maps and their third positioning data sets are constructed during the navigation and movement operations of the multiple second mobile devices.
- The mobile robot receives the multiple third maps and their third positioning data sets through the interface device, and performs data fusion processing on the multiple third maps and their third positioning data sets together with the reference map and its reference positioning data set and the second map and its second positioning data set.
- FIG. 5 shows a schematic diagram of an embodiment of a working process of the mobile robot in this application.
- the robot performs data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set.
- the fusion refers to the integration of maps and positioning data sets constructed at different times.
- the integration of the map includes any of the following: integrating the coordinate information in the current map constructed at different times into the coordinate information in a unified reference map; or integrating the coordinate information in the current map into the reference map .
- For example, the current map and the reference map are differentially processed to obtain the differential coordinate information in the current map, which is then integrated into the reference map.
- The integration of the map also includes removing geographic locations that have not recently been included in the reference map, such as removing the coordinate information of the geographic location of an obstacle that was only temporarily placed.
- The integration of the positioning data sets includes any of the following: integrating the second positioning feature information in the current positioning data sets constructed at different times into the first positioning feature information of a unified reference positioning data set; or integrating the second positioning feature information of the current positioning data set into the reference positioning data set.
- the current positioning data set and the reference positioning data set are differentially processed to obtain differential positioning feature information, and the two positioning data sets are integrated based on the differential positioning feature information.
- The integration of the positioning data sets also includes removing first positioning feature information that has not recently been included in the reference positioning data set, for example, removing first positioning feature information determined to reflect an obstacle that was only temporarily placed.
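- A minimal sketch of such differential integration with removal of stale features, assuming per-feature session timestamps (an illustrative bookkeeping choice, not specified by this application):

```python
# Hedged sketch: differential integration of a current positioning data
# set into the reference set, with removal of features that have not
# been re-observed recently (e.g. a temporarily placed obstacle).

def integrate(reference, current, session, max_age=3):
    """reference: dict id -> {'vec': [...], 'last_seen': int};
    current: dict id -> feature vector; session: current session index."""
    for fid, vec in current.items():
        if fid not in reference:
            # Differential part: only features absent from the reference
            # set are added; re-observed ones just refresh their session.
            reference[fid] = {"vec": vec[:], "last_seen": session}
        else:
            reference[fid]["last_seen"] = session
    # Remove features that have not been included recently.
    for fid in [f for f, e in reference.items()
                if session - e["last_seen"] > max_age]:
        del reference[fid]
    return reference

ref = {"wall": {"vec": [1.0], "last_seen": 0},
       "box": {"vec": [2.0], "last_seen": 0}}
print(integrate(ref, {"wall": [1.0]}, session=5))  # "box" is dropped
```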
- the merged map and positioning data set integrate all the map data collected by the mobile robot and/or the second mobile device in the historical navigation and movement operation.
- the mobile robot performs the first navigation and movement operation Do1 under natural light during the day.
- The current map P1 and the current positioning feature data set D1 constructed by the mobile robot reflect the scene under natural light and serve as the reference map and its reference positioning data set. At night, under indoor lighting, the brightness and angle of illumination have changed, and the mobile robot constructs the current map P2 and the current positioning feature data set D2 during a second navigation movement operation Do2, in which the appearance of the scene has changed.
- The current map P2 and its current positioning data set D2 constructed by the mobile robot at night are fused with the reference map P1 and its reference positioning data set D1 constructed during the day, so that the fused reference map and its reference positioning data set simultaneously contain the map and positioning data sets constructed under the daytime and nighttime scenes of the physical space.
- In another example, the mobile robot and/or at least one second mobile device has performed multiple navigation and movement operations in a physical space, and the reference map and its reference positioning data set already integrate the maps and positioning data sets constructed during those operations. After each new navigation movement operation of the mobile robot and/or at least one second mobile device, the newly constructed current map and its current positioning data set are merged with the historically constructed reference map and its reference positioning data set, iteratively incorporating each current map and its current positioning data set.
- The process of the mobile robot performing step S210 is the same as or similar to the process of the first device performing step S120 in the foregoing example, and will not be described in detail here.
- In step S220, the mobile robot uses the map and its positioning data set obtained after data fusion as the new reference map and its reference positioning data set in the mobile robot, and stores them.
- After fusing the reference map and its reference positioning data set with the second map and its second positioning data set, the processing device of the mobile robot sends the fused new reference map and its reference positioning data set to the second mobile device, so that the second mobile device uses the new reference map and its reference positioning data set in the next navigation movement operation.
- the processing device further executes the step of marking the location of at least one second device equipped with a camera device located in the physical space on the map.
- In addition to the mobile robot that performs navigation and movement operations in the physical space, the physical space also contains a second device equipped with a camera device.
- the second device includes the aforementioned second mobile device, and/or an electronic device fixedly installed in the physical space and equipped with a camera device, such as a security camera.
- The mobile robot also obtains a third key frame image taken by the second device, determines the coordinate position of the second device on the reference map by matching the third positioning feature information in the third key frame image against the first positioning feature information in the first key frame images of the reference positioning data set, marks the location of the second device on the reference map, and stores the reference map marked with the location of the second device, together with its reference positioning data set, in the storage device of the mobile robot.
- The user can interact, through a smart terminal, with the mobile robot and/or each second device marked on the reference map.
- the second device may execute the user's instruction based on the reference map and its reference positioning data set.
- the mobile robot interacts with the corresponding second device based on the used reference map.
- For example, the user makes a gesture instruction directly facing the camera of the mobile robot, and the mobile robot communicates with the second device so that the second device executes the user's instruction based on the gesture instruction and the reference map and its reference positioning data set.
- The map constructed by the mobile robot in this application is a persistent map, that is, the map used after the mobile robot is restarted is in the same coordinate system as the map used in the previous working session.
- the user can mark the map on the terminal device to set the working area and working mode of the mobile device. For example, a user may mark an area that needs to work multiple times a day or a restricted area or designate a certain area to work in a map on a terminal device.
- The mobile robot in this application does not merely retain the record of a single working session; instead, it continuously collects and integrates information to enrich the positioning features. In this way, the map can provide positioning for mobile devices under varying times of day and lighting conditions.
- The mobile robot disclosed in this application can obtain a more stable map, which facilitates interaction between the user and the device, saves computing resources, and relieves the shortage of computing resources caused in the prior art by rebuilding the map while localizing.
- The persistent map of the mobile robot in this application can be used directly once positioning succeeds. Whereas it was originally necessary to create many positioning features per second, the mobile robot of this application only needs to create the positioning features not already covered by the reference map and its reference positioning data set.
- The mobile robot 3 includes: an interface device 35 for data communication with a server; a storage device 34 for storing the reference map and its reference positioning data set used to provide navigation services during navigation and movement operations in a physical space, storing the current map and its current positioning data set constructed during the execution of the navigation movement operation, and storing at least one program; and a processing device 32, connected to the storage device and the interface device, for calling and executing the at least one program to coordinate the storage device and the interface device to perform the following method: sending the current map and its current positioning data set to the server; and obtaining the new reference map and its reference positioning data set returned by the server and updating the stored reference map and its reference positioning data set, where the acquired new reference map and its reference positioning data set are obtained by the server by performing data fusion on the pre-update reference map and its reference positioning data set and the current map and its current positioning data set.
- the mobile robot 3 completes the navigation movement operation by calling the reference map and its reference positioning data set in its storage device 34, and constructs the current map and its current positioning data set during the navigation movement operation.
- the mobile robot stores the current map constructed during the navigation and movement operation and its current positioning data set in the storage device 34.
- the processing device 32 of the mobile robot 3 calls the current map and its current location data set in the storage device, and sends the current map and its current location data set to the server through the interface device 35. After completing the fusion step of the reference map and its reference positioning data set with the current map and its current positioning data set on the server side, a new reference map and its reference positioning data set are formed.
- The server sends the new reference map and its reference positioning data set to the interface device 35 of the mobile robot 3, and the processing device 32 stores the new reference map and its reference positioning data set in the storage device 34.
- the method for the server to update the reference map and its reference positioning data set by data fusion is the same or similar to the foregoing example of the method for updating the map, and will not be described in detail here.
- In some embodiments, the physical space contains the mobile robot and at least one second mobile device, and the at least one second mobile device constructs a third map and its third positioning data set during navigation and movement operations in the physical space.
- the new reference map and its reference positioning data set are also integrated with a third map and a third positioning data set provided by at least one second mobile device.
- The mobile robot disclosed in this application can cooperate with the server to jointly construct a persistent map, which keeps the map used after the mobile robot restarts in the same coordinate system as the map of the previous working session.
- the user can mark the map on the terminal device to set the working area and working mode of the mobile device. For example, a user may mark an area that needs to work multiple times a day or a restricted area or designate a certain area to work in a map on a terminal device.
- Under different times and lighting conditions, the visual information can differ considerably. Therefore, the mobile robot in this application does not merely retain the record of a single working session; instead, it continuously collects and integrates information to enrich the positioning features. In this way, the map can provide positioning for mobile devices under varying times of day and lighting conditions.
- The mobile robot disclosed in this application can obtain a more stable map, which facilitates interaction between the user and the device, saves computing resources, and relieves the shortage of computing resources caused in the prior art by rebuilding the map while localizing.
- The persistent map of the mobile robot in this application can be used directly once positioning succeeds. Whereas it was originally necessary to create many positioning features per second, the mobile robot of this application only needs to create the positioning features not already covered by the reference map and its reference positioning data set.
Abstract
Description
Claims (21)
- A method for updating a map, characterized in that it comprises: acquiring a current map and its current positioning data set constructed by a first mobile device while performing a navigation and movement operation in a physical space, wherein the first mobile device navigates and moves using a pre-stored reference map and its reference positioning data set corresponding to the physical space; performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set; and using the map and its positioning data set obtained by the data fusion as a new reference map and its reference positioning data set in the first mobile device.
- The method for updating a map according to claim 1, characterized in that the reference map and its reference positioning data set are constructed based on the first mobile device and/or at least one second mobile device each performing at least one navigation and movement operation in the physical space.
- The method for updating a map according to claim 1, characterized in that the step of performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set comprises: determining that first positioning feature information and its first positioning coordinate information in the reference positioning data set match second positioning feature information and its second positioning coordinate information in the current positioning data set; and fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information.
- The method for updating a map according to claim 3, characterized in that the step of determining the matched first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information in the reference positioning data set and the current positioning data set comprises: matching each piece of first positioning feature information in the reference positioning data set against each piece of second positioning feature information in the current positioning data set; and determining the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information based on the obtained matching results.
- The method for updating a map according to claim 4, characterized in that the step of determining the matched first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information based on the obtained matching results comprises: matching, based on the matched first positioning feature information and second positioning feature information, the first positioning coordinate information and the second positioning coordinate information respectively corresponding to them, so as to obtain the matched first positioning coordinate information and second positioning coordinate information.
- The method for updating a map according to claim 4, characterized in that the first positioning feature information comprises first measurement positioning feature information determined based on spatial features in the reference map, and the second positioning feature information comprises second measurement positioning feature information determined based on spatial features in the current map; and/or the first positioning feature information comprises first visual positioning feature information extracted from first key frame images in the reference positioning data set, and the second positioning feature information comprises second visual positioning feature information extracted from second key frame images in the current positioning data set.
- The method for updating a map according to claim 6, characterized in that the first measurement positioning feature information comprises at least one of the following: measurement data determined based on a combination of coordinate information of spatial features in the reference map, and measurement data determined from a combination of depth information describing spatial features in the reference map; and the second measurement positioning feature information comprises at least one of the following: measurement data determined based on a combination of coordinate information of corresponding spatial features in the current map, and measurement data determined from a combination of depth information describing spatial features in the current map.
- The method for updating a map according to claim 4, characterized in that the step of matching each piece of first positioning feature information in the reference positioning data set against each piece of second positioning feature information in the current positioning data set comprises: performing matching processing between the second positioning feature information in each second key frame image of the current positioning data set and the first positioning feature information in each first key frame image of the reference positioning data set, so as to determine the matched first positioning feature information and second positioning feature information in the first key frame images and the second key frame images.
- The method for updating a map according to claim 8, characterized by further comprising: analyzing a first key frame image in the reference positioning data set to determine a first relative orientation relationship of the first image coordinate information corresponding to the first key frame image with respect to a principal direction of the physical space, and adjusting pixel positions of the first positioning feature information in the first key frame image based on the first relative orientation relationship; and/or analyzing a second key frame image in the current positioning data set to determine a second relative orientation relationship of the second image coordinate information corresponding to the second key frame image with respect to the principal direction of the physical space, and adjusting pixel positions of the second positioning feature information in the second key frame image based on the second relative orientation relationship; so as to match the second positioning feature information in the adjusted second key frame image against the first positioning feature information in the adjusted first key frame image.
- The method for updating a map according to claim 3, characterized by further comprising the step of adjusting the reference map or the current map until the two adjusted maps satisfy a preset overlap condition, so as to determine, based on the two adjusted maps, the matched first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information in the reference positioning data set and the current positioning data set.
- The method for updating a map according to claim 3, characterized in that the step of fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information comprises: correcting coordinate errors in the reference map and/or the current map based on coordinate deviation information between the matched first positioning coordinate information and second positioning coordinate information; performing a merge operation based on the at least one corrected map to obtain a new reference map; and marking at least the matched first positioning feature information and second positioning feature information from the reference positioning data set and the current positioning data set on the new reference map to obtain new positioning coordinate information.
- The method for updating a map according to claim 3 or 11, characterized in that the step of fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information comprises at least one of the following steps, so as to obtain a new reference positioning data set: adjusting the reference positioning data set or the current positioning data set based on positioning feature deviation information between the matched first positioning feature information and second positioning feature information; and adding each piece of unmatched second positioning feature information in the current positioning data set to the reference positioning data set, or adding each piece of unmatched first positioning feature information in the reference positioning data set to the current positioning data set.
- The method for updating a map according to claim 1, characterized by further comprising the following steps: detecting the degree of completeness of the current map and/or detecting the amount of information in the current positioning data set; and performing the data fusion processing operation based on the obtained detection result.
- The method for updating a map according to claim 1, characterized by further comprising the step of sending the new reference map and its reference positioning data set to the first mobile device located in the physical space.
- The method for updating a map according to claim 1, characterized by further comprising the step of marking, on the reference map, the position of at least one second device equipped with a camera device and located in the physical space.
- A server, characterized in that it comprises: an interface device for data communication with a first mobile device located in a physical space; a storage device for storing a reference map and its reference positioning data set to be provided to the first mobile device, storing a current map and its current positioning data set constructed by the first mobile device while performing a navigation and movement operation in the physical space, and storing at least one program; and a processing device, connected to the storage device and the interface device, for invoking and executing the at least one program so as to coordinate the storage device and the interface device to perform the method according to any one of claims 1-15.
- A mobile robot, characterized in that it comprises: a storage device for storing a reference map and its reference positioning data set, a current map and a current positioning data set, and at least one program, wherein the current map and the current positioning data set are constructed by the mobile robot performing one navigation and movement operation, and the reference map and its reference positioning data set are used by the mobile robot in performing the navigation and movement operation; a movement device for performing movement operations along a navigation route determined based on the reference map; a positioning sensing device for collecting second positioning feature information during the navigation and movement operation so as to constitute the current positioning data set; and a processing device, connected to the storage device, a camera device, and the movement device, for invoking and executing the at least one program so as to coordinate the storage device, the camera device, and the movement device to perform the method for updating a map according to any one of claims 1 or 3-15.
- The mobile robot according to claim 17, characterized in that the stored reference map and its reference positioning data set are constructed based on the mobile robot itself and/or at least one second mobile device each performing at least one navigation and movement operation in the same physical space.
- The mobile robot according to claim 18, characterized by further comprising an interface device for data communication with at least one second mobile device; the processing device further performs an operation of acquiring a third map and a third positioning data set provided by the second mobile device, so as to perform data fusion processing on the reference map and its reference positioning data set, the second map and its second positioning data set, and the third map and its third positioning data set.
- A mobile robot, characterized in that it comprises: an interface device for data communication with a server; a storage device for storing a reference map and its reference positioning data set used to provide navigation services during a navigation and movement operation in a physical space, storing a current map and its current positioning data set constructed during execution of the navigation and movement operation, and storing at least one program; and a processing device, connected to the storage device and the interface device, for invoking and executing the at least one program so as to coordinate the storage device and the interface device to perform the following steps: sending the current map and its current positioning data set to the server; and acquiring a new reference map and its reference positioning data set returned by the server and updating the stored reference map and its reference positioning data set; wherein the acquired new reference map and its reference positioning data set are obtained by the server performing data fusion on the pre-update reference map and its reference positioning data set and the current map and its current positioning data set.
- The mobile robot according to claim 20, characterized in that the new reference map and its reference positioning data set are further fused with a third map and its third positioning data set provided by at least one second mobile device.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/086281 WO2020223974A1 (zh) | 2019-05-09 | 2019-05-09 | 更新地图的方法及移动机器人 |
CN201980000681.2A CN110268354A (zh) | 2019-05-09 | 2019-05-09 | 更新地图的方法及移动机器人 |
US16/663,293 US11204247B2 (en) | 2019-05-09 | 2019-10-24 | Method for updating a map and mobile robot |
US17/520,224 US20220057212A1 (en) | 2019-05-09 | 2021-11-05 | Method for updating a map and mobile robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/086281 WO2020223974A1 (zh) | 2019-05-09 | 2019-05-09 | 更新地图的方法及移动机器人 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/663,293 Continuation US11204247B2 (en) | 2019-05-09 | 2019-10-24 | Method for updating a map and mobile robot |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020223974A1 true WO2020223974A1 (zh) | 2020-11-12 |
Family
ID=67912944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/086281 WO2020223974A1 (zh) | 2019-05-09 | 2019-05-09 | 更新地图的方法及移动机器人 |
Country Status (3)
Country | Link |
---|---|
US (2) | US11204247B2 (zh) |
CN (1) | CN110268354A (zh) |
WO (1) | WO2020223974A1 (zh) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112215887B (zh) * | 2019-07-09 | 2023-09-08 | 深圳市优必选科技股份有限公司 | 一种位姿确定方法、装置、存储介质及移动机器人 |
DE102019128253B4 (de) * | 2019-10-18 | 2024-06-06 | StreetScooter GmbH | Verfahren zum Navigieren eines Flurförderzeugs |
DE102019132363A1 (de) * | 2019-11-28 | 2021-06-02 | Bayerische Motoren Werke Aktiengesellschaft | Verfahren zum Betreiben einer Umgebungserfassungsvorrichtung mit einer gridbasierten Auswertung und mit einer Fusionierung, sowie Umgebungserfassungsvorrichtung |
CN111024100B (zh) | 2019-12-20 | 2021-10-29 | 深圳市优必选科技股份有限公司 | 一种导航地图更新方法、装置、可读存储介质及机器人 |
CN111145634B (zh) * | 2019-12-31 | 2022-02-22 | 深圳市优必选科技股份有限公司 | 一种校正地图的方法及装置 |
CN111220148A (zh) * | 2020-01-21 | 2020-06-02 | 珊口(深圳)智能科技有限公司 | 移动机器人的定位方法、系统、装置及移动机器人 |
CN113701767B (zh) * | 2020-05-22 | 2023-11-17 | 杭州海康机器人股份有限公司 | 一种地图更新的触发方法和系统 |
CN112068552A (zh) * | 2020-08-18 | 2020-12-11 | 广州赛特智能科技有限公司 | 一种基于cad图纸的移动机器人自主建图方法 |
CN112101177B (zh) * | 2020-09-09 | 2024-10-15 | 东软睿驰汽车技术(沈阳)有限公司 | 地图构建方法、装置及运载工具 |
CN112190185B (zh) * | 2020-09-28 | 2022-02-08 | 深圳市杉川机器人有限公司 | 扫地机器人及其三维场景的构建方法、系统及可读存储介质 |
CN112284402B (zh) * | 2020-10-15 | 2021-12-07 | 广州小鹏自动驾驶科技有限公司 | 一种车辆定位的方法和装置 |
CN114490675A (zh) * | 2020-10-26 | 2022-05-13 | 华为技术有限公司 | 一种地图更新方法、相关装置、可读存储介质和系统 |
CN113093731A (zh) * | 2021-03-12 | 2021-07-09 | 广东来个碗网络科技有限公司 | 智能回收箱的移动控制方法及装置 |
CN112927256A (zh) * | 2021-03-16 | 2021-06-08 | 杭州萤石软件有限公司 | 一种分割区域的边界融合方法、装置、移动机器人 |
CN113112847A (zh) * | 2021-04-12 | 2021-07-13 | 蔚来汽车科技(安徽)有限公司 | 用于固定泊车场景的车辆定位方法及其系统 |
CN113183153A (zh) * | 2021-04-27 | 2021-07-30 | 北京猎户星空科技有限公司 | 一种地图创建方法、装置、设备及介质 |
CN113590728B (zh) * | 2021-07-09 | 2024-10-29 | 北京小米移动软件有限公司 | 地图切换方法和装置、清扫设备、存储介质 |
CN114019953B (zh) * | 2021-10-08 | 2024-03-19 | 中移(杭州)信息技术有限公司 | 地图构建方法、装置、设备及存储介质 |
CN114111758B (zh) * | 2021-11-01 | 2024-06-04 | 广州小鹏自动驾驶科技有限公司 | 一种地图数据的处理方法和装置 |
CN114237217B (zh) * | 2021-11-04 | 2024-08-02 | 深圳拓邦股份有限公司 | 一种工作场地切换方法、装置及机器人 |
CN114543808B (zh) * | 2022-02-11 | 2024-09-27 | 杭州萤石软件有限公司 | 室内重定位方法、装置、设备及存储介质 |
CN114674308B (zh) * | 2022-05-26 | 2022-09-16 | 之江实验室 | 基于安全出口指示牌视觉辅助激光长廊定位方法及装置 |
CN115930971B (zh) * | 2023-02-01 | 2023-09-19 | 七腾机器人有限公司 | 一种机器人定位与建图的数据融合处理方法 |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101953172A (zh) * | 2008-02-13 | 2011-01-19 | 塞格瑞德公司 | 分布式多机器人系统 |
CN103674011A (zh) * | 2012-09-19 | 2014-03-26 | 联想(北京)有限公司 | 即时定位与地图构建设备、系统与方法 |
CN105203094A (zh) * | 2015-09-10 | 2015-12-30 | 联想(北京)有限公司 | 构建地图的方法和设备 |
CN105373610A (zh) * | 2015-11-17 | 2016-03-02 | 广东欧珀移动通信有限公司 | 一种室内地图的更新方法以及服务器 |
US20170083005A1 (en) * | 2011-05-06 | 2017-03-23 | X Development Llc | Methods and Systems for Multirobotic Management |
CN106885578A (zh) * | 2015-12-16 | 2017-06-23 | 北京奇虎科技有限公司 | 地图更新方法和装置 |
CN107449431A (zh) * | 2016-03-04 | 2017-12-08 | 通用汽车环球科技运作有限责任公司 | 移动导航单元中的渐进式地图维护 |
CN107515006A (zh) * | 2016-06-15 | 2017-12-26 | 华为终端(东莞)有限公司 | 一种地图更新方法和车载终端 |
CN107544515A (zh) * | 2017-10-10 | 2018-01-05 | 苏州中德睿博智能科技有限公司 | 基于云服务器的多机器人建图导航系统与建图导航方法 |
CN108896050A (zh) * | 2018-06-26 | 2018-11-27 | 上海交通大学 | 一种基于激光传感器的移动机器人长期定位系统及方法 |
Family Cites Families (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2222897A (en) * | 1988-04-08 | 1990-03-21 | Eliahu Igal Zeevi | Vehicle navigation system |
JPH0814930A (ja) * | 1994-07-04 | 1996-01-19 | Japan Radio Co Ltd | ナビゲーション装置 |
JP2674521B2 (ja) * | 1994-09-21 | 1997-11-12 | 日本電気株式会社 | 移動体誘導装置 |
JP3564547B2 (ja) * | 1995-04-17 | 2004-09-15 | 本田技研工業株式会社 | 自動走行誘導装置 |
JP3893647B2 (ja) * | 1996-09-30 | 2007-03-14 | マツダ株式会社 | ナビゲーション装置 |
JP3546680B2 (ja) * | 1998-01-26 | 2004-07-28 | トヨタ自動車株式会社 | ナビゲーション装置 |
JP4024450B2 (ja) * | 2000-03-03 | 2007-12-19 | パイオニア株式会社 | ナビゲーションシステム |
DE10127399A1 (de) * | 2001-05-31 | 2002-12-12 | Univ Dresden Tech | Verfahren und Vorrichtung zur autonomen Navigation von Satelliten |
JP4600357B2 (ja) * | 2006-06-21 | 2010-12-15 | トヨタ自動車株式会社 | 測位装置 |
JP4257661B2 (ja) * | 2006-06-30 | 2009-04-22 | アイシン・エィ・ダブリュ株式会社 | ナビゲーション装置 |
US9733091B2 (en) * | 2007-05-31 | 2017-08-15 | Trx Systems, Inc. | Collaborative creation of indoor maps |
AU2008283845A1 (en) * | 2007-08-06 | 2009-02-12 | Trx Systems, Inc. | Locating, tracking, and/or monitoring personnel and/or assets both indoors and outdoors |
WO2009101163A2 (de) * | 2008-02-15 | 2009-08-20 | Continental Teves Ag & Co. Ohg | Fahrzeugsystem zur navigation und/oder fahrerassistenz |
US9103917B2 (en) * | 2010-02-12 | 2015-08-11 | Broadcom Corporation | Method and system for determining location within a building based on historical location information |
US20120143495A1 (en) * | 2010-10-14 | 2012-06-07 | The University Of North Texas | Methods and systems for indoor navigation |
US10027952B2 (en) * | 2011-08-04 | 2018-07-17 | Trx Systems, Inc. | Mapping and tracking system with features in three-dimensional space |
FI124665B (en) * | 2012-01-11 | 2014-11-28 | Indooratlas Oy | Creating a magnetic field map for indoor positioning |
US9418478B2 (en) * | 2012-06-05 | 2016-08-16 | Apple Inc. | Methods and apparatus for building a three-dimensional model from multiple data sets |
WO2014057540A1 (ja) * | 2012-10-10 | 2014-04-17 | 三菱電機株式会社 | ナビゲーション装置およびナビゲーション用サーバ |
US11156464B2 (en) * | 2013-03-14 | 2021-10-26 | Trx Systems, Inc. | Crowd sourced mapping with robust structural features |
KR101288953B1 (ko) * | 2013-03-14 | 2013-07-23 | 주식회사 엠시스템즈 | 레저 선박용 블랙박스 시스템 |
US9749801B2 (en) * | 2013-03-15 | 2017-08-29 | Honeywell International Inc. | User assisted location devices |
US9723251B2 (en) * | 2013-04-23 | 2017-08-01 | Jaacob I. SLOTKY | Technique for image acquisition and management |
CN105637323B (zh) * | 2013-10-22 | 2018-09-25 | 三菱电机株式会社 | 导航用服务器、导航系统以及导航方法 |
DE102014002150B3 (de) * | 2014-02-15 | 2015-07-23 | Audi Ag | Verfahren zur Ermittlung der absoluten Position einer mobilen Einheit und mobile Einheit |
CN107110651B (zh) * | 2014-09-08 | 2021-04-30 | 应美盛股份有限公司 | 用于使用地图信息辅助的增强型便携式导航的方法和装置 |
DE102016211805A1 (de) * | 2015-10-09 | 2017-04-13 | Volkswagen Aktiengesellschaft | Fusion von Positionsdaten mittels Posen-Graph |
JP2017161501A (ja) * | 2016-03-07 | 2017-09-14 | 株式会社デンソー | 走行位置検出装置、走行位置検出方法 |
JP6804865B2 (ja) * | 2016-04-21 | 2020-12-23 | クラリオン株式会社 | 情報提供システム、情報提供装置および情報提供方法 |
GB201613105D0 (en) * | 2016-07-29 | 2016-09-14 | Tomtom Navigation Bv | Methods and systems for map matching |
CN108732584B (zh) * | 2017-04-17 | 2020-06-30 | 百度在线网络技术(北京)有限公司 | 用于更新地图的方法和装置 |
CN107144285B (zh) * | 2017-05-08 | 2020-06-26 | 深圳地平线机器人科技有限公司 | 位姿信息确定方法、装置和可移动设备 |
CN107145578B (zh) * | 2017-05-08 | 2020-04-10 | 深圳地平线机器人科技有限公司 | 地图构建方法、装置、设备和系统 |
US9838850B2 (en) * | 2017-05-12 | 2017-12-05 | Mapsted Corp. | Systems and methods for determining indoor location and floor of a mobile device |
CN107504971A (zh) * | 2017-07-05 | 2017-12-22 | 桂林电子科技大学 | 一种基于pdr和地磁的室内定位方法及系统 |
CN109388093B (zh) | 2017-08-02 | 2020-09-15 | 苏州珊口智能科技有限公司 | 基于线特征识别的机器人姿态控制方法、系统及机器人 |
JP7035448B2 (ja) * | 2017-10-26 | 2022-03-15 | 株式会社アイシン | 移動体 |
CN107907131B (zh) | 2017-11-10 | 2019-12-13 | 珊口(上海)智能科技有限公司 | 定位系统、方法及所适用的机器人 |
WO2019212698A1 (en) * | 2018-05-01 | 2019-11-07 | Magic Leap, Inc. | Avatar animation using markov decision process policies |
WO2020006685A1 (zh) * | 2018-07-03 | 2020-01-09 | 深圳前海达闼云端智能科技有限公司 | 一种建立地图的方法、终端和计算机可读存储介质 |
CN109074638B (zh) * | 2018-07-23 | 2020-04-24 | 深圳前海达闼云端智能科技有限公司 | 融合建图方法、相关装置及计算机可读存储介质 |
2019
- 2019-05-09 CN CN201980000681.2A patent/CN110268354A/zh active Pending
- 2019-05-09 WO PCT/CN2019/086281 patent/WO2020223974A1/zh active Application Filing
- 2019-10-24 US US16/663,293 patent/US11204247B2/en active Active
2021
- 2021-11-05 US US17/520,224 patent/US20220057212A1/en not_active Abandoned
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112826373A (zh) * | 2021-01-21 | 2021-05-25 | 深圳乐动机器人有限公司 | 清洁机器人的清洁方法、装置、设备和存储介质 |
CN112826373B (zh) * | 2021-01-21 | 2022-05-06 | 深圳乐动机器人有限公司 | 清洁机器人的清洁方法、装置、设备和存储介质 |
CN115177178A (zh) * | 2021-04-06 | 2022-10-14 | 美智纵横科技有限责任公司 | 一种清扫方法、装置和计算机存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US20220057212A1 (en) | 2022-02-24 |
US11204247B2 (en) | 2021-12-21 |
CN110268354A (zh) | 2019-09-20 |
US20200356582A1 (en) | 2020-11-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020223974A1 (zh) | 更新地图的方法及移动机器人 | |
CN110874100B (zh) | 用于使用视觉稀疏地图进行自主导航的系统和方法 | |
US11816907B2 (en) | Systems and methods for extracting information about objects from scene information | |
CN110497901B (zh) | 一种基于机器人vslam技术的泊车位自动搜索方法和系统 | |
CN108090958B (zh) | 一种机器人同步定位和地图构建方法和系统 | |
CN110850863B (zh) | 自主移动装置、自主移动方法以及存储介质 | |
CN113168717B (zh) | 一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达 | |
CN104536445B (zh) | 移动导航方法和系统 | |
WO2021035669A1 (zh) | 位姿预测方法、地图构建方法、可移动平台及存储介质 | |
CN113674416B (zh) | 三维地图的构建方法、装置、电子设备及存储介质 | |
KR20200109260A (ko) | 지도 구축 방법, 장치, 기기 및 판독가능 저장 매체 | |
CN112734765A (zh) | 基于实例分割与多传感器融合的移动机器人定位方法、系统及介质 | |
CN111220148A (zh) | 移动机器人的定位方法、系统、装置及移动机器人 | |
Zhang | LILO: A novel LiDAR–IMU SLAM system with loop optimization | |
AU2024219616A1 (en) | Generating and validating a virtual 3D representation of a real-world structure | |
WO2023070115A1 (en) | Three-dimensional building model generation based on classification of image elements | |
WO2018133074A1 (zh) | 一种基于大数据及人工智能的智能轮椅系统 | |
CN117036447A (zh) | 基于多传感器融合的室内场景稠密三维重建方法及装置 | |
WO2023030062A1 (zh) | 一种无人机的飞行控制方法、装置、设备、介质及程序 | |
CN114299192A (zh) | 定位建图的方法、装置、设备和介质 | |
Zhang et al. | Recent Advances in Robot Visual SLAM | |
KR20220050386A (ko) | 맵 생성 방법 및 이를 이용한 이미지 기반 측위 시스템 | |
Liu et al. | Real-time trust region ground plane segmentation for monocular mobile robots | |
CN115752476B (zh) | 一种基于语义信息的车辆地库重定位方法、装置、设备和介质 | |
Wendel | Scalable visual navigation for micro aerial vehicles using geometric prior knowledge |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19927761; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19927761; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27/05/2022) |