WO2020223974A1 - Method for updating a map and mobile robot - Google Patents

Method for updating a map and mobile robot

Info

Publication number
WO2020223974A1
WO2020223974A1 (PCT/CN2019/086281)
Authority
WO
WIPO (PCT)
Prior art keywords
positioning
map
data set
feature information
current
Prior art date
Application number
PCT/CN2019/086281
Other languages
English (en)
French (fr)
Inventor
崔彧玮
李巍
Original Assignee
珊口(深圳)智能科技有限公司
珊口(上海)智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 珊口(深圳)智能科技有限公司, 珊口(上海)智能科技有限公司 filed Critical 珊口(深圳)智能科技有限公司
Priority to PCT/CN2019/086281 priority Critical patent/WO2020223974A1/zh
Priority to CN201980000681.2A priority patent/CN110268354A/zh
Priority to US16/663,293 priority patent/US11204247B2/en
Publication of WO2020223974A1 publication Critical patent/WO2020223974A1/zh
Priority to US17/520,224 priority patent/US20220057212A1/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0234Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
    • G05D1/0236Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons in combination with a laser
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Definitions

  • This application relates to the field of navigation control technology, in particular to a method for updating a map, a server and a mobile robot.
  • Autonomous mobile devices, typified by robots, navigate and move based on maps. When no map is available for navigation, the autonomous mobile device needs to update the map. At present, the maps constructed by autonomous mobile devices during navigation and movement cannot be persistently reused, because the initial position, initial attitude, and the physical space in which the navigation movement is performed cannot be guaranteed to be completely consistent from one run to the next.
  • The purpose of this application is to provide a method for updating a map and a mobile robot, so as to solve the problem in the prior art that maps are not persistent.
  • A first aspect of the present application provides a method for updating a map, including: acquiring a current map and its current positioning data set constructed by a first mobile device performing a navigation movement operation in a physical space, wherein the first mobile device uses a pre-stored reference map corresponding to the physical space and its reference positioning data set for navigation and movement; performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set; and using the fused map and positioning data set as the new reference map and reference positioning data set in the first mobile device.
  • The reference map and its reference positioning data set are constructed based on the first mobile device and/or at least one second mobile device each performing at least one navigation movement operation in the physical space.
  • The step of performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set includes: determining first positioning feature information and its first positioning coordinate information in the reference positioning data set that match second positioning feature information and its second positioning coordinate information in the current positioning data set; and, based on the matching first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information, fusing the reference map and its reference positioning data set with the current map and its current positioning data set.
  • The step of determining the matching first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information includes: matching each piece of first positioning feature information in the reference positioning data set against each piece of second positioning feature information in the current positioning data set; and determining, based on the obtained matching result, the matching first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information.
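The matching step above can be pictured as a nearest-neighbour search over feature descriptors with a ratio test to reject ambiguous pairs. This is an illustrative sketch only, not the patent's prescribed algorithm; the function names (`match_features`, `l2`), the descriptor format, and the `ratio` threshold are all assumptions.

```python
# Sketch: match positioning feature descriptors between a reference data set
# and a current data set; accept a match only when the best candidate is
# clearly better than the runner-up (Lowe-style ratio test).
import math

def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_features(ref_feats, cur_feats, ratio=0.8):
    """ref_feats/cur_feats: dicts mapping feature id -> descriptor vector.
    Returns a list of (ref_id, cur_id) pairs judged to match."""
    matches = []
    for rid, rdesc in ref_feats.items():
        ranked = sorted(cur_feats.items(), key=lambda kv: l2(rdesc, kv[1]))
        if len(ranked) >= 2:
            best, second = ranked[0], ranked[1]
            # accept only an unambiguous nearest neighbour
            if l2(rdesc, best[1]) < ratio * l2(rdesc, second[1]):
                matches.append((rid, best[0]))
        elif ranked:
            matches.append((rid, ranked[0][0]))
    return matches
```

The ratio test is one common way to keep the false-match rate low before the coordinate-level matching the application describes next.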
  • The step of determining, based on the obtained matching result, the matching first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information includes: based on the matched first positioning feature information and second positioning feature information, matching their respective first positioning coordinate information and second positioning coordinate information to obtain matching first positioning coordinate information and second positioning coordinate information.
  • The first positioning feature information includes first measurement positioning feature information determined based on spatial features in the reference map, and the second positioning feature information includes second measurement positioning feature information determined based on corresponding spatial features in the current map; and/or the first positioning feature information includes first visual positioning feature information extracted from first key frame images in the reference positioning data set, and the second positioning feature information includes second visual positioning feature information extracted from second key frame images in the current positioning data set.
  • The first measurement positioning feature information includes at least one of the following: measurement data determined based on a combination of coordinate information of spatial features in the reference map, and measurement data determined based on a combination of depth information used to describe the spatial features in the reference map.
  • The second measurement positioning feature information includes at least one of the following: measurement data determined based on a combination of coordinate information of corresponding spatial features in the current map, and measurement data determined based on a combination of depth information used to describe the spatial features in the current map.
  • The step of matching each piece of first positioning feature information in the reference positioning data set against each piece of second positioning feature information in the current positioning data set includes: matching the second positioning feature information in each second key frame image in the current positioning data set against the first positioning feature information in each first key frame image in the reference positioning data set, so as to determine the matching first positioning feature information and second positioning feature information in the first key frame images and the second key frame images.
  • The method further includes: analyzing the first key frame images in the reference positioning data set, determining a first relative orientation relationship of the first image coordinate information corresponding to each first key frame image with respect to the main direction of the physical space, and adjusting the pixel positions of the first positioning feature information in the first key frame images based on the first relative orientation relationship; and/or analyzing the second key frame images in the current positioning data set, determining a second relative orientation relationship of the second image coordinate information corresponding to each second key frame image with respect to the main direction of the physical space, and adjusting the pixel positions of the second positioning feature information in the second key frame images based on the second relative orientation relationship; so as to match the second positioning feature information in the adjusted second key frame images against the first positioning feature information in the adjusted first key frame images.
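The orientation adjustment can be pictured as rotating feature pixel positions so that both key frame images are expressed relative to the same main direction before matching. A minimal sketch, assuming 2-D pixel coordinates and an already-estimated relative orientation angle; the function name and signature are hypothetical.

```python
# Sketch: rotate feature pixel positions by the negative of the image's
# relative orientation angle so the image axes align with the main direction
# of the physical space (e.g. the dominant wall direction).
import math

def align_to_main_direction(pixels, angle_deg, center=(0.0, 0.0)):
    """pixels: list of (u, v) positions; angle_deg: relative orientation of
    the image axes w.r.t. the main direction. Rotates about `center`."""
    th = math.radians(-angle_deg)
    c, s = math.cos(th), math.sin(th)
    cx, cy = center
    out = []
    for u, v in pixels:
        du, dv = u - cx, v - cy
        out.append((cx + c * du - s * dv, cy + s * du + c * dv))
    return out
```

After both key frames are adjusted this way, feature matching no longer has to be invariant to the difference in heading between the two runs.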
  • the method further includes: adjusting the reference map or the current map until the adjusted two maps meet a preset overlap condition; so as to determine based on the adjusted two maps The first positioning feature information and its first positioning coordinate information, and the second positioning feature information and its second positioning coordinate information that match the reference positioning data set and the current positioning data set.
  • The step of fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matching first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information includes: correcting coordinate errors in the reference map and/or the current map based on coordinate deviation information between the matched first positioning coordinate information and second positioning coordinate information; performing a merge operation based on at least one of the corrected maps to obtain a new reference map; and marking at least the matching first positioning feature information and second positioning feature information from the reference positioning data set and the current positioning data set on the new reference map to obtain new positioning coordinate information.
  • The step of fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matching first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information includes at least one of the following steps to obtain a new reference positioning data set: adjusting the reference positioning data set or the current positioning data set based on positioning feature deviation information between the matched first positioning feature information and second positioning feature information; and adding each piece of unmatched second positioning feature information in the current positioning data set to the reference positioning data set, or adding each piece of unmatched first positioning feature information in the reference positioning data set to the current positioning data set.
  • The method further includes the following steps: detecting the completeness of the current map and/or detecting the amount of information in the current positioning data set, and performing the data fusion processing operation based on the obtained detection result.
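The completeness/information-amount gate might look like the following sketch; the coverage-ratio and feature-count thresholds are illustrative assumptions, not values from the application.

```python
# Sketch: trigger data fusion only when the current map is complete enough
# (coverage of the known area) and the current positioning data set carries
# enough information (number of collected features).
def should_fuse(explored_cells, total_cells, n_features,
                min_coverage=0.7, min_features=50):
    coverage = explored_cells / total_cells if total_cells else 0.0
    return coverage >= min_coverage and n_features >= min_features
```

Gating this way avoids polluting a good reference map with a fragmentary run (e.g. a cleaning session aborted halfway).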
  • The method further includes the step of sending the new reference map and its reference positioning data set to the first mobile device located in the physical space.
  • the method further includes the step of marking the position of at least one second device equipped with a camera device located in the physical space on the reference map.
  • A second aspect of the present application provides a server, including: an interface device, used to communicate with a first mobile device located in a physical space; a storage device, used to store the reference map and its reference positioning data set provided to the first mobile device, to store the current map and its current positioning data set constructed by the first mobile device performing navigation movement operations in the physical space, and to store at least one program; and a processing device, connected to the storage device and the interface device, used to call and execute the at least one program so as to coordinate the storage device and the interface device to execute the method according to any one of the first aspect.
  • A third aspect of the present application provides a mobile robot, including: a storage device, used to store a reference map and its reference positioning data set, the current map and its current positioning data set, and at least one program, wherein the current map and its current positioning data set are constructed by the mobile robot performing a navigation movement operation, and the reference map and its reference positioning data set are used by the mobile robot to perform the navigation movement operation; a mobile device, used to perform movement operations according to a navigation route determined based on the reference map; a positioning sensing device, used to collect second positioning feature information during the navigation movement operation so as to form the current positioning data set; and a processing device, connected to the storage device, the positioning sensing device and the mobile device, used to call and execute the at least one program so as to coordinate the storage device, the positioning sensing device and the mobile device to execute the method for updating a map described in the first aspect.
  • The stored reference map and its reference positioning data set are constructed based on the mobile robot itself and/or at least one second mobile device each performing at least one navigation movement operation in the same physical space.
  • An interface device is further included for data communication with at least one second mobile device; the processing device also obtains a third map and its third positioning data set provided by the second mobile device, so as to perform data fusion processing on the reference map and its reference positioning data set, the second map and its second positioning data set, and the third map and its third positioning data set.
  • A fourth aspect of the present application provides a mobile robot, including: an interface device, used for data communication with a server; a storage device, used to store the reference map and its reference positioning data set used to provide navigation services during navigation movement operations in a physical space, to store the current map and its current positioning data set constructed during execution of the navigation movement operation, and to store at least one program; and a processing device, connected to the storage device and the interface device, used to call and execute the at least one program so as to coordinate the storage device and the interface device to execute the following method: sending the current map and its current positioning data set to the server; and obtaining a new reference map and its reference positioning data set returned by the server, and updating the stored reference map and its reference positioning data set; wherein the acquired new reference map and its reference positioning data set are obtained by the server performing data fusion on the pre-update reference map and its reference positioning data set with the current map and its current positioning data set.
  • the new reference map and its reference positioning data set are further integrated with a third map and a third positioning data set provided by at least one second mobile device.
  • This application provides a solution for map persistence; that is, when the mobile device is restarted, its map is in the same coordinate system as the map used in its last working session.
  • The user can mark the map on a terminal device to set the area in which the mobile device works and the way it works.
  • The map in this application will cover more scenarios as usage accumulates over time. Therefore, the map can provide positioning information for mobile devices across different time periods and lighting conditions.
  • FIG. 1 shows a schematic flowchart of an implementation manner of a method for updating a map according to this application.
  • Figure 2 shows a flow chart of the steps for fusing the reference map and its reference positioning data set with the current map and its current positioning data set in this application.
  • Fig. 3 shows a schematic structural diagram of an embodiment of the server of this application.
  • FIG. 4 shows a schematic diagram of an embodiment of a module structure of a mobile robot in this application.
  • FIG. 5 shows a schematic diagram of an embodiment of a process of a mobile robot in a work in this application.
  • FIG. 6 shows a schematic diagram of another embodiment of the mobile robot in this application.
  • Although the terms first, second, etc. are used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
  • the first mobile device may be referred to as the second mobile device, and similarly, the second mobile device may be referred to as the first mobile device without departing from the scope of the various described embodiments.
  • Both the first mobile device and the second mobile device describe a mobile device, but unless the context clearly indicates otherwise, they are not the same mobile device. Similar situations also include the first positioning feature information and the second positioning feature information, the first key frame image and the second key frame image, and the first positioning coordinate information and the second positioning coordinate information.
  • "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C".
  • An exception to this definition will only occur when the combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
  • The autonomous mobile devices mentioned in the background technology extend to other mobile devices that need to perform navigation operations based on maps, such as cars and vehicle-mounted terminals configured on cars. In scenarios where satellite positioning technology or other positioning technology is unavailable and positioning information in map data must be used for positioning, mobile devices (such as mobile robots) would otherwise be unable to obtain corresponding positioning information and maps in the corresponding physical space.
  • this application provides a method for updating a map and a mobile robot, so as to provide a mobile device with a persistent reference map and its reference positioning data set that can be used for navigation and movement in a physical space.
  • The physical space refers to a physical space provided for the navigation and movement of a mobile device, and includes, but is not limited to, any of the following: indoor/outdoor space, road space, flight space, etc.
  • In some embodiments, the physical space corresponds to a flight space; in other embodiments, if the mobile device is a vehicle with an autopilot function, the physical space corresponds to a tunnel road where positioning cannot be obtained, or a road space where the network signal is weak but navigation is needed; in some other embodiments, the mobile device is a sweeping robot, and the physical space corresponds to an indoor or outdoor space.
  • the reference map and its reference positioning data set are constructed based on the first mobile device and/or at least one second mobile device respectively performing at least one navigation movement operation in the physical space. That is, the reference map and its reference positioning data set are fused with a map and its positioning data set constructed by at least one mobile device for multiple navigation movements in the same physical space.
  • the reference map and its reference positioning data set constitute the aforementioned map data.
  • The mobile device is equipped with a camera device, a motion sensing device, and other sensing devices that provide navigation data for autonomous movement; it includes a first mobile device and/or at least one second mobile device, wherein the first mobile device and the second mobile device may be devices of the same type or of different types.
  • both the first mobile device and the second mobile device are trucks with autonomous navigation capabilities.
  • the first mobile device is a cleaning robot
  • the second mobile device is a family companion robot.
  • both the first mobile device and the second mobile device are vehicle-mounted terminals.
  • the first mobile device or the second mobile device may also be a patrol robot or the like.
  • The method for updating a map and the mobile robot provided by the present application store the map and positioning data set constructed during each working session in a storage medium and merge them with the reference map and reference positioning data set in that storage medium, so that the reference map and reference positioning data set are continuously optimized and can cover more scenes.
  • The first mobile device can move autonomously by using the reference map and its reference positioning data set, so that the first mobile device can accurately locate itself in its environment across different time periods and lighting conditions.
  • the first mobile device is positioned in the same coordinate system every time it navigates and moves, which provides a basis for persistent map applications.
  • the map constructed by the first mobile device during navigation and movement may be any of the following types: grid map, topological map, and so on.
  • a grid map is constructed during path planning.
  • The physical space is represented as a topological structure diagram with nodes and connecting edges, where the nodes represent important locations in the environment (corners, doors, elevators, stairs, etc.) and the edges represent the connection relationships between the nodes, such as corridors.
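A topological map of this kind can be represented as a simple undirected graph and searched with breadth-first search to plan a route between two locations; the node names below are illustrative, not drawn from the application.

```python
# Sketch: a topological map as nodes (important locations) and edges
# (corridor connections), with BFS returning the shortest route in hops.
from collections import deque

def plan_route(edges, start, goal):
    """edges: list of (node_a, node_b) corridor connections."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from start
```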
  • Various maps can be marked with semantic tags so that the user can perform semantics-based navigation control of the first mobile device.
  • The semantic tags may be the names of objects in the physical space, such as desks, notebooks, etc.
  • The user can send a voice command to the first mobile device, such as navigating to the position 6 meters ahead after turning right in the bedroom; hence the map is also called a semantic map.
  • the navigation movement operation refers to a process in which the first mobile device uses navigation data to move and update map data.
  • for example, the first mobile device uses the constructed map data for subsequent navigation during the navigation movement, navigating according to the map data to a designated location to complete its work. For another example, a vehicle in a cruising state performs navigation based on map data on a road segment where positioning signals cannot be obtained, such as a tunnel.
  • the first mobile device constructs a map of the working environment during navigation movement and stores it on a storage medium; for example, a sweeping robot or a vehicle constructs a map of its working environment while working and transfers it to the storage medium.
  • the storage medium may be separate from the first mobile device, such as a storage device configured on the server side, or a storage device of a smart terminal that communicates with the first mobile device.
  • storage media such as SD card and flash configured in smart terminals.
  • storage media such as solid-state hard drives on the server side.
  • the storage medium may also be a storage device configured by the first mobile device itself, for example, a storage medium such as an SD card or flash configured in the first mobile device.
  • this application takes, as an example, the current map data constructed by the first mobile device while performing a navigation operation, that is, current map data including a current map and a current positioning data set, to describe the execution process of updating a persistent reference map and its reference positioning data set.
  • the first mobile device autonomously moves according to the reference map and its reference positioning data set during the current navigation movement, and at the same time constructs the current map and its current positioning data set corresponding to the current navigation movement.
  • for example, a vehicle with a cruise function can drive autonomously based on the reference map and its reference positioning data set by means of VSLAM (Visual Simultaneous Localization and Mapping) or SLAM (Simultaneous Localization and Mapping), while constructing, according to the path taken, the current map of the physical space containing the road traveled this time and its current visual data set.
  • for example, when a sweeping robot performs a cleaning task, it can autonomously move and clean based on the reference map and its reference positioning data set by means of VSLAM or SLAM, while constructing, according to the cleaned path, the current map of the physical space the path passed through and its current visual data set.
  • for another example, a navigation robot in a hotel can, after receiving a semantic instruction from a customer, provide a navigation service based on the reference map and the reference positioning data set by means of VSLAM or SLAM, while constructing, according to the path taken, the current map of the physical space the path passed through this time and its current visual data set, etc.
  • the current map and the reference map are of the same type.
  • for example, both are grid maps, or both are topological maps.
  • the current map and the reference map can be converted to the same type.
  • for example, a grid map is converted into a topological map by using pixel data with a preset resolution.
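The grid-to-topology conversion mentioned above can be sketched as follows (an illustrative sketch only, not part of the disclosed embodiments; the downsampling scheme and 4-adjacency are assumptions):

```python
# A minimal sketch of converting a grid map into a topological map: the grid
# is downsampled to a preset resolution, each fully free block becomes a node,
# and 4-adjacent free blocks are connected by an edge.

def grid_to_topology(grid, block=2):
    """grid: 2D list of 0 (free) / 1 (obstacle); block: downsample factor."""
    rows, cols = len(grid), len(grid[0])
    nodes = set()
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            cells = [grid[i][j]
                     for i in range(r, min(r + block, rows))
                     for j in range(c, min(c + block, cols))]
            if all(v == 0 for v in cells):      # block is fully free space
                nodes.add((r // block, c // block))
    edges = set()
    for (r, c) in nodes:
        for dr, dc in ((0, 1), (1, 0)):         # 4-adjacency, undirected
            if (r + dr, c + dc) in nodes:
                edges.add(((r, c), (r + dr, c + dc)))
    return nodes, edges
```

The nodes here play the role of the important locations of the topological map, and the edges the role of the connecting corridors between them.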
  • the current map depicts the geographic information of the physical space detected by the first mobile device along the route of a navigation movement; in other words, the current map is determined based on the position and posture of the first mobile device during the movement.
  • the current map includes: coordinate information corresponding to the starting position of the first mobile device, and geographic information such as coordinate information corresponding to obstacles sensed during the movement.
  • the reference map depicts the geographic information of the physical space detected by integrating the route of the first mobile device and/or the at least one second mobile device in the same physical space after multiple navigation movements.
  • the reference map includes geographic information such as the coordinate information corresponding to the starting position determined by the first mobile device based on positioning matching, and the coordinate information corresponding to obstacles in the physical space determined after positioning matching.
  • the current positioning data set includes each second positioning feature information collected based on the current navigation movement and the second positioning coordinate information in the current map.
  • the second positioning feature information includes second measurement positioning feature information determined based on the spatial feature depicting the current map.
  • for example, the first mobile device includes measurement and sensing devices such as a laser ranging sensor and an infrared ranging sensor, and includes sensors such as an angle sensor arranged on the movement control system (such as a drive motor and rollers) of the first mobile device.
  • the first mobile device extracts the second measurement positioning feature information according to the spatial features formed by the obstacles corresponding to the geographic information in the current map.
  • the spatial feature includes: at least one of obstacle outlines, feature points, and feature lines depicted in the current map.
  • the second measurement positioning feature includes at least one of the following description methods: measurement data determined based on a combination of coordinate information corresponding to the spatial feature in the current map, and measurement data determined based on a combination of depth information used to describe the spatial feature in the current map.
  • for example, the coordinate information of a measurement point on the obstacle contour in the current map is used as the starting point to construct position offset vectors along the obstacle contour, so as to obtain second measurement positioning feature information including at least the sequentially connected position offset vectors.
  • for another example, the obtained second measurement positioning feature includes at least a depth offset vector used to describe the outline of the obstacle.
  • the second measurement positioning feature may also include a depth offset vector and a position offset vector describing the contour of the same obstacle.
  • for another example, the second measurement positioning feature information is obtained based on a combination of the coordinate information of an inflection point in the current map and the coordinate information of its surrounding measurement points.
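The chained position offset vectors described above can be sketched as follows (an illustrative sketch; the function name is an assumption, not from the disclosure):

```python
# A minimal sketch of describing an obstacle contour as a chain of position
# offset vectors: the first contour point is the starting point, and each
# vector is the offset from one measurement point to the next.

def contour_offset_vectors(points):
    """points: list of (x, y) contour measurement points in the current map."""
    return [(points[i + 1][0] - points[i][0],
             points[i + 1][1] - points[i][1])
            for i in range(len(points) - 1)]
```

Because the feature is stored as relative offsets rather than absolute coordinates, it remains comparable even when two maps use different origins.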
  • the second positioning feature information includes second visual positioning feature information extracted from a second key frame image in the current positioning data set.
  • the second key frame image is a key frame image taken by the first mobile device during navigation movement.
  • the second visual positioning feature information is obtained from multiple second key frame images by using image feature extraction and matching methods.
  • the second visual positioning feature information includes but is not limited to: feature points, feature lines, etc. in the second key frame image; in one example, the second visual positioning feature information is described by a descriptor.
  • for example, positioning feature information is extracted from multiple second key frame images, a gray-value sequence describing the visual positioning feature information is obtained based on the image blocks containing the visual positioning feature information in those key frame images, and the gray-value sequence is used as the descriptor.
  • for another example, the descriptor describes the second visual positioning feature information by encoding the brightness information around it: several points are sampled on a circle centered on the second visual positioning feature information, where the number of sampling points is, but is not limited to, 256 or 512; these sampling points are compared in pairs to obtain the brightness relationships between them, and the brightness relationships are converted into a binary string or another encoding format.
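The brightness-comparison descriptor described above can be sketched as follows (an illustrative sketch in the spirit of BRIEF-style descriptors; the sampling pattern and parameter names are assumptions, not from the disclosure):

```python
# A simplified sketch of the descriptor: sample point pairs inside a circle
# around the feature location, compare their brightness pairwise, and encode
# the comparison results as a binary string.
import math
import random

def binary_descriptor(image, cx, cy, radius=3, n_bits=256, seed=0):
    """image: 2D list of gray values; (cx, cy): feature pixel location."""
    rng = random.Random(seed)           # fixed sampling pattern, repeatable
    bits = []
    for _ in range(n_bits):
        a1, a2 = rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi)
        r1, r2 = rng.uniform(0, radius), rng.uniform(0, radius)
        p1 = (cy + int(r1 * math.sin(a1)), cx + int(r1 * math.cos(a1)))
        p2 = (cy + int(r2 * math.sin(a2)), cx + int(r2 * math.cos(a2)))
        # one bit per pairwise brightness comparison
        bits.append('1' if image[p1[0]][p1[1]] < image[p2[0]][p2[1]] else '0')
    return ''.join(bits)
```

Because the pattern is fixed by the seed, the same feature observed twice yields the same bit string, which is what makes descriptor matching possible later.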
  • the reference positioning data set includes the set of first positioning feature information obtained by fusing the data collected by the first mobile device and/or at least one second mobile device during each of the previous navigation movements.
  • the first positioning feature information includes first measurement positioning feature information determined based on the spatial features in the reference map.
  • the first measurement positioning feature information includes at least one of the following: measurement data determined based on a combination of coordinate information of spatial features in a reference map, and measurement data determined based on a combination of depth information used to describe spatial features in the reference map .
  • the first positioning feature information includes first visual positioning feature information extracted from a first key frame image in a reference positioning data set.
  • the method of obtaining the first key frame image is the same as or similar to the method of obtaining the second key frame image, which will not be described in detail here; and the manner in which the first visual positioning feature information describes positioning features in an image is the same as or similar to that of the second visual positioning feature information, and will not be detailed here.
  • a frame refers to a single image frame in the smallest unit of animation, and the frame is represented as a grid or a mark on the time axis of the animation software.
  • the key frame is equivalent to the original picture in the two-dimensional animation, and refers to the frame where the key action of the object is in motion or change.
  • the vision sensor continuously captures surrounding images during the movement of the mobile device, and the images of adjacent frames have a high degree of similarity; therefore, if only adjacent frames are compared, the movement process of the mobile device may not be clearly judged, whereas comparison between key frames reveals the movement process more distinctly.
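The key frame idea above can be sketched as follows (an illustrative sketch only; the mean-absolute-difference similarity measure and the threshold are assumptions, as real systems typically use feature overlap between frames):

```python
# A minimal sketch of key frame selection: adjacent frames are highly similar,
# so a frame is kept as a key frame only when its similarity to the last key
# frame drops below a threshold.

def select_key_frames(frames, max_similarity=0.9):
    """frames: list of equally sized 2D gray-value arrays (lists of lists)."""
    def similarity(a, b):
        total = sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
        n = len(a) * len(a[0])
        return 1.0 - total / (255.0 * n)   # 1.0 means identical frames
    keys = [0]                             # the first frame is always kept
    for i in range(1, len(frames)):
        if similarity(frames[i], frames[keys[-1]]) < max_similarity:
            keys.append(i)
    return keys
```

Frames that barely differ from the last key frame are discarded, so the retained key frames correspond to distinct positions and postures of the device.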
  • each of the first key frame images acquired by the first mobile device during the navigation movement corresponds to a different position and posture of the first mobile device in the physical space.
  • different first key frame images captured by the first mobile device at different positions and postures can be used to determine the matching positioning features in the images, which serve as the first positioning feature information, as well as the coordinate information of the first positioning feature information in the map.
  • the method of matching positioning feature information using at least two successive key frame images and determining the position and posture at which the first mobile device captured each key frame image can be found in the patent application with publication number CN107907131A, which is incorporated herein by reference in its entirety.
  • the first mobile device determines its current position by using the reference map and its reference positioning data set and controls the attitude, movement direction, speed, etc. along the navigation route. At the same time, the first mobile device also constructs the current map and its current positioning data set based on the starting position of the navigation movement. In some examples, after the first mobile device completes the navigation movement, the current map and its current positioning data set are saved, and step S110 in the method for updating a map of the present application is started at an appropriate time. For example, when charging, or when the system resources of the first mobile device are abundant, the method for updating the map of the present application is executed. In some other examples, during the navigation movement performed by the first mobile device, the method for updating the map of the present application is executed based on the constructed current map and its current positioning data set.
  • the method is mainly executed by the server and the first mobile device in cooperation.
  • for example, the server performs step S110 to start the construction of the reference map and the reference positioning data set.
  • the method is executed by the first mobile device itself.
  • the first mobile device reads the current map and its current positioning data set from the storage medium when charging or when the system resource occupancy rate is low to perform step S110, and start the construction of the reference map and the reference positioning data set.
  • in step S110, the server acquires the current map, and its current positioning data set, constructed by the first mobile device while performing a navigation movement operation in a physical space; or the first mobile device acquires the current map constructed by itself and its current positioning data set.
  • the following takes the server as an example to describe the subsequent steps of the method. It should be noted that the first mobile device may also perform subsequent steps.
  • step S120 perform data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set.
  • the fusion refers to integrating the current map and the current positioning data set constructed at different times.
  • the integration of the map includes any of the following: integrating the coordinate information in the current maps constructed at different times into the coordinate information of a unified reference map; or integrating the coordinate information in the current map into the saved reference map.
  • the current map and the reference map are differentially processed to obtain differential coordinate information in the current map, and each coordinate information in the reference map is corrected based on the differential coordinate information.
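The differential correction described above can be sketched as follows (an illustrative sketch only; the per-point correspondence and the blending weight are assumptions, not from the disclosure):

```python
# A minimal sketch of differential processing: for matched points, the
# coordinate difference between the current map and the reference map is
# computed, and each reference coordinate is corrected by a fraction of that
# difference.

def correct_reference(ref_points, cur_points, weight=0.5):
    """ref_points/cur_points: matched lists of (x, y); weight in [0, 1]."""
    corrected = []
    for (rx, ry), (cx, cy) in zip(ref_points, cur_points):
        dx, dy = cx - rx, cy - ry          # differential coordinate information
        corrected.append((rx + weight * dx, ry + weight * dy))
    return corrected
```

A weight below 1 keeps the reference map stable against one-off measurement noise while still letting it drift toward repeated new observations.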
  • the integration of the map also includes the removal of geographic locations that have not been included in the reference map recently, such as removing the coordinate information of the geographic location of an obstacle that has been temporarily placed.
  • the integration of the positioning data set includes any one of the following: integrating the second positioning feature information in the current positioning data sets constructed at different times into a reference positioning data set uniformly corresponding to the reference map; or integrating the second positioning feature information in the current positioning data set into the saved reference positioning data set.
  • for example, the positioning feature information that matches between the current positioning data set and the reference positioning data set is differentially processed, and the corresponding first positioning feature information in the reference positioning data set is updated based on the differential positioning feature information.
  • the integration of the positioning data set also includes: removing the first positioning feature information that has not been included in the reference positioning data set recently, for example, removing the first positioning feature information that is determined to reflect an obstacle that has been temporarily placed; and/or Add the second positioning feature information in the current positioning data set to the reference positioning data set.
  • the merged map and positioning data set integrate all the map data collected by the first mobile device and/or the second mobile device during navigation and movement operations in history.
  • for example, the first mobile device performs a first navigation movement operation Do1 under natural light during the day, and the current map P1 and the current positioning feature data set D1 it constructs both reflect the state presented under natural light and are used as the reference map and its reference positioning data set; at night, under the illumination of indoor lights, the brightness and angle of illumination change, so the state of the current map P2 and the current positioning feature data set D2 constructed by the first mobile device in the second navigation movement operation Do2 changes accordingly.
  • the current map P2 and its current positioning data set D2 constructed by the first mobile device at night are fused with the reference map P1 and its reference positioning data set D1 constructed during the day, so that the fused reference map and its reference positioning data set contain the maps and positioning data sets constructed in both the daytime and nighttime scenes of the physical space.
  • for another example, the first mobile device and/or at least one second mobile device has performed multiple navigation movement operations in a physical space, and the reference map and its reference positioning data set have already integrated the maps and positioning data sets constructed during those operations. After a new navigation movement operation of the first mobile device and/or at least one second mobile device, the current map it constructs and its current positioning data set are merged with the historically constructed reference map and its reference positioning data set, so that the reference map and its reference positioning data set are built up through continuous iteration.
  • since the second positioning feature information collected during a navigation movement of the first mobile device and/or the second mobile device is not completely consistent with the first positioning feature information in the reference positioning data set, matching checks are necessary during the fusion.
  • Figure 2 shows a flow chart of the steps for fusing the reference map and its reference positioning data set with the current map and its current positioning data set in this application.
  • the step of performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set includes: step S121, determining the first positioning feature information and first positioning coordinate information in the reference positioning data set that match the second positioning feature information and second positioning coordinate information in the current positioning data set.
  • the reference positioning data set contains first positioning feature information and its first positioning coordinate information in the reference map; the current positioning data set contains second positioning feature information and its second positioning coordinate information in the current map.
  • the server confirms that the information combination containing the first positioning feature information and its first positioning coordinate information matches the information combination containing the second positioning feature information and its second positioning coordinate information.
  • the step S121 includes step S1211: matching each first positioning feature information in the reference positioning data set with each second positioning feature information in the current positioning data set; and based on the obtained matching result Determine the matching first positioning feature information and its first positioning coordinate information, and second positioning feature information and its second positioning coordinate information.
  • an example of the method of matching positioning feature information includes: matching whether at least one of the coordinate vector deviation value or the depth vector deviation value described by the measurement positioning feature information in the two data sets is within the preset measurement matching error range; or Matching whether at least one of the gray value, gray distribution, color value, color distribution, color difference, or gray step described by the visual positioning feature information in the two data sets is within the preset image matching error range.
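The two matching checks above can be sketched as follows (illustrative sketches only; the thresholds and the Hamming-distance formulation of the visual check are assumptions):

```python
# Illustrative sketches of the two matching methods: a measurement match tests
# whether the deviation of a coordinate (or depth) vector stays within a
# preset error range, and a visual match tests whether two binary descriptors
# differ in at most a preset number of bits.

def measurement_match(vec_a, vec_b, max_dev=0.2):
    """vec_a, vec_b: equal-length tuples of vector components."""
    return all(abs(a - b) <= max_dev for a, b in zip(vec_a, vec_b))

def visual_match(desc_a, desc_b, max_hamming=2):
    """desc_a, desc_b: equal-length binary strings such as '0110...'."""
    distance = sum(1 for a, b in zip(desc_a, desc_b) if a != b)
    return distance <= max_hamming
```

Either check alone only declares two pieces of positioning feature information compatible; the coordinate-level checks described later are still required before the fusion accepts the pair.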
  • the server traverses the matching of the first positioning feature information and the second positioning feature information, and when at least one of the following matching conditions is met, the server determines, based on the matched first positioning feature information and second positioning feature information, that the first positioning coordinate information corresponding to the respective first positioning feature information matches the second positioning coordinate information corresponding to the respective second positioning feature information.
  • the matching conditions include: the ratio of the matched first positioning feature information to all first positioning feature information is greater than a ratio threshold, the ratio of the matched second positioning feature information to all second positioning feature information is greater than a ratio threshold, or the total amount of matched first positioning feature information is greater than a total threshold, etc.
  • the matching condition includes: the evaluation result obtained by evaluating the multiple matching conditions based on preset weights satisfies an evaluation threshold interval, etc.
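The weighted evaluation above can be sketched as follows (an illustrative sketch only; the particular conditions, weights, and threshold are assumptions, not from the disclosure):

```python
# A sketch of evaluating several matching conditions with preset weights:
# each condition contributes a score in [0, 1] (here, the two match ratios),
# and the weighted sum must reach the evaluation threshold.

def evaluate_match(matched_first, total_first, matched_second, total_second,
                   weights=(0.5, 0.5), threshold=0.6):
    scores = (matched_first / total_first, matched_second / total_second)
    result = sum(w * s for w, s in zip(weights, scores))
    return result >= threshold
```

Weighting lets one strong condition compensate for a weaker one instead of requiring every condition to pass its own hard threshold.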
  • in some examples, in order to prevent mismatches when the first mobile device moves in different physical spaces containing similar positioning feature information, the server also matches the positioning coordinate information corresponding to each piece of positioning feature information during the fusion process. For example, when a single mobile robot or different mobile robots clean uniformly decorated rooms in different units, the acquired second positioning feature information may match the first positioning feature information in the reference positioning data set to a high degree, while the corresponding positioning coordinate information differs considerably.
  • the step S1211 further includes: matching, based on the matched first positioning feature information and second positioning feature information, the respectively corresponding first positioning coordinate information and second positioning coordinate information, to obtain matched first positioning coordinate information and second positioning coordinate information.
  • for example, for the matched first positioning feature information and second positioning feature information, it is calculated whether the positional relationship error between the corresponding first positioning coordinate information and second positioning coordinate information meets a preset positional relationship error condition; if so, it is determined that the first positioning coordinate information corresponding to the respective first positioning feature information matches the second positioning coordinate information corresponding to the respective second positioning feature information; otherwise, it is determined that the matched first positioning feature information and second positioning feature information do not match.
  • the above positional relationship error conditions include at least one of the following, or a combination thereof: the displacement error between the positioning coordinate information of the matched positioning feature information in the respective maps is less than a preset displacement error threshold; the deflection angle error between the positioning coordinate information of the matched positioning feature information in the respective maps is less than a preset deflection angle error threshold; the ratio of the number of positioning coordinate information items meeting the preset displacement error threshold to the number of matched positioning feature information items exceeds a preset ratio threshold; or the number of positioning coordinate information items meeting the preset displacement error threshold exceeds a preset total threshold.
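One of the positional relationship error conditions above can be sketched as follows (an illustrative sketch only; the displacement threshold and ratio threshold are assumptions):

```python
# An illustrative check of a positional relationship error condition: the
# matched feature pairs satisfy the condition when the displacement error
# between their coordinates in the two maps is below a threshold for at least
# a preset ratio of pairs.
import math

def position_relation_ok(first_coords, second_coords,
                         max_disp=0.5, min_ratio=0.8):
    """first_coords/second_coords: matched lists of (x, y) map coordinates."""
    ok = 0
    for (x1, y1), (x2, y2) in zip(first_coords, second_coords):
        if math.hypot(x1 - x2, y1 - y2) < max_disp:
            ok += 1
    return ok / len(first_coords) >= min_ratio
```

Requiring only a ratio of pairs, rather than all pairs, tolerates a few outlier matches caused by moved obstacles or noisy measurements.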
  • in yet other examples, during the matching of positioning feature information, the server determines, according to the location distribution of the matched positioning feature information, whether the positional relationship error between the corresponding positioning coordinate information satisfies the preset positional relationship error condition; if so, it is determined that the first positioning coordinate information corresponding to the respective first positioning feature information matches the second positioning coordinate information corresponding to the respective second positioning feature information; otherwise, it is determined that the matched first positioning feature information and second positioning feature information do not match.
  • the location distribution includes, but is not limited to, at least one of the following: 1) performing location clustering on the matched positioning feature information; correspondingly, the server screens out, from the matched positioning feature information and based on the positional relationship error between the clustered location distributions in the two maps, the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information; 2) taking the shapes depicted in the respective maps by the positioning coordinate information of the matched positioning feature information as the location distribution; correspondingly, the server screens out, from the matched positioning feature information and based on the positional relationship error between the shapes in the two maps, the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information.
  • in still other examples, the server performs positioning feature matching using the positioning features contained in key frame images. The step S121 includes step S1212: matching the second positioning feature information in each second key frame image in the current positioning data set with the first positioning feature information in each first key frame image in the reference positioning data set, to determine the first positioning feature information and second positioning feature information that match between a second key frame image and a first key frame image; and determining, based on the obtained matching result, the matched first positioning feature information and its first positioning coordinate information, and the matched second positioning feature information and its second positioning coordinate information.
  • each key frame image contains visual positioning feature information.
  • for example, the descriptors describing the positioning feature information contained in the key frame images are used as matching indexes to extract the positioning feature information to be matched from the respective key frame images in the two data sets, and then the first positioning feature information and second positioning feature information that match between the second key frame image and the first key frame image are determined based on the pixel position relationships, within the respective key frame images, of the multiple pieces of positioning feature information contained in the two key frame images to be matched.
  • in some specific examples, the server performs a coarse first match on the descriptors (or descriptor summaries) corresponding to the multiple pieces of positioning feature information of the same key frame image in the two data sets, and uses a preset first matching condition to screen out the second key frame image and its second positioning feature information, and the first key frame image and its first positioning feature information, to be further matched; the first matching condition includes, but is not limited to: the matching ratio of the two descriptors is above a preset ratio, or the number of descriptors meeting the matching condition in the two key frame images is above a preset number, etc.
  • in other specific examples, the server performs a coarse first match on the key frame images in the two data sets using a similarity condition on their frequency histograms, so as to screen out the second key frame image and its second positioning feature information, and the corresponding first key frame image and its first positioning feature information, to be further matched.
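The coarse histogram-based first match above can be sketched as follows (an illustrative sketch only; summarizing each key frame as a normalized histogram of descriptor "words", as in a bag-of-words scheme, is an assumption):

```python
# A sketch of the coarse first match via frequency histograms: each key frame
# image is summarized as a histogram of descriptor words, and two frames are
# candidates for fine matching when their histogram intersection is high.

def histogram_similarity(hist_a, hist_b):
    """hist_a, hist_b: equal-length lists of word frequencies (each sums to 1)."""
    return sum(min(a, b) for a, b in zip(hist_a, hist_b))

def coarse_candidates(query_hist, frame_hists, min_sim=0.7):
    """Return indices of reference key frames worth a fine match."""
    return [i for i, h in enumerate(frame_hists)
            if histogram_similarity(query_hist, h) >= min_sim]
```

The coarse pass cheaply discards most reference key frames, so the expensive descriptor-level matching only runs on a few candidates.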
  • the server then matches, one by one based on image matching technology, the second key frame image and its second positioning feature information to be further matched with the first key frame image and its first positioning feature information. The image matching technology includes, but is not limited to, matching the image feature error between the shape formed by the multiple pieces of first positioning feature information in the first key frame image P1 and the shape formed by the multiple pieces of second positioning feature information in the second key frame image P2; if the image feature error meets a preset image feature error condition, it is determined that the corresponding two pieces of positioning feature information match; otherwise, they do not match.
  • the image feature error conditions include, but are not limited to, at least one of the following: whether the edges and corners of the two shapes meet the translation, rotation, and scale invariance matching conditions of the image; and whether the error between the descriptor of the first positioning feature information and the descriptor of the second positioning feature information is less than a preset error threshold.
  • next, the server matches the first positioning coordinate information and second positioning coordinate information corresponding to each of the matched first positioning feature information and second positioning feature information. For example, the server uses the coordinate matching methods mentioned in the previous examples to perform matching operations on the positioning coordinate information corresponding to the matched positioning feature information, and screens out the matched first positioning feature information and its first positioning coordinate information and the matched second positioning feature information and its second positioning coordinate information.
  • as mentioned above, the current map depicts the geographic information of the physical space detected by the first mobile device along the route of a navigation movement; in other words, the current map is determined based on the position and posture of the first mobile device during movement. It can be seen that, depending on the posture of the first mobile device, there is a deflection angle difference between the current map constructed during its navigation movement and the main direction of the physical space.
  • to this end, the step S121 further includes step S1213: adjusting either or both positioning data sets based on the main direction of the physical space, so that the fused reference map and its reference positioning data set are substantially aligned with the main direction of the physical space, which facilitates interaction between multiple devices and the user and helps the user identify the location of each device in the physical space.
  • the physical space where the first mobile device is located usually has one or two main directions.
  • the main direction is used to describe the placement direction of the partitions constituting the physical space, where the partitions include, for example, walls, windows, and screens.
  • the first mobile device navigates and moves in a home interior, and the main direction of the physical space includes two intersecting directions determined along the wall of the room.
• the main direction is used to describe the movable road direction constructed by the placed partitions in the physical space, where the partitions are, for example, marking lines, shoulder stones set along the road, shelves, etc.
  • the first mobile device navigates and moves in a tunnel, and the main direction of the physical space is a single direction determined along a road constructed based on the tunnel wall.
  • the first mobile device navigates and moves in the warehouse, and the main directions of the physical space are two directions determined along the intersecting roads constructed based on the warehouse shelves.
• the step S1213 includes: analyzing the second key frame image in the second positioning feature information in the current positioning data set, and determining a second relative orientation relationship of the second coordinate information corresponding to the second key frame image with respect to the main direction of the physical space; and adjusting the pixel position of the second positioning feature information in the second key frame image based on the second relative orientation relationship, so that step S1212 can then be performed for matching.
  • the maps constructed each time have their own declination angle differences from the main direction of the physical space.
  • the reference map and its reference visual data set are constructed based on the main direction of the physical space.
• the second relative orientation relationship of the second coordinate information with respect to the main direction of the physical space is determined; the determined second relative orientation relationship is then used to adjust the pixel position of the second positioning feature information in the second key frame image, and the adjusted second positioning feature information and the matched first positioning feature information are used to perform the aforementioned fusion operation.
  • the mismatch of positioning features caused by the difference in the deflection angle can be effectively reduced, and the amount of fusion calculation can be reduced.
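The pixel-position adjustment described above can be sketched as a rotation of each feature's pixel coordinates by the deflection angle taken from the second relative orientation relationship; the rotation about the image center and the angle convention are assumptions made only for illustration:

```python
import math

def adjust_feature_pixels(features, deflection_deg, center):
    """Rotate each feature's pixel position about the image center
    by the deflection angle, so that the adjusted positions are
    aligned with the main direction of the physical space."""
    a = math.radians(deflection_deg)
    cx, cy = center
    adjusted = []
    for px, py in features:
        dx, dy = px - cx, py - cy
        adjusted.append((cx + dx * math.cos(a) - dy * math.sin(a),
                         cy + dx * math.sin(a) + dy * math.cos(a)))
    return adjusted
```

With this alignment applied to both data sets, matched features differ only by multiples of 90° rather than by an arbitrary deflection, which is what reduces the matching and fusion workload.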
• a straight line segment is selected from the second positioning feature information of the second key frame image, and the second relative orientation relationship between the first mobile device and the partition in the physical space is determined according to the identified feature line segment. The patent application with the publication number CN109074084A provides a technical solution for determining the second relative orientation relationship between the first mobile device and the partition in the physical space according to the identified characteristic line segment, and is hereby cited in its entirety.
  • the second key frame image corresponds to an image taken during the movement of the robot in the cited document
  • the first mobile device corresponds to the robot in the cited document
• the second relative orientation relationship corresponds to the relative azimuth relationship in the cited document, and will not be detailed here.
  • the pixel position of the second positioning feature information in the second key frame image is adjusted based on the determined second relative orientation relationship.
  • the correspondence between the pixel coordinates in the second key frame image and the map coordinates constructed by the current map may be default.
• the main optical axis of the camera device is substantially perpendicular to the movement plane of the first mobile device, so a pixel coordinate system having a consistent angular relationship with the map coordinate system can be constructed; the pixel position of the second positioning feature information in the second key frame image is thereby adjusted based on the determined second relative orientation relationship.
• the correspondence between the pixel coordinates in the second key frame image and the map coordinates constructed by the current map is set based on the tilt angle of the camera installed on the first mobile device and the camera parameters; this correspondence can be acquired together with the current positioning data set, and the pixel position of the second positioning feature information in the second key frame image is then adjusted based on the second relative orientation relationship.
• the straight line segment is extracted based on the first positioning feature information of the first key frame image: for example, a plurality of pieces of first positioning feature information are connected by an image dilation algorithm, and the straight line segment is extracted based on preset straightness and/or length features; the aforementioned specific example is then used to determine the first relative orientation relationship and adjust the pixel position of the first positioning feature information in the first key frame.
• the second positioning feature information in the adjusted second key frame and the first positioning feature information in the matched first key frame overlap, differ by 180°, or differ by ±90°.
  • the above-mentioned pre-processing process is beneficial to optimize the algorithm of matching positioning feature information and reduce the multi-step calculation in the matching process.
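The orientation relationship described above (overlap, 180° difference, or ±90° difference) can be checked by snapping the raw angular difference between two features to the nearest multiple of 90°; this snapping is a simplification for illustration, not the application's prescribed method:

```python
def snapped_difference(angle_a_deg, angle_b_deg):
    """After main-direction adjustment, matched features should differ
    in orientation by 0, 90, 180, or 270 degrees; snap the raw
    difference to the nearest such multiple."""
    diff = (angle_a_deg - angle_b_deg) % 360.0
    return (round(diff / 90.0) % 4) * 90
```

A candidate pair whose raw difference is far from every multiple of 90° can then be rejected cheaply before any full feature comparison.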
• This step may also analyze the first key frame image in the first positioning feature information in the reference positioning data set to determine a first relative orientation relationship of the first coordinate information corresponding to the first key frame image with respect to the main direction of the physical space, and adjust the pixel position of the first positioning feature information in the first key frame image based on the first relative orientation relationship.
  • the method of determining the first relative orientation relationship may be the same or similar to the method of determining the second relative orientation relationship, and will not be described in detail here.
  • the method of adjusting the pixel position of the first positioning feature information in the first key frame image may be the same or similar to the method of adjusting the pixel position of the second positioning feature information, and will not be described in detail here.
• this step can also combine the foregoing two examples: analyzing the first key frame image in the first positioning feature information to determine a first relative orientation relationship of the first coordinate information corresponding to the first key frame image with respect to the main direction of the physical space, and analyzing the second key frame image in the second positioning feature information in the current positioning data set to determine a second relative orientation relationship of the second coordinate information corresponding to the second key frame image with respect to the main direction of the physical space; and then adjusting the pixel position of the first positioning feature information in the first key frame image based on the first relative orientation relationship, and adjusting the pixel position of the second positioning feature information in the second key frame image based on the second relative orientation relationship.
  • the pixel positions of each positioning feature in each key frame image that are determined based on the main direction of the physical space are obtained.
  • step S1212 is performed based on the adjusted first positioning feature information and/or second positioning feature information to obtain the matching first positioning feature information and its first positioning coordinate information, and second positioning feature information and its second Positioning coordinate information.
  • the server executes step S122.
• in step S122, based on the matched first positioning feature information and its first positioning coordinate information, and the second positioning feature information and its second positioning coordinate information, the reference map and its reference positioning data set are merged with the current map and its current positioning data set.
• the server obtains the displacement and angle deviation between the current map and the reference map, and obtains the feature deviation of the positioning feature information that matches between the current positioning data set and the reference positioning data set; the obtained deviation information is then used to fuse the reference map and its reference positioning data set with the current map and its current positioning data set.
  • the step S122 includes step S1221 of correcting the coordinate error in the reference map and/or the current map based on the coordinate deviation information between the matched first positioning coordinate information and the second positioning coordinate information.
• the server counts the displacement deviation information and the angle deviation information between the matched first positioning coordinate information and the second positioning coordinate information to obtain average displacement deviation information and average angle deviation information, and corrects each piece of coordinate information in the reference map and/or the current map according to the obtained average displacement deviation information and average angle deviation information.
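The averaging-based correction of step S1221 can be sketched as follows; only the 2-D average-displacement part is shown, and the analogous average-angle correction is omitted for brevity (the data layout is an assumption):

```python
def correct_map_coordinates(map_coords, matched_pairs):
    """Shift every map coordinate by the average displacement
    deviation between matched first and second positioning
    coordinates, given as ((x1, y1), (x2, y2)) pairs."""
    n = len(matched_pairs)
    mean_dx = sum(b[0] - a[0] for a, b in matched_pairs) / n
    mean_dy = sum(b[1] - a[1] for a, b in matched_pairs) / n
    return [(x + mean_dx, y + mean_dy) for x, y in map_coords]
```

Averaging over all matched pairs damps the influence of any single noisy match on the corrected map.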
  • the step S122 further includes a step S1222 of performing a merging operation based on at least one map after correction to obtain a new reference map.
  • the server uses the revised map as the new reference map.
• the server determines the overlap area between the corrected reference map and the current map before (or after) correction, and updates the reference map based on the overlap area to obtain a new reference map. For example, the server overlaps the corrected reference map and the corrected current map to determine the overlapping area between the two maps, and updates the non-overlapping area in the corrected reference map according to the corrected current map to obtain a new reference map.
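The overlap-based merge in this example can be sketched with each map encoded as a dictionary from grid cell to obstacle value (an assumed encoding chosen for illustration):

```python
def fuse_reference_map(reference, current):
    """Form a new reference map: keep the corrected reference map's
    cells in the overlapping area, and fill the non-overlapping
    area from the corrected current map."""
    merged = dict(reference)
    for cell, value in current.items():
        if cell not in merged:  # cell lies in the non-overlapping area
            merged[cell] = value
    return merged
```

Cells present in both maps keep the reference value, so the new reference map extends rather than overwrites the existing one.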
  • the step S122 further includes a step S1223, marking the first positioning feature information and the second positioning feature information that at least match the reference positioning data set and the current positioning data set on a new reference map to obtain new positioning coordinate information.
• the server corrects the positioning coordinate information corresponding to the positioning feature information and marks it in the new reference map according to the correction operation performed on each piece of coordinate information in the map. For example, according to the aforementioned average displacement deviation information and average angle deviation information, the positioning coordinates corresponding to all positioning feature information in the reference positioning data set and/or the current positioning data set are corrected to obtain new positioning coordinate information. For another example, according to the displacement deviation information and the angle deviation information between the matched first positioning coordinate information and the second positioning coordinate information, the first positioning coordinate information or the second positioning coordinate information is corrected and marked in the new reference map.
  • the step S122 also includes a step of fusing two positioning data sets to obtain a new reference positioning data set, and this step includes at least the following step S1224 and/or step S1225.
• in step S1224, the reference positioning data set or the current positioning data set is adjusted based on the positioning feature deviation information between the matched first positioning feature information and the second positioning feature information.
• when the matched first positioning feature information and second positioning feature information are measured positioning feature information, the server uses the vector deviation information between the matched first positioning feature information and the second positioning feature information to adjust the corresponding first positioning feature information or second positioning feature information in the reference positioning data set or the current positioning data set, so as to obtain new feature information in the new reference positioning data set.
• for example, when the matched first positioning feature information and second positioning feature information are both measured positioning feature information and are each described by a plurality of sequentially connected positioning offset vectors, the vector deviation information between the two (including displacement deviation information and angle deviation information) is used to adjust the corresponding first positioning feature information or second positioning feature information.
• when the matched first positioning feature information and second positioning feature information are visual positioning feature information, the server adjusts the corresponding first positioning feature information or second positioning feature information in the reference positioning data set or the current positioning data set according to the feature deviation information between the matched first positioning feature information and the second positioning feature information, so as to obtain new feature information in the new reference positioning data set.
• for example, when the matched first positioning feature information and second positioning feature information are both visual positioning feature information and are each described by a descriptor, the feature deviation information between the two (including grayscale deviation information and/or brightness deviation information) is used to adjust the corresponding first positioning feature information or second positioning feature information.
• in step S1225, each piece of unmatched second positioning feature information in the current positioning data set is added to the reference positioning data set, or each piece of unmatched first positioning feature information in the reference positioning data set is added to the current positioning data set.
• the server supplements the unmatched positioning feature information between the two positioning data sets through either of the above addition operations, so that the new reference data set can provide richer positioning feature information.
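The supplementing operation of step S1225 can be sketched as follows, assuming for illustration that features are keyed by an identifier so that matched entries can be recognized:

```python
def fuse_positioning_sets(reference_set, current_set, matched_ids):
    """Add each piece of unmatched second positioning feature
    information from the current set into the reference set,
    yielding the new reference positioning data set."""
    fused = dict(reference_set)
    for fid, feature in current_set.items():
        if fid not in matched_ids:  # unmatched feature: supplement it
            fused[fid] = feature
    return fused
```

The symmetric operation (adding unmatched reference features into the current set) follows the same shape with the arguments swapped.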
  • the first mobile device can perform the next navigation operation according to the new reference map and the new reference positioning data set.
  • the current map and current positioning data set provided by the first mobile device are not constructed based on the reference map and reference positioning data set used by the first mobile device.
• in this case, the method for updating the map further includes:
• the overlap condition includes, but is not limited to: the overall or edge coordinate error between the adjusted reference map and the coordinate information indicating obstacle locations in the current map is less than a preset coordinate error value, or the overall or edge pixel error between the two map image data formed by the adjusted reference map and the current map is smaller than a preset pixel error value.
• the method of adjusting the reference map or the current map may be to perform translation and rotation operations step by step based on a preset unit angle and unit displacement, and/or based on statistics of the displacement and angle differences between the matched measurement positioning feature information in the two maps. After the overlap condition is met, the adjusted displacement and angle are determined, and the first positioning coordinate information in the reference positioning data set, the image coordinate information of the key frame images, etc. are adjusted accordingly; the aforementioned matching and fusion operations are then performed on this basis, and will not be detailed here.
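The step-by-step translation and rotation search described above can be sketched as a brute-force scan over candidate adjustments; the unit step sizes, the nearest-point error metric, and the mean-error form of the overlap condition are illustrative assumptions:

```python
import math

def align_until_overlap(ref_pts, cur_pts, unit_angle_deg, unit_shift,
                        steps, max_mean_error):
    """Step through candidate rotations and translations of the current
    map's obstacle points until the overlap condition (mean nearest-point
    error below a preset value) is met; return the adjustment found."""
    candidates = [(a * unit_angle_deg, ox * unit_shift, oy * unit_shift)
                  for a in range(steps)
                  for ox in range(-steps, steps + 1)
                  for oy in range(-steps, steps + 1)]
    for ang, ox, oy in candidates:
        r = math.radians(ang)
        err = 0.0
        for x, y in cur_pts:
            tx = x * math.cos(r) - y * math.sin(r) + ox
            ty = x * math.sin(r) + y * math.cos(r) + oy
            err += min(math.hypot(tx - rx, ty - ry) for rx, ry in ref_pts)
        if err / len(cur_pts) <= max_mean_error:
            return ang, (ox, oy)
    return None  # overlap condition never met within the search range
```

Once the adjustment is found, the same displacement and angle would be applied to the positioning coordinate information before the matching and fusion operations.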
• different physical spaces may share part of the same positioning feature information. To prevent the first mobile device from recognizing two different physical spaces that share part of the same positioning feature information as the same physical space, in some examples the server can determine whether they are the same physical space by matching the boundary information of the maps.
• the method for updating the map further includes detecting the completeness of the current map and/or detecting the information amount of the current positioning data set, and performing the data fusion operation based on the obtained detection result.
• the method for detecting the integrity of the current map includes: detecting the time spent drawing the current map based on a preset duration condition to determine the integrity of the current map; detecting the contour data in the current map based on a preset contour condition to determine the integrity of the current map; and detecting the integrity of the current map based on the overlap between the current map and the reference map.
  • the method of detecting the information amount of the current positioning data set includes: detecting the total amount of different second positioning feature information in the positioning data set based on a preset total quantity condition; and/or based on a preset difference total quantity condition, Detect the number of unmatched second positioning feature information in the current positioning data set and the reference positioning data set, etc.
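The two information-amount checks can be sketched as follows; the threshold values stand in for the preset total-quantity and difference-total-quantity conditions and are purely illustrative:

```python
def information_sufficient(current_features, reference_features,
                           min_total, min_unmatched):
    """Detect the information amount of the current positioning data set:
    the total number of distinct second positioning features, and the
    number of features unmatched against the reference set."""
    current = set(current_features)
    total_ok = len(current) >= min_total
    unmatched_ok = len(current - set(reference_features)) >= min_unmatched
    return total_ok and unmatched_ok
```

A run that produced too few features overall, or almost nothing new relative to the reference set, would then skip the fusion operation.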
• the above detection methods are not all required; one or more detection methods can be selected according to the actual situation, thereby reducing unnecessary fusion operations.
• the first mobile device does not necessarily perform a complete navigation and movement operation in the physical space every time. For example, in a home environment, the user may need to go out while the cleaning robot is working, and the cleaning robot has to stop its current operation. In this case, it is necessary to detect the completeness of the current map, or the information amount of the current positioning data set, or both, so as to determine whether to fuse the current map and its current positioning data set with the reference map and its reference positioning data set.
• the time spent by the first mobile device to perform the current navigation and movement operation task may be obtained and compared with the time the first mobile device has historically spent performing navigation and movement operations in the physical space.
• the preset condition may be, for example, the ratio of the time taken by the first mobile device to perform the current navigation and movement operation task to the time it has historically spent performing navigation and movement operation tasks in the physical space.
• alternatively, the operating data of the motor while the first mobile device is performing the current navigation and movement operation task may be obtained and compared with the motor data recorded when the first mobile device historically performed navigation and movement operations in the physical space, so as to determine, based on a preset condition, whether the fusion needs to be performed; the preset condition may be, for example, the ratio of the current motor data to the motor data recorded when the first mobile device historically performed navigation and movement operation tasks in the physical space.
• similarly, the distance moved by the first mobile device during the current navigation and movement operation task can be obtained and compared with the distance it moved during historical navigation and movement operations in the physical space; the preset condition may be, for example, the ratio of the distance moved in the current navigation and movement operation task to the distance moved in historical navigation and movement operation tasks in the physical space.
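All three ratio-based completeness checks (elapsed time, motor data, moved distance) share one shape, sketched here; the 0.8 ratio is an assumed preset condition, not a value given by the application:

```python
def run_is_complete(current_value, historical_values, min_ratio=0.8):
    """Judge completeness of the current run by the ratio of the
    current measurement (time, motor data, or distance) to the
    historical average for the same physical space."""
    baseline = sum(historical_values) / len(historical_values)
    return current_value / baseline >= min_ratio
```

A run cut short by the user would produce a low ratio and so would not trigger the fusion operation.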
• the fused map and its positioning data set are used as the new reference map and its reference positioning data set for the first mobile device.
  • the fused map and its positioning data set are stored in the storage medium.
  • the reference map and its reference positioning data set may be actively pushed to the first mobile device, or may be downloaded based on a request of the first mobile device.
• after the server performs the fusion operation, it sends the new reference map and its reference positioning data set to the first mobile device located in the physical space, so that the first mobile device uses the new reference map and its reference positioning data set when performing the next navigation and movement operation.
  • the server further executes the step of marking the position of at least one second device equipped with a camera device located in the physical space on the reference map.
• in addition to the first mobile device that can perform navigation and movement operations in the physical space, the physical space also contains a second device equipped with a camera device.
  • the second device includes the aforementioned second mobile device, and/or an electronic device fixedly installed in the physical space and equipped with a camera device, such as a security camera.
• the server also obtains a third key frame image captured by the second device, determines the coordinate position of the second device on the reference map by matching the third positioning feature information in the third key frame image with the first positioning feature information in the first key frame images of the reference positioning data set, marks the location of the second device on the reference map, and sends the reference map marked with the location of the second device and its reference positioning data set to the first mobile device.
  • the user can interact with the first mobile device and/or each second device marked with a reference map through the smart terminal.
  • the second device may execute the user's instruction based on the reference map and its reference positioning data set.
  • the first mobile device interacts with the corresponding second device based on the used reference map.
• the user makes a gesture instruction facing the camera of the first mobile device, and the first mobile device communicates with the second device so that the second device executes the user's instruction based on the gesture instruction and the reference map and its reference positioning data set.
• the map constructed in this application is a persistent map; that is, after the mobile device is restarted, its map is in the same coordinate system as the map used in the previous work session.
  • the user can mark the map on the terminal device to set the working area and working mode of the mobile device. For example, a user may mark an area that needs to work multiple times a day or a restricted area or designate a certain area to work in a map on a terminal device.
• this application does not merely obtain a single work record; it continuously collects and integrates information to enrich the positioning features, so that the map can provide positioning for mobile devices in different periods and under different lighting conditions.
• the map construction method disclosed in this application can obtain a more stable map, which facilitates interaction between the user and the device while also saving computing resources, and relieves the strain on computing resources caused by the prior-art approach of constructing the map while positioning.
• the persistent map of this application can be used directly after positioning succeeds. Whereas it was originally necessary to create many positioning features per second, the map updating method of this application only needs to create the positioning features not already covered by the reference map and its reference positioning data set.
  • FIG. 3 shows a schematic structural diagram of an embodiment of the server of this application.
• the reference map and its reference positioning data set constructed by the first mobile device during navigation and movement operations, as well as the current map and its current positioning data set, are stored in the server.
  • the server includes but is not limited to a single server, a server cluster, a distributed server cluster, a cloud server, etc.
• the server is, for example, a cloud server provided by a cloud provider.
  • the cloud server includes a public cloud (Public Cloud) server and a private cloud (Private Cloud) server, where the public or private cloud server includes Software-as-a-Service (Software-as-a-Service, SaaS) ), Platform-as-a-Service (Platform-as-a-Service, PaaS) and Infrastructure-as-a-Service (Infrastructure-as-a-Service, IaaS), etc.
• the private cloud server is, for example, a Facebook cloud computing service platform, Amazon cloud computing service platform, Baidu cloud computing platform, Tencent cloud computing platform, etc.
  • the server is in communication connection with a first mobile device located in a physical space.
  • the physical space refers to a physical space provided for navigation and movement of a mobile device, and the physical space includes, but is not limited to, any of the following: indoor/outdoor space, road space, flight space, etc.
  • the mobile device is a drone
  • the physical space corresponds to flight space
  • the mobile device is a vehicle with autopilot function
• the physical space corresponds to tunnel roads where positioning cannot be obtained, or road spaces where the network signal is weak but navigation is required
  • the mobile device is a sweeping robot, and the physical space corresponds to an indoor or outdoor space.
  • the mobile device is equipped with a camera device, a movement sensor device, and other sensing devices that provide navigation data for autonomous movement; it includes a first mobile device and/or at least one second mobile device, wherein the first mobile device and The second mobile device may be a device of the same type or a device of a different type.
  • both the first mobile device and the second mobile device are trucks with autonomous navigation capabilities.
  • the first mobile device is a cleaning robot
  • the second mobile device is a family companion robot.
  • both the first mobile device and the second mobile device are vehicle-mounted terminals.
  • the first mobile device or the second mobile device may also be a patrol robot or the like.
  • the server includes an interface device 11, a storage device 12, and a processing device 13.
  • the storage device 12 includes a non-volatile memory, a storage server, and the like.
  • the non-volatile memory is, for example, a solid state hard disk or a U disk.
  • the storage server is used to store various information related to power consumption and power supply.
  • the interface device 11 includes a network interface, a data line interface, and the like.
  • the network interface includes, but is not limited to: an Ethernet network interface device, a mobile network (3G, 4G, 5G, etc.)-based network interface device, a short-range communication (WiFi, Bluetooth, etc.)-based network interface device, etc.
  • the data line interface includes but is not limited to: USB interface, RS232, etc.
  • the interface device is data connected with the control system, third-party system, Internet, etc.
  • the processing device 13 is connected to the interface device 11 and the storage device 12, and includes at least one of a CPU or a chip integrated with the CPU, a programmable logic device (FPGA), and a multi-core processor.
  • the processing device 13 also includes a memory for temporarily storing data, such as a memory and a register.
  • the interface device 11 is used for data communication with a first mobile device located in a physical space.
• the interface device 11 is, for example, an Ethernet-based network interface device, a network interface device based on mobile networks (3G, 4G, 5G, etc.), or a network interface device based on short-range communication (WiFi, Bluetooth, etc.), and thereby communicates with the first mobile device.
  • the storage device 12 is used to store a reference map and its reference positioning data set provided to the first mobile device, and store current data from the first mobile device performing navigation and movement operations in the physical space. Map and its current positioning data set, and store at least one program.
  • the storage device 12 includes, for example, a hard disk set on the server side and stores the at least one program.
  • the server stores the reference map and its reference positioning data set in the storage device 12.
  • the storage device 12 provides the reference map and its reference positioning data set to the interface device 11.
  • the storage device 12 stores the current map from the interface device 11 and its current positioning data set.
  • the storage device 12 provides the reference map and its reference positioning data set with the current map and its current positioning data set to the processing device 13 .
  • the processing device 13 is configured to call the at least one program to coordinate the interface device and the storage device to execute the method for updating the map mentioned in any of the foregoing examples.
  • the method for updating the map is shown in FIG. 1 and the corresponding description, which will not be repeated here.
  • the step of updating the map can also be completed by a mobile robot.
  • a mobile robot is provided.
  • FIG. 4 shows a schematic diagram of an embodiment of a module structure of a mobile robot.
  • the mobile robot 2 includes a storage device 24, a mobile device 23, a positioning sensing device 21 and a processing device 22.
  • the storage device is used to store a reference map describing a physical space and its reference positioning data set, a current map and a current positioning data set constructed by performing navigation and movement operations in the physical space, and at least one program;
• the mobile device is used to perform a movement operation based on the navigation route determined on the reference map;
  • the positioning sensing device is used to collect second positioning feature information during the execution of the navigation movement operation to form a current positioning data set;
• the processing device is connected to the storage device, the positioning sensing device and the mobile device, and is used to call and execute the at least one program to coordinate the storage device, the positioning sensing device and the mobile device.
  • the positioning sensing device includes, but is not limited to, at least one of the following: a camera device, an infrared distance measuring device, a laser distance measuring device, an angle sensor, a displacement sensor, a counter, and the like.
  • measurement and sensing devices such as laser ranging sensors and infrared ranging sensors are arranged on the side of the mobile robot body.
  • Measuring and sensing devices such as angle sensors, displacement sensors, and counters are arranged on the movement control system (such as drive motors, rollers, etc.) of the mobile robot.
  • Visual sensing devices such as 2D camera devices and 3D camera devices are arranged on the side or top of the mobile robot.
  • the processing device 22 navigates the mobile device 23 based on a reference map and its reference positioning data set; key frame images captured by the camera device in the positioning sensing device 21 are provided to the processing device 22, and the processing device 22 constructs a current map and a current positioning data set based on the key frame images provided by the camera device and provides them to the storage device 24 for storage.
  • the processing device 22 reads the current map and the current positioning data set from the storage device 24, and starts the construction of the reference map and the reference positioning data set.
  • the stored reference map and its reference positioning data set are constructed based on the mobile robot itself and/or at least one second mobile device each performing at least one navigation movement operation in the physical space; that is, the reference map and its reference positioning data set are obtained by merging the maps and positioning data sets separately constructed after at least one mobile device performs multiple navigation movements in the same physical space.
  • the reference map and its reference positioning data set constitute the aforementioned map data.
  • the mobile device is equipped with a camera device, movement sensor devices, and other sensing devices that provide navigation data for autonomous movement; it includes the mobile robot and/or at least one second mobile device, wherein the mobile robot and the second mobile device can be devices of the same type or of different types.
  • the mobile robot and the second mobile device are both trucks with autonomous navigation capabilities.
  • in an indoor space, the mobile robot is a cleaning robot and the second mobile device is a family companion robot.
  • the mobile robot and the second mobile device are both vehicle-mounted terminals.
  • the mobile robot or the second mobile device may also be a patrol robot or the like.
  • the mobile robot further includes an interface device for data communication with at least one second mobile device; in some embodiments, the processing device also performs an operation of acquiring the third map and the third positioning data set provided by the second mobile device, so as to perform data fusion processing on the reference map and its reference positioning data set, the second map and its second positioning data set, and the third map and its third positioning data set.
  • a physical space includes the mobile robot and a second mobile device; the mobile robot constructs the current map and its current positioning data set during its navigation and movement operation, and the second mobile device constructs a third map and its third positioning data set during its navigation and movement operation.
  • the mobile robot receives data from the second mobile device through the interface device, and performs data fusion processing on the reference map and its reference positioning data set, the current map and its current positioning data set, and the third map and its third positioning data set.
  • the physical space contains multiple second mobile devices in addition to the mobile robot; multiple third maps and their third positioning data sets will therefore be constructed during the navigation and movement operations of the multiple second mobile devices.
  • the mobile robot receives the multiple third maps and their third positioning data sets through the interface device, and performs data fusion processing on the multiple third maps and their third positioning data sets together with the reference map and its reference positioning data set and the second map and its second positioning data set.
  • FIG. 5 shows a schematic diagram of an embodiment of a working process of the mobile robot in this application.
  • the robot performs data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set.
  • the fusion refers to the integration of maps and positioning data sets constructed at different times.
  • the integration of the map includes either of the following: integrating the coordinate information of current maps constructed at different times into the coordinate information of a unified reference map; or integrating the coordinate information of the current map into the reference map.
  • the current map and the reference map are differentially processed to obtain the differential coordinate information in the current map, which is then integrated into the reference map.
  • the integration of the map also includes removing geographic locations that have not recently been included in the reference map, such as removing the coordinate information of the geographic location of an obstacle that was only temporarily placed.
  • the integration of the positioning data set includes either of the following: integrating the second positioning feature information of current positioning data sets constructed at different times into the first positioning feature information of a unified reference positioning data set; or integrating the second positioning feature information into the reference positioning data set.
  • the current positioning data set and the reference positioning data set are differentially processed to obtain differential positioning feature information, and the two positioning data sets are integrated based on the differential positioning feature information.
  • the integration of the positioning data set also includes removing first positioning feature information that has not recently been included in the reference positioning data set, for example, removing first positioning feature information determined to reflect an obstacle that was only temporarily placed.
  • the merged map and positioning data set integrate all the map data collected by the mobile robot and/or the second mobile device in the historical navigation and movement operation.
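The integration of positioning data sets described above can be outlined in code. The following Python sketch is illustrative only and not the patented implementation: the feature record layout, the coordinate-averaging rule for matched features, and the age-based removal of stale features are all assumptions made for this example.

```python
# Hedged sketch: fuse a reference positioning data set with a current one.
# Matched features are refreshed, newly observed features are added, and
# reference features that go unobserved for several runs are removed
# (e.g. a temporarily placed obstacle). Data layout is an assumption:
# feature_id -> {'desc': descriptor, 'coord': (x, y), 'age': int}.

def fuse_datasets(reference, current, stale_after=3):
    fused = {}
    for fid, ref in reference.items():
        if fid in current:
            cur = current[fid]
            # matched feature: average the coordinates and reset its age
            coord = tuple((r + c) / 2 for r, c in zip(ref['coord'], cur['coord']))
            fused[fid] = {'desc': cur['desc'], 'coord': coord, 'age': 0}
        elif ref['age'] + 1 < stale_after:
            # unmatched reference feature: keep it but let it age,
            # so features not seen recently are eventually dropped
            fused[fid] = dict(ref, age=ref['age'] + 1)
    for fid, cur in current.items():
        if fid not in fused and fid not in reference:
            fused[fid] = dict(cur, age=0)  # newly observed feature
    return fused
```

In this sketch, repeated fusion over many runs gradually accumulates features seen under different scenes while pruning transient ones, mirroring the behavior the description attributes to the merged data set.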
  • the mobile robot performs the first navigation and movement operation Do1 under natural light during the day.
  • the current map P1 and the current positioning feature data set D1 constructed by the mobile robot reflect the state presented under natural light and are used as the reference map and its reference positioning data set; at night, under indoor lighting, the brightness and angle of illumination have changed, and the state of the current map P2 and current positioning feature data set D2 that the mobile robot builds in the second navigation movement operation Do2 has changed accordingly.
  • the current map P2 and its current positioning data set D2 constructed by the mobile robot at night are fused with the reference map P1 and its reference positioning data set D1 constructed during the day, so that the fused reference map and its reference positioning data set At the same time, it contains the map and positioning data set constructed under the scene of the physical space during the day and night.
  • the mobile robot and/or at least one second mobile device has performed multiple navigation and movement operations in a physical space, and the reference map and its reference positioning data set have already integrated the maps and positioning data sets constructed in those operations. After a new navigation movement operation of the mobile robot and/or at least one second mobile device, the current map and its current positioning data set that it constructed are merged with the historically constructed reference map and its reference positioning data set; the reference map and its reference positioning data set are thus constructed iteratively.
  • the process of the mobile robot performing step S210 is the same as or similar to the process of the first device performing step S120 in the foregoing example, and will not be described in detail here.
  • the mobile robot executes step S220 to use the fused map and its positioning data set as the new reference map and its reference positioning data set in the mobile robot, and stores them.
  • after fusing the reference map and its reference positioning data set with the second map and its second positioning data set, the processing device of the mobile robot sends the fused new reference map and its reference positioning data set to the second mobile device, so that the new reference map and its reference positioning data set can be used in the next navigation movement operation.
  • the processing device further executes the step of marking the location of at least one second device equipped with a camera device located in the physical space on the map.
  • in addition to a mobile robot that can perform navigation and movement operations in the physical space, the physical space also contains a second device equipped with a camera device.
  • the second device includes the aforementioned second mobile device, and/or an electronic device fixedly installed in the physical space and equipped with a camera device, such as a security camera.
  • the mobile robot also obtains a third key frame image taken by the second device, determines the coordinate position of the second device on the reference map by matching the third positioning feature information in the third key frame image against the first positioning feature information in the first key frame images of the reference positioning data set, marks the location of the second device on the reference map, and sends the reference map marked with the location of the second device and its reference positioning data set to the storage device of the mobile robot.
  • the user can interact, through a smart terminal, with the mobile robot and/or each second device marked on the reference map.
  • the second device may execute the user's instruction based on the reference map and its reference positioning data set.
  • the mobile robot interacts with the corresponding second device based on the used reference map.
  • the user makes a gesture instruction directly at the camera of the mobile robot, and the mobile robot communicates with the second device so that the second device executes the user's instruction based on the gesture instruction and the reference map and its reference positioning data set.
  • the map constructed by the mobile robot in this application is a persistent map, that is, after the mobile robot is restarted its map is in the same coordinate system as the map used in the previous working session.
  • the user can annotate the map on a terminal device to set the working areas and working modes of the mobile device; for example, the user may mark on the map an area that needs to be worked multiple times a day, mark a restricted area, or designate a certain area to be worked.
  • the mobile robot in this application does not merely obtain a record of a single working session, but continuously collects and integrates information to enrich the positioning features, so that the map can provide positioning for the mobile device at different times of day and under changing lighting conditions.
  • the mobile robot disclosed in the present application can obtain a more stable map, which facilitates interaction between the user and the device, saves computing resources, and relieves the strain on computing resources caused in the prior art by rebuilding the map on every run.
  • the persistent map of the mobile robot in this application can be used directly once positioning succeeds; whereas previously many positioning features had to be created every second, the mobile robot of this application only needs to create the positioning features not already covered by the reference map and its reference positioning data set.
  • the mobile robot 3 includes: an interface device 35 for data communication with a server; a storage device 34 for storing a reference map and its reference positioning data set used to provide navigation services during navigation and movement operations in a physical space, storing the current map and its current positioning data set constructed during the execution of the navigation movement operation, and storing at least one program; and a processing device 32, connected to the storage device and the interface device, for calling and executing the at least one program to coordinate the storage device and the interface device to execute the following method: sending the current map and its current positioning data set to the server; and obtaining the new reference map and its reference positioning data set returned by the server and updating the stored reference map and its reference positioning data set; wherein the acquired new reference map and its reference positioning data set are obtained by the server through data fusion of the pre-update reference map and its reference positioning data set with the current map and its current positioning data set.
  • the mobile robot 3 completes the navigation movement operation by calling the reference map and its reference positioning data set in its storage device 34, and constructs the current map and its current positioning data set during the navigation movement operation.
  • the mobile robot stores the current map constructed during the navigation and movement operation and its current positioning data set in the storage device 34.
  • the processing device 32 of the mobile robot 3 reads the current map and its current positioning data set from the storage device, and sends them to the server through the interface device 35. After the server completes the step of fusing the reference map and its reference positioning data set with the current map and its current positioning data set, a new reference map and its reference positioning data set are formed.
  • the server sends the new reference map and its reference positioning data set to the interface device 35 of the mobile robot 3, and the processing device 32 stores them in the storage device 34.
  • the method for the server to update the reference map and its reference positioning data set by data fusion is the same or similar to the foregoing example of the method for updating the map, and will not be described in detail here.
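The robot/server exchange described above (send the current map and data set, receive the fused new reference) can be sketched in a few lines. This is a hedged illustration, not the disclosed implementation: the `Server` and `Robot` classes, their method names, and the use of a feature-set union to stand in for the full fusion step are all assumptions; a real system would communicate through the interface device over a network link.

```python
# Illustrative sketch of the synchronization flow: the robot builds a current
# data set while navigating, sends it to the server, and stores the new
# reference returned by the server's fusion step.

class Server:
    def __init__(self, reference):
        self.reference = reference  # {'map': ..., 'features': set(...)}

    def fuse(self, current):
        # data fusion: a set union stands in for the full fusion processing
        self.reference = {
            'map': self.reference['map'],
            'features': self.reference['features'] | current['features'],
        }
        return self.reference       # new reference map and data set


class Robot:
    def __init__(self, reference):
        self.reference = reference
        self.current = {'map': reference['map'], 'features': set()}

    def navigate(self, observed):
        # collect positioning features into the current data set en route
        self.current['features'] |= observed

    def sync(self, server):
        # send the current map/data set; store the returned new reference
        self.reference = server.fuse(self.current)
```

After `sync`, both sides hold the same fused reference, which is what allows the next navigation movement to start in the same coordinate system.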
  • the physical space includes a mobile robot and at least one second mobile device, and the at least one second mobile device constructs a third map and its third positioning data set during navigation and movement operations in the physical space.
  • the new reference map and its reference positioning data set are also integrated with a third map and a third positioning data set provided by at least one second mobile device.
  • the mobile robot disclosed in the present application can cooperate with the server to jointly construct a persistent map, and the persistent map ensures that the map after the mobile robot restarts is in the same coordinate system as the map of the previous working session.
  • the user can annotate the map on a terminal device to set the working areas and working modes of the mobile device; for example, the user may mark on the map an area that needs to be worked multiple times a day, mark a restricted area, or designate a certain area to be worked.
  • the visual information can differ considerably; therefore, the mobile robot in this application does not merely obtain a record of a single working session, but continuously collects and integrates information to enrich the positioning features, so that the map can provide positioning for the mobile device at different times of day and under changing lighting conditions.
  • the mobile robot disclosed in the present application can obtain a more stable map, which facilitates interaction between the user and the device, saves computing resources, and relieves the strain on computing resources caused in the prior art by rebuilding the map on every run.
  • the persistent map of the mobile robot in this application can be used directly once positioning succeeds; whereas previously many positioning features had to be created every second, the mobile robot of this application only needs to create the positioning features not already covered by the reference map and its reference positioning data set.


Abstract

Provided are a method for updating a map and a mobile robot, relating to the technical field of navigation control. First, a current map and its current positioning data set, constructed by a first mobile device while performing a navigation movement operation in a physical space, are acquired (S110), wherein the first mobile device navigates and moves using a pre-stored reference map corresponding to the physical space and its reference positioning data set. The reference map and its reference positioning data set are then fused with the current map and its current positioning data set (S120). Finally, the fused map and its positioning data set are used as the new reference map and reference positioning data set of the first mobile device. The method provides a map persistence scheme: as the map accumulates over time, it comes to cover different scenes, so that it can provide positioning information for the mobile device at different times of day and under different lighting conditions.

Description

Method for Updating a Map, and Mobile Robot
Technical Field
The present application relates to the technical field of navigation control, and in particular to a method for updating a map, a server, and a mobile robot.
Background
Autonomous mobile devices, typified by robots, navigate and move according to a map. When no map is available for navigation, the autonomous mobile device needs to update the map. At present, the map constructed by an autonomous mobile device during navigation and movement cannot be persisted, because the initial position, initial posture, and the physical space in which navigation is performed cannot be guaranteed to be exactly the same from one run to the next.
However, for autonomous mobile devices such as cleaning robots and patrol robots, it cannot be guaranteed that successive navigation movements take place in the same environment, for example under the same lighting or in a physical space whose obstacles have not been moved; for such devices, using the map constructed last time to perform the current navigation operation therefore introduces large errors.
Summary
In view of the above shortcomings of the prior art, the purpose of the present application is to provide a method for updating a map and a mobile robot, so as to solve the problem of map non-persistence in the prior art.
To achieve the above and other related purposes, a first aspect of the present application provides a method for updating a map, comprising: acquiring a current map and its current positioning data set constructed by a first mobile device while performing a navigation movement operation in a physical space, wherein the first mobile device navigates and moves using a pre-stored reference map corresponding to the physical space and its reference positioning data set; performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set; and using the fused map and its positioning data set as the new reference map and reference positioning data set of the first mobile device.
In some implementations of the first aspect, the reference map and its reference positioning data set are constructed based on the first mobile device and/or at least one second mobile device each performing at least one navigation movement operation in the physical space.
In some implementations of the first aspect, the step of performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set comprises: determining first positioning feature information and its first positioning coordinate information in the reference positioning data set that match second positioning feature information and its second positioning coordinate information in the current positioning data set; and fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matched first positioning feature information and first positioning coordinate information and the matched second positioning feature information and second positioning coordinate information.
In some implementations of the first aspect, the step of determining the matched first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information in the reference and current positioning data sets comprises: matching each item of first positioning feature information in the reference positioning data set against each item of second positioning feature information in the current positioning data set; and determining the matched first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information based on the matching results.
In some implementations of the first aspect, the step of determining the matched information based on the matching results comprises: based on the matched first and second positioning feature information, matching their respective first and second positioning coordinate information to obtain matched first and second positioning coordinate information.
In some implementations of the first aspect, the first positioning feature information comprises first measured positioning feature information determined from spatial features in the reference map, and the second positioning feature information comprises second measured positioning feature information determined from spatial features in the current map; and/or the first positioning feature information comprises first visual positioning feature information extracted from first key frame images in the reference positioning data set, and the second positioning feature information comprises second visual positioning feature information extracted from second key frame images in the current positioning data set.
In some implementations of the first aspect, the first measured positioning feature information comprises at least one of: measurement data determined from combinations of coordinate information of spatial features in the reference map, and measurement data determined from combinations of depth information describing spatial features in the reference map; and the second measured positioning feature information comprises at least one of: measurement data determined from combinations of coordinate information of corresponding spatial features in the current map, and measurement data determined from combinations of depth information describing spatial features in the current map.
In some implementations of the first aspect, the step of matching each item of first positioning feature information in the reference positioning data set against each item of second positioning feature information in the current positioning data set comprises: matching the second positioning feature information in each second key frame image of the current positioning data set against the first positioning feature information in each first key frame image of the reference positioning data set, so as to determine the matched first and second positioning feature information in the first and second key frame images.
In some implementations of the first aspect, the method further comprises: analyzing the first key frame images in the reference positioning data set to determine a first relative orientation relationship between the first image coordinate information corresponding to the first key frame image and the main direction of the physical space, and adjusting the pixel positions of the first positioning feature information in the first key frame image based on the first relative orientation relationship; and/or analyzing the second key frame images in the current positioning data set to determine a second relative orientation relationship between the second image coordinate information corresponding to the second key frame image and the main direction of the physical space, and adjusting the pixel positions of the second positioning feature information in the second key frame image based on the second relative orientation relationship; so as to match the second positioning feature information in the adjusted second key frame image against the first positioning feature information in the adjusted first key frame image.
In some implementations of the first aspect, the method further comprises a step of adjusting the reference map or the current map until the two adjusted maps satisfy a preset overlap condition, so that the matched first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information in the reference and current positioning data sets are determined based on the two adjusted maps.
In some implementations of the first aspect, the step of fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matched information comprises: correcting coordinate errors in the reference map and/or the current map based on coordinate deviation information between the matched first and second positioning coordinate information; performing a merge operation based on at least one corrected map to obtain a new reference map; and marking at least the matched first and second positioning feature information of the reference and current positioning data sets on the new reference map to obtain new positioning coordinate information.
In some implementations of the first aspect, the step of fusing the reference map and its reference positioning data set with the current map and its current positioning data set based on the matched information comprises at least one of the following steps, so as to obtain a new reference positioning data set: adjusting the reference positioning data set or the current positioning data set based on positioning feature deviation information between the matched first and second positioning feature information; and adding each item of unmatched second positioning feature information in the current positioning data set to the reference positioning data set, or adding each item of unmatched first positioning feature information in the reference positioning data set to the current positioning data set.
In some implementations of the first aspect, the method further comprises the following steps: detecting the completeness of the current map and/or detecting the information content of the current positioning data set; and performing the data fusion processing based on the detection results.
In some implementations of the first aspect, the method further comprises a step of sending the new reference map and its reference positioning data set to the first mobile device located in the physical space.
In some implementations of the first aspect, the method further comprises a step of marking, on the reference map, the position of at least one second device equipped with a camera device located in the physical space.
A second aspect of the present application provides a server, comprising: an interface device for data communication with a first mobile device located in a physical space; a storage device for storing a reference map and its reference positioning data set to be provided to the first mobile device, storing a current map and its current positioning data set constructed by the first mobile device while performing a navigation movement operation in the physical space, and storing at least one program; and a processing device, connected to the storage device and the interface device, for calling and executing the at least one program to coordinate the storage device and the interface device to execute the method of any one of the first aspect.
A third aspect of the present application provides a mobile robot, comprising: a storage device for storing a reference map and its reference positioning data set, a current map and a current positioning data set, and at least one program, wherein the current map and current positioning data set are constructed by the mobile robot performing one navigation movement operation, and the reference map and its reference positioning data set are used by the mobile robot in performing the navigation movement operation; a mobile device for performing a movement operation based on a navigation route determined on the reference map; a positioning sensing device for collecting second positioning feature information during the navigation movement operation so as to form the current positioning data set; and a processing device, connected to the storage device, the camera device, and the mobile device, for calling and executing the at least one program to coordinate the storage device, the camera device, and the mobile device to execute the method for updating a map described in the first aspect.
In some implementations of the third aspect, the stored reference map and its reference positioning data set are constructed based on the mobile robot itself and/or at least one second mobile device each performing at least one navigation movement operation in the same physical space.
In some implementations of the third aspect, the mobile robot further comprises an interface device for data communication with at least one second mobile device; the processing device also performs an operation of acquiring the third map and the third positioning data set provided by the second mobile device, so as to perform data fusion processing on the reference map and its reference positioning data set, the second map and its second positioning data set, and the third map and its third positioning data set.
A fourth aspect of the present application provides a mobile robot, comprising: an interface device for data communication with a server; a storage device for storing a reference map and its reference positioning data set used to provide navigation services during navigation movement operations in a physical space, storing a current map and its current positioning data set constructed during the execution of the navigation movement operation, and storing at least one program; and a processing device, connected to the storage device and the interface device, for calling and executing the at least one program to coordinate the storage device and the interface device to execute the following method: sending the current map and its current positioning data set to the server; and obtaining the new reference map and its reference positioning data set returned by the server and updating the stored reference map and its reference positioning data set, wherein the acquired new reference map and its reference positioning data set are obtained by the server through data fusion of the pre-update reference map and its reference positioning data set with the current map and its current positioning data set.
In some implementations of the fourth aspect, the new reference map and its reference positioning data set also fuse a third map and its third positioning data set provided by at least one second mobile device.
As described above, the method for updating a map and the mobile robot of the present application have the following beneficial effects: the present application provides a map persistence scheme, that is, after the mobile device is restarted its map lies in the same coordinate system as the map of the previous working session. The user can annotate the map on a terminal device to set the areas in which the mobile device should work and the working mode. Meanwhile, as the map of the present application accumulates over time it comes to cover different scenes, so that it can provide positioning information for the mobile device at different times of day and under different lighting conditions.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an embodiment of the map updating method of the present application.
FIG. 2 is a flowchart of the steps, in the present application, of fusing the reference map and its reference positioning data set with the current map and its current positioning data set.
FIG. 3 is a schematic structural diagram of an embodiment of the server of the present application.
FIG. 4 is a schematic diagram of an embodiment of the module structure of a mobile robot in the present application.
FIG. 5 is a schematic diagram of an embodiment of a working process of the mobile robot in the present application.
FIG. 6 is a schematic diagram of another embodiment of the mobile robot in the present application.
Detailed Description
The implementations of the present application are illustrated below by specific embodiments; those skilled in the art can readily understand other advantages and effects of the present application from the contents disclosed in this specification.
Although in some instances the terms first, second, etc. are used herein to describe various elements, these elements should not be limited by these terms; the terms are only used to distinguish one element from another. For example, a first mobile device may be called a second mobile device, and similarly, a second mobile device may be called a first mobile device, without departing from the scope of the various described embodiments. The first mobile device and the second mobile device each describe one device, but unless the context clearly indicates otherwise, they are not the same mobile device. Similar cases include the first positioning feature information and the second positioning feature information, and the first key frame image and the second key frame image.
Furthermore, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprise" and "include" indicate the presence of the stated features, steps, operations, elements, components, items, categories, and/or groups, but do not exclude the presence, occurrence, or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" used herein are interpreted as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
Extending from the autonomous mobile devices mentioned in the background to other movable devices that must navigate according to a map, such as automobiles and on-board terminals configured in them: when satellite positioning (or other positioning technology) cannot be used to obtain a position within the map data and other positioning information must be used instead, a mobile device (such as a mobile robot) would otherwise be unable to obtain the corresponding positioning information and map for the physical space concerned.
Therefore, the present application provides a method for updating a map and a mobile robot, so as to provide a mobile device with a persistently usable reference map and reference positioning data set for navigation and movement in a physical space. Here, the physical space denotes the physical space provided for the mobile device's navigation and movement, including but not limited to any of the following: indoor/outdoor space, road space, flight space, etc. In some embodiments, the mobile device is a drone and the physical space is a flight space; in other embodiments, the mobile device is a vehicle with autonomous driving capability and the physical space is a tunnel road where positioning is unavailable, or a road space with a weak network signal where navigation is nonetheless needed; in still other embodiments, the mobile device is a cleaning robot and the physical space is an indoor or outdoor space.
Here, the reference map and its reference positioning data set are constructed based on the first mobile device and/or at least one second mobile device each performing at least one navigation movement operation in the physical space; that is, the reference map and its reference positioning data set fuse the maps and positioning data sets separately constructed by at least one mobile device over multiple navigation movements in the same physical space. The reference map and its reference positioning data set constitute the aforementioned map data. The mobile device is equipped with sensing devices, such as a camera device and movement sensors, that provide navigation data for autonomous movement; it includes a first mobile device and/or at least one second mobile device, which may be of the same type or of different types. For example, in a warehouse space, the first and second mobile devices are both transport trucks with autonomous navigation capability. As another example, in an indoor space, the first mobile device is a cleaning robot and the second mobile device is a family companion robot. As yet another example, in a tunnel space, the first and second mobile devices are both on-board terminals. The first or second mobile device may also be a patrol robot or the like.
The method for updating a map and the mobile robot provided in the present application store the map and positioning data set constructed in each working session in a storage medium and fuse them with the reference map and its reference positioning data set in that storage medium, thereby continuously optimizing the reference map and its reference positioning data set so that they come to cover more scenes. Using the reference map and its reference positioning data set, the first mobile device can move autonomously and accurately locate its position in the environment at different times of day and under different lighting conditions. Moreover, with the map data updated by the present application, each navigation movement of the first mobile device is located in the same coordinate system, which provides the basis for applications of map persistence.
The map constructed in any navigation movement of the first mobile device may be of any of the following types: a grid map, a topological map, etc. For example, a grid map is constructed during path planning while the first mobile device navigates and moves. As another example, the physical space is represented as a topological graph with nodes and connecting edges, where nodes represent important positions in the environment (corners, doors, elevators, stairs, etc.) and edges represent connections between nodes, such as corridors. On this basis, any type of map may carry semantic labels so that the user can perform semantics-based navigation control with the first mobile device; a semantic label may be the name of an object in the physical space, such as a desk or a notebook. In some cases, the user issues a voice instruction to the first mobile device, such as directing it to the bedroom at the position where it should turn right six meters ahead; such a map is also called a semantic map. A navigation movement operation refers to the process in which the first mobile device moves using navigation data and updates the map data. In some cases, the first mobile device uses already-constructed map data during navigation movement to navigate to a designated place and complete its work; for example, a cruising vehicle navigates according to map data on roads, such as tunnels, where positioning is unavailable. In other cases, the first mobile device constructs a map of its working environment during navigation movement and stores it on a storage medium; for example, a cleaning robot or a vehicle constructs a map of its working environment while working and transfers the constructed map to the storage medium. The storage medium may be separate from the first mobile device, such as a storage device configured on a server, or the storage device of a smart terminal in data communication with the first mobile device, e.g. an SD card or flash memory configured in the smart terminal, or a solid-state drive configured on the server. The storage medium may also be a storage device of the first mobile device itself, e.g. an SD card or flash memory configured in the first mobile device.
Here, taking as an example the current map data constructed by the first mobile device in one navigation operation, i.e. current map data comprising a current map and a current positioning data set, the present application describes the process of updating the reference map and its reference positioning data set used to provide persistence.
During the current navigation movement, the first mobile device moves autonomously according to the reference map and its reference positioning data set while constructing the current map and its current positioning data set for this navigation movement. For example, on a tunnel road where positioning is unavailable, or on a road with a weak network signal where navigation is needed, a vehicle with a cruise function can drive autonomously based on VSLAM (Visual Simultaneous Localization and Mapping) or SLAM (Simultaneous Localization and Mapping) using the reference map and its reference positioning data set, while constructing, from the route traveled, the current map and current visual data set of the physical space it has passed through. As another example, while performing a cleaning task, a cleaning robot can move and clean autonomously based on VSLAM or SLAM using the reference map and its reference positioning data set, while constructing the current map and current visual data set of the physical space along the cleaned route. As yet another example, a navigation robot in a hotel can, after receiving a customer's semantic instruction, guide the customer based on VSLAM or SLAM using the reference map and its reference positioning data set, while constructing the current map and current visual data set of the physical space along the route traveled.
To facilitate the fusion operation, the current map and the reference map are of the same type, e.g. both grid maps or both topological maps, or can be converted into the same type, e.g. converting a grid map into a topological map via pixel data of a preset resolution. The current map depicts the geographic information of the physical space detected by the first mobile device along the route of one navigation movement; in other words, the current map is determined based on the position and posture of the first mobile device during movement. The current map contains geographic information such as the coordinate information of the starting position of the first mobile device and the coordinate information of obstacles sensed during movement. The reference map depicts the geographic information of the physical space detected and fused over multiple navigation movements of the first mobile device and/or at least one second mobile device in the same physical space. The reference map contains geographic information such as the coordinate information of the starting position of the first mobile device determined by positioning matching, and the coordinate information of obstacles in the physical space determined after positioning matching.
The current positioning data set contains each item of second positioning feature information collected during the current navigation movement, together with its second positioning coordinate information in the current map. Depending on the sensing devices available on the first mobile device, in some embodiments the second positioning feature information comprises second measured positioning feature information determined from the spatial features depicting the current map. In some examples, the first mobile device contains measurement sensing devices such as laser ranging sensors and infrared ranging sensors, as well as measurement sensing devices such as angle sensors, displacement sensors, and counters arranged on its movement control system (e.g. drive motors, rollers); based on the measurements of the relative positions of obstacles by the former, combined with the measurements of the device's movement distance and deflection angle by the latter, the first mobile device extracts second measured positioning feature information from the spatial features formed by the geographic information of obstacles in the current map. The spatial features include at least one of: obstacle contours, feature points, and feature lines depicted in the current map. The second measured positioning features include at least one of the following descriptions: measurement data determined from combinations of coordinate information of corresponding spatial features in the current map, and measurement data determined from combinations of depth information describing spatial features in the current map. For example, taking the coordinate information of a measurement point on an obstacle contour in the current map as the starting point, position offset vectors of the contour are constructed, so that the second measured positioning feature includes at least head-to-tail connected position offset vectors. As another example, based on the depth information of a measurement point on an obstacle contour measured by the first mobile device, together with the other depth information measured along that contour, the second measured positioning feature includes at least depth offset vectors describing the obstacle contour. The second measured positioning feature may also include both depth offset vectors and position offset vectors describing the same obstacle contour, or be obtained from the combination of the coordinate information of a corner point in the current map and the coordinate information of its surrounding measurement points.
In other embodiments, the second positioning feature information comprises second visual positioning feature information extracted from second key frame images in the current positioning data set. The second key frame images are key frame images captured by the first mobile device during navigation movement. The second visual positioning feature information is obtained from multiple second key frame images using image feature extraction and matching, and includes but is not limited to feature points and feature lines in the second key frame images. The second visual positioning feature information is, for example, described by descriptors. For instance, based on the SIFT algorithm (Scale-Invariant Feature Transform), positioning feature information is extracted from multiple second key frame images, and a sequence of gray values is obtained from the image blocks of those key frame images containing the visual positioning feature information; this gray-value sequence serves as the descriptor. As another instance, the descriptor describes the second visual positioning feature information by encoding the brightness information around it: a number of points, e.g. but not limited to 256 or 512, are sampled in a circle around the feature, the sampled points are compared pairwise to obtain the brightness relationships between them, and the brightness relationships are converted into a binary string or another encoding format.
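The pairwise-brightness descriptor just described can be sketched as follows. This is a hedged illustration rather than the disclosed implementation: only 8 sample points are used instead of 256 or 512 to keep the example small, and the random sampling pattern, radius, and `brightness_at` callback are assumptions made for this example.

```python
# Illustrative sketch: encode a feature by sampling points around it and
# comparing their brightnesses pairwise into a binary string.

import math
import random

def binary_descriptor(brightness_at, center, radius=3.0, n_points=8, seed=0):
    """brightness_at: function (x, y) -> brightness; center: feature position."""
    rng = random.Random(seed)  # fixed seed => the same sampling pattern every call
    offsets = []
    for _ in range(n_points):
        angle = rng.uniform(0.0, 2.0 * math.pi)
        offsets.append((radius * math.cos(angle), radius * math.sin(angle)))
    samples = [brightness_at(center[0] + dx, center[1] + dy) for dx, dy in offsets]
    # compare the sampled points pairwise; each comparison yields one bit
    bits = []
    for i in range(n_points):
        for j in range(i + 1, n_points):
            bits.append('1' if samples[i] > samples[j] else '0')
    return ''.join(bits)
```

Because the sampling pattern is fixed, descriptors computed at different positions of the same local brightness pattern come out identical and can be compared bit by bit, which is what makes this encoding usable for matching.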
Similar to the second measured and second visual positioning feature information in the current positioning data set, the reference positioning data set contains the collection of first positioning feature information collected over, and fused from, the previous navigation movements of the first mobile device and/or at least one second mobile device. Depending on the available sensing devices, in some embodiments the first positioning feature information includes first measured positioning feature information determined from spatial features in the reference map, which includes at least one of: measurement data determined from combinations of coordinate information of spatial features in the reference map, and measurement data determined from combinations of depth information describing spatial features in the reference map; it describes spatial features in the same or a similar way as the second measured positioning feature information, which is not detailed again here. In still other embodiments, the first positioning feature information includes first visual positioning feature information extracted from first key frame images in the reference positioning data set; the first key frame images are obtained in the same or a similar way as the aforementioned second key frame images, and the first visual positioning feature information describes positioning features in images in the same or a similar way as the second visual positioning feature information, neither of which is detailed again here.
It should be understood that a frame is the smallest single image in an animation, shown as one cell or mark on the timeline of animation software. A key frame corresponds to the original drawing in two-dimensional animation, i.e. the frame at which a key action in the motion or change of an object occurs. A visual sensor continuously captures surrounding images while the mobile device moves, and adjacent frames are highly similar; comparing adjacent frames therefore may not clearly reveal the device's motion, whereas comparing key frames does so much more distinctly. Thus, each first key frame image acquired by the first mobile device during navigation movement corresponds to a different position and posture of the first mobile device in the physical space. Using different first key frame images captured at different positions and postures, the matched positioning features in the images can be determined and taken as first positioning feature information, and the coordinate information of the first positioning feature information in the current map is determined at the same time. For the manner of matching positioning feature information using at least two successive key frame images, and of determining the position and posture at which the first mobile device captured each key frame image, reference may be made to patent application publication No. CN107907131A, which is incorporated herein by reference in its entirety.
While the first mobile device performs one navigation movement, it uses the reference map and its reference positioning data set to determine its current position and to control its posture, movement direction, speed, etc. so as to follow the navigation route; at the same time, it constructs the current map and its current positioning data set from the starting position of this navigation movement. In some examples, after completing the navigation movement, the first mobile device saves the current map and its current positioning data set and starts step S110 of the map updating method of the present application at a suitable time, for example while charging or when the first mobile device's system resources are abundant. In other examples, the map updating method of the present application is executed during the navigation movement, based on the current map and current positioning data set constructed so far.
Referring to FIG. 1, which shows a flowchart of an embodiment of the method for updating a map: in some examples, the method is mainly executed by the server in cooperation with the first mobile device. For example, the first mobile device provides its saved current map and current positioning data set to the server while charging or when its system resource usage is low, and the server executes step S110 to start the construction of the reference map and its reference positioning data set. In other examples, the method is executed by the first mobile device itself; for example, while charging or when its system resource usage is low, the first mobile device reads the current map and its current positioning data set from the storage medium to execute step S110, and starts the construction of the reference map and its reference positioning data set.
In step S110, the server acquires the current map and its current positioning data set constructed by the first mobile device while performing a navigation movement operation in a physical space; alternatively, the first mobile device acquires the current map and its current positioning data set constructed by itself.
The subsequent steps of the method are described below taking the server as an example. It should be noted that the first mobile device may also execute the subsequent steps.
In step S120, data fusion processing is performed on the reference map and its reference positioning data set and the current map and its current positioning data set.
It should be understood that the fusion refers to integrating the current maps and current positioning data sets constructed at different times. The integration of maps includes either of the following: integrating the coordinate information of current maps constructed at different times into the coordinate information of a unified reference map; or integrating the coordinate information of the current map into the reference map. In some more specific examples, the current map and the reference map are differentially processed to obtain the differential coordinate information in the current map, and the coordinate information in the reference map is corrected based on the differential coordinate information. The integration of maps also includes removing geographic positions that have not recently been included in the reference map, e.g. removing the coordinate information of the geographic position of an obstacle judged to have been only temporarily placed.
The integration of positioning data sets includes either of the following: integrating the second positioning feature information of current positioning data sets constructed at different times into a reference positioning data set uniformly corresponding to the reference map; or integrating the second positioning feature information of the current positioning data set into the saved reference positioning data set. In some more specific examples, the matched positioning feature information of the current and reference positioning data sets is differentially processed, and the corresponding first positioning information in the reference positioning data set is updated based on the differential positioning feature information. The integration of positioning data sets also includes: removing first positioning feature information that has not recently been included in the reference positioning data set, e.g. removing first positioning feature information judged to reflect an obstacle that was only temporarily placed; and/or adding the second positioning feature information of the current positioning data set to the reference positioning data set. The fused map and positioning data set thus integrate all the map data collected by the first mobile device and/or the second mobile device in historical navigation movement operations.
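The map-integration side of this fusion can be sketched for the grid-map case. The sketch below is illustrative only: the 0/1 occupancy encoding, the separate "observed" mask marking which cells the current pass actually sensed, and the rule of trusting the fresher observation are all assumptions, not the disclosed method.

```python
# Hedged sketch of grid-map integration: cells the current navigation pass
# actually observed overwrite the reference (so an obstacle that was only
# temporarily placed gets cleared once it is observed as free), while
# unobserved cells keep their reference values.

def fuse_grid(reference, current, observed):
    """All arguments are equal-sized 2-D lists of 0/1 cells; observed marks
    the cells the current pass sensed."""
    fused = [row[:] for row in reference]       # start from the reference map
    for r, row in enumerate(current):
        for c, occ in enumerate(row):
            if observed[r][c]:
                fused[r][c] = occ               # trust the fresh observation
    return fused
```

Run over many passes, this keeps the reference map consistent with the most recent observation of each cell while preserving areas not covered by the current route.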
For example, the first mobile device performed a first navigation movement operation Do1 under natural daylight; the current map P1 and current positioning feature data set D1 it constructed reflect the state presented under natural light and are taken as the reference map and its reference positioning data set. At night, under indoor lighting, the brightness and angle of illumination have changed, and the state of the current map P2 and current positioning feature data set D2 constructed by the first mobile device in a second navigation movement operation Do2 has changed accordingly. Here, the current map P2 and its current positioning data set D2 constructed at night are fused with the reference map P1 and its reference positioning data set D1 constructed during the day, so that the fused reference map and its reference positioning data set contain the maps and positioning data sets constructed under both the daytime and nighttime scenes of the physical space.
As another example, the first mobile device and/or at least one second mobile device have performed multiple navigation movement operations in a physical space, and the reference map and its reference positioning data set have already fused the maps and positioning data sets constructed in those operations. After a new navigation movement operation of the first mobile device and/or at least one second mobile device, the current map and current positioning data set it constructed are fused with the historically constructed reference map and its reference positioning data set; the reference map and its reference positioning data set are thereby constructed through continuous iteration.
Here, since the second positioning feature information collected in each navigation movement of the first mobile device and/or the second device is not entirely consistent with the first positioning feature information in the reference positioning data set, the positioning feature information must be matched during fusion. Refer to FIG. 2, which shows a flowchart of the steps, in the present application, of fusing the reference map and its reference positioning data set with the current map and its current positioning data set.
The step of performing data fusion processing on the reference map and its reference positioning data set and the current map and its current positioning data set includes step S121: determining first positioning feature information and its first positioning coordinate information in the reference positioning data set that match second positioning feature information and its second positioning coordinate information in the current positioning data set.
The reference positioning data set contains first positioning feature information and its first positioning coordinate information in the reference map, and the current positioning data set contains second positioning feature information and its second positioning coordinate information in the current map. The server confirms that the combination of first positioning feature information and its first positioning coordinate information matches the combination of second positioning feature information and its second positioning coordinate information.
In some implementations, step S121 includes step S1211: matching each item of first positioning feature information in the reference positioning data set against each item of second positioning feature information in the current positioning data set; and determining, based on the matching results, the matched first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information.
The manner of matching positioning feature information includes, for example: matching whether at least one of the coordinate vector deviation value or the depth vector deviation value described by the measured positioning feature information of the two data sets falls within a preset measurement matching error range; or matching whether at least one of the gray value, gray distribution, color value, color distribution, color difference, or gray step described by the visual positioning feature information of the two data sets falls within a preset image matching error range.
In some examples, the server exhaustively matches the first and second positioning feature information; when at least one of the following matching conditions is met, the server determines, based on the matched first and second positioning feature information, that the first positioning coordinate information corresponding to the first positioning feature information matches the second positioning coordinate information corresponding to the second positioning feature information. The matching conditions include: the proportion of matched first positioning feature information to the total first positioning feature information is greater than a ratio threshold; the proportion of matched second positioning feature information to the total second positioning feature information is greater than a ratio threshold; or the total number of matched first positioning feature information items is greater than a total threshold, etc. The matching conditions also include: an evaluation result obtained by evaluating several of the above conditions with preset weights falls within an evaluation threshold interval, etc.
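The ratio-or-count matching conditions above can be expressed directly in code. The following sketch is a hedged illustration: the function name, the specific threshold values, and the omission of the weighted-evaluation variant are all assumptions for this example.

```python
# Illustrative sketch of the matching conditions: accept the data-set-level
# match when the matched share of either feature set crosses a ratio
# threshold, or when the absolute matched count crosses a total threshold.
# Threshold values here are arbitrary examples, not values from the disclosure.

def features_match(n_matched, n_first, n_second,
                   ratio_threshold=0.6, count_threshold=50):
    """n_matched: matched feature pairs; n_first/n_second: set sizes."""
    if n_first and n_matched / n_first > ratio_threshold:
        return True
    if n_second and n_matched / n_second > ratio_threshold:
        return True
    return n_matched > count_threshold
```

A weighted combination of these conditions, compared against an evaluation interval as the description also allows, would replace the early returns with a single score.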
In other examples, to prevent fusion errors that may arise when the first mobile device moves in different physical spaces containing similar positioning feature information, the server also matches the positioning coordinate information corresponding to each item of positioning feature information. For example, while a single mobile robot or different mobile robots clean rooms of different layouts that are identically decorated, the second positioning feature information acquired may match the first positioning feature information of the reference positioning data set to a high degree, while the corresponding positioning coordinate information differs considerably.
To this end, when the first and second positioning feature information meet at least one of the matching conditions in the above examples, step S1211 further includes: based on the matched first and second positioning feature information, matching their respective first and second positioning coordinate information to obtain matched first and second positioning coordinate information.
Here, based on the matched first and second positioning feature information, it is checked statistically whether the positional relationship error between their respective first and second positioning coordinate information satisfies a preset positional relationship error condition; if so, the first positioning coordinate information corresponding to the first positioning feature information and the second positioning coordinate information corresponding to the second positioning feature information are determined to match; otherwise, all the matched first and second positioning feature information are determined not to match. The positional relationship error condition includes, for example, at least one of the following, or a combination thereof: the displacement error between the positioning coordinate information of matched positioning features in their respective maps is smaller than a preset displacement error threshold; the deflection-angle error between the positioning coordinate information of matched positioning features in their respective maps is smaller than a preset deflection-angle error threshold; the proportion of positioning coordinate information meeting the preset displacement error threshold to the number of matched positioning features exceeds a preset ratio threshold; or the number of positioning coordinate information items meeting the preset displacement error threshold exceeds a preset total threshold.
To reduce computation, during the matching of positioning feature information the server also uses the positional distribution of the already-matched positioning feature information in their respective maps to determine whether the positional relationship error between the corresponding positioning coordinate information satisfies the preset positional relationship error condition; if so, the corresponding first and second positioning coordinate information are determined to match; otherwise, all the matched first and second positioning feature information are determined not to match. The positional distribution includes, but is not limited to, at least one of the following: 1) clustering the matched positioning feature information by position; correspondingly, the server screens out, from the already-matched positioning feature information and based on the positional relationship error between the clustered position distributions in the two maps, the first positioning feature information and its first positioning coordinate information and the second positioning feature information and its second positioning coordinate information that both match; 2) taking as the positional distribution the shapes traced in the respective maps by the positioning coordinate information of the matched positioning features; correspondingly, the server screens out, based on the positional relationship error between the shapes in the two maps, the first positioning feature information and its first positioning coordinate information and the second positioning feature information and its second positioning coordinate information that both match. In still other implementations, the server matches positioning features using those contained in key frame images; to this end, in some embodiments step S121 includes step S1212: matching the second positioning feature information in each second key frame image of the current positioning data set against the first positioning feature information in each first key frame image of the reference positioning data set, so as to determine the matched first and second positioning feature information in the second and first key frame images; and determining, based on the matching results, the matched first positioning feature information and its first positioning coordinate information and second positioning feature information and its second positioning coordinate information. Each key frame image contains visual positioning feature information.
Here, the descriptors describing the positioning feature information contained in the key frame images serve as matching indexes for extracting, from the respective key frame images of the two databases, the positioning feature information to be matched; the matched first and second positioning feature information in the second and first key frame images are then determined based on the pixel positional relationships, within their respective key frame images, of the multiple positioning features contained in the two key frame images to be matched. For example, the server performs a rough first-pass match using the descriptors (or digests of the descriptors) of the multiple positioning features corresponding to the same key frame image in the two databases, and uses a preset first-pass matching condition to screen out the second key frame image and its second positioning feature information, and the first key frame image and its first positioning feature information, to be matched further; the first-pass matching condition includes, but is not limited to: the proportion of agreement between two descriptors is above a preset ratio, or the number of descriptors in two key frame images meeting an agreement condition is above a preset number, etc. As another example, the server performs a rough first-pass match of the key frame images in the two databases using a similarity condition on frequency histograms, so as to screen out the second key frame images and their second positioning feature information, and the corresponding first key frame images and their first positioning feature information, to be matched further.
服务端基于图像匹配技术将待进一步匹配的第二关键帧图像及其第二定位特征信息,以及第一关键帧图像及其第一定位特征信息进行逐一匹配,其中,所述图像匹配技术包括但不限于匹配第一关键帧图像P1中多个第一定位特征信息所构成的形状与第二关键帧图像P2中多个第二定位特征信息所构成的形状之间的图像特征误差,若所述图像特征误差符合预设的图像特征误差条件,则确定相应两定位特征信息相匹配,反之,则不匹配。其中,所述图像特征误差条件包括但不限于以下至少一种:两个形状的边、角是否符合图像的平移、旋转和尺度不变性等匹配条件;第一定位特征信息的描述子和第二定位特征信息的描述子之间的误差小于预设误差阈值等。
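上述以描述子为索引的第一次粗略匹配可用下述Python片段示意。此处以整数位串模拟二进制描述子,以汉明距离衡量描述子是否相符;距离阈值与比例阈值均为假设值,并非本申请限定的实现:

```python
def hamming(d1, d2):
    """两个二进制描述子(以整数表示)之间的汉明距离。"""
    return bin(d1 ^ d2).count("1")

def coarse_match(frame1_desc, frame2_desc, dist_th=10, ratio_th=0.3):
    """第一次粗略匹配:统计两关键帧图像中相符描述子的比例,
    达到预设比例值即筛选出该对关键帧图像待进一步匹配(示意)。"""
    hits = 0
    for d1 in frame1_desc:
        # 只要第二关键帧图像中存在距离足够小的描述子即视为相符
        if any(hamming(d1, d2) <= dist_th for d2 in frame2_desc):
            hits += 1
    return hits / max(len(frame1_desc), 1) >= ratio_th
```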
在一些更具体示例中,服务端将所匹配的第一定位特征信息和第二定位特征信息各自所对应的第一定位坐标信息与第二定位坐标信息相匹配。在另一些更具体示例中,服务端利用前述各示例中所提及的坐标匹配方式对相匹配的定位特征信息各自所对应的定位坐标信息进行匹配操作,筛选出相匹配的第一定位特征信息及其第一定位坐标信息和第二定位特征信息及其第二定位坐标信息。
在又一些实施例中,当前地图描绘了第一移动设备沿一次导航移动的路线而检测到的物理空间的地理信息;换言之,当前地图是基于第一移动设备在移动期间的位置和姿态而确定的。由此可见,基于第一移动设备的导航移动而构建的当前地图与物理空间的主方向之间具有基于第一移动设备的姿态而产生的偏角差异。
为了在减少计算量的同时提供与物理空间的主方向基本一致的基准地图及其基准定位数据集,所述步骤S121还包括步骤S1213,基于物理空间的主方向而对任一或全部定位数据集进行调整,使得融合后的基准地图及其基准定位数据集与物理空间的主方向基本一致,由此在多设备与用户交互时,便于用户辨识各设备在物理空间中的位置。
在此,第一移动设备所在物理空间通常具有一个或两个主方向。在一些示例中,所述主方向用于描述构成所述物理空间的分隔体的摆放方向,其中,所述分隔体举例包括墙、窗、屏风等。例如,所述第一移动设备在家居室内中导航移动,所述物理空间的主方向包含沿房间的墙体而确定的两个相交方向。在另一些示例中,所述主方向用于描述物理空间内藉由所摆放的分隔体而构建的可供移动的道路方向,其中,所述分隔体举例为沿路设置的标志线、路肩石、货架等。例如,所述第一移动设备在隧道中导航移动,所述物理空间的主方向为沿着基于隧道墙体所构建的道路而确定的单一方向。又如,所述第一移动设备在仓库中导航移动,所述物理空间的主方向为基于仓库货架所构建的沿相交道路而确定的两个方向。
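作为示意,主方向可由地图或关键帧图像中提取的直线线段估计:将各线段的方向角按模90°聚合,并以线段长度加权取峰值。下述Python片段为一种假设性实现,与沿墙体确定两个相交主方向的示例相符(两垂直方向在模90°意义下合并为同一主方向):

```python
import math

def main_direction(segments):
    """从直线线段估计物理空间主方向(示意)。
    segments: [((x1, y1), (x2, y2)), ...] 端点表示的线段。
    返回主方向角(度,0~89,模90°)。"""
    bins = [0.0] * 90
    for (x1, y1), (x2, y2) in segments:
        ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 90.0
        length = math.hypot(x2 - x1, y2 - y1)
        bins[int(ang) % 90] += length      # 按线段长度加权累计
    return bins.index(max(bins))
```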
为此,在一些示例中,所述步骤S1213包括:分析所述当前定位数据集中第二定位特征信息中的第二关键帧图像,确定所述第二关键帧图像所对应的第二坐标信息相对于所述物理空间主方向的第二相对方位关系;以及基于所述第二相对方位关系调整在所述第二关键帧图像中的第二定位特征信息的像素位置;以便通过执行步骤S1212来匹配调整后的第二关键帧图像中的第二定位特征信息与调整后的第一关键帧图像中的第一定位特征信息,以及确定相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息。
在此,第一移动设备在不同次导航移动期间,历次构建的地图均与物理空间的主方向之间具有各自的偏角差异。为便于将具有不同偏角差异的两幅地图融合在一起,在一种示例中,所述基准地图及其基准视觉数据集是基于所述物理空间主方向而构建的。为此,基于当前定位数据集中第二关键帧图像及其对应的第二坐标信息之间的对应关系,来确定该第二坐标信息相对于物理空间主方向的第二相对方位关系,进而基于所确定的第二相对方位关系调整所述第二关键帧图像中的第二定位特征信息的像素位置,再利用调整后的第二定位特征信息和所匹配的第一定位特征信息,执行前述融合操作。如此,可有效减少因所述偏角差异而带来的定位特征错误匹配,以及减少融合计算量。
在一些具体示例中,从所述第二关键帧图像的第二定位特征信息中选取直线线段,根据所识别出的特征线段确定在物理空间中第一移动设备与分隔体之间的第二相对方位关系。在此,公开号为CN109074084A的专利申请中提供一种根据所识别出的特征线段确定在物理空间中第一移动设备与分隔体之间的第二相对方位关系的技术方案,在此全文引用。其中,所述第二关键帧图像对应于所援引文件中机器人移动期间所摄取的图像,所述第一移动设备对应于所援引文件中的机器人,以及所述第二相对方位关系对应于所援引文件中的相对的方位关系,在此不再详述。
在确定所述第二相对方位关系后,基于所确定的第二相对方位关系调整所述第二关键帧图像中的第二定位特征信息的像素位置。在一些更具体示例中,第二关键帧图像中的像素坐标与当前地图所构建的地图坐标之间的对应关系可以是默认的。例如,摄像装置的主光轴与第一移动设备的移动平面基本垂直,可构建与地图坐标系中具有一致角度关系的像素坐标系,由此基于所述第二相对方位关系调整所述第二关键帧图像中的第二定位特征信息的像素位置。在另一些更具体示例中,第二关键帧图像中的像素坐标与当前地图所构建的地图坐标之间的对应关系是基于摄像装置在第一移动设备上所安装的倾斜角度、和相机参数而设置的,所述对应关系可与所述当前定位数据集一并获取,由此基于所述第二相对方位关系调整所述第二关键帧图像中的第二定位特征信息的像素位置。
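当摄像装置的主光轴与移动平面基本垂直时,上述像素位置调整可简化为绕图像参考点的平面旋转。下述Python片段为此情形下的一种假设性示意(将相对方位关系简化为单一偏角):

```python
import math

def adjust_pixels(features, rel_angle_deg, center=(0.0, 0.0)):
    """基于相对方位关系(此处简化为一个偏角)旋转调整关键帧图像中
    各定位特征信息的像素位置(示意)。
    features: [(u, v), ...] 像素坐标;rel_angle_deg: 需补偿的偏角(度)。"""
    a = math.radians(rel_angle_deg)
    cx, cy = center
    out = []
    for (u, v) in features:
        du, dv = u - cx, v - cy
        out.append((cx + du * math.cos(a) - dv * math.sin(a),
                    cy + du * math.sin(a) + dv * math.cos(a)))
    return out
```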
在又一些具体示例中,基于所述第一关键帧图像的第一定位特征信息提取直线线段,例如,利用图像的膨胀算法将多个第一定位特征信息进行连线处理,并基于预设的直度和/或长度特征提取其中的直线线段,再利用前述具体示例确定第一相对方位关系,以及调整所述第一关键帧中的第一定位特征信息的像素位置。
根据实际调整情况,调整后的第二关键帧中的第二定位特征信息与相匹配的第一关键帧中的第一定位特征信息具有相互重叠、相差180°或相差±90°的关系。与直接匹配的方式相比,上述预处理过程有利于简化定位特征信息的匹配算法,减少匹配过程中的多步计算。
在另一些示例中,与前述分析第二关键帧图像并调整第二定位特征信息相似,在一些情况中,所获取的当前地图及其当前定位数据集已在第一移动设备侧被调整,故而本步骤还可以分析所述基准定位数据集中第一定位特征信息中的第一关键帧图像,确定所述第一关键帧图像所对应的第一坐标信息相对于所述物理空间主方向的第一相对方位关系;以及基于所述第一相对方位关系调整在所述第一关键帧图像中的第一定位特征信息的像素位置。
其中,所述确定第一相对方位关系的方式可与确定第二相对方位关系的方式相同或相似,在此不再详述。所述调整在第一关键帧图像中的第一定位特征信息的像素位置的方式可与调整第二定位特征信息的像素位置的方式相同或相似,在此不再详述。
在又一些示例中,本步骤还可以结合前述两示例:分析所述第一定位特征信息中的第一关键帧图像,确定所述第一关键帧图像所对应的第一坐标信息相对于所述物理空间主方向的第一相对方位关系,以及分析所述当前定位数据集中第二定位特征信息中的第二关键帧图像,确定所述第二关键帧图像所对应的第二坐标信息相对于所述物理空间主方向的第二相对方位关系;以及基于所述第一相对方位关系调整在所述第一关键帧图像中的第一定位特征信息的像素位置,以及基于所述第二相对方位关系调整在所述第二关键帧图像中的第二定位特征信息的像素位置。由此得到均基于物理空间主方向而确定的各定位特征在各关键帧图像中的像素位置。
基于调整后的第一定位特征信息和/或第二定位特征信息执行前述步骤S1212,以得到相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息。
基于上述各示例而得到的匹配结果,所述服务端执行步骤S122。
在步骤S122中,基于相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息,融合所述基准地图及其基准定位数据集和所述当前地图及其当前定位数据集。其中,藉由相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息,服务端得到当前地图和基准地图中的位移、角度偏差,以及得到当前定位数据集和基准定位数据集中至少相匹配的定位特征信息的特征偏差;并利用所得到的各偏差信息融合所述基准地图及其基准定位数据集和所述当前地图及其当前定位数据集。
在一些示例中,所述步骤S122包括步骤S1221,基于相匹配的第一定位坐标信息和第二定位坐标信息之间的坐标偏差信息修正基准地图和/或当前地图中的坐标误差。
在此,服务端统计各相匹配的第一定位坐标信息和第二定位坐标信息之间的位移偏差信息和角度偏差信息,以得到平均位移偏差信息和平均角度偏差信息,并根据所得到的平均位移偏差信息和平均角度偏差信息,修正基准地图和/或当前地图中的各坐标信息。
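下述Python片段示意上述平均偏差的统计与修正过程。此处以`(x, y, theta)`表示坐标信息,且为便于说明仅作平移与航向角的加性修正(假设性简化,未考虑旋转对位置分量的耦合):

```python
def average_deviation(pairs):
    """统计各相匹配的第一/第二定位坐标信息之间的平均位移与平均角度偏差(示意)。
    pairs: [((x1, y1, t1), (x2, y2, t2)), ...]"""
    n = len(pairs)
    dx = sum(p1[0] - p2[0] for p1, p2 in pairs) / n
    dy = sum(p1[1] - p2[1] for p1, p2 in pairs) / n
    dt = sum(p1[2] - p2[2] for p1, p2 in pairs) / n
    return dx, dy, dt

def correct_map(coords, dev):
    """利用平均偏差修正地图中的各坐标信息,使当前地图对齐基准地图(示意)。"""
    dx, dy, dt = dev
    return [(x + dx, y + dy, t + dt) for (x, y, t) in coords]
```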
所述步骤S122还包括步骤S1222,基于修正后的至少一个地图执行合并操作,以得到新的基准地图。
在一些具体示例中,服务端将修正后的地图作为新的基准地图。在另一些具体示例中,服务端确定修正后的基准地图与修正前(或后)的当前地图之间的重叠区域,并基于重叠区域更新基准地图,以得到新的基准地图。例如,服务端重叠修正后的基准地图和修正后的当前地图,以确定两地图之间的重叠区域,并根据修正后的当前地图更新修正后的基准地图中未重叠的区域,以得到新的基准地图。
所述步骤S122还包括步骤S1223,将基准定位数据集和当前定位数据集中至少相匹配的第一定位特征信息和第二定位特征信息标记在新的基准地图上,以得到新的定位坐标信息。
在此,服务端根据对地图中各坐标信息的修正操作,将定位特征信息所对应的定位坐标信息修正后标记在新的基准地图中。例如,根据前述提及的平均位移偏差信息和平均角度偏差信息,修正基准定位数据集和/或当前定位数据集中所有定位特征信息所对应的定位坐标,以得到新的定位坐标信息。又如,根据相匹配的第一定位坐标信息和第二定位坐标信息之间的位移偏差信息和角度偏差信息,修正其中的第一定位坐标信息或第二定位坐标信息,以标记在新的基准地图中。
所述步骤S122还包括融合两个定位数据集以得到新的基准定位数据集的步骤,该步骤至少包括以下步骤S1224和/或步骤S1225。
在步骤S1224中,基于相匹配的第一定位特征信息和第二定位特征信息之间的定位特征偏差信息调整基准定位数据集或当前定位数据集。
在一些具体示例中,相匹配的第一定位特征信息和第二定位特征信息为测量定位特征信息,服务端根据相匹配的第一定位特征信息和第二定位特征信息之间的矢量偏差信息,调整基准定位数据集或当前定位数据集中对应的第一定位特征信息或第二定位特征信息,以得到新的基准定位数据集中的新的第一定位特征信息。例如,相匹配的第一定位特征信息和第二定位特征信息均为测量定位特征信息,并由多个首尾相连的位置偏移矢量描述,利用该第一定位特征信息和第二定位特征信息之间的矢量偏差信息(包含位移偏差信息和角度偏差信息),调整对应的第一定位特征信息或第二定位特征信息。
在一些具体示例中,相匹配的第一定位特征信息和第二定位特征信息为视觉定位特征信息,服务端根据相匹配的第一定位特征信息和第二定位特征信息之间的特征偏差信息,调整基准定位数据集或当前定位数据集中对应的第一定位特征信息或第二定位特征信息,以得到新的基准定位数据集中的新的第一定位特征信息。例如,相匹配的第一定位特征信息和第二定位特征信息均为视觉定位特征信息,并由描述子描述,利用该第一定位特征信息和第二定位特征信息之间的特征偏差信息(包含灰度偏差信息和/或亮度偏差信息),调整对应的第一定位特征信息或第二定位特征信息。
在步骤S1225中,将当前定位数据集中未匹配的各第二定位特征信息添加至基准定位数据集中,或者,将基准定位数据集中未匹配的各第一定位特征信息添加至当前定位数据集中。
在此,服务端通过执行任一种添加操作,将两个定位数据集中未匹配的定位特征信息予以补充,以使新的基准数据集能够提供更丰富的定位特征信息。
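上述补充操作可用下述Python片段示意。此处以字典表示定位数据集(键为假设的特征标识),已匹配的特征保留基准数据集中的版本,未匹配的特征予以补充:

```python
def merge_datasets(base, current, matched_ids):
    """将当前定位数据集中未匹配的第二定位特征信息补充进基准定位数据集(示意)。
    base / current: {特征id: 特征数据};matched_ids: 已匹配的特征id集合。"""
    merged = dict(base)
    for fid, feat in current.items():
        if fid not in matched_ids:       # 仅补充未匹配的定位特征信息
            merged[fid] = feat
    return merged
```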
基于上述融合操作,第一移动设备可根据新的基准地图和新的基准定位数据集执行下一次的导航操作。
在又一些实施方式中,所述第一移动设备所提供的当前地图和当前定位数据集并未基于其使用的基准地图及基准定位数据集进行构建,为此,所述更新地图的方法还包括:
调整所述基准地图或当前地图直至调整后两幅地图符合预设的重叠条件的步骤;以便基于调整后的两地图确定所述基准定位数据集和所述当前定位数据集中相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息。
其中,所述重叠条件包括但不限于:调整后的基准地图和当前地图中表示障碍物位置的坐标信息之间的整体或边缘坐标的误差小于预设的坐标误差值,或调整后的基准地图和当前地图所形成的两地图图像数据之间的整体或边缘像素的误差小于预设像素误差值。
在此,所述调整基准地图或当前地图的方式可基于预设的单位角度和单位位移逐步调整,和/或基于统计两地图中相匹配的测量定位特征信息之间的位移和角度差异进行对应的平移和旋转操作。在符合重叠条件后,确定所调整的位移和角度,并对应调整基准定位数据集中的各第一定位坐标信息、关键帧图像的图像坐标信息等,在此基础上,执行前述匹配和融合操作。在此不再详述。

在又一些实施方式中,不同的物理空间内可能有部分的定位特征信息相同,为避免第一移动设备将两个具有部分相同定位特征信息的不同物理空间识别为同一物理空间,在一些示例中,服务端可通过匹配地图的边界信息来确定是否为同一物理空间。
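上述按预设单位角度和单位位移逐步调整直至符合重叠条件的过程,可用下述Python片段示意。此处以障碍物坐标点集表示地图,以最近点距离均值作为重叠误差;单位步长、角度集合与误差阈值均为假设值:

```python
import math

def overlap_error(map_a, map_b):
    """两地图障碍物坐标之间的误差:map_b 各点到 map_a 最近点距离的均值(示意)。"""
    return sum(min(math.hypot(x - ax, y - ay) for ax, ay in map_a)
               for x, y in map_b) / len(map_b)

def align(map_a, map_b, step=0.5, angles=range(0, 360, 90), err_th=0.1):
    """按预设单位角度与单位位移逐步调整 map_b,直至符合重叠条件(示意)。
    返回首个符合条件的 (旋转角, dx, dy),无解时返回 None。"""
    for ang in angles:
        a = math.radians(ang)
        rot = [(x * math.cos(a) - y * math.sin(a),
                x * math.sin(a) + y * math.cos(a)) for x, y in map_b]
        for dx in (i * step for i in range(-4, 5)):
            for dy in (j * step for j in range(-4, 5)):
                moved = [(x + dx, y + dy) for x, y in rot]
                if overlap_error(map_a, moved) < err_th:
                    return ang, dx, dy
    return None
```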
在又一些示例中,所述更新地图的方法还包括检测所述当前地图的完整程度,和/或检测所述当前定位数据集的信息量;基于所得到的检测结果执行所述数据融合处理的操作的步骤。
其中,所述检测当前地图的完整程度的方式包括:基于预设的时长条件,检测绘制当前地图所花费的时长从而确定当前地图的完整度;基于预设的轮廓条件,检测当前地图中轮廓数据从而确定当前地图的完整度;基于当前地图与基准地图的重叠条件,检测当前地图的完整度。
其中,所述检测当前定位数据集的信息量的方式包括:基于预设的总数量条件,检测定位数据集中不同第二定位特征信息的总数量;和/或基于预设的差分总数量条件,检测当前定位数据集和基准定位数据集中未匹配的第二定位特征信息的数量等。
在此,上述各检测方式并非择一而设,可根据实际情况选择其中一个或多个检测方式进行检测,由此减少不必要的融合操作。
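上述对当前定位数据集信息量的检测可用下述Python片段示意。其中总数量阈值与差分总数量阈值均为假设值,满足任一条件即执行数据融合处理:

```python
def should_fuse(n_second_feats, n_unmatched, total_th=200, diff_th=30):
    """基于预设的总数量条件与差分总数量条件,检测当前定位数据集的信息量,
    以决定是否执行数据融合处理(示意)。
    n_second_feats: 当前定位数据集中不同第二定位特征信息的总数量;
    n_unmatched: 当前定位数据集中与基准定位数据集未匹配的特征数量。"""
    return n_second_feats >= total_th or n_unmatched >= diff_th
```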
在一些具体示例中,所述第一移动设备并非每一次都在所述物理空间内进行了完整的导航移动操作,例如在家庭环境中,扫地机器人工作过程中用户需要外出,并令扫地机器人停止当前操作。在这种情况下,需要通过检测当前地图的完整程度,或检测当前定位数据集的信息量,或同时对当前地图的完整程度和当前定位数据集的信息量进行检测,从而确定是否需要将当前地图及其当前定位数据集与基准地图及其基准定位数据集进行融合。
在又一些具体示例中,可以获取第一移动设备执行当前导航移动操作任务所花费的时间,并将其与所述第一移动设备在历史上在执行所述物理空间内的导航移动操作所花费的时间进行比较,从而基于预设的条件确定是否需要进行融合,所述预设的条件可以是第一移动设备执行当前导航移动操作任务所花费的时间与所述第一移动设备在历史上在执行所述物理空间内的导航移动操作任务所花费的时间的比值等。
在另一些具体示例中,可以通过获取第一移动设备执行当前导航移动操作任务时电机的运行数据,并将其与所述第一移动设备在历史上在执行所述物理空间内的导航移动操作时的电机数据进行比较,从而基于预设的条件确定是否需要进行融合,所述预设的条件可以是第一移动设备执行当前导航移动操作任务时电机数据与所述第一移动设备在历史上在执行所述物理空间内的导航移动操作任务时电机数据的比值等。
在再一些具体示例中,可以获取第一移动设备执行当前导航移动操作任务所移动的距离,并将其与所述第一移动设备在历史上在执行所述物理空间内的导航移动操作所移动的距离进行比较,从而基于预设的条件确定是否需要进行融合,所述预设的条件可以是第一移动设备执行当前导航移动操作任务所移动的距离与所述第一移动设备在历史上在执行所述物理空间内的导航移动操作任务所移动的距离的比值等。
在利用上述任一示例完成融合操作后,将数据融合后的地图及其定位数据集作为所述第一移动设备中新的基准地图及其基准定位数据集。在此,融合后的地图及其定位数据集被保存在存储介质中。所述基准地图及其基准定位数据集可以是主动推送至所述第一移动设备,也可以是基于第一移动设备的请求而下载。
在一些示例中,服务端执行所述融合操作后,将新的基准地图及其基准定位数据集发送至位于所述物理空间中的第一移动设备,以便在下一次导航移动操作时第一移动设备使用新的基准地图及其基准定位数据集。
在又一些示例中,服务端还执行将位于所述物理空间内的至少一个配置有摄像装置的第二设备的位置标记在所述基准地图上的步骤。
在此,在所述物理空间内除了可进行导航移动操作的第一移动设备外,还包括布局在所述物理空间内的配置有摄像装置的第二设备。所述第二设备包括前述提及的第二移动设备,和/或固定安装在所述物理空间内并配置有摄像装置的电子设备,例如安防摄像头等。服务端还获取第二设备所摄取的第三关键帧图像,并通过匹配第三关键帧图像中第三定位特征信息与基准定位数据集中的第一关键帧图像中第一定位特征信息,确定第二设备在基准地图中的坐标位置,并将第二设备的位置标记在所述基准地图中,并将标记有第二设备位置的基准地图及其基准定位数据集发送至第一移动设备。
如此,在一些具体示例中,用户可藉由智能终端,基于标记有各设备位置的基准地图与第一移动设备和/或各第二设备进行交互。例如,用户在终端设备上向配置有摄像装置的第二设备发出指令后,所述第二设备可基于基准地图及其基准定位数据集执行用户的指令。第一移动设备基于所使用基准地图与相应的第二设备进行交互。例如,用户直接向第一移动设备的摄像头作出手势指令,所述第一移动设备与第二设备进行通讯,使第二设备基于手势指令与基准地图及其基准定位数据集执行用户的指令等。
综上所述,本申请中所构建的地图为持久化地图,即移动设备在重新开机后的地图与上一次工作的地图在同一坐标系中。如此,用户可在终端设备上对地图进行标注处理,以设置移动设备的工作区域和工作方式。例如,用户在终端设备上在地图中标注需要每天工作多次的区域、禁止进入的区域,或指定某一区域工作。同时,由于同样的物理空间在不同时间段的视觉信息会有较大差异,本申请不只获取一次的工作记录,还持续不断地收集信息并将其进行融合,以丰富定位特征;随着时间的积累,在多次工作后,可使地图在不同的时段和光照条件下为移动设备提供定位。另外,本申请公开的地图构建方式可得到一种更稳定的地图,为用户和设备之间的交互提供便利,同时也节省了计算资源,解决了现有技术中边定位边构图造成的计算资源紧张。本申请的持久化地图在定位成功后可直接使用,如果原先每秒需要创建许多个定位特征,采用本申请的更新地图方法则仅需创建基准地图及其基准定位数据集未涵盖的定位特征。
本申请还提供一种服务端。请参阅图3,其显示为本申请服务端的一实施例结构示意图,如图所示,所述第一移动设备在导航移动操作过程中构建的基准地图及其基准定位数据集及当前地图及其当前定位数据集被储存在服务端中。所述服务端包括但不限于单台服务器、服务器集群、分布式服务器群、云服务端等。在此,根据实际设计,所述服务端由云提供商所提供的云服务端提供。其中,所述云服务端包括公共云(Public Cloud)服务端与私有云(Private Cloud)服务端,其中,所述公共或私有云服务端包括Software-as-a-Service(软件即服务,SaaS)、Platform-as-a-Service(平台即服务,PaaS)及Infrastructure-as-a-Service(基础设施即服务,IaaS)等。所述云服务端例如阿里云计算服务平台、亚马逊(Amazon)云计算服务平台、百度云计算平台、腾讯云计算平台等等。
所述服务端与位于一物理空间中的第一移动设备通信连接。其中,所述物理空间表示为移动设备进行导航移动而提供的物理上的空间,所述物理空间包括但不限于以下任一种:室内/室外空间、道路空间、飞行空间等。例如在一些实施例中,所述移动设备为无人机,则所述物理空间对应为飞行空间;在另一些实施例中,所述移动设备为具有自动驾驶功能的车辆,则所述物理空间对应为无法获得定位的隧道道路或网络信号弱但需要导航的道路空间;在再一些实施例中,所述移动设备为扫地机器人,则所述物理空间对应为室内或室外的空间。所述移动设备配置有摄像装置、移动传感装置等为自主移动提供导航数据的感测装置;其包括第一移动设备和/或至少一个第二移动设备,其中,所述第一移动设备和第二移动设备可以为同类设备或不同类设备。例如,在仓储空间中,第一移动设备和第二移动设备均为具有自主导航能力的搬运车。又如,在室内空间中,第一移动设备为清洁机器人,第二移动设备为家庭陪伴机器人。再如,在隧道空间中,第一移动设备和第二移动设备均为车载终端。所述第一移动设备或第二移动设备还可以是巡逻机器人等。
请继续参阅图3,所述服务端包括接口装置11、存储装置12、以及处理装置13。其中,存储装置12包含非易失性存储器、存储服务器等。其中,所述非易失性存储器举例为固态硬盘或U盘等。所述存储服务器用于存储所获取的各地图及其定位数据集等相关信息。接口装置11包括网络接口、数据线接口等。其中所述网络接口包括但不限于:以太网的网络接口装置、基于移动网络(3G、4G、5G等)的网络接口装置、基于近距离通信(WiFi、蓝牙等)的网络接口装置等。所述数据线接口包括但不限于:USB接口、RS232等。所述接口装置与第一移动设备、第三方系统、互联网等数据连接。处理装置13连接接口装置11和存储装置12,其包含:CPU或集成有CPU的芯片、可编程逻辑器件(FPGA)和多核处理器中的至少一种。处理装置13还包括内存、寄存器等用于临时存储数据的存储器。
所述接口装置11用于与位于一物理空间中的第一移动设备进行数据通信。在此,所述接口装置11举例为以太网的网络接口装置、基于移动网络(3G、4G、5G等)的网络接口装置、基于近距离通信(WiFi、蓝牙等)的网络接口装置等,由此与第一移动设备通信连接。
所述存储装置12用以存储用于提供给所述第一移动设备的基准地图及其基准定位数据集,存储来自所述第一移动设备在所述物理空间内执行导航移动操作所构建的当前地图及其当前定位数据集,以及存储至少一个程序。在此,所述存储装置12举例包括设置在服务端的硬盘并储存有所述至少一种程序。服务端将基准地图及其基准定位数据集存储在该存储装置12中,当需要调用基准地图及其基准定位数据集时,所述存储装置12将基准地图及其基准定位数据集提供给接口装置11,同时,所述存储装置12存储来自接口装置11的当前地图及其当前定位数据集。当需要融合基准地图及其基准定位数据集与当前地图及其当前定位数据集时,所述存储装置12将基准地图及其基准定位数据集与当前地图及其当前定位数据集提供给处理装置13。
所述处理装置13用于调用所述至少一个程序以协调所述接口装置和存储装置执行前述任一示例所提及的更新地图的方法。其中,所述更新地图的方法如图1及所对应的描述所示,在此不再重述。
在一些情况下,更新地图的步骤也可以由移动机器人完成。在此,提供一种移动机器人,请参阅图4,其显示为一种移动机器人的模块结构实施例示意图。如图所示,所述移动机器人2包括存储装置24、移动装置23、定位感应装置21和处理装置22。
所述存储装置用于存储描述物理空间的基准地图及其基准定位数据集,在所述物理空间内执行导航移动操作所构建的当前地图和当前定位数据集,以及至少一个程序;所述移动装置用于基于所述基准地图而确定的导航路线执行移动操作;所述定位感应装置用于在执行导航移动操作期间收集第二定位特征信息,以构成当前定位数据集;所述处理装置与所述存储装置、摄像装置和移动装置相连,用于调用并执行所述至少一个程序,以协调所述存储装置、摄像装置和移动装置。其中,所述定位感应装置包括但不限于以下至少一种:摄像装置、红外测距装置、激光测距装置、角度传感器、位移传感器、计数器等。其中,激光测距传感器、红外测距传感器等测量感测装置配置在移动机器人体侧。角度传感器、位移传感器、计数器等测量感应装置设置在移动机器人的移动控制系统(如驱动电机、滚轮等)上。2D摄像装置、3D摄像装置等视觉感应装置设置在移动机器人的体侧或顶部。
例如,移动机器人在一物理空间导航移动操作时,所述处理装置22基于基准地图及其基准定位数据集为移动装置23进行导航,并通过所述定位感应装置21中的摄像装置摄取关键帧图像并提供给所述处理装置22,所述处理装置22基于所述摄像装置提供的关键帧图像构建当前地图和当前定位数据集并提供给存储装置24存储。移动机器人在充电或系统资源占用率较低的时候,处理装置22从存储装置24中读取当前地图及其当前定位数据集,并启动对基准地图及其基准定位数据集的构建。
在此,所存储的基准地图及其基准定位数据集是基于所述移动机器人自身和/或至少一个第二移动设备在所述物理空间内分别执行至少一次导航移动操作而构建的,即所述基准地图及其基准定位数据集是至少一个移动设备在同一物理空间中进行多次导航移动而各自构建的地图及其定位数据集融合后得到的。所述基准地图及其基准定位数据集构成前述地图数据。其中,所述移动设备配置有摄像装置、移动传感装置等为自主移动提供导航数据的感测装置;其包括移动机器人和/或至少一个第二移动设备,其中,所述移动机器人和第二移动设备可以为同类设备或不同类设备。例如,在仓储空间中,移动机器人和第二移动设备均为具有自主导航能力的搬运车。又如,在室内空间中,移动机器人为清洁机器人,第二移动设备为家庭陪伴机器人。再如,在隧道空间中,移动机器人和第二移动设备均为车载终端。所述移动机器人或第二移动设备还可以是巡逻机器人等。
在一个示例性的实施例中,所述移动机器人还包括接口装置,用于与至少一个第二移动设备进行数据通信;在一些实施例中,所述处理装置还执行获取所述第二移动设备所提供的第三地图和第三定位数据集的操作,以便将所述基准地图及其基准定位数据集、所述第二地图及其第二定位数据集、和所述第三地图及其第三定位数据集进行数据融合处理。例如,在一物理空间中包括移动机器人和一第二移动设备,所述移动机器人在导航移动操作过程中构建了当前地图及其当前定位数据集,所述第二移动设备在导航移动操作过程中构建了第三地图及其第三定位数据集,所述移动机器人通过接口装置接收来自所述第二移动设备的数据,并将基准地图及其基准定位数据集、所述当前地图及其当前定位数据集、和所述第三地图及其第三定位数据集进行数据融合处理。在一些情况下,物理空间中除所述移动机器人外还包含多个第二移动设备,因此在多个第二移动设备进行导航移动操作的过程中会构建多个第三地图及其第三定位数据集,所述移动机器人通过接口装置接收所述多个第三地图及其第三定位数据集,并将多个第三地图及其第三定位数据集与基准地图及其基准定位数据集和第二地图及其第二定位数据集进行数据融合处理。
在一个示例性的实施例中,请参阅图5,其显示为本申请中移动机器人在一工作中的流程实施例示意图。如图所示,在步骤S210中,所述机器人将所述基准地图及其基准定位数据集和所述当前地图及其当前定位数据集进行数据融合处理。
应当理解,所述融合指将不同次构建的地图和定位数据集进行整合。其中,对地图的整合包括以下任一种:将不同次构建的当前地图中的各坐标信息整合成统一的基准地图中的各坐标信息;或者将当前地图中的各坐标信息整合到基准地图中。在一些更具体示例中,差分处理当前地图与基准地图,得到当前地图中的差分坐标信息,并将其整合到基准地图中。对地图的整合还包括将基准地图中近期未包含的地理位置予以去除,例如去除被判定为曾经临时放置的障碍物的地理位置的坐标信息等。
对定位数据集的整合包括以下任一种:将不同次构建的当前定位数据集中的第二定位特征信息整合成统一的基准定位数据集中的各第一定位特征信息;或者将当前定位数据集中的第二定位特征信息整合到基准定位数据集中。在一些更具体示例中,差分处理当前定位数据集和基准定位数据集得到差分定位特征信息,并基于差分定位特征信息整合两个定位数据集。对定位数据集的整合还包括将基准定位数据集中近期未包含的第一定位特征信息等予以去除,例如去除被判定为反映曾经临时放置的障碍物的第一定位特征信息等。从而使融合后的地图和定位数据集一共集成了移动机器人和/或第二移动设备在历史上进行导航移动操作时所采集到的所有地图数据。
例如,移动机器人在白天自然光光照下进行了第一次导航移动操作Do1,移动机器人所构建的当前地图P1和当前定位特征数据集D1均为自然光光照下呈现的状态并将其作为基准地图及其基准定位数据集;而在晚上时,在室内灯光的照射下,光照亮度和光照角度都发生了改变,移动机器人基于第二次导航移动操作Do2所构建的当前地图P2和当前定位特征数据集D2的状态发生了改变。在此,将移动机器人在晚上构建的当前地图P2及其当前定位数据集D2与白天构建的基准地图P1及其基准定位数据集D1进行融合,由此融合后的基准地图及其基准定位数据集就同时包含了该物理空间白天和晚上的场景下所构建的地图和定位数据集。
又如,所述移动机器人和/或至少一个第二移动设备已在一物理空间内进行了多次导航移动操作,其基准地图及其基准定位数据集已融合了多次导航移动操作中构建的地图和定位数据集。在移动机器人和/或至少一个第二移动设备新的一次导航移动操作后,将其构建的当前地图及其当前定位数据集与历史构建的基准地图及其基准定位数据集融合,由此通过不断迭代而更新所述基准地图及其基准定位数据集。
在此,所述移动机器人执行步骤S210的过程与前述示例中第一设备执行步骤S120的过程相同或相似,在此不再详述。
在利用上述任一示例完成融合操作后,所述移动机器人执行步骤S220,将数据融合后的地图及其定位数据集作为所述移动机器人中新的基准地图及其基准定位数据集,并予以存储。
在一些示例中,移动机器人执行所述将数据融合后的地图及其定位数据集作为所述移动机器人中新的基准地图及其基准定位数据集并存储在所述存储装置中的步骤后,还将新的基准地图及其基准定位数据集发送至位于所述物理空间中的第二移动设备。在此,移动机器人的处理装置在融合了基准地图及其基准定位数据集与当前地图及其当前定位数据集后,将融合后的新的基准地图及其基准定位数据集发送到第二移动设备中,以便在下一次导航移动操作时使用新的基准地图及其基准定位数据集。
在又一些示例中,处理装置还执行将位于所述物理空间内的至少一个配置有摄像装置的第二设备的位置标记在所述地图上的步骤。
在此,在所述物理空间内除了可进行导航移动操作的移动机器人外,还包括布局在所述物理空间内的配置有摄像装置的第二设备。所述第二设备包括前述提及的第二移动设备,和/或固定安装在所述物理空间内并配置有摄像装置的电子设备,例如安防摄像头等。移动机器人还获取第二设备所摄取的第三关键帧图像,并通过匹配第三关键帧图像中第三定位特征信息与基准定位数据集中的第一关键帧图像中第一定位特征信息,确定第二设备在基准地图中的坐标位置,并将第二设备的位置标记在所述基准地图中,并将标记有第二设备位置的基准地图及其基准定位数据集发送至移动机器人的存储装置。
如此,在一些具体示例中,用户可藉由智能终端,基于标记有各设备位置的基准地图与移动机器人和/或各第二设备进行交互。例如,用户在终端设备上向配置有摄像装置的第二设备发出指令后,所述第二设备可基于基准地图及其基准定位数据集执行用户的指令。移动机器人基于所使用基准地图与相应的第二设备进行交互。例如,用户直接向移动机器人的摄像头作出手势指令,所述移动机器人与第二设备进行通讯,使第二设备基于手势指令与基准地图及其基准定位数据集执行用户的指令等。
综上所述,本申请中移动机器人所构建的地图为持久化地图,即移动机器人在重新开机后的地图与上一次工作的地图在同一坐标系中。如此,用户可在终端设备上对地图进行标注处理,以设置移动设备的工作区域和工作方式。例如,用户在终端设备上在地图中标注需要每天工作多次的区域、禁止进入的区域,或指定某一区域工作。同时,由于同样的物理空间在不同时间段的视觉信息会有较大差异,本申请中的移动机器人不只获取一次的工作记录,还持续不断地收集信息并将其进行融合,以丰富定位特征;随着时间的积累,在多次工作后,可使地图在不同的时段和光照条件下为移动设备提供定位。另外,本申请公开的移动机器人可得到一种更稳定的地图,为用户和设备之间的交互提供便利,同时也节省了计算资源,解决了现有技术中边定位边构图造成的计算资源紧张。本申请中移动机器人的持久化地图在定位成功后可直接使用,如果原先每秒需要创建许多个定位特征,采用本申请的移动机器人则仅需创建基准地图及其基准定位数据集未涵盖的定位特征。
依据本申请所提及的技术方案,本申请还提供一种移动机器人,请参阅图6,其显示为本申请中移动机器人的另一实施例示意图。如图所示,所述移动机器人3包括:接口装置35,用于与一服务端进行数据通信;存储装置34,用于存储用于在一物理空间内导航移动操作期间提供导航服务的基准地图及其基准定位数据集,存储在执行所述导航移动操作期间所构建 的当前地图及其当前定位数据集,以及存储至少一个程序;处理装置32,与所述存储装置和接口装置连接,用于调用并执行所述至少一个程序,以协调所述存储装置和接口装置执行如下方法:将所述当前地图及其当前定位数据集发送至所述服务端;以及获取所述服务端返回的新的基准地图及其基准定位数据集,并更新所存储的基准地图及其基准定位数据集;其中,所获取的新的基准地图及其基准定位数据集是所述服务端将更新前的基准地图及其基准定位数据集和所述当前地图及其当前定位数据集进行数据融合后得到的。在此,所述移动机器人3通过调用其存储装置34中的基准地图及其基准定位数据集完成导航移动操作,并在该导航移动操作过程中构建当前地图及其当前定位数据集。所述移动机器人将其在导航移动操作过程中构建的当前地图及其当前定位数据集存储在所述存储装置34中。所述移动机器人3的处理装置32调用所述存储装置中的当前地图及其当前定位数据集,通过接口装置35将所述当前地图及其当前定位数据集发送给服务端。在服务端侧完成基准地图及其基准定位数据集与所述当前地图及其当前定位数据集的融合步骤后,形成新的基准地图及其基准定位数据集。所述服务端将新的基准地图及其基准定位数据集发送给所述移动机器人3的接口装置35,并通过处理装置32将所述新的基准地图及其基准定位数据集存储在存储装置34中。
其中,所述服务端藉由数据融合更新基准地图及其基准定位数据集的方式与前述更新地图的方法示例相同或相似,在此不再详述。
在一个示例性的实施例中,物理空间中包括移动机器人和至少一个第二移动设备,所述至少一个第二移动设备在所述物理空间的导航移动操作过程中构建第三地图及其第三定位数据集。在此,所述新的基准地图及其基准定位数据集还融合有至少一个第二移动设备所提供的第三地图及其第三定位数据集。
本申请中公开的移动机器人可配合服务端共同构建持久化地图,该持久化地图可使移动机器人在重新开机后的地图与上一次工作的地图在同一坐标系中。如此,用户可在终端设备上对地图进行标注处理,以设置移动设备的工作区域和工作方式。例如,用户在终端设备上在地图中标注需要每天工作多次的区域、禁止进入的区域,或指定某一区域工作。同时,由于同样的物理空间在不同时间段的视觉信息会有较大差异,本申请中的移动机器人不只获取一次的工作记录,还持续不断地收集信息并将其进行融合,以丰富定位特征;随着时间的积累,在多次工作后,可使地图在不同的时段和光照条件下为移动设备提供定位。另外,本申请公开的移动机器人可得到一种更稳定的地图,为用户和设备之间的交互提供便利,同时也节省了计算资源,解决了现有技术中边定位边构图造成的计算资源紧张。本申请中移动机器人的持久化地图在定位成功后可直接使用,如果原先每秒需要创建许多个定位特征,采用本申请的移动机器人则仅需创建基准地图及其基准定位数据集未涵盖的定位特征。
上述实施例仅例示性说明本申请的原理及其功效,而非用于限制本申请。任何熟悉此技术的人士皆可在不违背本申请的精神及范畴下,对上述实施例进行修饰或改变。因此,举凡所属技术领域中具有通常知识者在未脱离本申请所揭示的精神与技术思想下所完成的一切等效修饰或改变,仍应由本申请的权利要求所涵盖。

Claims (21)

  1. 一种更新地图的方法,其特征在于,包括:
    获取第一移动设备在一物理空间内执行导航移动操作所构建的当前地图及其当前定位数据集;其中,所述第一移动设备是利用预先存储的对应所述物理空间的基准地图及其基准定位数据集进行导航移动的;
    将所述基准地图及其基准定位数据集和所述当前地图及其当前定位数据集进行数据融合处理;
    将数据融合后的地图及其定位数据集作为所述第一移动设备中新的基准地图及其基准定位数据集。
  2. 根据权利要求1所述的更新地图的方法,其特征在于,所述基准地图及其基准定位数据集是基于所述第一移动设备和/或至少一个第二移动设备在所述物理空间内分别执行至少一次导航移动操作而构建的。
  3. 根据权利要求1所述的更新地图的方法,其特征在于,所述将基准地图及其基准定位数据集和所述当前地图及其当前定位数据集进行数据融合处理的步骤包括:
    确定所述基准定位数据集中的第一定位特征信息及其第一定位坐标信息、和所述当前定位数据集中的第二定位特征信息及其第二定位坐标信息相匹配;
    基于相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息,融合所述基准地图及其基准定位数据集和所述当前地图及其当前定位数据集。
  4. 根据权利要求3所述的更新地图的方法,其特征在于,所述确定基准定位数据集和所述当前定位数据集中相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息的步骤包括:
    匹配所述基准定位数据集中的各第一定位特征信息和所述当前定位数据集中的各第二定位特征信息;
    基于所得到的匹配结果确定相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息。
  5. 根据权利要求4所述的更新地图的方法,其特征在于,所述基于所得到的匹配结果确定相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息的步骤包括:
    基于相匹配的第一定位特征信息和第二定位特征信息,将各自对应的第一定位坐标信息和第二定位坐标信息进行匹配,以得到相匹配的第一定位坐标信息和第二定位坐标信息。
  6. 根据权利要求4所述的更新地图的方法,其特征在于,所述第一定位特征信息包含基于基准地图中的空间特征而确定的第一测量定位特征信息,以及所述第二定位特征信息包含基于当前地图中的空间特征而确定的第二测量定位特征信息;和/或
    所述第一定位特征信息包含从基准定位数据集中的第一关键帧图像中提取的第一视觉定位特征信息;所述第二定位特征信息包含从当前定位数据集中的第二关键帧图像中提取的第二视觉定位特征信息。
  7. 根据权利要求6所述的更新地图的方法,其特征在于,所述第一测量定位特征信息包括以下至少一种:基于基准地图中空间特征的坐标信息组合而确定的测量数据,根据用于描述基准地图中空间特征的深度信息组合而确定的测量数据;以及
    所述第二测量定位特征信息包括以下至少一种:基于当前地图中对应空间特征的坐标信息组合而确定的测量数据,根据用于描述当前地图中空间特征的深度信息组合而确定的测量数据。
  8. 根据权利要求4所述的更新地图的方法,其特征在于,所述匹配基准定位数据集中的各第一定位特征信息和所述当前定位数据集中的各第二定位特征信息的步骤包括:
    将所述当前定位数据集中各第二关键帧图像中的第二定位特征信息与基准定位数据集中各第一关键帧图像中的第一定位特征信息进行匹配处理,以确定所述第一关键帧图像与第二关键帧图像中相匹配的第一定位特征信息和第二定位特征信息。
  9. 根据权利要求8所述的更新地图的方法,其特征在于,还包括:
    分析所述基准定位数据集中的第一关键帧图像,确定所述第一关键帧图像所对应的第一图像坐标信息相对于所述物理空间主方向的第一相对方位关系;以及基于所述第一相对方位关系调整在所述第一关键帧图像中的第一定位特征信息的像素位置;和/或
    分析所述当前定位数据集中的第二关键帧图像,确定所述第二关键帧图像所对应的第二图像坐标信息相对于所述物理空间主方向的第二相对方位关系;以及基于所述第二相对方位关系调整在所述第二关键帧图像中的第二定位特征信息的像素位置;
    以便匹配调整后的第二关键帧图像中的第二定位特征信息与调整后的第一关键帧图像中的第一定位特征信息。
  10. 根据权利要求3所述的更新地图的方法,其特征在于,还包括:调整所述基准地图或当前地图直至调整后两幅地图符合预设的重叠条件的步骤;以便基于调整后的两地图确定所述基准定位数据集和所述当前定位数据集中相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息。
  11. 根据权利要求3所述的更新地图的方法,其特征在于,所述基于相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息,融合所述基准地图及其基准定位数据集和所述当前地图及其当前定位数据集的步骤包括:
    基于相匹配的第一定位坐标信息和第二定位坐标信息之间的坐标偏差信息修正基准地图和/或当前地图中的坐标误差;
    基于修正后的至少一个地图执行合并操作,以得到新的基准地图;以及
    将基准定位数据集和当前定位数据集中至少相匹配的第一定位特征信息和第二定位特征信息标记在新的基准地图上,以得到新的定位坐标信息。
  12. 根据权利要求3或11所述的更新地图的方法,其特征在于,所述基于相匹配的第一定位特征信息及其第一定位坐标信息、和第二定位特征信息及其第二定位坐标信息,融合所述基准地图及其基准定位数据集和所述当前地图及其当前定位数据集的步骤包括以下至少一个步骤,以得到新的基准定位数据集:
    基于相匹配的第一定位特征信息和第二定位特征信息之间的定位特征偏差信息调整基准定位数据集或当前定位数据集;
    将当前定位数据集中未匹配的各第二定位特征信息添加至基准定位数据集中,或者,将基准定位数据集中未匹配的各第一定位特征信息添加至当前定位数据集中。
  13. 根据权利要求1所述的更新地图的方法,其特征在于,还包括以下步骤:
    检测所述当前地图的完整程度,和/或检测所述当前定位数据集的信息量;
    基于所得到的检测结果执行所述数据融合处理的操作。
  14. 根据权利要求1所述的更新地图的方法,其特征在于,还包括将新的基准地图及其基准定位数据集发送至位于所述物理空间中的第一移动设备的步骤。
  15. 根据权利要求1所述的更新地图的方法,其特征在于,还包括将位于所述物理空间内的至少一个配置有摄像装置的第二设备的位置标记在所述基准地图上的步骤。
  16. 一种服务端,其特征在于,包括:
    接口装置,用于与位于一物理空间中的第一移动设备进行数据通信;
    存储装置,用于存储用于提供给所述第一移动设备的基准地图及其基准定位数据集,存储来自所述第一移动设备在所述物理空间内执行导航移动操作所构建的当前地图及其当前定位数据集,以及存储至少一个程序;
    处理装置,与所述存储装置和接口装置连接,用于调用并执行所述至少一个程序,以协调所述存储装置和接口装置执行如权利要求1-15中任一所述的方法。
  17. 一种移动机器人,其特征在于,包括:
    存储装置,用于存储一基准地图及其基准定位数据集,当前地图和当前定位数据集,以及至少一个程序;其中,所述当前地图和当前定位数据集为所述移动机器人执行一次导航移动操作所构建的;所述基准地图及其基准定位数据集为所述移动机器人执行所述导航移动操作所使用的;
    移动装置,用于基于所述基准地图而确定的导航路线执行移动操作;
    定位感应装置,用于在执行导航移动操作期间收集第二定位特征信息,以构成当前定位数据集;
    处理装置,与所述存储装置、摄像装置和移动装置相连,用于调用并执行所述至少一个程序,以协调所述存储装置、摄像装置和移动装置执行如权利要求1、或3-15中任一所述的更新地图的方法。
  18. 根据权利要求17所述的移动机器人,其特征在于,所存储的基准地图及其基准定位数据集是基于所述移动机器人自身和/或至少一个第二移动设备在同一物理空间内分别执行至少一次导航移动操作而构建的。
  19. 根据权利要求18所述的移动机器人,其特征在于,还包括接口装置,用于与至少一个第二移动设备进行数据通信;所述处理装置还执行获取所述第二移动设备所提供的第三地图和第三定位数据集的操作,以便将所述基准地图及其基准定位数据集、所述第二地图及其第二定位数据集、和所述第三地图及其第三定位数据集进行数据融合处理。
  20. 一种移动机器人,其特征在于,包括:
    接口装置,用于与一服务端进行数据通信;
    存储装置,用于存储用于在一物理空间内导航移动操作期间提供导航服务的基准地图及其基准定位数据集,存储在执行所述导航移动操作期间所构建的当前地图及其当前定位数据集,以及存储至少一个程序;
    处理装置,与所述存储装置和接口装置连接,用于调用并执行所述至少一个程序,以协调所述存储装置和接口装置执行如下步骤:
    将所述当前地图及其当前定位数据集发送至所述服务端;以及
    获取所述服务端返回的新的基准地图及其基准定位数据集,并更新所存储的基准地图及其基准定位数据集;其中,所获取的新的基准地图及其基准定位数据集是所述服务端将更新前的基准地图及其基准定位数据集和所述当前地图及其当前定位数据集进行数据融合后得到的。
  21. 根据权利要求20所述的移动机器人,其特征在于,所述新的基准地图及其基准定位数据集还融合有至少一个第二移动设备所提供的第三地图及其第三定位数据集。
PCT/CN2019/086281 2019-05-09 2019-05-09 更新地图的方法及移动机器人 WO2020223974A1 (zh)
