WO2022246812A1 - Positioning method and apparatus, electronic device, and storage medium - Google Patents

Positioning method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022246812A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
view
pose
robot
positioning
Prior art date
Application number
PCT/CN2021/096828
Other languages
French (fr)
Chinese (zh)
Inventor
宋乐
郭鑫
李国林
谭浩轩
王世魏
陈侃
霍峰
秦宝星
程昊天
Original Assignee
上海高仙自动化科技发展有限公司
Priority date
Filing date
Publication date
Application filed by 上海高仙自动化科技发展有限公司
Priority to PCT/CN2021/096828
Publication of WO2022246812A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • the present application relates to the technical field of intelligent robots, for example, to a positioning method, device, electronic equipment and storage medium.
  • service robots in daily life include mobile robots and fixed-position robots.
  • a prerequisite for the mobile robot to realize its required functions is that it accurately knows its own position, that is, accurately locates itself in the environment, so as to complete the instructions issued by the user.
  • the present application provides a positioning method, device, electronic equipment and storage medium, which effectively improves the positioning accuracy of the electronic equipment.
  • An embodiment of the present application provides a positioning method, which is applied to an electronic device, where the electronic device includes at least one top-view sensor, and the method includes:
  • acquiring sensor data collected by the at least one top-view sensor; processing the sensor data; and locating the electronic device based on the processed sensor data.
  • the electronic device includes at least two top-view sensors, and the sensor data includes top-view environment data collected by the at least two top-view sensors;
  • the electronic device includes one top-view sensor and at least one head-up sensor, and the sensor data includes head-up environment data collected by the at least one head-up sensor and top-view environment data collected by the one top-view sensor;
  • the electronic device includes at least two top-view sensors and at least one head-up sensor, and the sensor data includes head-up environment data collected by the at least one head-up sensor and top-view environment data collected by the at least two top-view sensors;
  • Positioning the electronic device according to the processed sensor data includes: generating a head-up grid map and a top-view grid map according to the processed sensor data; and positioning the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map.
  • the determining the feature point cloud based on the depth data includes: extracting at least one edge point from the infrared data, and converting the at least one edge point into a feature point cloud according to the depth data.
  • determining the feature point cloud based on the depth data includes: extracting the outer contour points of at least one plane from the depth data and forming the feature point cloud from the outer contour points of the at least one plane.
  • a processing module configured to process the sensor data
  • a positioning module configured to locate the electronic device according to the processed sensor data.
  • the embodiment of the present application also provides a positioning device, which is configured in an electronic device, and the device includes:
  • the data collection module is configured to collect the current pose of the robot and depth data in at least one preset direction
  • the electronic device further includes: at least two top-view sensors; there is a common-view area in the fields of view of the at least two top-view sensors, and the at least two top-view sensors are configured to collect the top-view environment data.
  • the electronic device further includes: a top-view sensor and at least one head-up sensor; the at least one head-up sensor is configured to collect head-up environment data; the one top-view sensor is configured to collect top-view environment data.
  • the electronic device further includes: at least two top-view sensors and at least one head-up sensor; the at least one head-up sensor is configured to collect head-up environment data; the at least two top-view sensors are configured to collect top-view environment data.
  • FIG. 5 is a schematic flowchart of another positioning method provided in the embodiment of the present application.
  • FIG. 7 is a schematic diagram of the installation of a dual top-view TOF camera provided in the embodiment of the present application.
  • FIG. 15 is a schematic flowchart of another positioning method provided by the embodiment of the present application.
  • FIG. 16 is a schematic flowchart of another positioning method provided in the embodiment of the present application.
  • FIG. 17 is a schematic flowchart of another multi-layer grid map positioning method provided by the embodiment of the present application.
  • FIG. 18 is a schematic flowchart of another positioning method provided by the embodiment of the present application.
  • FIG. 21 is an example diagram of a preset global grid map provided by an embodiment of the present application.
  • FIG. 23 is an example diagram of a coordinate transformation provided by the embodiment of the present application.
  • FIG. 25 is an example diagram of a positioning method provided by an embodiment of the present application.
  • FIG. 26 is a schematic flowchart of another positioning method provided by the embodiment of the present application.
  • FIG. 27 is a schematic flowchart of another positioning method provided by the embodiment of the present application.
  • FIG. 28 is a schematic flowchart of another positioning method provided by the embodiment of the present application.
  • FIG. 29 is an example diagram of another positioning method provided by the embodiment of the present application.
  • FIG. 31 is a schematic structural diagram of another positioning device provided by the embodiment of the present application.
  • FIG. 32 is a schematic structural diagram of another positioning device provided by the embodiment of the present application.
  • Fig. 33 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 34 is a schematic structural diagram of a storage medium provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a positioning method provided in an embodiment of the present application.
  • the method is applicable to locating an electronic device and can be executed by a positioning device, where the device can be implemented in software and/or hardware and is generally integrated on the electronic device.
  • the electronic device includes at least one top-view sensor.
  • a top-view sensor can be thought of as a sensor mounted on top of an electronic device and facing upward.
  • the data collected by the top-view sensor during the mapping phase is used to build a global map.
  • the data collected by the top-view sensor in the positioning phase is used in combination with the global map for positioning.
  • the electronic device in this application can perform indoor positioning, and can perform outdoor positioning when the outdoor top-view environment is not uniform.
  • the electronic device in this application may be a mobile electronic device. Exemplarily, the electronic device is a robot.
  • a reference coordinate system, also called a base coordinate system, is a reference standard for describing the positions or angles of points, lines, planes, and other coordinate systems.
  • the camera extrinsics are used to represent the pose of the camera in three-dimensional space relative to the reference coordinate system.
  • Pose refers to position and attitude.
  • FIG. 2 is a schematic diagram of attitude definition in robotics in the related art. Referring to FIG. 2 , the position includes x, y, z, and the attitude includes yaw, pitch, and roll.
  • Coordinate transformation in this application is the process of transforming from one coordinate system to another.
  • Statistical filter: for each point, calculate the average distance from the point to its k nearest neighbors, where k is a positive integer; the average distances of all points in the point cloud should form a Gaussian distribution. Calculate the mean and variance of these distances, and remove noise points through the 3σ principle.
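  • As an illustration of this statistical filter, the following is a minimal sketch assuming a NumPy array of 3D points and SciPy's k-d tree; the parameter defaults are placeholders, not values from the patent:

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_filter(points: np.ndarray, k: int = 10, n_sigma: float = 3.0) -> np.ndarray:
    """Keep points whose mean distance to their k nearest neighbors lies within
    n_sigma standard deviations of the global mean (the 3-sigma principle)."""
    tree = cKDTree(points)
    # query k + 1 neighbors because each point's nearest neighbor is itself
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    mu, sigma = mean_knn.mean(), mean_knn.std()
    return points[mean_knn <= mu + n_sigma * sigma]
```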
  • Nonlinear optimization method: given an objective function f(x), find the values of x that maximize or minimize f(x); when f(x) is a nonlinear function, the solution method is called a nonlinear optimization method. In this application the dimension of x is 3, i.e. f(x1, y1, θ); that is, the objective function is a function of x1, y1 and θ.
  • x1 and y1 can represent relative distances involving the top-view sensors, such as the distance between the top-view sensors, or the distance between a top-view sensor and the roof of the building.
  • x1 and y1 may be values in the coordinate system where the top-view sensor is located.
  • θ can be considered as the angle between the top-view sensor and the vertical direction.
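  • The patent does not give the exact form of f(x1, y1, θ); purely as an illustration, the sketch below minimizes a hypothetical point-alignment error over (x1, y1, θ) with a general-purpose nonlinear optimizer:

```python
import numpy as np
from scipy.optimize import minimize

def rot(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def make_objective(pts_ref, pts_src):
    """Hypothetical objective f(x1, y1, theta): mean squared distance between
    corresponding points after applying a 2D rigid transform to pts_src."""
    def f(params):
        x1, y1, theta = params
        moved = pts_src @ rot(theta).T + np.array([x1, y1])
        return np.mean(np.sum((moved - pts_ref) ** 2, axis=1))
    return f

# toy data with a known transform (x1, y1, theta) = (0.5, 0.2, 0.1)
rng = np.random.default_rng(0)
pts_ref = rng.random((100, 2))
pts_src = (pts_ref - np.array([0.5, 0.2])) @ rot(-0.1).T
result = minimize(make_objective(pts_ref, pts_src), x0=np.zeros(3), method="Nelder-Mead")
x1_opt, y1_opt, theta_opt = result.x  # should approach (0.5, 0.2, 0.1)
```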
  • the TOF camera continuously sends light pulses to the target, then uses the sensor to receive the light returned from the target, and obtains the distance of the target by detecting the time-of-flight of the light pulse.
  • a target may be considered as an object on top of an electronic device, such as a roof in a house.
  • sensors commonly used by electronic devices for mapping and positioning include cameras and lidar.
  • the mapping and positioning method based on lidar has high precision and strong anti-interference capability, and is a relatively mature and widely used technology; however, the cost of a single laser unit is high, so it faces a cost problem in widespread adoption.
  • the use of cameras for positioning has the advantages of low hardware cost and rich information. Cameras mainly include monocular cameras, binocular cameras, depth cameras, infrared cameras and other types. Using camera mapping and positioning generally requires the fusion of multiple sensor data such as wheel odometers or inertial measurement units (Inertial Measurement Unit, IMU).
  • the binocular camera needs to restore the depth information in the scene through the parallax calculation of the two cameras, the depth camera directly obtains the depth in the scene through TOF, structured light and other technologies, and the infrared camera obtains the infrared image in the scene.
  • the depth camera is used to restore points in two-dimensional image space to a point cloud in three-dimensional space; that is, the pixels of the visual data can be converted into point cloud data (a "virtual radar" technique), so laser positioning algorithms are also applicable.
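  • A minimal sketch of this "virtual radar" conversion, assuming a pinhole camera model with known intrinsics (fx, fy, cx, cy) and a depth image in meters:

```python
import numpy as np

def depth_to_pointcloud(depth: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Back-project each depth pixel (u, v, z) to a 3D point in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading
```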
  • Laser-like point cloud information, visual image information, and infrared image information can be simultaneously obtained through the above-mentioned camera, which combines the advantages of both laser and camera sensors.
  • Positioning first requires the construction of an accurate indoor map, which is used for the calculation of the electronic device's own pose in the absolute coordinate system and the path planning for the subsequent movement of the electronic device.
  • traditional sensors are oriented horizontally to collect environmental data in the horizontal direction. Because the real environment changes rapidly while building a map takes a long time, the accuracy of the established map is poor, and positioning data are easily lost in the subsequent robot positioning process; that is, when drastic changes occur in the environment, the positioning accuracy drops sharply.
  • the data collected by the forward-looking sensor is easily affected by dynamic objects, such as crowds, pets, or other mobile robots blocking the sensor, which also affects the accuracy of positioning and thus restricts the application scenarios of the electronic device.
  • a positioning method provided in the embodiment of the present application includes the following steps:
  • processing the sensor data may include removing noise data in the sensor data.
  • the processed sensor data may be matched with local map data in the global map, and the pose of the electronic device in the global map may be determined according to the local map data in the global map matched with the processed sensor data.
  • determining the pose of the electronic device in the global map according to the local map data matched with the processed sensor data includes: determining the pose of the electronic device in the global map according to the pose transformation relationship between the matched local map data and the global map, and the pose transformation relationship between the processed sensor data and the local map data.
  • This embodiment locates the electronic device based on the data collected by the top-view sensor. Since the environment above the electronic device does not change easily, the positioning method provided by this embodiment effectively improves the positioning accuracy of the electronic device.
  • Fig. 3 is a flow chart of another positioning method provided by the embodiment of the present application.
  • the electronic device includes at least two top-view sensors.
  • the method provided in this embodiment includes the following steps:
  • S110 Acquire sensor data collected by at least two top-view sensors.
  • sensor data can be considered as data collected by sensors.
  • a sensor may be a detection device on an electronic device.
  • the content of the sensor data is not limited here, and may be determined based on the type of sensor included in the electronic device.
  • the sensor data includes the top-view environment data collected by the top-view sensor.
  • the top-view environment data may be data collected by a top-view sensor, for example, the environment data on top of an electronic device is collected as top-view environment data.
  • the content of the environmental data is determined based on the collected equipment and is not limited here.
  • the sensor data collected by the sensor on the electronic device may be obtained first, so as to process the sensor data and locate the electronic device.
  • the method of obtaining the sensor data is not limited here.
  • the sensor data can be processed so that localization can be performed based on the processed sensor data.
  • How to process the sensor data is not limited here, and different sensor data corresponds to different processing means.
  • the time stamps of the sensor data collected by the at least two top-view sensors can be aligned so as to locate the electronic device.
  • the sensor data includes top-view environment data
  • when positioning the electronic device based on the top-view environment data, this embodiment can extract point cloud information of surface edges from the top-view environment data and perform global map matching based on the point cloud information, thereby realizing the positioning of the electronic device.
  • the global map may be a grid map established based on the positioning scene.
  • the positioning scene can be regarded as a scene where the electronic device is currently located and needs to be positioned.
  • a global map can be constructed during the mapping phase.
  • the positioning instruction can be regarded as an instruction to trigger the electronic device to perform positioning.
  • the positioning instruction can be triggered through a human-computer interaction interface, which is not limited here.
  • the local map data can be regarded as the data of the local map in the global map.
  • a global map can be composed of local maps.
  • the pose transformation relationship between the processed sensor data and the local map data may be determined, and the electronic device positioning may be performed in combination with the pose transformation relationship between the local map and the global map.
  • the pose transformation relationship between the two can be obtained, and then the electronic device can be positioned by combining the pose transformation relationship between the local map and the global map.
  • the pose transformation relationship is also referred to as the pose relationship.
  • the global map can be constructed based on the top-view environment data collected by multiple top-view sensors; therefore, in the positioning phase, the corresponding local map data can be matched in the global map based on the processed sensor data, and the pose of the electronic device in the global map can then be determined.
  • sensor data is first obtained; then the sensor data is processed; and when a positioning instruction is obtained, the processed sensor data is matched with local map data in the global map to determine the pose of the electronic device in the global map.
  • the sensor data includes top-view environment data; processing the sensor data includes: aligning the top-view environment data collected by the at least two top-view sensors based on time stamps; preprocessing the aligned top-view environment data environmental data.
  • This embodiment refines the operation of processing sensor data to ensure that the processed sensor data can achieve accurate positioning.
  • matching the processed sensor data with local map data in the global map to determine the pose of the electronic device in the global map includes: converting the point cloud information of surface edges in the processed sensor data into grid data; determining the local map data in the global map that matches the grid data; and determining the pose of the electronic device in the global map according to the local map data and the grid data.
  • the processed sensor data is matched with the local map data in the global map to determine the pose of the electronic device, and accurately locate the electronic device.
  • FIG. 4 is a schematic flow chart of another positioning method provided by an embodiment of the present application. Referring to FIG. 4, the method includes the following steps:
  • Sensor data includes top-looking environment data.
  • Aligning based on time stamps can be considered as establishing a corresponding relationship between sensor data collected by multiple sensors based on time.
  • the technical means of alignment are not limited here.
  • Preprocessing the aligned top-view environment data can be considered as processing the aligned top-view environment data to facilitate positioning.
  • the technical means of preprocessing is not limited, as long as the preprocessed top-view environment data can be conveniently matched with the local map data in the global map.
  • the means of preprocessing include but not limited to denoising and splicing.
  • the mapping conversion can be performed via the Cartesian coordinate system; for example, the point cloud information of surface edges is first converted into the Cartesian coordinate system, and the point cloud information of surface edges in the Cartesian coordinate system is then projected into the grid coordinate system to obtain the grid data.
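  • A minimal sketch of the projection from the Cartesian coordinate system into the grid coordinate system; the resolution and origin values are assumptions:

```python
import numpy as np

def points_to_grid(points_xy: np.ndarray, resolution: float = 0.05,
                   origin=(0.0, 0.0)) -> np.ndarray:
    """Map 2D Cartesian points to integer grid cells at the given resolution (m/cell)."""
    cells = np.floor((points_xy - np.asarray(origin)) / resolution).astype(int)
    return np.unique(cells, axis=0)  # each occupied cell listed once
```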
  • the global map can be regarded as a grid map. To facilitate matching, this embodiment matches against the global map based on the grid data, so as to obtain the local map data corresponding to the grid data.
  • the corresponding local map data can be matched from the global map based on the grid data, and the pose can then be determined based on the matched local map data.
  • the matching technical means are not limited here.
  • S260 Determine the pose of the electronic device in the global map according to the local map data and the grid data.
  • this embodiment can determine the pose of the electronic device based on the pose transformation relationship between the local map data obtained through matching and the grid data.
  • this embodiment determines the pose of the electronic device in the global map through the pose transformation relationship between the local map and the global map corresponding to the local map data, and the pose transformation relationship between the grid data and the local map data.
  • This embodiment refines the operations of processing sensor data and determining the pose of an electronic device.
  • the sensor data is effectively processed, and the electronic device is positioned based on the processed sensor data and the global map, which improves positioning accuracy.
  • the preprocessing of the aligned top-view environment data includes: removing noise points in the aligned top-view environment data; splicing the noise-removed top-view environment data; and extracting point cloud information of surface edges from the spliced top-view environment data.
  • This embodiment refines the operation of preprocessing the aligned top-view environment data.
  • the field of view of the top-view sensor is enlarged, and positioning efficiency is improved by extracting point cloud information at surface edges.
  • the noise in the aligned top-view environment data can be removed first to improve the positioning accuracy.
  • the method for removing noise is not limited here.
  • the noise-removed top-view environment data can be spliced, so as to expand the field of view of the top-view sensor and improve positioning accuracy.
  • the method of splicing the noise-removed top-view environment data is not limited, for example, splicing the noise-removed top-view environment data by means of coordinate transformation.
  • point cloud information of surface edges in the spliced top-view environment data may be extracted, so as to locate the electronic device based on the extracted point cloud information.
  • the technical means for extracting point cloud information is not limited here.
  • splicing the noise-removed top-view environment data includes: converting the noise-removed top-view environment data into the coordinate system of a target top-view sensor, where the target top-view sensor is one of the at least two top-view sensors.
  • the target top-view sensor is any one of the at least two top-view sensors.
  • the conversion can be performed based on the extrinsic parameters of the top-view sensors, and the conversion method is not limited here.
  • for example, the top-view environment data collected by the two top-view sensors can be transformed into the Cartesian coordinate system; then, based on the extrinsic parameters of the two top-view sensors, the noise-removed top-view environment data of one top-view sensor is converted into the coordinate system of the other top-view sensor.
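  • A minimal sketch of this splicing step, assuming the extrinsics from camera 2 to camera 1 are given as a rotation matrix R_12 and a translation vector t_12:

```python
import numpy as np

def splice_into_cam1(cloud_cam2: np.ndarray, R_12: np.ndarray, t_12: np.ndarray,
                     cloud_cam1: np.ndarray) -> np.ndarray:
    """Transform camera-2 points into camera-1's frame and concatenate the clouds."""
    cloud_2_in_1 = cloud_cam2 @ R_12.T + t_12
    return np.vstack([cloud_cam1, cloud_2_in_1])
```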
  • This embodiment refines the operation of splicing and denoising the top-view environment data, and effectively increases the field of view angle of the top-view sensor through the technical means of coordinate system conversion.
  • determining the pose of the electronic device in the global map according to the local map data and the grid data includes: determining the pose of the electronic device in the global map according to the pose relationship between the local map data and the global map, and the pose relationship between the grid data and the local map data.
  • in this way, the pose relationship between the grid data and the global map can be determined, that is, the pose of the electronic device in the global map.
  • This embodiment refines the technical means for locating electronic devices through local map data and global maps, effectively locating electronic devices.
  • the method further includes: if a mapping instruction is obtained, adding the point cloud information of surface edges included in the processed sensor data to the matched local map data, and updating the global map with the local map data to which the point cloud information of surface edges has been added.
  • the mapping instruction can be regarded as an instruction that triggers the electronic device to perform mapping.
  • the acquisition of the mapping instruction is not limited, for example, it can be acquired through a human-computer interaction interface.
  • this embodiment can add point cloud information of the edge of the surface to the matched local map data, and then use the local map data after adding the point cloud information of the edge of the surface to update the global map.
  • the mapping is effectively performed based on the processed sensor data, so as to facilitate the positioning of the electronic device.
  • FIG. 5 is a schematic flow chart of another positioning method provided by an embodiment of the present application. Referring to FIG. 5, the method includes the following steps:
  • S310 Acquire sensor data collected by at least two top-view sensors.
  • If the mapping instruction is obtained, add the point cloud information of surface edges included in the processed sensor data to the matched local map data.
  • The execution order of S340 and S330 is not limited here; they may be executed in parallel or sequentially.
  • the electronic device can acquire sensor data in real time, and after monitoring the positioning instruction, it can perform positioning based on the sensor data; after monitoring the mapping instruction, it can build a map based on the sensor data.
  • the electronic device may also acquire sensor data after receiving a mapping instruction or a positioning instruction.
  • the positioning method has strong anti-interference capability and is suitable for dynamic environments or environments with many moving obstacles; it overcomes the limitation of the camera's small field of view when the robot is in a changing environment or surrounded by many people.
  • Step s1 collect data.
  • the sensor data can be obtained, that is, the data collected by the sensor can be obtained.
  • Time stamp alignment of multiple sensors means aligning sensor data collected by multiple sensors based on time stamps.
  • Step s2 Preprocessing TOF camera data.
  • Step s2 provided in this embodiment of the present application also includes the following steps: use a statistical filter to remove noise from the raw data of TOF camera 1 and TOF camera 2, that is, remove the noise in the aligned top-view environment data; then, using the extrinsic parameters between TOF camera 1 and TOF camera 2, convert the data of TOF camera 2 into the coordinate system of TOF camera 1, that is, splice the noise-removed top-view environment data.
  • Transforming the coordinate system also includes the following steps: convert the point clouds acquired by TOF camera 1 and TOF camera 2 into the Cartesian coordinate system; after conversion, the coordinates of any point from TOF camera 1 are expressed as (x1, y1, z1), and the coordinates of any point from TOF camera 2 are expressed as (x2, y2, z2).
  • Extracting the point cloud information of surface edges further includes the following steps: screening the points with edge features in the point cloud of the current frame, where the point cloud of the current frame can be the spliced data in the coordinate system of TOF camera 1 (that is, with the enlarged field of view).
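  • The patent does not specify the edge criterion; one common choice is a LOAM-style local-curvature test, sketched below with placeholder parameters:

```python
import numpy as np

def screen_edge_points(scan: np.ndarray, window: int = 5, thresh: float = 0.2) -> np.ndarray:
    """Keep points whose local curvature (deviation from neighbors) exceeds a threshold.

    scan: (n, 3) points ordered along the scan direction.
    """
    n = len(scan)
    curv = np.zeros(n)
    for i in range(window, n - window):
        # difference between the point and the centroid-like sum of its neighborhood
        diff = scan[i - window:i + window + 1].sum(axis=0) - (2 * window + 1) * scan[i]
        curv[i] = np.linalg.norm(diff) / max(np.linalg.norm(scan[i]), 1e-9)
    return scan[curv > thresh]
```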
  • Step s4 Add the point cloud information of the surface edge to the local map data that is successfully matched.
  • the current frame point cloud is added to the local map, that is, the point cloud information is added to the matched local map data.
  • Step s5 Match the local map data with the global map to obtain the pose.
  • Step s6: Determine whether the current mode is the positioning mode. If the current mode is the positioning mode, execute s1 and determine the pose of the electronic device in the global map based on the pose between the local map data and the grid data and the pose obtained in step s5; if the current mode is not the positioning mode, execute s7.
  • The mapping mode or the positioning mode is selected on the application (APP): if the mapping mode is selected, a map is generated; if the positioning mode is selected, the robot is positioned on the existing map.
  • Step s7 Add the local map data to the global map.
  • Step s7 includes the following steps: transform the point cloud in the local map into the global map coordinate system through the pose obtained in step s5. The position of the local map relative to the robot is known, and the position of the robot in the global map is known, so the position of the local map in the global map can be calculated through pose transformation. This is analogous to: if A is 1 m north of B, and B is 1 m north of C, then A is 2 m north of C.
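  • This chained relationship is plain pose composition. A minimal SE(2) sketch follows; the example pose values are hypothetical:

```python
import numpy as np

def compose(pose_ab, pose_bc):
    """Compose SE(2) poses (x, y, theta): pose of b in a, then c in b, gives c in a."""
    xa, ya, ta = pose_ab
    xb, yb, tb = pose_bc
    x = xa + np.cos(ta) * xb - np.sin(ta) * yb
    y = ya + np.sin(ta) * xb + np.cos(ta) * yb
    return (x, y, ta + tb)

robot_in_global = (2.0, 3.0, np.pi / 2)   # e.g. the pose obtained in step s5
local_map_in_robot = (1.0, 0.0, 0.0)      # known by construction of the local map
local_map_in_global = compose(robot_in_global, local_map_in_robot)
```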
  • Fig. 8 is a schematic flow chart of another positioning method provided by the embodiment of the present application.
  • The method is applicable to indoor positioning of electronic devices and can be performed by a positioning device, where the device can be implemented in software and/or hardware and is generally integrated on the electronic device.
  • An electronic device may be a device capable of active or passive movement.
  • the electronic device may be a robot.
  • the application scenario of the robot is not limited, and can be indoor or outdoor.
  • an indoor robot is taken as an example to describe the electronic equipment, and the electronic equipment is not limited here, and the implementation manner of the electronic equipment other than the indoor robot is the same as or similar to that of the indoor robot.
  • the robots in this embodiment include indoor robots and outdoor robots.
  • Outdoor robots can be considered as robots that work outdoors and can move.
  • Indoor robots can be considered as robots that work indoors and can move. Since indoor robots generally work in highly dynamic environments such as shopping malls, garages, and supermarkets, the head-up lidar (such as a two-dimensional lidar) of an indoor robot is often blocked, so the lidar data either contain many dynamic objects or are entirely blocked and become invalid. At the same time, the scenes in which indoor robots operate change frequently, so the traditional head-up laser mapping and positioning scheme is not robust.
  • the electronic device in the embodiment of the present application includes a top-view sensor, and the top-view sensor is used to collect top-view environment data.
  • the structure of the electronic equipment will be described by taking the electronic equipment as an indoor robot as an example.
  • FIG. 9 is a schematic structural diagram of an indoor robot provided by an embodiment of the present application.
  • a top-view sensor such as a depth camera
  • the sensors included in the indoor robot include at least: a depth camera, a laser radar, and a wheel odometer. The depth camera faces upwards and is used to collect top-view environmental data.
  • a positioning method provided in the embodiment of the present application includes the following steps:
  • a grid map is a map expression method.
  • the grid map divides the spatial plane into grids with a certain resolution, and the value in the grid is the probability that the current grid is occupied.
  • the grid map includes a top-view grid map and a head-up grid map.
  • the point cloud information can be mapped onto the grid map to update the grid map and calculate the occupancy probability of each grid cell.
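  • A minimal log-odds occupancy-grid sketch of this update; the cell layout and the increment value are assumptions, not taken from the patent:

```python
import numpy as np

class OccupancyGrid:
    """Grid cells store log-odds of occupancy; probabilities are recovered on demand."""
    def __init__(self, shape, l_occ: float = 0.85):
        self.logodds = np.zeros(shape)
        self.l_occ = l_occ

    def update(self, occupied_cells):
        for r, c in occupied_cells:
            self.logodds[r, c] += self.l_occ  # accumulate evidence that the cell is occupied

    def probability(self) -> np.ndarray:
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))
```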
  • the closed-loop detection may be calculating the matching rate between the point cloud information corresponding to the sensor data and the grid map; the formula is as follows:
  • Score = (1/k) · Σ_{i=1..k} exp( −(z(T·h_i) − μ_i)² / (2σ_i²) )
  • k represents the number of preprocessed laser points in the current frame;
  • T represents the pose of the current frame in the map;
  • h_i represents the i-th laser point among the preprocessed laser points in the current frame;
  • z(T·h_i) represents the height value of the laser point after transformation by the pose T;
  • μ_i, σ_i represent the Gaussian distribution parameters of the height values of all laser points falling in the grid cell onto which the i-th preprocessed laser point is projected according to the pose T;
  • Score represents the degree of matching between the current point cloud information and the grid map, that is, the matching rate. The Score lies in the range 0-1; the larger the Score, the better the match and the more likely a closed loop.
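  • Based on the symbol definitions above and the reconstructed formula (an interpretation, not the patent's verbatim equation), the matching rate could be computed as in the following sketch:

```python
import numpy as np

def matching_score(points: np.ndarray, T: np.ndarray,
                   mu_grid: np.ndarray, sigma_grid: np.ndarray,
                   resolution: float) -> float:
    """Mean Gaussian likelihood of the points' heights against per-cell (mu, sigma).

    points: (k, 3) preprocessed laser points; T: 3x3 planar transform acting on (x, y);
    mu_grid / sigma_grid: per-cell Gaussian parameters of stored point heights.
    Assumes all transformed points fall inside the grid.
    """
    xy = points[:, :2] @ T[:2, :2].T + T[:2, 2]
    cells = np.floor(xy / resolution).astype(int)
    mu = mu_grid[cells[:, 0], cells[:, 1]]
    sigma = np.maximum(sigma_grid[cells[:, 0], cells[:, 1]], 1e-6)
    return float(np.mean(np.exp(-(points[:, 2] - mu) ** 2 / (2 * sigma ** 2))))
```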
  • the relative positions of the two nodes can be obtained, and then the global pose graph can be optimized based on the relative positions of the two nodes.
  • a node is an abstract concept, representing the information encapsulated by a measurement, such as point cloud information and pose information. The relative pose between two nodes obtained from the closed-loop information can be used for global pose graph optimization; after the global pose graph is optimized, the pose information of the nodes is adjusted, and finally the optimized global pose result is output.
  • the function h, which is used to calculate the relative pose between two nodes, is as follows:
  • h(c_i, c_j) = ( R_i⁻¹ · (t_j − t_i), θ_j − θ_i )
  • c_i, c_j represent the pose information of nodes i and j respectively;
  • R_i represents the rotation matrix of node i;
  • t_i, t_j represent the pose translation vectors of nodes i and j respectively;
  • θ_i, θ_j represent the pose angle vectors of nodes i and j respectively;
  • Z_ij represents the laser matching between nodes i and j, that is, the observed pose transformation between the two laser frames;
  • χ² represents the pose graph residual: χ² = Σ_{ij} (Z_ij − h(c_i, c_j))ᵀ · Ω_ij · (Z_ij − h(c_i, c_j));
  • Ω_ij is the block of the information matrix corresponding to (i, j), which represents the amount of observation information between i and j and serves as the weight when optimizing the global pose graph.
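  • Putting these definitions together, a minimal 2D pose-graph residual sketch (the angle wrapping is a practical detail added here, not stated in the patent):

```python
import numpy as np

def h(ci, cj):
    """Predicted relative pose between nodes ci = (x, y, theta) and cj."""
    c, s = np.cos(ci[2]), np.sin(ci[2])
    Ri_inv = np.array([[c, s], [-s, c]])                 # R_i^{-1}
    dt = Ri_inv @ (np.array(cj[:2]) - np.array(ci[:2]))
    return np.array([dt[0], dt[1], cj[2] - ci[2]])

def chi2(nodes, edges):
    """Pose-graph residual: sum over edges of e^T * Omega * e, with e = Z_ij - h(c_i, c_j)."""
    total = 0.0
    for i, j, z_ij, omega in edges:
        e = np.asarray(z_ij) - h(nodes[i], nodes[j])
        e[2] = (e[2] + np.pi) % (2 * np.pi) - np.pi      # wrap the angle residual
        total += e @ omega @ e
    return total
```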
  • In the positioning method provided by this embodiment of the present application, first, sensor data is obtained, where the sensor data includes head-up environment data and top-view environment data; then the sensor data is processed; next, a head-up grid map and a top-view grid map are generated according to the processed sensor data; finally, the electronic device is positioned according to the processed sensor data, the top-view grid map, and the head-up grid map.
  • the grid maps generated by combining the top-view environment data collected by the top-view sensor with the head-up environment data collected by the head-up sensor avoid the impact of environmental changes on the positioning of the electronic device and improve the robustness of positioning.
  • processing the sensor data includes: preprocessing the sensor data; transforming the preprocessed sensor data into the body coordinate system; optimizing the sensor data after the coordinate system transformation; and obtaining the processed top-view environment data and the processed head-up environment data according to the optimized sensor data.
  • the preprocessing means for the head-up environment data in the sensor data include but are not limited to: time stamp alignment, extracting features from the aligned head-up environment data, and segmenting point cloud information of surface edges in the head-up environment data.
  • the processed top-view environment data and the processed head-up environment data can be obtained according to the optimized sensor data, such as reading the optimized top-view environment data in the optimized sensor data as For the processed top-view environment data, the optimized head-up environment data in the optimized sensor data is read as the processed head-up environment data.
  • This embodiment refines the operation of processing sensor data to ensure that the processed sensor data can be positioned.
  • the optimization of the sensor data after the coordinate system transformation includes: processing the sensor data after the coordinate system transformation by brute force matching.
  • the sensor data after transforming the coordinate system is processed by brute force matching, which can eliminate the influence of initial value sensitivity.
  • the method of brute-force matching is not limited here; for example, the sensor data after the coordinate system transformation can be optimized through the Correlation Scan Match (CSM) algorithm.
  • CSM can calculate the relative pose between the laser and the map. Optimizing the sensor data after transforming the coordinate system through CSM can make the positioning result more accurate.
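  • A minimal sketch of such a correlation-style brute-force search, assuming an occupancy grid of hit probabilities; the window size, step, and angle range are placeholders:

```python
import numpy as np

def brute_force_match(scan_xy, grid, resolution, x0,
                      window=0.5, step=0.05, d_angles=np.deg2rad(np.arange(-10, 11))):
    """Score every candidate pose in a window around the initial guess x0 = (x, y, theta)
    and return the best-scoring pose, eliminating sensitivity to the initial value."""
    best_score, best_pose = -np.inf, x0
    for dth in d_angles:
        th = x0[2] + dth
        c, s = np.cos(th), np.sin(th)
        rotated = scan_xy @ np.array([[c, -s], [s, c]]).T
        for dx in np.arange(-window, window + step, step):
            for dy in np.arange(-window, window + step, step):
                pts = rotated + np.array([x0[0] + dx, x0[1] + dy])
                cells = np.floor(pts / resolution).astype(int)
                ok = ((cells[:, 0] >= 0) & (cells[:, 0] < grid.shape[0]) &
                      (cells[:, 1] >= 0) & (cells[:, 1] < grid.shape[1]))
                score = grid[cells[ok, 0], cells[ok, 1]].sum()
                if score > best_score:
                    best_score, best_pose = score, (x0[0] + dx, x0[1] + dy, th)
    return best_pose, best_score
```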
  • This embodiment refines the technical means of optimizing the sensor data after the coordinate system transformation, so that the top-view grid map and the head-up grid map generated from the optimized sensor data enable more accurate positioning.
  • the preprocessing of the sensor data includes: aligning the head-up environment data and the top-view environment data based on the time stamp; extracting the point cloud information of the surface edge in the aligned top-view environment data.
  • This embodiment refines the technical means of preprocessing the sensor data.
  • the top-view environment data and the head-up environment data may first be aligned based on the time stamps.
  • This embodiment can effectively associate sensor data based on time, and effectively extract point cloud information of surface edges in the top-view environment data, so as to generate a top-view grid map.
  • the point cloud information of surface edges in the aligned top-view environment data can be extracted, so that the extracted point cloud information can be mapped to the top-view grid map to realize mapping and positioning.
  • the point cloud information of the edge of the face in the aligned head-up environment data can be extracted, so that the point cloud information can be mapped to the head-up grid map for mapping and positioning.
  • Fig. 10 is a schematic flowchart of another positioning method provided in the embodiment of the present application. Referring to Fig. 10, the method includes the following steps:
  • S510 Acquire sensor data, where the sensor data includes head-up environment data collected by at least one head-up sensor and top-view environment data collected by a top-view sensor.
  • Aligning head-up environment data and top-view environment data based on time stamps can be considered as establishing the corresponding relationship of data collected by multiple sensors based on time, so as to process the aligned top-view environment data and realize positioning.
  • the point cloud information of the surface edge in the aligned top-view environment data can be extracted for positioning.
  • the method for extracting the point cloud information of the surface edge is not limited here.
  • Fig. 11 is a schematic flowchart of another positioning method provided in the embodiment of the present application. Referring to Fig. 11, the method includes the following steps:
  • S610 Acquire sensor data, where the sensor data includes head-up environment data collected by the head-up sensor and top-view environment data collected by the top-view sensor.
  • S640 Perform closed-loop detection on the head-up grid map according to the processed head-up environment data to obtain a head-up matching rate.
  • the head-up matching rate can be considered as the probability of matching the head-up grid map with the head-up environment data.
  • the closed-loop detection of the top-view grid map can be performed to obtain the top-view matching rate.
  • loop closure detection is also known as loop-back detection.
  • loop-back detection can be performed on the head-up grid map and the top-view grid map to determine the corresponding top-view matching rate and head-up matching rate.
  • the execution order of determining the head-up matching rate and the top-view matching rate is not limited.
  • the head-up matching rate and the top-view matching rate may be determined in parallel, or the head-up matching rate and the top-view matching rate may be determined sequentially.
  • generating the head-up grid map and the top-view grid map according to the processed sensor data includes: generating a head-up grid map based on the processed head-up environment data; generating a top-view grid map based on the processed top-view environment data grid map.
  • top-view environment data and head-up environment data are used together to construct maps; that is, map building and positioning are performed based on the fusion of top-view sensors and head-up sensors. By exploiting the measurement characteristics of each sensor and the characteristics of the application scene, the advantages of data collected by multiple sensors are brought into full play, achieving accurate real-time mapping and highly robust positioning.
  • FIG. 12 is a schematic flowchart of a multi-layer grid map positioning method provided in the embodiment of the present application.
  • FIG. 12 takes one top-view sensor as an example, and the method includes:
  • the point cloud information of the lidar and the point cloud information of the surface edge of the top-view environment data are transferred to the body coordinate system to obtain the point cloud information after coordinate transformation.
  • the body coordinate system can be customized, or the coordinate system of the wheel odometer can be used.
  • the current pose is optimized through the CSM algorithm. The optimized pose can be considered as the pose, in the grid map, of the coordinate-transformed point cloud information expressed in the current body coordinate system; by matching the current point cloud information against the map, the pose of the coordinate-transformed point cloud information in the body coordinate system can be corrected in the map.
  • the head-up laser data can be considered as the processed head-up environment data
  • the top-view surface edge point cloud can be considered as the processed top-view environment data
  • Fig. 13 is a schematic flow chart of another positioning method provided by the embodiment of the present application.
  • The method is applicable to indoor positioning of electronic devices and can be performed by a positioning device, where the device can be implemented in software and/or hardware and is generally integrated on the electronic device.
  • An electronic device may be a device capable of active or passive movement.
  • the electronic device may be a robot.
  • the application scenario of the robot is not limited, and can be indoor or outdoor.
  • an indoor robot is taken as an example to describe the electronic equipment, and the electronic equipment is not limited here, and the implementation manner of the electronic equipment other than the indoor robot is the same as or similar to that of the indoor robot.
  • the robots in this embodiment include indoor robots and outdoor robots.
  • Outdoor robots can be considered as robots that work outdoors and can move.
  • Indoor robots can be considered as robots that work indoors and can move. Since indoor robots generally work in highly dynamic environments such as shopping malls, garages, and supermarkets, the head-up lidar (especially a two-dimensional lidar) of an indoor robot is often blocked, so the lidar data either contain many dynamic objects or are entirely blocked and become invalid. At the same time, the scenes in which indoor robots operate change frequently, so the traditional head-up laser mapping and positioning scheme is not robust.
  • the electronic device in the embodiment of the present application includes at least two top-view sensors, and the top-view sensors are used to collect top-view environment data.
  • the structure of the electronic equipment will be described by taking the electronic equipment as an indoor robot as an example.
  • FIG. 14 is a schematic structural diagram of another indoor robot provided by an embodiment of the present application. Referring to FIG. 9 and FIG. 14, the top-view sensors, for example dual depth cameras (that is, two depth cameras), are located on the top of the indoor robot and are used to collect top-view environment data of different regions above the indoor robot.
  • the sensors included in the indoor robot include at least: a depth camera, a laser radar, and a wheel odometer. The depth camera faces upwards and is used to collect top-view environmental data.
  • the angle between the depth camera and the horizontal direction is not limited here, as long as the fields of view of the two adjacent depth cameras have a common-view area.
  • for example, the angle between the depth camera and the horizontal direction is 90° ± 20°.
  • the installation of the lidar, that is, the head-up sensor, is not limited, as long as the head-up environment data can be collected.
  • the orientation of the lidar is horizontally forward, and the angle between the positive direction and the horizontal direction is 0°.
  • a positioning method provided in the embodiment of the present application includes the following steps:
  • the sensor data can be regarded as the data collected by the sensor on the electronic device.
  • the content of the sensor data is not limited here, and may be determined according to the type of sensor included in the electronic device.
  • the sensor data may include: environment data collected by the top-view sensor (ie, top-view environment data), environment data collected by the head-up sensor (ie, head-up environment data), and data collected by the wheel odometer.
  • the environment data includes head-up environment data and top-view environment data, and the environment data can be regarded as the data collected by the sensor representing the surrounding environment of the electronic device.
  • the head-up environmental data can be considered as the environmental data of the running direction of the electronic equipment collected by the head-up sensor, and the head-up sensor can be considered as the sensor that collects the environmental data of the running direction of the electronic equipment, such as lidar.
  • the top-view environment data can be considered as the environment data on the top of the electronic device collected by the top-view sensor, and the top-view sensor can be considered as a sensor that collects the environment data on the top of the electronic device, such as a depth camera.
  • the top-view environment data such as ceiling information is relatively fixed, and the probability of being blocked is relatively small, so the top-view environment data can maintain stable positioning when the head-up environment data is blocked.
  • the sensor data collected by the sensor on the electronic device may be obtained first, so as to process the sensor data for positioning, and the acquisition method is not limited here.
  • a processor of an electronic device communicates with a sensor to obtain sensor data collected by the sensor.
  • a grid map is a map expression method.
  • the grid map divides the spatial plane into grids with a certain resolution, and the value in the grid is the probability that the current grid is occupied.
  • the grid map includes a top-view grid map and a head-up grid map.
  • processing the top-view environment data may include splicing the top-view environment data collected by the at least two top-view sensors for better positioning.
  • when multiple head-up sensors collect head-up environment data, processing the head-up environment data may include splicing the head-up environment data collected by the multiple head-up sensors.
  • the point cloud information of the surface edge in the top-view environment data can be extracted; the point cloud information of the surface edge in the head-up environment data can also be extracted.
  • time stamps can be aligned between the top-view sensor data and the head-up sensor data.
  • the head-up grid map can be regarded as a map established based on the environment of the head-up direction of the electronic device (the running direction of the electronic device), and the top-view grid map can be regarded as a map established based on the environment of the electronic device's top-view direction.
  • the head-up grid map may be a grid map generated based on head-up environment data.
  • the top-view grid map may be a grid map generated based on top-view environment data. The data used in the establishment of the head-up grid map and the top-view grid map are not limited here.
  • the point cloud information can be mapped onto the grid map to update the grid map and calculate the occupancy probability of each grid cell.
  • this embodiment can combine the head-up grid map to eliminate duplicate top-view environment data, so as to avoid low positioning efficiency caused by a monotonous environment in the top-view direction of the electronic device.
  • this embodiment can combine the top-view grid map to improve the positioning accuracy.
  • the electronic device can be positioned based on the processed sensor data, the top-view grid map and the head-up grid map.
  • this embodiment can perform pose prediction based on the wheel odometer data in the processed sensor data, and then realize global pose determination based on the top-view grid map and the flat-view grid map.
  • loop closure detection can be performed on the head-up grid map and the top-view grid map, and then the global pose can be optimized and output based on the global pose graph.
  • the closed-loop detection may be performed by calculating the matching rate between the point cloud information corresponding to the sensor data and the grid map; the formula is as follows:
  • Score = (1/k) · Σ_{i=1..k} exp( −(z(T·h_i) − μ_i)² / (2σ_i²) )
  • k represents the number of preprocessed laser points in the current frame;
  • T represents the pose of the current frame in the map;
  • h_i represents the i-th laser point among the preprocessed laser points in the current frame;
  • z(T·h_i) represents the height value of the laser point after transformation by the pose T; μ_i, σ_i represent the Gaussian distribution parameters of the height values of all laser points falling in the grid cell onto which the i-th preprocessed laser point is projected according to the pose T;
  • Score represents the degree of matching between the current point cloud information and the grid map, that is, the matching rate. The Score lies in the range 0-1; the larger the Score, the better the match and the more likely a closed loop.
  • the relative positions of the two nodes can be obtained, and then the global pose graph can be optimized based on the relative positions of the two nodes.
  • a node is an abstract concept, representing the information encapsulated by a measurement, such as point cloud information and pose information. The global pose graph can be optimized based on the relative pose between two nodes obtained from the closed-loop information; after the global pose graph is optimized, the pose information of the nodes is adjusted, and finally the optimized global pose result is output.
  • the function h, which is used to calculate the relative pose between two nodes, is as follows:
  • h(c_i, c_j) = ( R_i⁻¹ · (t_j − t_i), θ_j − θ_i )
  • c_i, c_j represent the pose information of nodes i and j respectively;
  • R_i represents the rotation matrix of node i;
  • t_i, t_j represent the pose translation vectors of nodes i and j respectively;
  • θ_i, θ_j represent the pose angle vectors of nodes i and j respectively;
  • Z_ij represents the laser matching between nodes i and j, that is, the observed pose transformation between the two laser frames;
  • χ² represents the pose graph residual: χ² = Σ_{ij} (Z_ij − h(c_i, c_j))ᵀ · Ω_ij · (Z_ij − h(c_i, c_j)); Ω_ij is the block of the information matrix corresponding to (i, j), which represents the amount of observation information between i and j and serves as the weight when optimizing the global pose graph.
  • In the positioning method provided by this embodiment of the present application, first, sensor data is obtained, where the sensor data includes head-up environment data and top-view environment data; then the sensor data is processed; next, a head-up grid map and a top-view grid map are generated according to the processed sensor data; finally, the electronic device is positioned according to the processed sensor data, the top-view grid map, and the head-up grid map.
  • processing the sensor data includes: preprocessing the sensor data; transforming the preprocessed sensor data into the body coordinate system; optimizing the sensor data after the coordinate system transformation; and obtaining the processed top-view environment data and the processed head-up environment data according to the optimized sensor data.
  • when processing the sensor data, the sensor data may be preprocessed first; the preprocessing means are not limited.
  • the preprocessing means may be determined based on the content of the sensor data. For example, the corresponding relationship between sensor data can be established based on time; the top-view environment data can also be spliced; the point cloud information that is convenient for positioning in the top-view environment data can be extracted, and the point cloud information in the head-up environment data can be extracted.
  • the preprocessing means for the head-up environment data in the sensor data include but are not limited to: time stamp alignment, extracting features from the aligned head-up sensor data, and segmenting point cloud information of surface edges in the head-up sensor data.
  • the preprocessed sensor data can be transformed into the body coordinate system, so as to locate the electronic device in the body coordinate system.
  • the conversion means is not limited here.
  • the sensor data after the coordinate system conversion can be optimized for better positioning.
  • the optimization methods are not limited, such as Iterative Closest Point (ICP), ICP variants and brute force matching.
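  • Of the listed methods, ICP is the easiest to sketch in a self-contained way; the following is a minimal point-to-point 2D ICP (SVD-based alignment, purely illustrative and not the patent's specific implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src: np.ndarray, dst: np.ndarray, iters: int = 20) -> np.ndarray:
    """Iteratively align src (n, 2) to dst (m, 2) by nearest-neighbor pairing + SVD."""
    cur = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(cur)          # pair each source point with its nearest target
        pairs = dst[idx]
        mu_s, mu_d = cur.mean(axis=0), pairs.mean(axis=0)
        H = (cur - mu_s).T @ (pairs - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
    return cur
```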
  • the processed top-view environment data and the processed head-up environment data can be obtained from the optimized sensor data; for example, the optimized top-view environment data is read as the processed top-view environment data, and the optimized head-up environment data is read as the processed head-up environment data.
  • the optimization of the sensor data after the coordinate system transformation includes: processing the sensor data after the coordinate system transformation by brute force matching.
  • the sensor data after transforming the coordinate system is processed by brute force matching, which can eliminate the influence of initial value sensitivity.
  • the method of brute-force matching is not limited here; for example, the converted data can be optimized through the Correlation Scan Match (CSM) algorithm.
  • CSM can calculate the relative pose between the laser and the map. Optimizing the sensor data after transforming the coordinate system through CSM can make the positioning result more accurate.
  • This embodiment refines the specific technical means for optimizing the sensor data after the coordinate system transformation, so that the top-view grid map and the head-up grid map generated from the optimized sensor data enable more accurate positioning.
  • the preprocessing of the sensor data includes: aligning the head-up environment data and the top-view environment data based on time stamps; splicing the aligned top-view environment data; and extracting point cloud information of surface edges from the spliced top-view environment data.
  • This embodiment refines the technical means of preprocessing the sensor data.
  • the top-view environment data and the head-up environment data may first be aligned based on the time stamp.
  • the aligned top-view environment data can be spliced to perform positioning based on the spliced top-view environment data.
  • the method of splicing is not limited here.
  • This embodiment can effectively associate sensor data based on time, and effectively extract point cloud information of surface edges in the top-view environment data, so as to generate a top-view grid map.
  • this embodiment can extract the point cloud information of surface edges from the top-view environment data, so as to map the point cloud information to the top-view grid map.
  • the point cloud information of surface edges in the aligned head-up environment data can be extracted, so that the point cloud information can be mapped to the head-up grid map to realize mapping and positioning.
  • Fig. 15 is a schematic flowchart of another positioning method provided in the embodiment of the present application. Referring to Fig. 15, the method includes the following steps:
  • Aligning the head-up environment data and the top-view environment data based on the time stamp can be considered as establishing at least one sensor data correspondence based on time, so as to process the aligned top-view environment data, and then realize positioning.
  • the method of splicing the aligned top-view environment data is not limited.
  • the splicing can be realized based on the top-view environment data in the common-view area, so as to splice the top-view environment data collected by multiple top-view sensors.
  • the aligned top-view environment data can be spliced based on the positions of multiple top-view sensors in the electronic device.
  • this embodiment may extract point cloud information of surface edges in the spliced top-view environment data for positioning.
  • the specific means for extracting the point cloud information of the surface edge is not limited here.
  • This example only shows the process of processing the top-view environment data, and the technical means for preprocessing the rest of the sensor data are not limited here.
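  • As an illustration of the time-stamp alignment step, the sketch below pairs head-up and top-view frames by nearest timestamp; the function name, frame format, and tolerance are assumptions of the sketch:

```python
import bisect

def align_by_timestamp(head_up_frames, top_view_frames, tolerance=0.05):
    """For each head-up frame, find the top-view frame whose timestamp is
    closest; pair them if the gap is within `tolerance` seconds. Frames are
    (timestamp, data) tuples sorted by timestamp."""
    top_times = [t for t, _ in top_view_frames]
    pairs = []
    for t, head_data in head_up_frames:
        i = bisect.bisect_left(top_times, t)
        # candidates: the neighbours around the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(top_times)]
        if not candidates:
            continue
        j = min(candidates, key=lambda j: abs(top_times[j] - t))
        if abs(top_times[j] - t) <= tolerance:
            pairs.append((head_data, top_view_frames[j][1]))
    return pairs
```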
  • the positioning of the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map includes: performing closed-loop detection on the head-up grid map to obtain a head-up matching rate; performing closed-loop detection on the top-view grid map to obtain a top-view matching rate; and, when the head-up matching rate is greater than a first set threshold and/or the top-view matching rate is greater than a second set threshold, determining the global pose of the electronic device.
  • This embodiment refines the positioning technical means, and determines the global pose of the electronic device based on the head-up matching rate and the top-view matching rate, which can ensure more accurate positioning results.
  • Fig. 16 is a schematic flowchart of another positioning method provided in the embodiment of the present application. Referring to Fig. 16, the method includes the following steps:
  • the head-up matching rate can be considered as the probability of matching the head-up grid map with the head-up environment data.
  • the head-up matching rate can be obtained by performing closed-loop detection on the head-up grid map.
  • the top-view matching rate can be considered as the probability of matching the top-view grid map with the top-view environmental data.
  • no limitation is imposed on the technical means of loop closure detection, as long as the top-view matching rate can be determined.
  • the top-view matching rate can be obtained by performing closed-loop detection on the top-view grid map.
  • loop closure detection is also known as loop-back detection; loop-back detection can be performed on the head-up grid map and the top-view grid map to determine the corresponding head-up matching rate and top-view matching rate.
  • the execution order of determining the head-up matching rate and the top-view matching rate is not limited.
  • the head-up matching rate and the top-view matching rate can be determined in parallel, or can be determined sequentially.
  • this embodiment may determine the global pose of the electronic device based on the top-view grid map and the head-up grid map. For example, the technical means based on pose graph optimization determines the residual error of the pose graph to determine the global pose.
  • the first set threshold and the second set threshold are not limited.
  • the head-up matching rate and the top-view matching rate may have different setting thresholds.
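  • The threshold test can be sketched as follows (the threshold values are illustrative, and "and/or" is rendered here as a logical OR):

```python
def accept_global_pose(head_up_rate, top_view_rate, candidate_pose,
                       first_threshold=0.6, second_threshold=0.6):
    """Accept the loop-closure candidate as the global pose when the
    head-up matching rate exceeds the first set threshold and/or the
    top-view matching rate exceeds the second set threshold."""
    if head_up_rate > first_threshold or top_view_rate > second_threshold:
        return candidate_pose
    return None
```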
  • generating the head-up grid map and the top-view grid map according to the processed sensor data includes: generating the head-up grid map based on the processed head-up environment data, and generating the top-view grid map based on the processed top-view environment data.
  • the head-up grid map is generated based on the head-up environment data
  • the top-view grid map is generated based on the top-view environment data
  • This embodiment refines the technical means for generating a head-up grid map and a top-view grid map. This embodiment can effectively combine the head-up and top-view environments of electronic devices for precise positioning.
  • top-view environment data and head-up environment data are used to construct maps; that is, map building and positioning are performed based on the fusion of top-view sensors and head-up sensors.
  • based on the measurement data characteristics of each sensor (for example, the top-view sensor) and the characteristics of the application scene, the advantages of the data collected by multiple sensors are given full play, achieving accurate real-time mapping and highly robust positioning.
  • Fig. 17 is a schematic flowchart of a multi-layer grid map positioning method provided in the embodiment of the present application.
  • Fig. 17 takes two top-view sensors as an example, and the method includes:
  • the acquired sensor data includes lidar data, wheel odometer data, and depth camera data, namely top-view environment data and head-up environment data.
  • the current pose is optimized through the CSM algorithm, where the optimized pose can be considered as the position, in the current body coordinate system, of the coordinate-transformed point cloud information within the grid map; since the current point cloud information is expressed in the body coordinate system, matching the current point cloud information against the map allows the pose of the coordinate-transformed point cloud information in the body coordinate system to be corrected within the map.
  • the head-up laser data can be considered as the processed head-up environment data
  • the top-view surface edge point cloud can be considered as the processed top-view environment data
  • lidar offers positioning accuracy and anti-interference capability, but because mobile service robots often run in relatively complex scenes, the application of lidar is limited. For example, in crowded scenes the feature frames identified by the lidar undergo large spatio-temporal changes, which introduces a large error into the pose determined by the mobile service robot, and the lidar's field of view is blocked by the crowd, limiting the data the lidar can collect.
  • Mobile service robots need a high-precision localization method when operating in this densely populated scene.
  • the service quality of robots mainly depends on accurate location information.
  • Mobile robots mostly use lidar sensors to measure location information, but this is limited by the scene: when the scene characteristics vary greatly, the robot cannot be precisely located.
  • the technical solution of the present application improves the accuracy of robot positioning by performing mapping and positioning based on depth data in a preset direction whose features remain basically unchanged over short periods of time (for example, indoor ceiling features).
  • S1010 Collect the current pose of the robot and depth data in at least one preset direction.
  • At least one preset direction may be at least one of a horizontal direction and a vertical direction.
  • Depth data in at least one preset direction can be collected based on sensors preset on the robot.
  • a feature point cloud is a point cloud that characterizes features in the depth data.
  • S1030 Determine an obstacle score of the feature point cloud in a preset global grid map, wherein the preset global grid map is formed based on historical point clouds.
  • the preset global grid map may include one or more grids, and each grid may include a probability value that the robot is located in the grid.
  • the preset global grid map is divided into obstacle areas, barrier-free areas and unknown areas.
  • each point in the feature point cloud can be mapped to a grid in the preset global grid map, the probability corresponding to the grid is used as the obstacle score of the point, and the sum of the obstacle scores of all the points in the feature point cloud that are mapped into the obstacle area is used as the obstacle score of the feature point cloud.
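  • A minimal sketch of this scoring step (the grid layout, area labels, and resolution are assumptions of the sketch):

```python
import numpy as np

OBSTACLE, FREE, UNKNOWN = 0, 1, 2  # illustrative area labels

def obstacle_score(points_world, prob_grid, area_grid, resolution=0.05):
    """Map each feature point to a grid cell and sum the probability values
    of the cells that fall inside the obstacle area. `prob_grid` holds
    per-cell probabilities; `area_grid` holds area labels of the same shape."""
    cells = np.floor(points_world[:, :2] / resolution).astype(int)
    score = 0.0
    for cx, cy in cells:
        if 0 <= cy < prob_grid.shape[0] and 0 <= cx < prob_grid.shape[1]:
            if area_grid[cy, cx] == OBSTACLE:   # only obstacle-area cells count
                score += prob_grid[cy, cx]
    return score
```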
  • the positioning information can reflect the position information of the robot at the current moment, and can include the coordinates of the robot in the world coordinate system and the angle between the robot and the X-axis of the world coordinate system.
  • the obstacle score of the feature point cloud can be used as the constraint condition of the current pose
  • the current pose can be optimized on the basis of the above obstacle score
  • the optimized current pose can be used as the positioning information of the robot. It can be understood that the current pose can be optimized by the nonlinear least squares method or the Lagrange multiplier method.
  • the technical solution of the present application improves the accuracy of robot positioning by performing mapping and positioning based on depth data in a preset direction within a short period of time.
  • Fig. 19 is a schematic flowchart of another positioning method provided by the embodiment of the present application. Referring to FIG. 19 , the method provided by the embodiment of the present application includes the following steps.
  • S1110 Collect the current pose of the robot, depth data in at least one preset direction, and infrared data in at least one preset direction.
  • the current pose can be information representing the current position and state of the robot, and can include the position coordinates in the world coordinate system and the angle between the robot direction and the X-axis of the world coordinate system.
  • Fig. 20 is an example diagram of a pose provided by the embodiment of the present application. Referring to Fig. 20, the current pose of the robot in the embodiment of the present application may contain three unknown quantities, namely x, y and yaw, where x represents the abscissa of the robot in the world coordinate system, y represents the ordinate in the world coordinate system, and yaw represents the angle between the robot direction and the X-axis of the world coordinate system.
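  • For concreteness, the pose (x, y, yaw) and its homogeneous transform can be represented as in the following illustrative sketch (names are not fixed by the application):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float    # abscissa in the world coordinate system
    y: float    # ordinate in the world coordinate system
    yaw: float  # angle between robot heading and the world X-axis (radians)

    def matrix(self):
        """Homogeneous 2D transform corresponding to (x, y, yaw)."""
        c, s = np.cos(self.yaw), np.sin(self.yaw)
        return np.array([[c, -s, self.x],
                         [s,  c, self.y],
                         [0,  0, 1.0]])
```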
  • the preset direction can be a preset data collection direction, which can be set by the user or the service provider.
  • the preset direction can be any direction in space, for example, the vertical direction of the robot and the horizontal direction of 45 degrees upward of the robot.
  • the depth data can be data reflecting the distance from the object to the robot, and the depth data can be collected by sensors installed on the robot.
  • the infrared data can be the data collected by the infrared sensor, and the infrared data can be generated by collecting objects in the preset direction of the robot by the infrared sensor, and can indicate the distance between the robot and the object.
  • the current pose of the robot can be acquired by using the sensors arranged on the robot, for example, the moving distance of the robot can be collected by inertial navigation or displacement sensors to determine the current pose of the robot.
  • Depth data and infrared data can be collected in a preset direction of the robot, for example, depth data of obstacles can be collected in the direction of the top of the robot using a Time Of Flight (TOF) camera.
  • the robot can collect depth data and infrared data in multiple preset directions to further improve the accuracy of robot positioning.
  • the edge point can be a position point at an edge position in the image composed of the infrared data, and edge points can be detected in the infrared data through differential edge detection, Roberts operator edge detection, Sobel edge detection, Laplacian edge detection, Prewitt operator edge detection, and the like.
  • the edge points can be extracted from the infrared data according to the edge detection method, and the coordinates of at least one edge point can be converted from two-dimensional coordinates to three-dimensional coordinates according to the depth value of the depth data.
  • the depth value corresponding to each edge point can be extracted from the depth data.
  • the depth value corresponding to each edge point can be used as the third dimension of the three-dimensional coordinates of each edge point.
  • the edge points after coordinate transformation can be composed into a feature point cloud.
  • the obstacle score may be the total probability score accumulated over the grids in the obstacle area to which edge points are mapped, when all the edge points in the feature point cloud are mapped to the preset global grid map.
  • the preset global grid map can be composed of historical point clouds, which can reflect the situation in the space where the robot is located.
  • the preset global grid map can include one or more grids, and each grid can include grid probability value.
  • the preset global grid map can be gradually improved during the movement of the robot.
  • the obstacle score may be the sum of the probability values that the edge point is located at the obstacle position in the preset global grid map.
  • FIG. 21 is an example diagram of a preset global grid map provided by an embodiment of the present application. Referring to Fig. 21, the preset global grid map can include three parts: an unknown area, an obstacle area, and an unobstructed area, and the obstacle score can be determined by summing the probability values of the grids in the obstacle area to which the feature point cloud is mapped.
  • all the edge points in the feature point cloud can be mapped to the preset global grid map one by one, the probability value of the grid where each edge point lands can be determined, and the sum of the probability values of the grids in the obstacle area where edge points land is used as the obstacle score of the feature point cloud in the preset global grid map.
  • the positioning information can reflect the position information of the robot at the current moment, and can include the coordinates of the robot in the world coordinate system and the angle between the robot and the X-axis of the world coordinate system.
  • the obstacle score of the feature point cloud can be used as the constraint condition of the current pose
  • the current pose can be optimized on the basis of the above obstacle score
  • the optimized current pose can be used as the positioning information of the robot.
  • the way to optimize the current pose can be nonlinear least square optimization and Lagrange multiplier optimization.
  • enhancing the service quality of the robot helps to improve user experience.
  • Fig. 22 is a schematic flowchart of another positioning method provided by the embodiment of the present application, and the embodiment of the present application is described on the basis of the foregoing embodiments. Referring to Fig. 22, the method provided by the embodiment of the present application includes the following steps.
  • the world coordinate system can be the absolute coordinate system of the robot, and the origin of the world coordinate system can be determined when the robot is initialized.
  • the current pose may include three elements, the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system, and the current pose may be acquired by sensors set in the robot.
  • the data collected by the sensor of the robot can be obtained, and based on the data collected by the sensor, the abscissa and ordinate of the robot in the world coordinate system at the current moment and the angle between the robot direction and the X-axis of the world coordinate system are determined as the current pose.
  • Use at least one depth data sensor preset on the robot to collect depth data in at least one preset direction, and at least one infrared data sensor to collect infrared data in at least one preset direction, wherein the at least one preset direction includes at least one of a horizontal direction and a vertical direction.
  • the depth data sensor can be a device that collects depth data. It can collect the distance from the robot to the object to be collected, and can perceive the depth of objects in space.
  • the depth data sensor can include structured light depth sensors, camera array depth sensors, and time-of-flight depth sensors.
  • an infrared data sensor can be a device that generates a thermal image of the object to be collected.
  • Infrared data sensors can include infrared imagers and time-of-flight cameras.
  • the depth data sensor and the infrared data sensor on the robot can be integrated into one device; for example, a TOF camera can directly collect both the depth data and the infrared data of the object to be collected.
  • At least one preset direction may be at least one of a horizontal direction and a vertical direction of the robot.
  • a depth data sensor and an infrared data sensor are pre-installed on the robot, and the depth data sensor and the infrared data sensor can be used to respectively collect horizontal or vertical data of the robot.
  • multiple depth data sensors and multiple infrared data sensors can be preset on the robot, and the preset data collection directions of the depth data sensors can differ, as can those of the infrared data sensors. The preset direction may be set by a user or a service provider; for example, it may be a direction convenient for collecting indoor ceiling features, and may include directions such as vertically upward or 45 degrees above horizontal.
  • the noise in the infrared data can be filtered out.
  • the filtering method may include Gaussian filtering, bilateral filtering, median filtering, mean filtering and other methods.
  • the Gaussian filter can be a linear smoothing filter, which can eliminate Gaussian noise during image processing and achieve noise reduction of image data.
  • a template (convolution kernel or mask) can be used to scan each pixel in the infrared data, and the weighted average gray value of the neighborhood determined by the template can replace the value of the template's center pixel, achieving noise filtering of the infrared data.
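  • A one-call illustration of the template-based Gaussian smoothing described above (kernel size and sigma are illustrative choices):

```python
import cv2

def denoise_infrared(infrared_image, ksize=5, sigma=1.0):
    """Slide a Gaussian template over the image; each pixel is replaced by
    the weighted average of its neighborhood, suppressing Gaussian noise."""
    return cv2.GaussianBlur(infrared_image, (ksize, ksize), sigma)
```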
  • edge points may be extracted from the image formed by the infrared data. All the edge points in the infrared data can be extracted, or an edge point can be extracted at a certain distance to further improve the efficiency of robot positioning.
  • the way of extracting edge points can be through image recognition, for example, the points in the image formed by the infrared data that have a large color difference from other surrounding pixel points can be regarded as edge points.
  • the infrared data can be processed according to the Canny edge detection algorithm: the infrared data is sequentially subjected to Gaussian filtering to reduce noise, the magnitude and direction of the gradient are calculated using the finite difference of the first-order partial derivatives, non-maximum suppression is performed on the gradient magnitude, and a double-threshold algorithm is used to detect and connect edges, obtaining the edge points in the infrared data.
  • the Canny edge detection algorithm can be an algorithm for detecting the edges of an image; for example, it can include steps such as Gaussian filtering of the image to reduce noise, calculating the magnitude and direction of the gradient using the finite difference of the first-order partial derivatives, performing non-maximum suppression, and detecting and connecting edges using a double-threshold algorithm. Methods such as Sobel edge detection, Prewitt edge detection, Roberts edge detection, and Marr-Hildreth edge detection can also be used to detect edge points in the infrared data.
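  • As a non-limiting illustration using OpenCV, the sketch below extracts edge-point pixel coordinates with Canny; the thresholds and subsampling stride are illustrative, and an 8-bit single-channel image is assumed:

```python
import cv2
import numpy as np

def extract_edge_points(infrared_image, low=50, high=150, stride=4):
    """Canny edge detection on the (already denoised) infrared image; returns
    the (u, v) pixel coordinates of edge points, subsampled by `stride` to
    reduce the later point-cloud workload."""
    edges = cv2.Canny(infrared_image, low, high)
    vs, us = np.nonzero(edges)          # rows are v, columns are u
    return np.stack([us, vs], axis=1)[::stride]
```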
  • the camera model can be a camera model established by the robot for three-dimensional transformation, and can be used to correct the coordinates of edge points using depth data to obtain a feature point cloud with less distortion.
  • the depth data may include depth information of at least one edge point, and the depth information may serve as third-dimensional information corresponding to the three-dimensional coordinates of the edge point.
  • the camera model may include one or more of an Euler camera model, a UVN camera model, a pinhole camera model, a fisheye camera model, and a wide-angle camera model.
  • the coordinates of each edge point can be converted using the preset camera model as the reference system, so that the two-dimensional coordinates of the edge points and the world coordinate system share the same reference; the depth information corresponding to each edge point can be determined from the depth data, the two-dimensional coordinates and depth information of each edge point can be combined into three-dimensional coordinates, and the multiple edge points with three-dimensional coordinates form the feature point cloud.
  • the conversion of the two-dimensional coordinates of the edge points to the three-dimensional point cloud can be realized as follows:

P = Z · K⁻¹ · [u, v, 1]ᵀ

where Z is the depth information of the edge point, (u, v) is the two-dimensional coordinates of the edge point, K is the internal parameter matrix of the camera, which can be determined by the camera model, and P is the coordinate of the three-dimensional point cloud (feature point cloud).
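  • Transcribed into code, the back-projection above can look like the following sketch (the depth-image indexing convention and the validity check are assumptions of the sketch):

```python
import numpy as np

def back_project(edge_points_uv, depth_image, K):
    """Lift each 2D edge point to 3D via P = Z * K^-1 * [u, v, 1]^T, with Z
    read from the depth image at that pixel."""
    K_inv = np.linalg.inv(K)
    points = []
    for u, v in edge_points_uv:
        Z = float(depth_image[v, u])
        if Z <= 0.0:                    # no valid depth reading at this pixel
            continue
        points.append(Z * (K_inv @ np.array([u, v, 1.0])))
    return np.array(points)
```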
  • Coordinate system transformation may be performed on the coordinates of at least one edge point in the feature point cloud, so that the coordinates of at least one edge point are based on the world coordinate system.
  • Fig. 23 is an example diagram of a coordinate transformation provided by the embodiment of the present application. See Figure 23.
  • the collected edge points are located in the robot coordinate system, and the camera model corresponds to a coordinate system.
  • the depth data can be added to the edge points in the robot coordinate system according to the coordinate system of the camera model to achieve an undistorted or low-distortion coordinate transformation, after which the three-dimensional coordinates are converted to world coordinates.
  • the Proj() function maps the three-dimensional coordinates to two-dimensional coordinates; T is the current pose of the robot, represented by x, y and yaw, and can be expressed as follows:

T = [ cos(yaw)  −sin(yaw)  x ]
    [ sin(yaw)   cos(yaw)  y ]
    [ 0          0         1 ]
  • after determining the coordinates of at least one edge point in the world coordinate system, the edge points can be mapped in sequence, according to their coordinates, to target grids of the preset global grid map. For example, different grids in the preset global grid map have different coordinate ranges; each edge point can be mapped to the grid whose coordinate range contains it, and a grid with an edge point mapped to it can be marked as a target grid.
  • the target grid to which an edge point is mapped is a grid within the obstacle area in the preset global grid map, acquire the probability value of the target grid as the obstacle score of the one edge point.
  • Grids within obstacle areas can be identified by information.
  • the target grids with edge point mappings in the preset global grid map can be checked; if a target grid is within the obstacle area, the probability value stored in the target grid is used as the obstacle score of the corresponding edge point.
  • if the target grid is not within the obstacle area, the probability value corresponding to the edge point may not be counted; the edge point can be deleted, or the probability value stored in the grid corresponding to the edge point simply not acquired.
  • the obstacle scores of all edge points in the feature point cloud can be counted, and the sum of the obstacle scores can be used as the obstacle score of the feature point cloud.
  • the residual function for optimizing the current pose can be constructed according to the current pose and the obstacle score of the feature point cloud, where the residual function can be the functional relationship used to optimize the robot pose and can represent how well the robot's current pose matches the map.
  • the residual function can be an optimization function formed as a nonlinear least squares problem, for example:

e1 = Σ_{k=1..n} (1 − M(T, p_k))²

where e1 is the residual, p_k is the kth edge point in the feature point cloud, M(T, p_k) is the obstacle score obtained by projecting p_k onto the preset global grid map when the robot pose is T, and n is the number of edge points in the feature point cloud.
  • the parameter information in the residual function is the value of at least one parameter among the abscissa and ordinate of the current pose and the angle between the robot direction and the X-axis of the world coordinate system.
  • the value of the current pose in the residual function can be adjusted to minimize the result value of the residual formula, and the methods for adjusting the current pose can include gradient descent method, Newton method, and quasi-Newton method.
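  • As an illustrative, non-limiting sketch of this adjustment, the Python code below minimizes the residuals 1 − M(T, p_k) over (x, y, yaw) with a least-squares solver; the bilinear map interpolation (which gives the residual a usable gradient), function names, and grid conventions are assumptions of the sketch rather than details fixed by the application:

```python
import numpy as np
from scipy.optimize import least_squares

def bilinear(grid, xs, ys):
    """Bilinearly interpolate grid probabilities at fractional cell coords."""
    x0 = np.clip(np.floor(xs).astype(int), 0, grid.shape[1] - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, grid.shape[0] - 2)
    wx, wy = xs - x0, ys - y0
    top = grid[y0, x0] * (1 - wx) + grid[y0, x0 + 1] * wx
    bot = grid[y0 + 1, x0] * (1 - wx) + grid[y0 + 1, x0 + 1] * wx
    return top * (1 - wy) + bot * wy

def optimize_pose(pose0, points_body, prob_grid, resolution=0.05):
    """Minimize sum_k (1 - M(T, p_k))^2 over T = (x, y, yaw)."""
    def residuals(pose):
        x, y, yaw = pose
        c, s = np.cos(yaw), np.sin(yaw)
        world = points_body @ np.array([[c, s], [-s, c]]) + np.array([x, y])
        m = bilinear(prob_grid, world[:, 0] / resolution,
                     world[:, 1] / resolution)
        return 1.0 - m                  # one residual per feature point
    return least_squares(residuals, np.asarray(pose0, dtype=float)).x
```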
  • the positioning pose information may be pose information used for robot positioning, and the positioning pose information may indicate the most likely current state of the robot.
  • when the result value of the residual function is the smallest, it can be determined that the current pose is optimally optimized, and the abscissa and ordinate of the adjusted current pose, together with the angle between the robot direction and the X-axis of the world coordinate system, are output as the positioning pose information.
  • in summary: the infrared data is processed into a feature point cloud using the depth data; the coordinates of at least one edge point in the feature point cloud are converted to the world coordinate system; each edge point is mapped to a target grid of the preset global grid map; when a target grid is an obstacle position, its probability value is acquired as the obstacle score of the corresponding edge point; the sum of the obstacle scores of the edge points is used as the obstacle score of the feature point cloud; a residual function is constructed from the obstacle score and the current pose; the value of the current pose in the residual function is adjusted so that the result value of the residual function is the smallest; and the current pose at the minimum is used as the positioning information of the robot.
  • forming the feature point cloud from the edge points corresponding to the infrared data reduces the amount of data calculation without changing the accuracy of the positioning information and improves the efficiency of determining the positioning information; optimizing the current pose with the obstacle score improves the accuracy of the robot's position information, enhances the service quality of the robot, and helps to improve user experience.
  • the method further includes: optimizing the positioning information according to the moving speed and obstacle score of the robot.
  • the moving speed of the robot can also be obtained, and the current pose can be optimized by using the moving speed and the obstacle score to obtain positioning information to further improve the positioning accuracy of the robot.
  • the robot's moving speed can be collected and a constraint condition constructed based on it; under this constraint, a nonlinear least squares problem can be constructed for the positioning information obtained after optimizing the current pose with the obstacle score, and the problem can be solved by the gradient descent method. When the result value of the nonlinear least squares problem is the smallest, the final positioning information is obtained.
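  • A sketch of how the moving-speed constraint can be stacked onto the map residuals; the constant-velocity prediction, the names, and the assumption that `map_residuals(pose)` returns the 1 − M(T, p_k) vector from the earlier sketch are all illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_with_speed(pose0, pose_last, velocity, dt, map_residuals):
    """Stack the map-matching residuals with a motion term penalizing
    deviation from the speed-predicted pose, then solve jointly.
    `velocity` is (vx, vy, vyaw)."""
    predicted = np.asarray(pose_last) + np.asarray(velocity) * dt

    def residuals(pose):
        e1 = map_residuals(pose)                 # first residual term
        e2 = np.asarray(pose) - predicted        # second residual term
        return np.concatenate([e1, e2])

    return least_squares(residuals, np.asarray(pose0, dtype=float)).x
```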
  • Fig. 24 is a flowchart of another positioning method provided by the embodiment of the present application. The implementation of the present application is described on the basis of the above embodiment. Referring to Fig. 24, the method provided by the embodiment of the present application includes the following steps.
  • S1310. Collect the current pose of the robot, depth data in at least one preset direction, and infrared data in at least one preset direction.
  • the first residual term corresponding to the nonlinear least squares problem can be constructed based on the current pose and the obstacle score of the feature point cloud, for example:

e1 = Σ_{k=1..n} (1 − M(T, p_k))²

where e1 is the residual, p_k is the kth edge point in the feature point cloud, M(T, p_k) is the obstacle score obtained by projecting p_k onto the grid map when the robot pose is T, and n is the number of edge points in the feature point cloud.
  • the predicted pose may be the pose of the robot determined according to the moving speed.
  • the moving position of the robot may be determined through the moving speed, and the pose of the robot may be determined according to the moving position as the predicted pose.
  • the moving speed of the robot can be collected and used to generate the robot's position; the predicted pose can be determined from that position, and the difference between the predicted pose and the historical pose at the previous moment is used as the second residual term, where the historical pose is the pose information determined by the robot at the previous moment, including the coordinates and the angle between the robot's direction and the abscissa axis of the world coordinate system.
  • at least one parameter among the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system, in the predicted pose and/or the current pose, can be adjusted so that the sum of the first residual term and the second residual term is the smallest.
  • the adjustment methods can include gradient descent method, Newton method and quasi-Newton method, etc.
  • the current pose of the robot can thus be optimized, and the adjusted predicted pose and/or current pose information can be used as the robot's positioning pose information; the positioning pose information can be used as the positioning information in the robot positioning process, and includes the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system.
  • the final pose of the robot can be determined according to the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system in the positioning information. Based on the final pose, the probability values of multiple grids can be determined and added to the corresponding grids in the preset global grid map, so as to update the preset global grid map.
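  • A minimal sketch of this map update, blending observed cells toward a hit probability (the blend factor and hit probability are illustrative stand-ins for a full log-odds occupancy update):

```python
import numpy as np

def update_grid(prob_grid, points_world, resolution=0.05, hit_prob=0.9,
                blend=0.3):
    """Fold a newly observed feature point cloud into the global grid: each
    cell a point falls into is nudged toward `hit_prob`."""
    cells = np.floor(points_world[:, :2] / resolution).astype(int)
    for cx, cy in cells:
        if 0 <= cy < prob_grid.shape[0] and 0 <= cx < prob_grid.shape[1]:
            prob_grid[cy, cx] = (1 - blend) * prob_grid[cy, cx] + blend * hit_prob
    return prob_grid
```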
  • in summary: the processed edge points constitute a feature point cloud; the obstacle score of the feature point cloud in the preset global grid map is determined; the first residual term is constructed from the obstacle score and the current pose; the second residual term is constructed from the predicted pose determined by the moving speed and the historical pose; the current pose and the predicted pose are adjusted so that the sum of the first residual term and the second residual term is the smallest; the abscissa and ordinate of the current pose and the predicted pose at the minimum, together with the angle between the robot direction and the X-axis of the world coordinate system, are used as the positioning pose information of the robot; the positioning pose information is used as the positioning information, and the pose corresponding to the positioning information is updated into the preset global grid map, realizing accurate acquisition of the robot's positioning information.
  • FIG. 25 is an example diagram of a positioning method provided in the embodiment of the present application.
  • the robot positioning and mapping method based on a top-view TOF camera may include the following steps:
  • Step 1 Obtain the top-view feature point cloud:
  • the edge points extracted in step 2 are coordinate points at the two-dimensional image level; the depth information of at least one edge point in the depth data and the camera model are used to obtain the three-dimensional feature point cloud.
  • the conversion method is as follows:

P = Z · K⁻¹ · [u, v, 1]ᵀ

where Z is the depth information of the edge point, (u, v) is the two-dimensional coordinates of the edge point, K is the internal parameter matrix of the camera, which can be determined by the depth camera model, and P is the coordinate of the three-dimensional point cloud.
  • when the robot performs positioning and mapping, it can use the historical feature point cloud converted to the world coordinate system to construct a grid map, and match the newly generated feature point cloud against the grid map.
  • the matching process can include:
  • each edge point in the feature point cloud is mapped to the grid map, the probability that the grid to which the edge point is mapped is an obstacle is taken as the matching score (obstacle score) of the edge point, and the sum of the matching scores of all points is recorded as the score of the feature point cloud:

score = Σ_k s_k, with s_k = p_cell

where s is the matching score of an edge point and p_cell is the occupancy probability of the grid to which the edge point is mapped.
  • the matching score is then folded into the residual used for optimization:

e1 = Σ_{k=1..n} (1 − M(T, p_k))²

where e1 is the residual, p_k is the kth edge point of the feature point cloud, M(T, p_k) is the obstacle score obtained by projecting p_k onto the grid map when the robot pose is T, and n is the number of edge points in the feature point cloud.
  • Step 3: Use the feature point cloud from step 2 and the vehicle speed (the moving speed of the robot) together as constraints to construct a nonlinear least squares problem and jointly optimize the pose of the robot, including the following steps:
  • the residual terms of the joint problem are:

e1 = Σ_{k=1..n} (1 − M(T, p_k))²
e2 = ‖L − L_last‖²

where e1 is the residual, p_k is the kth edge point of the feature point cloud, M(T, p_k) is the obstacle score obtained by projecting p_k onto the grid map when the robot pose is T, n is the number of edge points in the feature point cloud, L is the pose of the robot at the current moment pushed forward by the vehicle speed, and L_last is the pose of the robot obtained at the previous moment; the joint optimization minimizes e1 + e2.
  • Step 4: Transform the feature point cloud obtained in step 2 into the world coordinate system using the optimized pose, and update the grid map based on the feature point cloud.
  • Fig. 26 is a schematic flow chart of another positioning method provided by the embodiment of the present application. This embodiment is applicable to robot positioning in crowded scenes. The method can be performed by a positioning device, which can be implemented by hardware and/or software. Referring to Fig. 26, the positioning method provided by the embodiment of the present application includes the following steps.
  • the current pose can be information representing the current position and state of the robot, and can include the position coordinates in the world coordinate system and the angle between the robot direction and the X-axis of the world coordinate system.
  • the current pose of the robot in the embodiment of the present application may contain three unknown quantities, namely x, y and yaw, where x represents the abscissa of the robot in the world coordinate system, y represents the ordinate in the world coordinate system, and yaw indicates the angle between the robot direction and the X-axis of the world coordinate system.
  • the preset direction can be a preset data collection direction, which can be set by the user or the service provider.
  • the preset direction can be any direction in space, for example, the vertical direction of the robot and the horizontal direction of 45 degrees upward of the robot.
  • the depth data can be the data reflecting the distance from the object to the robot.
  • the depth data can include the position information of the object in space and the distance from the depth data acquisition device.
  • the depth data can be collected by sensors installed on the robot.
  • the current pose of the robot can be collected using sensors installed on the robot.
  • inertial navigation or displacement sensors can be used to collect the moving distance to determine the current pose of the robot.
  • Depth data may be collected in at least one preset direction of the robot, for example, depth data of obstacles may be collected in the direction of the top of the robot or in a 45-degree horizontal direction. It can be understood that the robot collects depth data in multiple preset directions to further improve the accuracy of robot positioning.
  • the plane can be one or more planes included in the point cloud composed of depth data.
  • the plane can be divided and generated along the vertical or horizontal direction in the point cloud composed of depth data.
  • the outer contour points can be the set of position points that constitute the outer contour of the plane; representing all the position points of the plane by its outer contour points can reduce the amount of data used by robot positioning.
  • the feature point cloud can be a collection of position points reflecting the characteristics of the depth data.
  • the depth data can be divided into one or more planes in different directions, and the position points at the outer contour points in each plane can be extracted, and the extracted multiple position points can form a feature point cloud, which can reflect the depth data through the feature point cloud Characteristics.
  • all the position points in the feature point cloud can be mapped to the preset global grid map one by one, the probability value of the grid where each position point lands can be determined, and the sum of the probability values of the grids in the obstacle area where position points land is used as the obstacle score of the feature point cloud in the preset global grid map.
  • the positioning information can reflect the position information of the robot at the current moment, and can include the coordinates of the robot in the world coordinate system and the angle between the robot and the X-axis of the world coordinate system.
  • the obstacle score of the feature point cloud can be used as the constraint condition of the current pose
  • the current pose can be optimized on the basis of the above obstacle score
  • the optimized current pose can be used as the positioning information of the robot.
  • the way to optimize the current pose can be nonlinear least square optimization and Lagrange multiplier optimization.
  • by acquiring the current pose of the robot and the depth data in the preset direction, extracting the outer contour points of at least one plane from the depth data, forming a feature point cloud from the outer contour points, determining the obstacle score of the feature point cloud in the preset global grid map, and using the obstacle score to optimize the current pose to obtain the robot's positioning information, accurate acquisition of the robot's position information is realized, the impact of complex environments on pose determination is reduced, the service quality of the robot can be enhanced, and user experience improved.
  • Fig. 27 is a flowchart of another positioning method provided by the embodiment of the present application.
  • the embodiment of the present application is described on the basis of the above-mentioned embodiments. Referring to Fig. 27, the method provided by the embodiment of the present application specifically includes the following steps:
  • the world coordinate system can be the absolute coordinate system of the robot, and the origin of the world coordinate system can be determined when the robot is initialized.
  • the current pose may include three elements, the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system, and the current pose may be acquired by sensors set in the robot.
  • the data collected by the sensor of the robot can be obtained, and based on the data collected by the sensor, the abscissa and ordinate of the robot in the world coordinate system at the current moment and the angle between the robot direction and the X-axis of the world coordinate system are determined as the current pose.
  • S1520 Use at least one depth data sensor preset on the robot to collect depth data in the at least one preset direction, where the at least one preset direction includes at least one of a horizontal direction and a vertical direction.
  • a depth data sensor can be a device that collects depth data. It can collect the distance from the robot to the object to be collected, and can perceive the depth of objects in space.
  • the depth data sensor can include a structured light depth sensor, a camera array depth sensor, and a time-of-flight depth sensor. At least one preset direction may be at least one of a horizontal direction and a vertical direction of the robot.
  • a depth camera is preset on the robot; the data collection direction of the depth camera can be the horizontal direction or the vertical direction of the robot, and can be a preset direction set by the user or the service provider.
  • the robot can use the depth camera to collect the depth data of the object in the corresponding preset direction.
  • multiple depth data sensors can be preset on the robot, and the preset data collection directions of the multiple depth data sensors can differ. The preset directions can be set by users or service providers; for example, directions convenient for collecting ceiling features, such as vertically upward or 45 degrees above horizontal.
  • the reliability of depth data is further improved by means of multi-direction and multi-data collection sources to enhance the accuracy of positioning information.
  • the noise in the infrared data can be filtered out.
  • the filtering method may include Gaussian filtering, bilateral filtering, median filtering, mean filtering and other methods.
  • the Gaussian filter can be a linear smoothing filter, which can eliminate Gaussian noise during image processing and achieve noise reduction of image data.
  • a template (convolution kernel or mask) can be used to scan each pixel in the infrared data, and the weighted average gray value of the neighborhood determined by the template can replace the value of the template's center pixel, achieving noise filtering of the infrared data.
  • the depth data is composed of position point information and depth information.
  • the position point information can form an image in a plane.
  • the depth information can be the depth distance between the position point corresponding to the position point information and the acquisition device.
  • the depth data can be a depth image, in which the three dimensions of each pixel point are the abscissa, the ordinate, and the depth information, respectively.
  • the camera model can be a camera model established by a robot for three-dimensional conversion, and can be used to correct the coordinates of edge points using depth data to obtain a feature point cloud with less distortion.
  • the depth data can include the depth information of multiple edge points, and the depth information may be used as the third-dimensional information corresponding to the three-dimensional coordinates of the edge points.
  • the camera model may include one or more of an Euler camera model, a UVN camera model, a pinhole camera model, a fisheye camera model, and a wide-angle camera model.
  • the position point information and depth information in the depth data can be extracted, one or more three-dimensional position points can be determined in space based on them, and the camera model preset on the robot can be used to convert the obtained 3D position points, reducing the distortion of the position points caused by the depth data acquisition device and improving the accuracy of positioning; the converted 3D position points can be used to construct a 3D point cloud.
  • the conversion from depth data to the 3D point cloud can be realized as follows:

P = Z · K⁻¹ · [u, v, 1]ᵀ

where Z is the depth information of the position point, (u, v) is the position point information (two-dimensional coordinates) of the depth data, K is the camera internal parameter matrix, which can be determined by the camera model, and P is the coordinate of the 3D point cloud.
  • the depth information may be the depth information included in the depth data and used in the process of segmenting the planes, and the preset normal vector information may include the normal and normal vector information used for segmenting the 3D point cloud.
  • the preset normal vector information can be preset inside the system, or can be input by the user.
  • the depth information and the preset normal vector information can be obtained, and the 3D point cloud can be divided into multiple planes according to the depth information and the preset normal vector information.
  • the division of the plane can be based on the normal vector information, and the maximum iteration depth of the plane division does not exceed the acquired depth.
  • the SACMODEL_PLANE model in the Point Cloud Library (PCL) can be used to divide the obtained 3D point cloud: the depth and normal vector information together with the 3D point cloud can be input to the SACMODEL_PLANE model to obtain multiple planes.
  • the outer contour points in each plane may be selected to represent each plane, and the set of all extracted outer contour points may be used as a feature point cloud.
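  • The text names PCL's SACMODEL_PLANE for plane segmentation; as a language-consistent illustration, the sketch below implements the same RANSAC plane-fitting idea in Python with NumPy (the threshold and iteration count are illustrative):

```python
import numpy as np

def ransac_plane(points, threshold=0.02, iterations=200, seed=0):
    """RANSAC plane fit: repeatedly fit a plane through 3 random points and
    keep the plane with the most inliers; returns the inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(b - a, c - a)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - a) @ normal)
        inliers = dist < threshold
        if inliers.sum() > best.sum():
            best = inliers
    return best
```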
  • the coordinates of all the position points in the feature point cloud can be transformed into a coordinate system, so that the coordinates of the feature point cloud are based on the world coordinate system.
  • the extracted outer contour points are located in the robot coordinate system, and the camera model corresponds to a coordinate system.
  • the depth data can be added to the outer contour points in the robot coordinate system according to the coordinate system of the camera model to achieve an undistorted or low-distortion coordinate conversion, after which the three-dimensional coordinates are converted to world coordinates.
  • the conversion process can be expressed by the following formula:

P_w = T · P

where T is the current pose of the robot, represented by x, y and yaw, and can be formulated as follows:

T = [ cos(yaw)  −sin(yaw)  x ]
    [ sin(yaw)   cos(yaw)  y ]
    [ 0          0         1 ]
  • all position points can be mapped in sequence, according to their coordinates, to target grids of the preset global grid map. For example, different grids in the preset grid map have different coordinate ranges; each position point can be mapped to the grid whose coordinate range contains it, and a grid with a position point mapped to it can be marked as a target grid.
  • Step S1580: if the target grid to which a position point is mapped is a grid within the obstacle area in the preset global grid map, acquire the probability value of the target grid as the obstacle score of that position point.
  • Grids within obstacle areas can be identified by information.
  • the target grids with position point mappings in the preset global grid map can be checked; if a target grid is within the obstacle area, the probability value stored in the target grid is used as the obstacle score of the corresponding position point.
  • if the target grid is not within the obstacle area, the probability value corresponding to the position point may not be counted; the position point can be deleted, or the probability value stored in the grid corresponding to the position point simply not acquired.
  • the obstacle scores of all position points in the feature point cloud can be counted, and the sum of the obstacle scores can be used as the obstacle score of the feature point cloud.
  • the residual function for optimizing the current pose can be constructed according to the current pose and the obstacle score of the feature point cloud, where the residual function can be the functional relationship used to optimize the robot pose and can represent how well the robot's current pose matches the map.
  • the residual function can be an optimization function formed as a nonlinear least squares problem, for example:

e1 = Σ_{k=1..m} (1 − M(T, p_k))²

where e1 is the residual, p_k is the kth position point in the feature point cloud, M(T, p_k) is the obstacle score obtained by projecting p_k onto the preset global grid map when the robot pose is T, and m is the number of position points in the feature point cloud.
  • the parameter information in the residual function is the value of at least one parameter among the abscissa and ordinate of the current pose and the angle between the robot direction and the X-axis of the world coordinate system.
  • the value of the current pose in the residual function can be adjusted to minimize the result value of the residual formula, and the methods for adjusting the current pose can include gradient descent method, Newton method, and quasi-Newton method.
  • Step S15120: take the abscissa and ordinate of the current pose when the result value is the smallest, and the angle between the robot direction and the X-axis of the world coordinate system, as the positioning pose information.
  • the positioning pose information may be pose information used for robot positioning, and the positioning pose information may indicate the most likely current state of the robot.
  • when the result value of the residual function is the smallest, it can be determined that the current pose is optimally optimized, and the abscissa and ordinate of the adjusted current pose, together with the angle between the robot direction and the X-axis of the world coordinate system, are used as the positioning pose information.
  • selecting the outer contour points of the planes in the 3D point cloud corresponding to the depth data to form the feature point cloud reduces the amount of data calculation without changing the accuracy of the positioning information and improves the efficiency of determining the positioning information; optimizing the current pose with the obstacle score improves the accuracy of the robot's position information, can enhance the service quality of the robot, and helps improve user experience.
  • the method further includes: optimizing the positioning information according to the moving speed and obstacle score of the robot.
  • the moving speed of the robot can also be obtained, and the current pose can be optimized by using the moving speed and the obstacle score to obtain positioning information to further improve the positioning accuracy of the robot.
  • the robot's moving speed can be collected and a constraint condition constructed based on it; under this constraint, a nonlinear least squares problem can be constructed for the positioning information obtained after optimization with the obstacle score, and the problem can be solved by the gradient descent method. When the result value of the nonlinear least squares problem is the smallest, the final positioning information is obtained.
  • Fig. 28 is a flow chart of another positioning method provided by the embodiment of the present application. The implementation of the present application is described on the basis of the above embodiments. Referring to Fig. 28, the method provided by the embodiment of the present application includes the following steps.
  • the first residual term corresponding to the nonlinear least squares problem can be constructed based on the current pose and the obstacle score of the feature point cloud, for example:

e1 = Σ_{k=1..m} (1 − M(T, p_k))²

where e1 is the residual, p_k is the kth position point in the feature point cloud, M(T, p_k) is the obstacle score obtained by projecting p_k onto the grid map when the robot pose is T, and m is the number of position points in the feature point cloud.
  • the predicted pose may be the pose of the robot determined according to the moving speed.
  • the moving position of the robot may be determined through the moving speed, and the pose of the robot may be determined according to the moving position as the predicted pose.
  • the moving speed of the robot can be collected and used to generate the robot's position; the predicted pose can be determined from that position, and the difference between the predicted pose and the previous historical pose is used as the second residual term, where the historical pose is the pose information determined by the robot at the previous moment and may include the coordinates and the angle between the robot direction and the abscissa axis of the world coordinate system.
  • the parameter information includes the value of at least one parameter among the abscissa, ordinate and the angle between the robot direction and the X-axis of the world coordinate system.
  • at least one parameter among the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system, in the speed-predicted pose and/or the current pose, can be adjusted so that the sum of the first residual term and the second residual term is the smallest; the adjustment methods can include the gradient descent method, the Newton method, and the quasi-Newton method.
  • the current pose of the robot can thus be optimized, and the adjusted predicted pose and/or current pose information can be used as the robot's positioning pose information; the positioning pose information can be used as the positioning information in the robot positioning process, and includes the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system.
  • the final pose of the robot can be determined according to the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system in the positioning information. Based on the final pose, the probability values of multiple grids can be determined and added to the corresponding grids in the preset global grid map, so as to update the preset global grid map.
  • in summary: the first residual term is constructed based on the obstacle score and the current pose; the second residual term is constructed based on the predicted pose determined according to the moving speed and the historical pose; the current pose and the predicted pose are adjusted so that the sum of the first residual term and the second residual term is the smallest; the abscissa and ordinate of the current pose and the predicted pose at the minimum, together with the angle between the robot direction and the X-axis of the world coordinate system, are used as the positioning information of the robot; and the pose corresponding to the positioning information is updated into the preset global grid map. This realizes accurate acquisition of robot positioning information, reduces the impact of complex environments on pose determination, enhances the quality of robot service, and helps improve user experience.
  • FIG. 29 is an example diagram of another positioning method provided in the embodiment of the present application.
  • the robot positioning and mapping method based on a top-view TOF camera may include the following steps:
  • Step 1 Obtain the top-view feature point cloud:
  • the depth data can be back-projected as P = Z · K^(-1) · [u, v, 1]^T, where Z is the depth information of the position point, (u, v) are the two-dimensional coordinates of the depth data (position point information), K is the camera intrinsic matrix, which can be determined by the camera model, and P is the coordinate of the three-dimensional point cloud. An illustrative vectorized sketch follows.
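For illustration, the back-projection P = Z · K^(-1) · [u, v, 1]^T can be vectorized as in the sketch below; the pinhole model and the masking of zero-depth pixels are assumptions rather than details from the original.

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    # depth: (H, W) array of Z values from the top-view TOF camera.
    # K: 3x3 camera intrinsic matrix. Implements P = Z * K^-1 * [u, v, 1]^T
    # for every pixel; pixels with no return (Z == 0) are dropped.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    Z = depth.reshape(-1)
    rays = pix @ np.linalg.inv(K).T     # K^-1 [u, v, 1]^T for each pixel
    P = rays * Z[:, None]               # scale each ray by its depth
    return P[Z > 0]                     # (N, 3) feature point cloud
```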
  • when the robot performs positioning and mapping, it can use the historical feature point clouds converted into the world coordinate system to construct a grid map, and match the newly generated feature point cloud against that grid map.
  • the matching process can include:
  • the extracted feature point cloud P is converted into the world coordinate system through the robot pose T, and the Proj() function maps the three-dimensional coordinates to two-dimensional coordinates. T is the current pose of the robot, represented by x, y and yaw, and can be expressed as the homogeneous transform T = [[cos(yaw), −sin(yaw), x], [sin(yaw), cos(yaw), y], [0, 0, 1]].
  • each location point in the feature point cloud is mapped into the grid map, the probability that the grid cell to which the point is mapped is an obstacle is taken as the matching score (obstacle score) of that location point, and the sum of the matching scores of all points is recorded as the score of the feature point cloud: s = p_cell, where s is the matching score of a location point and p_cell is the occupancy probability of the grid cell to which the location point is mapped. A scoring sketch is given below.
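A minimal sketch of this scoring step, assuming the grid map is a two-dimensional array of occupancy probabilities with a known resolution and origin (these parameter names are illustrative):

```python
def feature_cloud_score(points_world, grid, resolution, origin):
    # Proj(): drop the height component and map each point's (x, y)
    # coordinates to a grid cell; the cell occupancy probability p_cell
    # is the matching (obstacle) score s of that point, and the cloud's
    # score is the sum of s over all points.
    score = 0.0
    for p in points_world:
        col = int((p[0] - origin[0]) / resolution)
        row = int((p[1] - origin[1]) / resolution)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            score += grid[row, col]     # s = p_cell
    return score
```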
  • the first residual term can be written as e_1 = Σ_{k=1}^{m} (1 − M(T, p_k)), where e_1 is the residual, p_k is the k-th position point of the feature point cloud, M(T, p_k) is the obstacle score obtained when p_k is projected onto the grid map with the robot pose T, and m is the number of position points in the feature point cloud.
  • the second residual term can be written as e_2 = L − L_last, where L is the pose of the robot at the current moment dead-reckoned from the vehicle speed, and L_last is the optimized pose of the robot obtained at the previous moment.
  • Step 4: the feature point cloud obtained in step 2 is transformed into the world coordinate system through the optimized pose, and the grid map is updated based on that feature point cloud, for example as sketched below.
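A minimal sketch of this update step, assuming a simple additive occupancy increment with clamping; the actual update rule is not specified in the original:

```python
def update_grid_map(grid, points_world, resolution, origin, hit_inc=0.05):
    # After optimization the feature point cloud is already in the world
    # frame; raise the occupancy probability of every cell that a feature
    # point falls into. The increment value and the clamp to [0, 1] are
    # illustrative choices.
    for p in points_world:
        col = int((p[0] - origin[0]) / resolution)
        row = int((p[1] - origin[1]) / resolution)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] = min(1.0, grid[row, col] + hit_inc)
    return grid
```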
  • FIG. 30 is a schematic structural diagram of a positioning device provided in an embodiment of the present application.
  • the device is applicable to the positioning of electronic equipment and is configured in the electronic equipment. The device includes: an acquisition module 21, configured to acquire sensor data collected by at least one top-view sensor; a processing module 22, configured to process the sensor data; and a positioning module 23, configured to position the electronic device according to the processed sensor data.
  • the positioning device provided in this embodiment positions the electronic equipment based on the data collected by the top-view sensor. Since the environment above the electronic equipment does not change easily, the positioning method provided in this embodiment effectively improves the positioning accuracy of the electronic equipment.
  • the electronic device includes at least two top-view sensors; the positioning module 23 is configured to match the processed sensor data with the local map data in the global map to determine the pose of the electronic device in the global map.
  • the device first acquires the sensor data through the acquisition module 21; secondly, it processes the sensor data through the processing module 22; finally, the positioning module 23 matches the processed sensor data with the local map data in the global map to determine the pose of the electronic device in the global map.
  • This embodiment provides a positioning device, which effectively avoids the technical problem of poor positioning accuracy when the environment changes during positioning, and effectively improves the positioning accuracy.
  • the processing module 22 is configured to: align the top-view environment data collected by at least two top-view sensors based on time stamps; and preprocess the aligned top-view environment data.
  • the processing module 22 is configured to preprocess the aligned top-view environment data in the following manner: remove the noise points in the aligned top-view environment data; stitch the top-view environment data after noise removal; and extract the point cloud information of surface edges from the stitched top-view environment data.
  • the processing module 22 is configured to stitch the top-view environment data after noise removal in the following manner: convert the top-view environment data after noise removal into the coordinate system where the target top-view sensor is located, the target top-view sensor being one of the at least two top-view sensors.
  • the positioning module 23 is configured to: convert the point cloud information of the surface edges included in the processed sensor data into grid data; determine the local map data in the global map that matches the grid data; and determine the pose of the electronic device in the global map according to the local map data and the grid data.
  • the positioning module 23 is configured to determine the pose of the electronic device in the global map according to the local map data and the grid data in the following manner: determine the pose of the electronic device in the global map according to the pose relationship between the local map data and the global map, and the pose relationship between the grid data and the local map data.
  • the device further includes a mapping module, and the mapping module is configured to: if a mapping instruction is obtained, add the point cloud information of the surface edges included in the processed sensor data to the local map data matched with the processed sensor data; and update the local map data to which the processed sensor data has been added into the global map.
  • the sensor data includes head-up environment data collected by at least one head-up sensor and top-view environment data collected by one top-view sensor; the positioning module is configured to generate a head-up grid map and a top-view grid map based on the processed sensor data, and to position the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map.
  • the device first obtains the sensor data through the acquisition module 21, the sensor data including head-up environment data and top-view environment data; secondly, the sensor data is processed through the processing module 22; finally, the positioning module generates the head-up grid map and the top-view grid map from the processed sensor data, and positions the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map.
  • This embodiment provides a positioning device which, by combining the top-view grid map and the head-up grid map, avoids the influence of the environment on the positioning of the electronic device, thereby improving the robustness of positioning.
  • the processing module 22 includes: a preprocessing unit 221, configured to preprocess the sensor data; a conversion unit 222, configured to convert the preprocessed sensor data into the body coordinate system; and an optimization unit 223, configured to optimize the sensor data after the coordinate system conversion and obtain the processed top-view environment data and the processed head-up environment data from the sensor data.
  • the preprocessing unit 221 is configured to: align the head-up environment data and the top-view environment data based on the time stamp; extract the point cloud information of the surface edge in the aligned top-view environment data.
  • the optimization unit 223 is configured to: process the sensor data after the coordinate system transformation by brute force matching.
  • the positioning module 23 is configured to position the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map in the following manner: perform closed-loop detection on the head-up grid map according to the processed head-up data set to obtain a head-up matching rate; perform closed-loop detection on the top-view grid map according to the processed top-view data set to obtain a top-view matching rate; and when the head-up matching rate is greater than a first set threshold and/or the top-view matching rate is greater than a second set threshold, determine the global pose of the electronic device. A small decision sketch follows.
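For illustration, the threshold decision might look like the helper below; the threshold values are placeholders, since the set thresholds are not specified here:

```python
def global_pose_confirmed(headup_rate, topview_rate,
                          thr_headup=0.6, thr_topview=0.6):
    # The global pose is determined when the head-up matching rate
    # and/or the top-view matching rate clears its set threshold.
    return headup_rate > thr_headup or topview_rate > thr_topview
```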
  • the positioning module 23 is configured to generate the head-up grid map and the top-view grid map according to the processed sensor data in the following manner: generate the head-up grid map based on the processed head-up environment data, and generate the top-view grid map based on the processed top-view environment data.
  • the sensor data includes head-up environment data collected by at least one head-up sensor and top-view environment data collected by at least two top-view sensors; the positioning module is configured to generate a head-up grid map and a top-view grid map according to the processed sensor data, and to position the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map.
  • the device first obtains the sensor data through the acquisition module 21, the sensor data including head-up environment data and top-view environment data; secondly, the sensor data is processed through the processing module 22; finally, the positioning module generates the head-up grid map and the top-view grid map from the processed sensor data, and positions the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map.
  • This embodiment provides a positioning device which, by combining the top-view grid map and the head-up grid map, avoids the influence of the environment on the positioning of the electronic device, thereby improving the robustness of positioning.
  • the processing module 22 includes: a preprocessing unit 221, configured to preprocess the sensor data; a conversion unit 222, configured to convert the preprocessed sensor data into the body coordinate system; and an optimization unit 223, configured to optimize the sensor data after the coordinate system conversion and obtain the processed top-view environment data and the processed head-up environment data from the sensor data.
  • the preprocessing unit 221 is configured to: align the head-up environment data and the top-view environment data based on time stamps; splice the aligned top-view environment data; and extract the point cloud information of surface edges from the spliced top-view environment data.
  • the optimization unit 223 is configured to: process the sensor data after the coordinate system transformation by brute force matching.
  • the positioning module 24 is configured to position the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map in the following manner: perform closed-loop detection on the head-up grid map according to the processed head-up data set to obtain a head-up matching rate; perform closed-loop detection on the top-view grid map according to the processed top-view data set to obtain a top-view matching rate; and when the head-up matching rate is greater than a first set threshold and/or the top-view matching rate is greater than a second set threshold, determine the global pose of the electronic device.
  • the positioning module is configured to generate the head-up grid map and the top-view grid map according to the processed sensor data in the following manner: generate the head-up grid map based on the processed head-up environment data, and generate the top-view grid map based on the processed top-view environment data.
  • the above-mentioned positioning device can execute the positioning method provided by any embodiment of the present application, and has corresponding functional modules for executing the method.
  • FIG. 32 is a schematic structural diagram of another positioning device provided by an embodiment of the present application; the device can execute the positioning method provided by any embodiment of the present application and has corresponding functional modules for executing the method.
  • the device can be implemented by software and/or hardware, including: a data acquisition module 31 , a feature point cloud determination module 32 , an obstacle score determination module 33 and a location determination module 34 .
  • the data acquisition module 31 is configured to collect the current pose of the robot and depth data in at least one preset direction; the feature point cloud determination module 32 is configured to determine a feature point cloud based on the depth data; the obstacle score determination module 33 is configured to determine the obstacle score of the feature point cloud in the preset global grid map, wherein the preset global grid map is formed based on historical point clouds; and the positioning determination module 34 is configured to optimize the current pose according to the obstacle score to determine the positioning information of the robot.
  • the data acquisition module 31 is configured to collect the current pose of the robot and depth data and infrared data in at least one preset direction; the feature point cloud determination module 32 is configured to extract at least one edge point from the infrared data and convert the at least one edge point into a feature point cloud according to the depth data.
  • in this embodiment, the current pose of the robot and the depth data and infrared data in the preset direction are obtained through the data acquisition module; the feature point cloud determination module extracts at least one edge point from the infrared data and converts the at least one edge point into a feature point cloud according to the depth data; the obstacle score determination module determines the obstacle score of the feature point cloud in the preset global grid map; and the positioning determination module uses the obstacle score to optimize the current pose to obtain the positioning information of the robot. This realizes accurate acquisition of the robot position information, reduces the impact of complex environments on pose determination, enhances the quality of robot services and helps improve user experience.
  • the positioning determination module 34 includes: a comprehensive optimization unit, configured to optimize the positioning information according to the moving speed and the obstacle score of the robot.
  • the data acquisition module 31 includes: a pose acquisition unit, configured to acquire the current pose of the robot in the world coordinate system, wherein the current pose includes at least the abscissa, the ordinate and the angle between the robot direction and the X-axis of the world coordinate system; and a data acquisition unit, configured to use at least one depth data sensor preset on the robot to collect depth data in the preset direction and at least one infrared data sensor to collect infrared data in the preset direction, wherein the preset direction includes at least one of a horizontal direction and a vertical direction.
  • the feature point cloud determination module 32 includes: a noise processing unit, configured to filter out noise in the infrared data; an edge extraction unit, configured to extract at least one edge point from the noise-filtered infrared data; and a point cloud generation unit, configured to use the camera model of the robot and the depth data to convert the at least one edge point into three-dimensional coordinates to form the feature point cloud.
  • the obstacle score determination module 33 includes: a position mapping unit, configured to convert the coordinates of the at least one edge point in the feature point cloud into the world coordinate system and map each edge point after coordinate conversion to a target grid of the preset global grid map; a score determination unit, configured to, in the case where the target grid to which an edge point is mapped is a grid of an obstacle area in the preset global grid map, obtain the probability value of the target grid as the obstacle score of that edge point; and a score statistics unit, configured to count the sum of the obstacle scores of the at least one edge point in the feature point cloud as the obstacle score of the feature point cloud.
  • the positioning determination module 34 further includes: a first residual unit, configured to construct a residual function according to the current pose and the obstacle score of the feature point cloud; a parameter adjustment unit, configured to adjust the parameter information in the residual function so that the result value of the residual function is minimized, the parameter information in the residual function being the value of at least one parameter among the abscissa and ordinate of the current pose and the angle between the robot direction and the X-axis of the world coordinate system; and a positioning determination unit, configured to use the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system when the result value is minimized as the positioning pose information.
  • the comprehensive optimization unit is configured to: construct a first residual term according to the current pose and the obstacle score of the feature point cloud; determine a predicted pose based on the moving speed, and take the difference between the predicted pose and the historical pose of the previous moment as the second residual term; adjust the parameter information in the predicted pose and/or the parameter information in the current pose in the first residual term and the second residual term so that the sum of the first residual term and the second residual term takes the minimum value, the parameter information including the value of at least one parameter among the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system; and use the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system of the predicted pose and/or the current pose when the sum of the first residual term and the second residual term is minimized as the optimized positioning pose information.
  • the device further includes: a map update module configured to update the robot pose corresponding to the positioning information to the preset global grid map.
  • the data acquisition module 31 is configured to collect the current pose of the robot and depth data in at least one preset direction; the feature point cloud determination module 32 is configured to extract the outer contour points of at least one plane from the depth data and form the feature point cloud from the outer contour points of the at least one plane.
  • in this embodiment, the current pose of the robot and the depth data in at least one preset direction are obtained through the data acquisition module; the feature point cloud determination module extracts the outer contour points of at least one plane from the depth data and uses the outer contour points to form the feature point cloud; the obstacle score determination module determines the obstacle score of the feature point cloud in the preset global grid map; and the positioning determination module uses the obstacle score to optimize the current pose to obtain the positioning information of the robot. This realizes accurate acquisition of the robot position information, reduces the influence of complex environments on pose determination, enhances robot service quality and helps improve user experience.
  • the positioning determination module 34 includes: a comprehensive optimization unit, configured to optimize the positioning information according to the moving speed and the obstacle score of the robot.
  • the data acquisition module 31 includes: a pose acquisition unit, configured to acquire the current pose of the robot in the world coordinate system, wherein the current pose includes at least the abscissa, the ordinate and the angle between the robot direction and the X-axis of the world coordinate system; a depth acquisition unit, configured to use at least one depth data sensor preset on the robot to collect depth data in the at least one preset direction, wherein the at least one preset direction includes at least one of a horizontal direction and a vertical direction; and a depth processing unit, configured to filter out noise in the depth data.
  • the feature point cloud determination module 32 includes: a point cloud generation unit, configured to convert the depth data into a three-dimensional point cloud based on the camera model of the robot, the depth data including at least position point information and depth information; a plane division unit, configured to divide the three-dimensional point cloud into at least one plane according to the depth information and preset normal vector information; and a feature extraction unit, configured to extract the outer contour points of the at least one plane as the feature point cloud.
  • the obstacle score determination module 33 includes: a position mapping unit, configured to convert the coordinates of at least one position point in the feature point cloud into the world coordinate system and map each position point after coordinate conversion to a target grid of the preset global grid map; a score determination unit, configured to, in the case where the target grid to which a position point is mapped is a grid of an obstacle area in the preset global grid map, obtain the probability value of the target grid as the obstacle score of that position point; and a score statistics unit, configured to count the sum of the obstacle scores of all position points in the feature point cloud as the obstacle score of the feature point cloud.
  • the positioning determination module 34 further includes: a first residual unit, configured to construct a residual function according to the current pose and the obstacle score of the feature point cloud; a parameter adjustment unit, configured to adjust the parameter information in the residual function so that the result value of the residual function is minimized, the parameter information in the residual function being the value of at least one parameter among the abscissa and ordinate of the current pose and the angle between the robot direction and the X-axis of the world coordinate system; and a positioning determination unit, configured to output, as the positioning pose information, the abscissa, the ordinate and the angle between the robot direction and the X-axis of the world coordinate system when the result value is minimized.
  • the comprehensive optimization unit is configured to: construct a first residual term according to the current pose and the obstacle score of the feature point cloud; determine a predicted pose based on the moving speed, and take the difference between the predicted pose and the historical pose of the previous moment as the second residual term; adjust the parameter information in the predicted pose and/or the parameter information in the current pose in the first residual term and the second residual term so that the sum of the first residual term and the second residual term takes the minimum value, the parameter information including the value of at least one parameter among the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system; and use the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system of the predicted pose and/or the current pose when the sum of the first residual term and the second residual term is minimized as the positioning pose information.
  • the device further includes: a map update module configured to update the robot pose corresponding to the positioning information to the preset global grid map.
  • FIG. 33 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic equipment provided by the embodiment of the present application includes: one or more processors 41 and a storage device 42. There may be one or more processors 41 in the electronic equipment; FIG. 33 takes one processor 41 as an example. The storage device 42 is configured to store one or more programs; when the one or more programs are executed by the one or more processors 41, the one or more processors 41 implement the positioning method described in any one of the embodiments of the present application.
  • the electronic device may further include: an input device 43 and an output device 44 .
  • the electronic device further includes a sensor, the sensor including at least two top-view sensors and a wheel encoder. The fields of view of the at least two top-view sensors have a common-view area; the at least two top-view sensors are configured to collect top-view environmental data above the electronic device; and the wheel encoder is configured to determine the speed and distance at which the electronic device travels.
  • the speed and distance collected by the wheel encoder can be used in positioning.
  • the top-view environmental data collected by the top-view sensor can be used for mapping and positioning.
  • the position of the top-view sensor on the electronic device is not limited here, as long as the top-view sensor can collect top-view environmental data and the fields of view of the multiple top-view sensors have a common-view area.
  • the technical means by which the fields of view of the multiple top-view sensors share a common-view area is not limited: there may be a common-view area between every two adjacent top-view sensors among the at least two top-view sensors; or there may be a target top-view sensor among the at least two top-view sensors, and each of the top-view sensors other than the target top-view sensor has a common-view area with the field of view of the target top-view sensor.
  • the common-view area of the fields of view of the at least two top-view sensors can be used for extrinsic parameter calibration of the top-view sensors, after which the top-view environment data collected by the at least two top-view sensors can be spliced.
  • the electronic device further includes: a top-view sensor and a head-up sensor; the head-up sensor is configured to collect the head-up environment data in the running direction of the electronic device; the top-view sensor is configured to collect the top-view environment data above the electronic device.
  • the number of top-view sensors is at least two, the angles between the at least two top-view sensors and the horizontal direction are different, and there is a common-view area of a set ratio between the fields of view of two adjacent top-view sensors.
  • the top-view sensors in this embodiment can collect top-view environmental data of the same area to improve collection accuracy.
  • multiple top-view sensors can collect top-view environmental data in different areas above the electronic device, so as to improve positioning accuracy by expanding the collection range.
  • alternatively, the multiple top-view sensors may have no common-view area, so as to collect more top-view environment data.
  • the setting ratio is not limited, such as one-third or one-fourth.
  • the electronic device further includes: a top-view sensor and a head-up sensor; the head-up sensor is configured to collect the head-up environment data in the running direction of the electronic device; the top-view sensor is configured to collect the top-view environment data above the electronic device.
  • the location of the top-view sensor in this embodiment is not limited, as long as it can collect top-view environment data.
  • the number of top-view sensors is one, which reduces the cost of electronic equipment.
  • the electronic device further includes a depth data sensor and an infrared data sensor, the depth data sensor is configured to collect depth data in a preset direction, and the infrared data sensor is configured to collect infrared data in the preset direction.
  • the electronic device further includes a depth data sensor, and the depth data sensor is configured to collect depth data in a preset direction.
  • the processor 41, the storage device 42, the input device 43 and the output device 44 in the electronic device may be connected via a bus or in other ways.
  • connection via a bus is taken as an example.
  • the storage device 42 in the electronic device can be used to store one or more programs, and the programs can be software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the positioning method provided in the embodiments of the present application (for example, the modules in the positioning device shown in FIG. 30: the acquisition module 21, the processing module 22, the generation module 23 and the positioning module 24; or the modules in FIG. 31: the data acquisition module 31, the feature point cloud determination module 32, the obstacle score determination module 33 and the positioning determination module 34).
  • the processor 41 executes various functional applications and data processing of the electronic device by running the software programs, instructions and modules stored in the storage device 42 , that is, implements the positioning method in the above method embodiments.
  • the storage device 42 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application required for a function; the data storage area may store data created according to the use of the electronic device, and the like.
  • the storage device 42 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage devices.
  • the storage device 42 may include memory located remotely from the processor 41, and such remote memory may be connected to the device via a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 43 can be configured to receive input numbers or character information and to generate key signal input related to user settings and function control of the electronic device.
  • the output device 44 may include a display device such as a display screen.
  • the electronic device may specifically be a robot, or a positioning and navigation device installed on the robot.
  • the robot or the positioning and navigation device can accurately determine the position of the robot through the positioning method provided by the embodiments of the present application.
  • FIG. 34 is a schematic structural diagram of a storage medium provided in the embodiment of the present application.
  • a computer program 53 is stored on the computer-readable storage medium 51; when executed by the processor 52, the program is used to execute the positioning method provided by any embodiment of the present application.
  • the computer-readable storage medium 51 in the embodiment of the present application may use any combination of one or more computer-readable media.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium 51 .
  • the computer-readable storage medium 51 may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above.
  • More specific examples (a non-exhaustive list) of the computer-readable storage medium 51 include: an electrical connection with one or more wires, a portable computer disk, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium 51 may be any tangible medium that contains or stores a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, optical cable, radio frequency (RF), etc., or any suitable combination of the above.
  • Computer program code for carrying out the operations of the present application may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or it may be connected to an external computer (for example, connected via the Internet using an Internet service provider).

Abstract

Disclosed in the present application are a positioning method and apparatus, an electronic device, and a storage medium. The method is applied to an electronic device, and the electronic device comprises at least one top view sensor. The method comprises: obtaining sensor data acquired by at least one top view sensor; processing the sensor data; and positioning the electronic device according to the processed sensor data. Or, the method comprises: acquiring a current pose and depth data in at least one preset direction of a robot; determining a feature point cloud on the basis of the depth data; determining an obstacle score of the feature point cloud in a preset global grid map, wherein the preset global grid map is formed on the basis of a historical point cloud; and optimizing the current pose according to the obstacle score to determine positioning information of the robot.

Description

Positioning Method, Device, Electronic Device and Storage Medium

Technical Field

The present application relates to the technical field of intelligent robots, for example, to a positioning method, device, electronic equipment and storage medium.

Background

With the development of society and the advancement of technology, electronic equipment has gradually moved from the laboratory into people's daily life. Taking an intelligent robot as an example of an electronic device, service robots in daily life include mobile robots and fixed-position robots.

A necessary requirement for a mobile robot to realize its functions is that the mobile robot accurately knows its own position, that is, it accurately locates itself in the environment, so as to complete the instructions issued by the user.

However, the environment changes in reality; for example, the environment may include many dynamic obstacles. When the environment changes, the positioning accuracy of the electronic device is affected, so how to improve the positioning accuracy of electronic devices is a problem that urgently needs to be solved.

Contents of the Invention

The present application provides a positioning method, device, electronic equipment and storage medium, which effectively improve the positioning accuracy of electronic equipment.
An embodiment of the present application provides a positioning method, which is applied to an electronic device, where the electronic device includes at least one top-view sensor, and the method includes:

acquiring sensor data collected by the at least one top-view sensor;

processing the sensor data;

positioning the electronic device according to the processed sensor data.

Optionally, the electronic device includes at least two top-view sensors;

the positioning the electronic device according to the processed sensor data includes: in the case that a positioning instruction is acquired, matching the processed sensor data with local map data in a global map to determine the pose of the electronic device in the global map.

Optionally, the electronic device includes one top-view sensor and at least one head-up sensor, and the sensor data includes head-up environment data collected by the at least one head-up sensor and top-view environment data collected by the one top-view sensor;

positioning the electronic device according to the processed sensor data includes: generating a head-up grid map and a top-view grid map according to the processed sensor data; and positioning the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map.

Optionally, the electronic device includes at least two top-view sensors and at least one head-up sensor, and the sensor data includes head-up environment data collected by the at least one head-up sensor and top-view environment data collected by the at least two top-view sensors;

positioning the electronic device according to the processed sensor data includes: generating a head-up grid map and a top-view grid map according to the processed sensor data; and positioning the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map.
An embodiment of the present application further provides a positioning method, which includes:

collecting the current pose of a robot and depth data in at least one preset direction;

determining a feature point cloud based on the depth data;

determining an obstacle score of the feature point cloud in a preset global grid map, wherein the preset global grid map is formed based on historical point clouds;

optimizing the current pose according to the obstacle score to determine the positioning information of the robot.

Optionally, collecting the current pose of the robot and depth data in at least one preset direction includes: collecting the current pose of the robot, depth data in at least one preset direction and infrared data in at least one preset direction;

determining the feature point cloud based on the depth data includes: extracting at least one edge point from the infrared data, and converting the at least one edge point into a feature point cloud according to the depth data.

Optionally, determining the feature point cloud based on the depth data includes: extracting the outer contour points of at least one plane from the depth data and forming the feature point cloud from the outer contour points of the at least one plane.
An embodiment of the present application further provides a positioning device, configured in an electronic device, the device including:

an acquisition module, configured to acquire sensor data collected by at least one top-view sensor;

a processing module, configured to process the sensor data;

a positioning module, configured to position the electronic device according to the processed sensor data.

An embodiment of the present application further provides a positioning device, configured in an electronic device, the device including:

a data acquisition module, configured to collect the current pose of a robot and depth data in at least one preset direction;

a feature point cloud determination module, configured to determine a feature point cloud based on the depth data;

an obstacle score determination module, configured to determine an obstacle score of the feature point cloud in a preset global grid map, wherein the preset global grid map is formed based on historical point clouds;

a positioning determination module, configured to optimize the current pose according to the obstacle score to determine the positioning information of the robot.
An embodiment of the present application further provides an electronic device, including: at least one processor; and a storage device, configured to store one or more programs; when the one or more programs are executed by the at least one processor, the at least one processor implements any positioning method provided in the present application.

Optionally, the electronic device further includes at least two top-view sensors; there is a common-view area in the fields of view of the at least two top-view sensors, and the at least two top-view sensors are configured to collect top-view environment data.

Optionally, the electronic device further includes one top-view sensor and at least one head-up sensor; the at least one head-up sensor is configured to collect head-up environment data, and the one top-view sensor is configured to collect top-view environment data.

Optionally, the electronic device further includes at least two top-view sensors and at least one head-up sensor; the at least one head-up sensor is configured to collect head-up environment data;

the at least two top-view sensors are configured to collect top-view environment data.

Optionally, the angles between the at least two top-view sensors and the horizontal direction are different, and there is a common-view area of a set ratio between the fields of view of two adjacent top-view sensors.
Description of Drawings
FIG. 1 is a schematic flowchart of a positioning method provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of the attitude definition in robotics in the related art;

FIG. 3 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 4 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 5 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 6 is a schematic flowchart of a positioning method for a dual top-view Time-of-Flight (TOF) camera provided by an embodiment of the present application;

FIG. 7 is a schematic diagram of the installation of a dual top-view TOF camera provided by an embodiment of the present application;

FIG. 8 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 9 is a schematic structural diagram of an indoor robot provided by an embodiment of the present application;

FIG. 10 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 11 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 12 is a schematic flowchart of a multi-layer grid map mapping and positioning method provided by an embodiment of the present application;

FIG. 13 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 14 is a schematic structural diagram of another indoor robot provided by an embodiment of the present application;

FIG. 15 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 16 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 17 is a schematic flowchart of another multi-layer grid map mapping and positioning method provided by an embodiment of the present application;

FIG. 18 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 19 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 20 is an example diagram of a pose provided by an embodiment of the present application;

FIG. 21 is an example diagram of a preset global grid map provided by an embodiment of the present application;

FIG. 22 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 23 is an example diagram of a coordinate transformation provided by an embodiment of the present application;

FIG. 24 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 25 is an example diagram of a positioning method provided by an embodiment of the present application;

FIG. 26 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 27 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 28 is a schematic flowchart of another positioning method provided by an embodiment of the present application;

FIG. 29 is an example diagram of another positioning method provided by an embodiment of the present application;

FIG. 30 is a schematic structural diagram of a positioning device provided by an embodiment of the present application;

FIG. 31 is a schematic structural diagram of another positioning device provided by an embodiment of the present application;

FIG. 32 is a schematic structural diagram of another positioning device provided by an embodiment of the present application;

FIG. 33 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;

FIG. 34 is a schematic structural diagram of a storage medium provided by an embodiment of the present application.
Detailed Description

The present application is described below with reference to the accompanying drawings and embodiments. The specific embodiments described here are only used to explain the present application, not to limit it. In addition, for ease of description, the drawings show only the parts related to the present application rather than the entire structure.

Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes operations (or steps) as sequential processing, many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be rearranged. The processing may be terminated when its operations are completed, but it may also have additional steps not included in the drawings. The processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.

The term "comprising" and its variants as used in this application are open-ended, that is, "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment".

The modifiers "one" and "multiple" mentioned in this application are illustrative rather than restrictive, and those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
FIG. 1 is a schematic flowchart of a positioning method provided by an embodiment of the present application. The method is applicable to the situation of positioning an electronic device, and the method can be executed by a positioning device, where the device can be implemented by software and/or hardware and is generally integrated on the electronic device. The electronic device includes at least one top-view sensor. A top-view sensor can be regarded as a sensor located on top of the electronic device. The data collected by the top-view sensor during the mapping phase is used to build a global map; the data collected by the top-view sensor during the positioning phase is used for positioning in combination with the global map. The electronic device in this application can perform indoor positioning, and can perform outdoor positioning when the outdoor top-view environment is not uniform. The electronic device in this application may be a movable electronic device; exemplarily, the electronic device is a robot.
The technical terms involved in this application are described below. A reference coordinate system, also called a base coordinate system, is the reference for describing the positions or angles of points, lines, planes and coordinate systems. Camera extrinsic parameters represent the pose of a camera in three-dimensional space relative to the reference coordinate system. A pose refers to position and attitude. FIG. 2 is a schematic diagram of the attitude definition in robotics in the related art; referring to FIG. 2, the position includes x, y and z, and the attitude includes the heading angle (yaw), the pitch angle (pitch) and the roll angle (roll).
Coordinate transformation in this application is the process of transforming from one coordinate system to another. Statistical filter: compute the average distance from each point to its k nearest points; the distances of all points in the point cloud should form a Gaussian distribution. The mean and variance are computed, and noise points are removed by the 3σ principle, where k is a positive integer. Nonlinear optimization method: for a given objective function f(x), find the optimal set of values that makes f(x) maximum or minimum; when f(x) is a nonlinear function, the solution method is called a nonlinear optimization method. In this application, the dimension of x is 3, i.e., f(x1, y1, θ): the objective function is a function of x1, y1 and θ. Here, x1 and y1 can represent relative distances related to the top-view sensors, such as the distance between the top-view sensors and the distance from the top-view sensor to the indoor ceiling; x1 and y1 can be values in the coordinate system where the top-view sensor is located, and θ can be regarded as the angle between the top-view sensor and the vertical direction. A TOF camera continuously sends light pulses to a target, receives the light returned from the target with a sensor, and obtains the distance to the target by measuring the time of flight of the light pulses. In this application, the target can be regarded as an object above the electronic device, such as the indoor ceiling. A sketch of the 3σ statistical filter is given below.
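A minimal sketch of the statistical filter described above, assuming SciPy's cKDTree for the k-nearest-neighbour search; the function name is illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_filter(points, k=10):
    # For each point, compute the mean distance to its k nearest
    # neighbours; treat those mean distances as Gaussian over the cloud
    # and drop points outside mean +/- 3 sigma (the 3-sigma principle).
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # nearest neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    mu, sigma = mean_d.mean(), mean_d.std()
    return points[np.abs(mean_d - mu) <= 3 * sigma]
```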
The precondition for realizing navigation and positioning is to build an accurate map of the environment; building a map is the basic condition for realizing positioning and navigation. Common methods for electronic device mapping and positioning include cameras and lidar. Lidar-based mapping and positioning methods have high accuracy and strong anti-interference ability, and are relatively mature and widely applied technologies, but a single lidar is expensive, which poses a high-cost problem for wide adoption. Positioning with cameras has the advantages of low hardware cost and rich information. Cameras mainly include monocular cameras, binocular cameras, depth cameras, infrared cameras and other types. Mapping and positioning with cameras generally requires fusing data from multiple sensors such as a wheel odometer or an Inertial Measurement Unit (IMU). A binocular camera restores the depth information of the scene through the disparity between its two cameras; a depth camera directly obtains the depth of the scene through TOF, structured light and other techniques; an infrared camera obtains infrared images of the scene. The depth camera is used to restore points in two-dimensional space to a point cloud in three-dimensional space, i.e., a virtual radar technique in which the pixels of visual data can be converted into point cloud data, which is also applicable to laser positioning algorithms. With such cameras, laser-like point cloud information, visual image information and infrared image information can be obtained at the same time, combining the advantages of both laser and camera sensors.
定位首先要构建准确的室内地图,室内地图用于绝对坐标系下的电子设备自身位姿的计算和电子设备后续移动的路径规划。传统的传感器的方向都被设置为水平方向以采集水平方向的环境数据,由于现实中环境发生变化很快,且建立地图耗时久,因此,建立的地图精度差,后续机器人的定位过程中容易丢失定位数据,即当环境中发生剧烈变化的时候,定位的精度就会急剧降低。此外,现实环境中,前视传感器(平视传感器)采集的数据容易受到人流等动态对象的影响,例如:人流,宠物,移动机器人遮挡传感器,这样也会影响定位的精度,从而制约电子设备在飞机场,车站,商场或者超市环境下的使用效果。Positioning first requires the construction of an accurate indoor map, which is used for the calculation of the electronic device's own pose in the absolute coordinate system and the path planning for the subsequent movement of the electronic device. The direction of traditional sensors is set to the horizontal direction to collect environmental data in the horizontal direction. Since the environment changes rapidly in reality, and it takes a long time to establish a map, the accuracy of the established map is poor, and the subsequent robot positioning process is easy. Loss of positioning data, that is, when drastic changes occur in the environment, the positioning accuracy will drop sharply. In addition, in the real environment, the data collected by the forward-looking sensor (head-up sensor) is easily affected by dynamic objects such as the flow of people, such as: flow of people, pets, mobile robots blocking the sensor, which will also affect the accuracy of positioning, thus restricting the electronic equipment in the aircraft. The use effect in the field, station, shopping mall or supermarket environment.
To improve positioning accuracy, as shown in Fig. 1, a positioning method provided in an embodiment of the present application includes the following steps:
S10. Acquire sensor data collected by at least one top-view sensor.
S20. Process the sensor data.
Exemplarily, processing the sensor data may include removing noise from the sensor data.
S30. Position the electronic device according to the processed sensor data.
Exemplarily, the processed sensor data may be matched with local map data in a global map, and the pose of the electronic device in the global map may be determined from the local map data in the global map that matches the processed sensor data.
Exemplarily, determining the pose of the electronic device in the global map from the matched local map data includes: determining the pose of the electronic device in the global map from the pose transformation between the matched local map data and the global map, together with the pose transformation between the processed sensor data and the local map data.
This embodiment positions the electronic device based on the data collected by the top-view sensor. Since the environment above the electronic device does not change easily, the positioning method provided by this embodiment effectively improves the positioning accuracy of the electronic device.
Fig. 3 is a flowchart of another positioning method provided by an embodiment of the present application. In this embodiment, the electronic device includes at least two top-view sensors. As shown in Fig. 3, the method provided by this embodiment includes the following steps:
S110. Acquire sensor data collected by at least two top-view sensors.
In this embodiment, sensor data can be regarded as data collected by sensors, and a sensor can be a detection device on the electronic device. The content of the sensor data is not limited here and may be determined by the types of sensors the electronic device includes. For example, the sensor data includes top-view environment data collected by the top-view sensors, i.e. environment data collected above the electronic device. The content of the environment data depends on the collecting device and is not limited here.
Before positioning is performed, this step may first acquire the sensor data collected by the sensors on the electronic device so that the sensor data can be processed to position the electronic device; the way the sensor data is acquired is not limited here.
S120. Process the sensor data.
After the sensor data is acquired, it can be processed so that positioning can be performed based on the processed sensor data. How the sensor data is processed is not limited here; different sensor data correspond to different processing means.
In one embodiment, the sensor data collected by the at least two top-view sensors can be timestamp-aligned to facilitate positioning of the electronic device. When the sensor data includes top-view environment data collected by at least two top-view sensors, this embodiment may position the electronic device based on the top-view environment data of each top-view sensor separately, or may stitch the top-view environment data of the at least two top-view sensors and position the electronic device based on the stitched data.
In one embodiment, when positioning the electronic device based on the top-view environment data, point cloud information of surface edges can be extracted from the top-view environment data, and global map matching can be performed based on this point cloud information to position the electronic device. The global map may be a grid map built for the positioning scene, i.e. the scene in which the electronic device is currently located and needs to be positioned, and may be constructed during the mapping stage.
S130. When a positioning instruction is acquired, match the processed sensor data with local map data in the global map and determine the pose of the electronic device in the global map.
A positioning instruction can be regarded as an instruction that triggers the electronic device to perform positioning; it may be triggered through a human-computer interaction interface, which is not limited here. Local map data can be regarded as the data of a local map within the global map; the global map may be composed of local maps.
In this step, the pose transformation between the processed sensor data and the local map data may be determined, and the electronic device may be positioned by combining it with the pose transformation between the local map and the global map. After the processed sensor data is matched with the local map data, the pose transformation between the two is obtained, and the electronic device is then positioned using the pose transformation between the local map and the global map. A pose transformation is also referred to as a pose relation.
In one embodiment, the global map may be constructed from the top-view environment data collected by multiple top-view sensors; thus, in the positioning stage, the processed sensor data can be matched to the corresponding local map data in the global map, and the pose of the electronic device in the global map can then be determined.
The positioning method provided by this embodiment of the present application first acquires sensor data; then processes the sensor data; and then, when a positioning instruction is acquired, matches the processed sensor data with local map data in the global map to determine the pose of the electronic device in the global map. With this technical solution, the electronic device can be positioned based on the sensor data collected by the top-view sensors and the local map data in the global map, which effectively avoids the problem of poor positioning accuracy when the environment changes and effectively improves positioning accuracy.
On the basis of the above embodiments, modified embodiments are proposed. For brevity, only the differences from the above embodiments are described in the modified embodiments.
In one embodiment, the sensor data includes top-view environment data, and processing the sensor data includes: aligning the top-view environment data collected by the at least two top-view sensors based on timestamps, and preprocessing the aligned top-view environment data.
This embodiment refines the operation of processing the sensor data, ensuring that the processed sensor data supports accurate positioning.
In one embodiment, matching the processed sensor data with local map data in the global map to determine the pose of the electronic device in the global map includes: converting the point cloud information of surface edges included in the processed sensor data into grid data; determining the local map data in the global map that matches the grid data; and determining the pose of the electronic device in the global map from the local map data and the grid data.
This embodiment matches the processed sensor data with the local map data in the global map to determine the pose of the electronic device, positioning the electronic device accurately.
Fig. 4 is a schematic flowchart of another positioning method provided by an embodiment of the present application. Referring to Fig. 4, the method includes the following steps:
S210. Acquire sensor data of at least two top-view sensors.
The sensor data includes top-view environment data.
S220. Align the top-view environment data of the at least two top-view sensors based on timestamps.
Alignment based on timestamps can be regarded as establishing, on the basis of time, correspondences between the sensor data collected by multiple sensors. The technical means of alignment is not limited here.
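For illustration, timestamp alignment can be sketched as follows. This is a minimal sketch assuming each sensor delivers a time-sorted list of (timestamp, frame) pairs; pairing each frame with the nearest-in-time frame of the other stream, within a tolerance, is one possible alignment strategy.

```python
import bisect

def align_by_timestamp(stream_a, stream_b, max_dt=0.05):
    """Pair frames of stream_a with the closest-in-time frames of stream_b."""
    if not stream_b:
        return []
    times_b = [t for t, _ in stream_b]
    pairs = []
    for t_a, frame_a in stream_a:
        i = bisect.bisect_left(times_b, t_a)
        # Consider the neighbors on both sides of the insertion point.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times_b)]
        j = min(candidates, key=lambda j: abs(times_b[j] - t_a))
        if abs(times_b[j] - t_a) <= max_dt:  # accept only sufficiently close matches
            pairs.append((frame_a, stream_b[j][1]))
    return pairs
```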
S230. Preprocess the aligned top-view environment data.
Preprocessing the aligned top-view environment data can be regarded as processing it to facilitate positioning. The technical means of preprocessing is not limited, as long as it facilitates matching the preprocessed top-view environment data with the local map data in the global map.
Exemplarily, preprocessing means include, but are not limited to, denoising and stitching.
S240. When a positioning instruction is acquired, convert the point cloud information of surface edges included in the processed sensor data into grid data.
How the point cloud information of surface edges is converted into grid data is not limited here. Exemplarily, the conversion may be performed via a Cartesian coordinate system: the point cloud information of the surface edges is first transformed into a Cartesian coordinate system and then projected into a grid coordinate system to obtain the grid data.
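For illustration, this projection can be sketched as follows. This is a minimal sketch assuming the edge points are an (N, 3) array in a Cartesian frame whose x-y plane is parallel to the grid; the resolution, origin, and grid size are illustrative parameters.

```python
import numpy as np

def points_to_grid(points: np.ndarray, resolution: float = 0.05,
                   origin=(0.0, 0.0), shape=(400, 400)) -> np.ndarray:
    """Project the x-y coordinates of edge points into a 2D grid."""
    grid = np.zeros(shape, dtype=np.uint8)
    # Discretize the x-y coordinates into grid cell indices.
    ij = np.floor((points[:, :2] - np.asarray(origin)) / resolution).astype(int)
    # Keep only the indices that fall inside the grid.
    ok = ((ij[:, 0] >= 0) & (ij[:, 0] < shape[0]) &
          (ij[:, 1] >= 0) & (ij[:, 1] < shape[1]))
    grid[ij[ok, 0], ij[ok, 1]] = 1  # mark the occupied cells
    return grid
```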
S250. Determine the local map data in the global map that matches the grid data.
The global map can be regarded as a grid map. To facilitate matching, this embodiment matches the grid data against the global map to obtain the local map data corresponding to the grid data.
Since the global map is generated from the top-view environment data collected by the top-view sensors, when positioning the electronic device, the corresponding local map data can be matched from the global map based on the grid data, and the pose of the electronic device can be determined from the matched local map data. The technical means of matching is not limited here.
S260. Determine the pose of the electronic device in the global map from the local map data and the grid data.
After the grid data has been matched against the global map to obtain the local map data, this embodiment can determine the pose of the electronic device based on the pose transformation between the matched local map data and the grid data.
Exemplarily, this embodiment determines the pose of the electronic device in the global map through the pose transformation between the local map corresponding to the local map data and the global map, together with the pose transformation between the grid data and the local map data.
This embodiment refines the operations of processing the sensor data and determining the pose of the electronic device. With the positioning method of this embodiment, the sensor data is processed effectively, and the electronic device is positioned based on the processed sensor data and the global map, improving positioning accuracy.
In one embodiment, preprocessing the aligned top-view environment data includes: removing noise from the aligned top-view environment data; stitching the denoised top-view environment data; and extracting the point cloud information of surface edges from the stitched top-view environment data.
This embodiment refines the operation of preprocessing the aligned top-view environment data: stitching the denoised top-view environment data enlarges the field of view of the top-view sensors, and extracting the point cloud information of surface edges improves positioning efficiency.
When preprocessing the top-view environment data, the noise in the aligned top-view environment data can be removed first to improve positioning accuracy. The means of noise removal is not limited here.
After the noise has been removed, this embodiment can stitch the denoised top-view environment data to expand the field of view of the top-view sensors and improve positioning accuracy. The means of stitching is not limited; for example, the denoised top-view environment data may be stitched through coordinate transformation.
After the denoised top-view environment data has been stitched, the point cloud information of surface edges can be extracted from the stitched data so that the electronic device can be positioned based on the extracted point cloud information. The technical means of extracting the point cloud information is not limited here.
In one embodiment, stitching the denoised top-view environment data includes: transforming the denoised top-view environment data into the coordinate system of a target top-view sensor, where the target top-view sensor is one of the at least two top-view sensors.
The target top-view sensor may be any one of the at least two top-view sensors. The coordinate transformation can be performed based on the extrinsic parameters of the top-view sensors; the means of transformation is not limited here.
Taking two top-view sensors as an example, the top-view environment data collected by both sensors can first be transformed into a Cartesian coordinate system, and then, based on the extrinsic parameters of the two sensors, the denoised top-view environment data of one sensor can be transformed into the coordinate system of the other.
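For illustration, this stitching step can be sketched as follows. This is a minimal sketch assuming the extrinsics of the second sensor relative to the first are given as a rotation matrix R and a translation vector t; the function names are illustrative.

```python
import numpy as np

def transform_to_sensor1(points_2: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map points from the second sensor's frame into the first sensor's frame:
    p1 = R @ p2 + t, applied row-wise to an (N, 3) array."""
    return points_2 @ R.T + t

def stitch(points_1: np.ndarray, points_2: np.ndarray,
           R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Concatenate the first sensor's points with the second sensor's points
    expressed in the first sensor's coordinate system."""
    return np.vstack([points_1, transform_to_sensor1(points_2, R, t)])
```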
This embodiment refines the operation of stitching the denoised top-view environment data, effectively enlarging the field of view of the top-view sensors through coordinate system transformation.
In one embodiment, determining the pose of the electronic device in the global map from the local map data and the grid data includes: determining the pose of the electronic device in the global map from the pose relation between the local map data and the global map, together with the pose relation between the grid data and the local map data.
In this embodiment, based on the pose relation between the grid data and the local map data and the pose relation between the local map data and the global map, the pose relation between the grid data and the global map can be determined, i.e. the pose of the electronic device in the global map is obtained.
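For illustration, this composition of pose relations can be sketched as follows. This is a minimal sketch assuming 2D poses represented as (x, y, θ); the representation and the numeric values are assumptions made for the example.

```python
import math

def compose(a, b):
    """Compose two planar poses: the pose of frame C in frame A, given the
    pose of B in A (a) and the pose of C in B (b)."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

# Pose of the grid data (and hence the device) in the global map:
T_map_local = (10.0, 5.0, math.pi / 2)  # local map data in the global map (assumed)
T_local_scan = (1.0, 0.0, 0.0)          # grid data in the local map data (assumed)
T_map_scan = compose(T_map_local, T_local_scan)
```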
This embodiment refines the technical means of positioning the electronic device through the local map data and the global map, positioning the electronic device effectively.
In one embodiment, the method further includes: if a mapping instruction is acquired, adding the point cloud information of surface edges included in the processed sensor data to the matched local map data, and updating the global map with the local map data to which the point cloud information of surface edges has been added.
A mapping instruction can be regarded as an instruction that triggers the electronic device to build a map. How the mapping instruction is acquired is not limited; for example, it may be acquired through a human-computer interaction interface.
After acquiring a mapping instruction, this embodiment can add the point cloud information of surface edges to the matched local map data, and then update the global map with the local map data to which the point cloud information has been added.
After the mapping instruction is acquired, this embodiment effectively builds the map based on the processed sensor data, facilitating positioning of the electronic device.
Fig. 5 is a schematic flowchart of another positioning method provided by an embodiment of the present application. Referring to Fig. 5, the method includes the following steps:
S310. Acquire sensor data collected by at least two top-view sensors.
S320. Process the sensor data.
S330. When a positioning instruction is acquired, match the processed sensor data with local map data in the global map and determine the pose of the electronic device in the global map.
S340. If a mapping instruction is acquired, add the point cloud information of surface edges included in the processed sensor data to the matched local map data.
S350. Update the global map with the local map data to which the point cloud information of surface edges has been added.
The execution order of S340 and S330 is not limited here; they may be executed in parallel or sequentially.
In one embodiment, the electronic device can acquire sensor data in real time: after detecting a positioning instruction, it can perform positioning based on the sensor data; after detecting a mapping instruction, it can build a map based on the sensor data.
In one embodiment, the electronic device may also acquire sensor data after receiving a mapping instruction or a positioning instruction.
The following describes the present application by way of example. The positioning method provided by the embodiments of the present application can be regarded as a robot positioning and mapping solution based on dual top-view TOF cameras. It allows a robot to maintain stable positioning in scenes where obstacles on the ground change frequently or where there are many dynamic obstacles. Wheel encoders are installed on the robot's driving wheels, and top-view sensors, TOF camera 1 and TOF camera 2, are installed on top of the robot, each at an angle of 0° to 30° from the vertical direction. A single TOF camera has a small field of view that is easily occluded, causing positioning fluctuations; two TOF cameras are installed to expand the field of view, and their fields of view intersect so that the extrinsic parameters between the two cameras can be calibrated. The distance between TOF camera 1 and TOF camera 2 is chosen according to the robot's own parameters (for example, the robot's size and the area of its top surface, among other factors) combined with the cameras' field of view (FOV), so that the installation yields the widest combined field of view. For example, the positions may differ by 20 cm in the y1 direction and 0 in the x1 direction, with the two cameras tilted 25 degrees from vertical toward the first side and the second side respectively; or the positions may differ by 20 cm in the x1 direction and 0 in the y1 direction, with the two cameras tilted 20 degrees from vertical forward and backward respectively. With this dual top-view TOF camera installation, features on the ceiling are less likely to change than features on the ground and are less affected by dynamic obstacles such as crowds on the ground, and the dual cameras overcome the poor positioning stability caused by the small field of view of a single TOF camera. Using the dual TOF camera data and the wheel encoder data, the pose transformation between the robot's TOF cameras and the local map, and between the local map and the global map, are determined through feature extraction, brute-force matching, and nonlinear optimization. A pose transformation can be understood as a relative positional relation: once the relative position between two objects is known, the positional relations between other objects can be derived. For example, from the same object seen by the robot's sensors at two different moments, the relative pose of the robot between those moments can be computed, as can the position of the object relative to the robot; these steps constitute mapping. By matching the sensor data against the objects in the map, the pose of the robot is obtained, i.e. positioning.
x1, y1, and the angle can be determined from the field of view of the top-view sensors and the distance between a top-view sensor and the ceiling; the specific values are not limited, as long as the multiple top-view sensors share a common view. The angle is the angle between a top-view sensor and the vertical direction. For example, the distance between the two top-view sensors can be computed from these quantities (the formulas are given in the original only as images, PCTCN2021096828-appb-000001 and -000002, and are not reproduced here), where h may be the distance between a top-view sensor and the ceiling, H is the field of view angle, and θ can be regarded as the angle between a top-view sensor and the vertical direction.
The present application can be applied to the technical field of automatic robot control. The positioning method has strong anti-interference capability and is suitable for dynamic environments or environments with many moving obstacles; it overcomes the failure of robot positioning in changing environments or crowded conditions, where a camera's small field of view is easily disturbed.
Taking the electronic device as a robot and the top-view sensors as dual top-view TOF cameras as an example, Fig. 6 is a schematic flowchart of a positioning method with dual top-view TOF cameras provided by an embodiment of the present application, and Fig. 7 is a schematic diagram of the installation of dual top-view TOF cameras provided by an embodiment of the present application. Referring to Fig. 6, positioning a robot with the dual top-view TOF cameras includes the following steps:
Step s1: collect data.
As shown in Fig. 6, sensors are installed on the machine, i.e. the robot: 1. TOF camera 1 and TOF camera 2, the dual top-view TOF cameras; 2. a wheel encoder.
Step s1 in this embodiment of the present application further includes the following: when installing the wheel encoder, it can be mounted on the axle of the robot's driving wheel. The wheel encoder is used for dead reckoning, computing the robot's travel speed and distance for use in pose transformation and nonlinear optimization.
When installing depth TOF camera 1, it needs to be mounted on top of the robot at an angle of 0 to 30° from the vertical direction; the specific angle can be adjusted according to the camera model, among other factors.
When installing depth TOF camera 2, it needs to be mounted on top of the robot at an angle of 0 to 30° from the vertical direction; the specific angle can likewise be adjusted according to the camera model.
After the sensors are installed, the sensor data, i.e. the data collected by the sensors, can be acquired.
Step s13: align the timestamps of the encoder data and the data of the two TOF cameras.
Timestamp alignment of multiple sensors means aligning the sensor data collected by the multiple sensors based on their timestamps.
Step s2: preprocess the TOF camera data.
Preprocess the collected point cloud information, i.e. preprocess the aligned top-view environment data.
Step s2 provided in this embodiment of the present application further includes the following: use a statistical filter to remove noise from the raw data of TOF camera 1 and TOF camera 2, i.e. remove the noise from the aligned top-view environment data; then, using the extrinsic parameters between TOF camera 1 and TOF camera 2, transform the TOF camera 2 data into the coordinate system of TOF camera 1, i.e. stitch the denoised top-view environment data.
Transforming the coordinate system further includes the following steps: transform the point clouds acquired by TOF camera 1 and TOF camera 2 into a Cartesian coordinate system. The original formulas are given only as images; a standard form consistent with the description is as follows. After the transformation, the coordinates of an arbitrary point in TOF camera 1 can be written as
$$p_1 = (x_1, y_1, z_1)^{\top}$$
and the coordinates of an arbitrary point in TOF camera 2 as
$$p_2 = (x_2, y_2, z_2)^{\top}$$
The extrinsic parameters between TOF camera 1 and TOF camera 2 (which can indicate the relative position and orientation of the two cameras) can be written as the rigid transformation
$$T_{12} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$$
with rotation R and translation t. Transforming a point acquired by TOF camera 2 into TOF camera 1 (i.e. the target top-view sensor) is then expressed as:
$$p_1 = R\,p_2 + t$$
After all points acquired by TOF camera 2 have been transformed, all points in the field of view of TOF camera 2 are expressed in the frame of TOF camera 1, thereby enlarging the field of view of TOF camera 1.
Step s23: extract the point cloud information of surface edges.
Extract the point cloud information of surface edges and convert it into two-dimensional (2D) grid data; that is, extract the point cloud data of surface edges from the top-view environment data and convert it into grid data.
Extracting the point cloud information of surface edges further includes the following step: select the points with surface edge features from the current frame's point cloud, where the current frame's point cloud may be the enlarged data of TOF camera 1.
Establish a planar grid coordinate system at the origin of the TOF camera and transform the point cloud information of the surface edges into the Cartesian coordinate system.
Project the point cloud information of the surface edges into the grid coordinate system according to their x-y coordinates to obtain the 2D grid data, i.e. the grid data.
Step s3: match the point cloud information of the surface edges with the local map data.
Compute the pose transformation between the collected point cloud information of the surface edges and the local map data, i.e. determine the local map data in the global map that matches the grid data.
Step s4: add the point cloud information of the surface edges to the successfully matched local map data.
Using the pose obtained by matching the point cloud information of the surface edges with the local map data, add the current frame's point cloud to the local map, i.e. add the point cloud information to the matched local map data.
Step s5: match the local map data with the global map to obtain a pose.
Using the coordinates of all points in the local map, compute the pose transformation between the local map and the global map to achieve positioning, i.e. determine the pose of the electronic device in the global map based on the local map data and the grid data.
Step s6: determine whether the current mode is the positioning mode. If it is, execute s1, and determine the pose of the electronic device in the global map based on the pose between the local map data and the grid data together with the pose obtained in step s5; if the current mode is not the positioning mode, execute s7.
Determine whether the current mode is the positioning mode or the mapping mode; if it is the mapping mode, execute step s7. The mapping mode or positioning mode is selected in an application (APP): if the mapping mode is selected, a map is generated; if the positioning mode is selected, the robot is positioned on the existing map.
Step s7: add the local map data to the global map.
If the current mode is not the positioning mode, update the points in the local map into the global map using the result obtained in s5, i.e. update the global map.
Step s7 includes the following: using the pose obtained in step s5, transform the point cloud in the local map into the global map coordinate system. Since the position of the local map relative to the robot is known, and the position of the robot in the global map is known, the position of the local map in the global map can be computed through pose transformation. Analogously, if A is 1 m north of B and B is 1 m north of C, then A is 2 m from C.
Using the x-y values of the coordinates of the local map's point cloud in the global coordinate system, determine the grid coordinates within the global map, and then use the z values to update the height distribution within each grid cell.
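For illustration, the per-cell height update can be sketched as follows. This is a minimal sketch assuming each grid cell keeps running Gaussian statistics (count, mean, M2) updated with Welford's online algorithm; the cell structure is an illustrative choice, not fixed by this application.

```python
class HeightCell:
    """Running Gaussian statistics of the height values observed in one cell."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, z: float) -> None:
        """Fold one height observation into the running mean and variance."""
        self.n += 1
        delta = z - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (z - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / self.n if self.n > 1 else 0.0
```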
In this application, the global map may be a grid map.
The positioning method proposed in this embodiment of the present application can be regarded as a method of extracting ceiling-edge point cloud features for mapping and positioning; it enables the robot to map and position itself using the three-dimensional (3D) point cloud information of the ceiling. It overcomes the robot's positioning problem in crowded scenes, i.e. it solves the positioning problem when the scene changes over a large area.
Fig. 8 is a schematic flowchart of another positioning method provided by an embodiment of the present application. The method is applicable to indoor positioning of electronic devices and can be executed by a positioning apparatus, which can be implemented in software and/or hardware and is generally integrated into the electronic device. The electronic device may be a device capable of active or passive movement; exemplarily, it may be a robot. The robot's application scene is not limited and may be indoor or outdoor. This embodiment describes the electronic device using an indoor robot as an example; the electronic device is not limited to this, and electronic devices other than indoor robots are implemented in the same or a similar way.
The robots in this embodiment include indoor robots and outdoor robots. An outdoor robot can be regarded as a mobile robot that works outdoors, and an indoor robot as a mobile robot that works indoors. Since indoor robots generally work in highly dynamic environments such as shopping malls, garages, and supermarkets, an indoor robot's head-up lidar (for example, a 2D lidar) is often occluded, so the lidar data contains many dynamic objects or is occluded entirely and becomes unusable. Meanwhile, the scenes of indoor robots also change frequently, so traditional head-up laser mapping and positioning schemes are not very robust.
The electronic device in this embodiment of the present application includes one top-view sensor for collecting top-view environment data. The structure of the electronic device is described using an indoor robot as an example. Fig. 9 is a schematic structural diagram of an indoor robot provided by an embodiment of the present application. Referring to Fig. 9, the top-view sensor, such as a depth camera, is located on top of the indoor robot and collects the top-view environment data above the robot. The sensors of the indoor robot include at least: a depth camera, a lidar, and a wheel odometer. The depth camera faces upward to collect top-view environment data; the angle between the depth camera and the horizontal direction is not limited here, as long as the top-view environment data above the indoor robot can be collected, e.g. within the range of 90° ± 10° from the horizontal direction. Optionally, the depth camera faces straight up. The installation of the lidar, i.e. the head-up sensor, is not limited, as long as head-up environment data can be collected; exemplarily, the lidar faces horizontally forward, at 0° to the horizontal direction.
When the indoor robot works, the extrinsic calibration of the wheel odometer, the lidar, and the depth camera is completed first; then the horizontal environment map and the top-view environment map, i.e. the head-up grid map and the top-view grid map, are built. After mapping is complete, the indoor robot can be positioned using the head-up grid map and the top-view grid map. As shown in Fig. 8, a positioning method provided in an embodiment of the present application includes the following steps:
S410. Acquire sensor data, where the sensor data includes head-up environment data collected by at least one head-up sensor and top-view environment data collected by one top-view sensor.
In this embodiment, sensor data can be regarded as data collected by the sensors located on the electronic device. The content of the sensor data is not limited here and may be determined by the types of sensors the electronic device includes. For example, the sensor data may include: environment data collected by the top-view sensor (i.e. top-view environment data), environment data collected by the head-up sensor (i.e. head-up environment data), and data collected by the wheel odometer. The environment data includes the head-up environment data and the top-view environment data and can be regarded as sensor-collected data characterizing the surroundings of the electronic device.
Head-up environment data can be regarded as environment data in the electronic device's direction of travel, collected by a head-up sensor such as a lidar. Top-view environment data can be regarded as environment data above the electronic device, collected by a top-view sensor such as a depth camera.
Top-view environment data, such as ceiling information, is relatively fixed and has a relatively low probability of being occluded, so it can keep positioning stable when the head-up environment data is occluded.
Before indoor positioning is performed, the sensor data collected by the sensors on the indoor robot can be acquired first so that it can be processed for indoor positioning; the way the sensor data is acquired is not limited here. For example, the processor of the indoor robot communicates with the sensors to acquire the sensor data they collect.
S420. Process the sensor data to obtain processed sensor data.
After the sensor data is acquired, it can be processed so that a grid map can be built. A grid map is a map representation that divides the spatial plane into cells of a certain resolution, the value of each cell being the probability that the cell is occupied. In this embodiment, the grid maps include a top-view grid map and a head-up grid map.
How the sensor data is processed is not limited here; different sensor data correspond to different processing means.
In one embodiment, since the top-view environment data is collected by the top-view sensor, point cloud information of surface edges can be extracted from the top-view environment data during processing to improve positioning efficiency; point cloud information of surface edges can also be extracted from the head-up environment data. For accurate positioning, the top-view sensor data and the head-up sensor data can be timestamp-aligned during processing. When the head-up environment data is collected by multiple head-up sensors, the head-up environment data collected by the multiple head-up sensors can be stitched during processing.
S430. Generate a head-up grid map and a top-view grid map from the processed sensor data.
In this embodiment, the head-up grid map can be regarded as a map built from the environment in the head-up direction of the electronic device (its direction of travel), and the top-view grid map as a map built from the environment in the top-view direction of the device. Exemplarily, the head-up grid map may be generated from the processed head-up environment data, and the top-view grid map from the processed top-view environment data; the data used to build the two maps is not limited here.
When generating a grid map, an empty grid map is generated first; after the point cloud information (i.e. the processed top-view environment data and the processed head-up environment data) is obtained, it can be mapped onto the grid map to update the map, and the probability of each grid cell is computed.
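For illustration, one common way to realize the cell probability update is the log-odds formulation of occupancy grid mapping, sketched below; the update rule and the constants are assumptions, not fixed by this application.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4  # log-odds increments (illustrative values)

def update_cells(log_odds: np.ndarray, hit_cells, free_cells) -> None:
    """Raise the log-odds of cells containing points, lower it for free cells."""
    for i, j in hit_cells:
        log_odds[i, j] += L_OCC
    for i, j in free_cells:
        log_odds[i, j] += L_FREE

def occupancy_probability(log_odds: np.ndarray) -> np.ndarray:
    """Convert log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))
```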
When the environment in the top-view direction of the electronic device is uniform, this embodiment can use the head-up grid map to discard repetitive top-view environment data, avoiding the low positioning efficiency caused by a uniform overhead environment. When pedestrian flow is heavy in the head-up direction of the electronic device, or the environment in the head-up direction changes dynamically, this embodiment can use the top-view grid map to improve positioning accuracy.
S440. Position the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map.
After the top-view grid map and the head-up grid map have been generated, the electronic device can be positioned based on the processed sensor data, the top-view grid map, and the head-up grid map.
In one embodiment, pose prediction can be performed based on the wheel odometer data in the processed sensor data, and the global pose can then be determined based on the top-view grid map and the head-up grid map.
When positioning the electronic device based on the top-view grid map and the head-up grid map, loop closure detection can first be performed on the two grid maps, and the global pose can then be output through global pose graph optimization.
Exemplarily, loop closure detection may compute the matching rate between the point cloud information corresponding to the sensor data and the grid map. The original formula is given only as an image; a plausible form consistent with the variable definitions below is
$$\mathrm{Score} = \frac{1}{k}\sum_{i=1}^{k}\exp\!\left(-\frac{(h_i-\mu_i)^2}{2\sigma_i^2}\right)$$
where k denotes the number of preprocessed laser points in the current frame; T denotes the pose of the current frame in the map; $h_i$ denotes the height value of the i-th preprocessed laser point in the current frame after transformation by the pose T; and $\mu_i$, $\sigma_i$ denote the parameters of the Gaussian distribution of the height values of all laser points in the grid cell onto which the i-th preprocessed laser point is projected under the pose T.
Score represents how well the current point cloud information matches the grid map, i.e. the matching rate. Score lies in the range 0-1; the higher the score, the better the match and the more likely it is a loop closure. Once the Score is determined, the relative pose between the two nodes can be obtained, and global pose graph optimization can then be performed based on it.
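For illustration, the matching rate can be sketched as follows. This is a minimal sketch assuming the Gaussian form of the Score given above; `cell_stats` is a hypothetical lookup returning the (μ, σ) height statistics of a map cell, and the (x, y, θ) pose format is also an assumption.

```python
import math

def matching_score(points, pose, cell_stats, resolution=0.05):
    """Average Gaussian height consistency of the transformed points
    against the per-cell height distributions of the grid map."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    total = 0.0
    for px, py, pz in points:
        gx, gy = c * px - s * py + x, s * px + c * py + y  # apply the pose T
        mu, sigma = cell_stats(int(gx / resolution), int(gy / resolution))
        sigma = max(sigma, 1e-6)  # guard against degenerate cells
        total += math.exp(-(pz - mu) ** 2 / (2.0 * sigma ** 2))
    return total / len(points)  # Score in [0, 1]
```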
A node is an abstraction representing the information encapsulated by one measurement, including point cloud information and pose information. Global pose graph optimization can be performed based on the relative pose between two nodes obtained from the loop closure information; the optimization adjusts the pose information of the nodes and finally outputs the optimized global pose result.
For global pose graph optimization, the cost function is as follows (the original formulas are given as images; a standard form consistent with the variable definitions below is):
$$\chi^2 = \sum_{ij} \left(h(c_i,c_j)-Z_{ij}\right)^{\top} \Lambda_{ij}\left(h(c_i,c_j)-Z_{ij}\right)$$
where the function h, which computes the relative pose between two nodes, is:
$$h(c_i,c_j) = \begin{bmatrix} R_i^{\top}(t_j-t_i) \\ \theta_j-\theta_i \end{bmatrix}$$
In these formulas, $c_i$, $c_j$ denote the pose information of nodes i and j; $R_i$ denotes the rotation of node i; $t_i$, $t_j$ denote the pose translation vectors of nodes i and j; $\theta_i$, $\theta_j$ denote the pose angle vectors of nodes i and j; $Z_{ij}$ denotes the laser matching between nodes i and j, i.e. the observed pose transformation between the two laser frames; $\chi^2$ is the pose graph residual; and $\Lambda_{ij}$ is the block of the information matrix corresponding to the pair (i, j), representing the amount of observed information between i and j, used as the weight in global pose graph optimization.
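For illustration, the residual and the cost above can be sketched as follows. This is a minimal sketch assuming 2D node poses (x, y, θ) and constraints given as (i, j, Z_ij, Λ_ij) tuples; the data layout is an assumption.

```python
import numpy as np

def h(c_i, c_j):
    """Relative pose of node j expressed in node i's frame."""
    xi, yi, thi = c_i
    xj, yj, thj = c_j
    Ri = np.array([[np.cos(thi), -np.sin(thi)],
                   [np.sin(thi),  np.cos(thi)]])
    dt = Ri.T @ np.array([xj - xi, yj - yi])  # R_i^T (t_j - t_i)
    return np.array([dt[0], dt[1], thj - thi])

def chi_square(nodes, constraints):
    """Sum the weighted squared residuals over all pairwise constraints."""
    total = 0.0
    for i, j, z_ij, lam_ij in constraints:
        e = z_ij - h(nodes[i], nodes[j])
        e[2] = (e[2] + np.pi) % (2 * np.pi) - np.pi  # wrap the angle residual
        total += e @ lam_ij @ e
    return total
```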
本申请实施例提供的一种定位方法,首先获取传感器数据,所述传感器数据包括平视环境数据和顶视环境数据;然后处理所述传感器数据;其次根据处理后的传感器数据生成平视栅格地图和顶视栅格地图;最后根据处理后的传感器数据、所述顶视栅格地图和所述平视栅格地图对所述电子设备进行定位。利用上述技术方案,由于计算机设备上方的环境不容易发生改变,通过结合顶视传感器采集的顶视环境数据生成的顶视栅格地图和平视传感器采集的平视环境数据生成的平视栅格地图避免了环境对电子设备定位造成的影响,提高了定位的鲁棒性。In a positioning method provided by an embodiment of the present application, first, sensor data is obtained, and the sensor data includes head-up environment data and top-view environment data; then the sensor data is processed; secondly, a head-up grid map and a head-up grid map are generated according to the processed sensor data A top-view grid map; finally, the electronic device is positioned according to the processed sensor data, the top-view grid map, and the head-up grid map. With the above technical solution, since the environment above the computer equipment is not easy to change, the head-up grid map generated by combining the top-view environment data collected by the top-view sensor and the head-up environment data collected by the head-up sensor avoids The impact of the environment on the positioning of electronic devices improves the robustness of positioning.
在上述实施例的基础上,提出了上述实施例的变型实施例,在此需要说明的是,为了使描述简要,在变型实施例中仅描述与上述实施例的不同之处。On the basis of the above-mentioned embodiments, modified embodiments of the above-mentioned embodiments are proposed. It should be noted here that, for the sake of brevity, only differences from the above-mentioned embodiments are described in the modified embodiments.
In one embodiment, processing the sensor data includes: preprocessing the sensor data; transforming the preprocessed sensor data into the body coordinate system; optimizing the sensor data after the coordinate system transformation; and obtaining processed top-view environment data and processed head-up environment data from the optimized sensor data.

When processing the sensor data, the sensor data may first be preprocessed. The preprocessing means is not limited and can be determined based on the content of the sensor data. For example, the correspondence between the sensor data can be established based on time; the point cloud information convenient for positioning in the top-view environment data can be extracted; and the point cloud information in the head-up environment data can be extracted.

Exemplarily, the preprocessing means for the head-up environment data in the sensor data include, but are not limited to: time stamp alignment, extracting the features in the aligned head-up environment data, and segmenting the point cloud information of the surface edges in the head-up environment data.
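As a minimal sketch of the time stamp alignment described above (the tolerance max_dt and all names are assumptions, not part of the disclosure), each head-up frame could be paired with the nearest-in-time top-view frame:

```python
import bisect

def align_by_timestamp(headup_frames, topview_frames, max_dt=0.05):
    """Pair each head-up frame with the nearest-in-time top-view frame.

    Frames are (timestamp, data) tuples sorted by timestamp; pairs whose
    time difference exceeds max_dt seconds are dropped.
    """
    top_ts = [t for t, _ in topview_frames]
    pairs = []
    for t, laser in headup_frames:
        k = bisect.bisect_left(top_ts, t)
        # candidate neighbours: the frames just before and just after t
        best = min(
            (c for c in (k - 1, k) if 0 <= c < len(top_ts)),
            key=lambda c: abs(top_ts[c] - t),
        )
        if abs(top_ts[best] - t) <= max_dt:
            pairs.append((laser, topview_frames[best][1]))
    return pairs
```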
To realize positioning, the preprocessed sensor data can be transformed into the body coordinate system, so as to position the electronic device in the body coordinate system. The transformation means is not limited here.

After the coordinate system transformation, the coordinate-transformed sensor data can be optimized for better positioning. The optimization means is not limited, for example iterative closest point (ICP), ICP variants, and brute-force matching.

After optimizing the coordinate-transformed sensor data, the processed top-view environment data and the processed head-up environment data can be obtained from the optimized sensor data; for example, the optimized top-view environment data in the optimized sensor data is read as the processed top-view environment data, and the optimized head-up environment data in the optimized sensor data is read as the processed head-up environment data.
This embodiment refines the operation of processing the sensor data, ensuring that the processed sensor data can support positioning.

In one embodiment, optimizing the sensor data after the coordinate system transformation includes: processing the coordinate-transformed sensor data by brute-force matching.

Processing the coordinate-transformed sensor data by brute-force matching can eliminate sensitivity to the initial value. The means of brute-force matching is not limited here; for example, the coordinate-transformed sensor data can be optimized by the correlation scan match (CSM) algorithm. CSM can compute the relative pose between the laser and the map, and optimizing the coordinate-transformed sensor data through CSM can make the positioning result more accurate.
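A minimal sketch of such a brute-force (CSM-style) search, assuming a 2D point cloud in the body frame and a probability grid; the window sizes, step sizes, and scoring by summed cell probabilities are illustrative assumptions rather than the disclosed algorithm:

```python
import numpy as np

def csm_search(points, grid, pose0, resolution,
               xy_window=0.3, yaw_window=0.2, xy_step=0.05, yaw_step=0.02):
    """Exhaustively score candidate poses around pose0 and keep the best.

    points: Nx2 cloud in the body frame; grid: 2D occupancy probabilities;
    pose0: (x, y, yaw) prediction, e.g. from the wheel odometer.
    """
    best_pose, best_score = pose0, -np.inf
    for dx in np.arange(-xy_window, xy_window + 1e-9, xy_step):
        for dy in np.arange(-xy_window, xy_window + 1e-9, xy_step):
            for dyaw in np.arange(-yaw_window, yaw_window + 1e-9, yaw_step):
                x, y, yaw = pose0[0] + dx, pose0[1] + dy, pose0[2] + dyaw
                c, s = np.cos(yaw), np.sin(yaw)
                world = points @ np.array([[c, s], [-s, c]]) + (x, y)
                ij = np.floor(world / resolution).astype(int)
                valid = ((ij >= 0) & (ij < grid.shape)).all(axis=1)
                score = grid[ij[valid, 0], ij[valid, 1]].sum()
                if score > best_score:
                    best_pose, best_score = (x, y, yaw), score
    return best_pose, best_score
```

Because every candidate in the window is scored, the result does not depend on a good initial guess, which is exactly why brute-force matching removes the initial-value sensitivity; a production CSM would speed this up with multi-resolution lookup tables.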
This embodiment refines the technical means of optimizing the coordinate-transformed sensor data, so that the top-view grid map and the head-up grid map generated based on the optimized coordinate-transformed sensor data enable more precise positioning.

In one embodiment, preprocessing the sensor data includes: aligning the head-up environment data and the top-view environment data based on time stamps; and extracting the point cloud information of the surface edges in the aligned top-view environment data.

This embodiment refines the technical means of preprocessing the sensor data. When preprocessing the sensor data, the top-view environment data and the head-up environment data may first be aligned based on the time stamps.

This embodiment can effectively associate the sensor data based on time and effectively extracts the point cloud information of the surface edges in the top-view environment data for generating the top-view grid map.

To facilitate generation of the top-view grid map, this embodiment can extract the point cloud information of the surface edges in the aligned top-view environment data, so that the extracted point cloud information can be mapped into the top-view grid map for mapping and positioning; to facilitate generation of the head-up grid map, the point cloud information of the surface edges in the aligned head-up environment data can be extracted, so that the point cloud information can be mapped into the head-up grid map for mapping and positioning.
FIG. 10 is a schematic flowchart of another positioning method provided in an embodiment of the present application. Referring to FIG. 10, the method includes the following steps:
S510. Acquire sensor data, the sensor data including head-up environment data collected by at least one head-up sensor and top-view environment data collected by one top-view sensor.

S520. Align the head-up environment data and the top-view environment data based on time stamps.

Aligning the head-up environment data and the top-view environment data based on time stamps can be regarded as establishing, based on time, the correspondence between the data collected by multiple sensors, so that the aligned top-view environment data can be processed for positioning.

S530. Extract the point cloud information of the surface edges in the aligned top-view environment data.

In this embodiment, the point cloud information of the surface edges in the aligned top-view environment data can be extracted for positioning. The means of extracting the point cloud information of the surface edges is not limited here.

S540. Transform the preprocessed sensor data into the body coordinate system.

This example only shows the flow of processing the top-view environment data; the technical means of preprocessing the rest of the sensor data are not limited here, nor are the technical means of the coordinate system transformation.

S550. Optimize the sensor data after the coordinate system transformation.

S560. Obtain processed top-view environment data and processed head-up environment data from the optimized sensor data.

S570. Generate a head-up grid map and a top-view grid map according to the processed sensor data.

S580. Position the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map.
In one embodiment, positioning the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map includes: performing loop closure detection on the head-up grid map according to the processed head-up environment data to obtain a head-up matching rate; performing loop closure detection on the top-view grid map according to the processed top-view environment data to obtain a top-view matching rate; and determining the global pose of the electronic device when the head-up matching rate is greater than a first set threshold and/or the top-view matching rate is greater than a second set threshold.

This embodiment refines the technical means of positioning: determining the global pose of the electronic device based on the head-up matching rate and the top-view matching rate ensures more accurate positioning results.
FIG. 11 is a schematic flowchart of another positioning method provided in an embodiment of the present application. Referring to FIG. 11, the method includes the following steps:
S610. Acquire sensor data, the sensor data including head-up environment data collected by the head-up sensor and top-view environment data collected by the top-view sensor.

S620. Process the sensor data.

S630. Generate a head-up grid map and a top-view grid map according to the processed sensor data.

S640. Perform loop closure detection on the head-up grid map according to the processed head-up environment data to obtain a head-up matching rate.

The head-up matching rate can be regarded as the probability that the head-up grid map matches the head-up environment data. The technical means of loop closure detection is not limited here, as long as the head-up matching rate can be determined.

In this embodiment, based on the head-up grid map and the head-up environment data, loop closure detection can be performed on the head-up grid map to obtain the head-up matching rate.

S650. Perform loop closure detection on the top-view grid map according to the processed top-view environment data to obtain a top-view matching rate.

The top-view matching rate can be regarded as the probability that the top-view grid map matches the top-view environment data. The technical means of loop closure detection is not limited here, as long as the top-view matching rate can be determined.

In this embodiment, based on the top-view grid map and the top-view environment data, loop closure detection can be performed on the top-view grid map to obtain the top-view matching rate.

When determining the head-up matching rate and the top-view matching rate, loop closure detection (also called loop-back detection) can be performed on the head-up grid map and the top-view grid map respectively to determine the corresponding matching rates. The execution order is not limited here: the head-up matching rate and the top-view matching rate can be determined in parallel or in sequence.

S660. Determine the global pose of the electronic device when the head-up matching rate is greater than a first set threshold and/or the top-view matching rate is greater than a second set threshold.

When the head-up matching rate is greater than the first set threshold and/or the top-view matching rate is greater than the second set threshold, this embodiment can determine the global pose of the electronic device based on the top-view grid map and the head-up grid map, for example by determining the pose graph residual through pose graph optimization. The first set threshold and the second set threshold are not limited, and the head-up matching rate and the top-view matching rate may have different set thresholds.
In one embodiment, generating the head-up grid map and the top-view grid map according to the processed sensor data includes: generating the head-up grid map based on the processed head-up environment data; and generating the top-view grid map based on the processed top-view environment data.

This embodiment refines the technical means of generating the head-up grid map and the top-view grid map, and can effectively combine the head-up and top-view environments of the electronic device for precise positioning.

In this embodiment, the head-up grid map is generated based on the head-up environment data, and the top-view grid map is generated based on the top-view environment data.

In the initial stage of mapping, an empty top-view grid map and an empty head-up grid map can first be created; after the sensor data is acquired, the head-up grid map can be updated based on the processed head-up environment data, and the top-view grid map can be updated based on the processed top-view environment data.
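A minimal occupancy-grid sketch of this update step; the log-odds representation and the increment values are common practice and are assumptions here, not the disclosed scheme:

```python
import numpy as np

class OccupancyGrid:
    """Minimal grid map holding per-cell occupancy in log-odds form."""

    def __init__(self, shape, l_hit=0.9, l_miss=-0.4):
        self.logodds = np.zeros(shape)   # empty map: probability 0.5 everywhere
        self.l_hit, self.l_miss = l_hit, l_miss

    def update(self, hit_cells, miss_cells):
        """hit_cells / miss_cells: integer arrays of (row, col) indices
        for cells observed as occupied / free by the current frame."""
        self.logodds[hit_cells[:, 0], hit_cells[:, 1]] += self.l_hit
        self.logodds[miss_cells[:, 0], miss_cells[:, 1]] += self.l_miss

    def probability(self):
        """Per-cell occupancy probability p = 1 / (1 + e^(-l))."""
        return 1.0 / (1.0 + np.exp(-self.logodds))
```

Two such grids, one fed by the head-up laser points and one by the top-view surface edge points, would realize the two map layers described above.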
In this embodiment, the map is built by fusing the top-view environment data and the head-up environment data, i.e. mapping and positioning are performed based on the fusion of the top-view sensor and the head-up sensor. This solution comprehensively considers the characteristics of the measurement data and the application scenarios of the head-up lidar (i.e. the head-up sensor) and the top-view depth camera (i.e. the top-view sensor), and makes full use of the advantages of the data collected by multiple sensors to achieve accurate real-time mapping and highly robust positioning.
FIG. 12 is a schematic flowchart of a multi-layer grid map mapping and positioning method provided in an embodiment of the present application; FIG. 12 takes one top-view sensor as an example. The method includes:

S1. Acquire sensor data.

The acquired sensor data includes lidar data, wheel odometer data, and depth camera data, i.e. top-view environment data and head-up environment data.

S2. Data preprocessing.

S2.1. Align the time stamps of the sensor data collected by the multiple sensors.

S2.2. Extract the features of the top-view environment data and segment the point cloud information of the surface edges.

S3. Perform pose prediction through the wheel odometer.

S4. Coordinate system transformation.

Transfer the point cloud information of the lidar and the point cloud information of the surface edges of the top-view environment data into the body coordinate system to obtain the coordinate-transformed point cloud information. The body coordinate system can be custom-defined or can be the coordinate system of the wheel odometer.
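A minimal sketch of this transformation step, assuming a 4×4 homogeneous extrinsic matrix from calibration (the function and parameter names are illustrative):

```python
import numpy as np

def to_body_frame(points, extrinsic):
    """Transform an Nx3 sensor point cloud into the body coordinate system.

    extrinsic: 4x4 homogeneous sensor-to-body matrix obtained from the
    external parameter calibration of the lidar or top-view camera.
    """
    homo = np.hstack([points, np.ones((len(points), 1))])  # Nx4
    return (homo @ extrinsic.T)[:, :3]
```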
S5. Optimize the point cloud information.

Based on the transformed point cloud information, the current pose is optimized through the CSM algorithm. The optimized pose can be regarded as the position in the grid map of the coordinate-transformed point cloud information under the current body coordinate system; since the current point cloud information is in the body coordinate system, matching the current point cloud information against the map can correct the pose in the map of the coordinate-transformed point cloud information under the body coordinate system.

S6. Update the probabilities corresponding to the grid cells in the grid maps.

S6.1. Update the head-up grid map based on the head-up laser data;

S6.2. Update the top-view grid map based on the top-view surface edge point cloud.

S7. Loop closure detection.

S7.1. Detect a head-up loop closure through the head-up laser data and the head-up grid map;

S7.2. Detect a top-view loop closure through the top-view surface edge point cloud and the top-view grid map.

S8. Global pose graph optimization, outputting the global pose.

The head-up laser data can be regarded as the processed head-up environment data, and the top-view surface edge point cloud can be regarded as the processed top-view environment data.
FIG. 13 is a schematic flowchart of another positioning method provided by an embodiment of the present application. This method is applicable to indoor positioning of an electronic device and can be performed by a positioning apparatus, which can be implemented in software and/or hardware and is generally integrated in the electronic device. The electronic device can be a device capable of active or passive movement; exemplarily, the electronic device can be a robot. The application scenario of the robot is not limited and can be indoor or outdoor. This embodiment takes an indoor robot as an example to describe the electronic device, which is not limited here; the implementation of electronic devices other than indoor robots is the same as or similar to that of the indoor robot.

The robots in this embodiment include indoor robots and outdoor robots. An outdoor robot can be regarded as a robot that works outdoors and can move; an indoor robot can be regarded as a robot that works indoors and can move. Since the working scenes of indoor robots are generally highly dynamic environments such as shopping malls, garages, and supermarkets, the head-up lidar (especially a two-dimensional lidar) of an indoor robot is often occluded, so that the lidar data contains many dynamic objects, or the lidar data is entirely occluded and becomes invalid. Meanwhile, the scene of an indoor robot also changes frequently, so the traditional scheme of mapping and positioning by head-up laser alone is not robust.

The electronic device in this embodiment of the present application includes at least two top-view sensors, which are used to collect top-view environment data. The structure of the electronic device is described by taking an indoor robot as an example. FIG. 14 is a schematic structural diagram of another indoor robot provided by an embodiment of the present application. Referring to FIG. 9 and FIG. 14, the top-view sensors, such as dual depth cameras, i.e. two depth cameras, are located on the top of the indoor robot and are used to collect top-view environment data of different regions above the indoor robot. The sensors included in the indoor robot include at least a depth camera, a lidar, and a wheel odometer. The depth cameras face upward and are used to collect top-view environment data. The angle between a depth camera and the horizontal direction is not limited here, as long as the fields of view of two adjacent depth cameras have a shared-view region; for example, the angle between the depth camera and the horizontal direction is 90°±20°, while ensuring that the two cameras have a 1/3 shared-view region, which facilitates camera calibration and image stitching. The installation of the lidar, i.e. the head-up sensor, is not limited, as long as it can collect head-up environment data; exemplarily, the lidar faces horizontally forward, with an angle of 0° between its positive direction and the horizontal direction.

When the indoor robot works, the external parameter calibration of the wheel odometer, the lidar, and the top-view depth cameras is completed first; then the horizontal environment map and the top-view environment map, i.e. the head-up grid map and the top-view grid map, are established. After the mapping is completed, the indoor robot can be positioned through the head-up grid map and the top-view grid map. As shown in FIG. 13, a positioning method provided in an embodiment of the present application includes the following steps:
S710. Acquire sensor data, the sensor data including head-up environment data collected by at least one head-up sensor and top-view environment data collected by at least two top-view sensors.

In this embodiment, the sensor data can be regarded as the data collected by the sensors located on the electronic device. The content of the sensor data is not limited here and can be determined according to the types of sensors included in the electronic device. For example, the sensor data may include the environment data collected by the top-view sensors (i.e. the top-view environment data), the environment data collected by the head-up sensor (i.e. the head-up environment data), and the data collected by the wheel odometer. The environment data includes the head-up environment data and the top-view environment data, and can be regarded as data collected by the sensors that characterizes the surroundings of the electronic device.

The head-up environment data can be regarded as the environment data in the running direction of the electronic device collected by the head-up sensor, and the head-up sensor can be regarded as a sensor, such as a lidar, that collects the environment data in the running direction of the electronic device. The top-view environment data can be regarded as the environment data above the electronic device collected by the top-view sensors, and a top-view sensor can be regarded as a sensor, such as a depth camera, that collects the environment data above the electronic device.

The top-view environment data, for example information about the ceiling, is relatively fixed and has a relatively small probability of being occluded, so the top-view environment data can keep the positioning stable when the head-up environment data is occluded.

Before realizing positioning, this step may first acquire the sensor data collected by the sensors on the electronic device, so that the sensor data can be processed for positioning; the acquisition manner is not limited here. For example, the processor of the electronic device communicates with the sensors to acquire the sensor data collected by the sensors.
S720. Process the sensor data.

After the sensor data is acquired, it can be processed so that grid maps can be built. A grid map is a map representation that divides the spatial plane into grid cells with a certain resolution, and the value in a cell is the probability that the cell is currently occupied. In this embodiment, the grid maps include a top-view grid map and a head-up grid map.

How to process the sensor data is not limited here; different sensor data corresponds to different processing means.

In one embodiment, since the top-view sensor data (top-view environment data) is collected by at least two top-view sensors, processing the top-view sensor data may include stitching the top-view sensor data collected by the at least two top-view sensors for better positioning. Similarly, when head-up environment data is collected by multiple head-up sensors, the head-up environment data collected by the multiple head-up sensors can be stitched when processing the head-up environment data.

In addition, to improve the positioning efficiency, the point cloud information of the surface edges in the top-view environment data can be extracted when processing the top-view environment data, and the point cloud information of the surface edges in the head-up environment data can also be extracted. For accurate positioning, the top-view sensor data and the head-up sensor data can be time-stamp aligned when processing the sensor data.
S730. Generate a head-up grid map and a top-view grid map according to the processed sensor data.

In this embodiment, the head-up grid map can be regarded as a map established based on the environment in the head-up direction of the electronic device (the running direction of the electronic device), and the top-view grid map can be regarded as a map established based on the environment in the top-view direction of the electronic device. Exemplarily, the head-up grid map may be a grid map generated based on the head-up environment data, and the top-view grid map may be a grid map generated based on the top-view environment data. The data used to establish the head-up grid map and the top-view grid map is not limited here.

When generating a grid map, an empty grid map is first generated. After the point cloud information (i.e. the processed top-view environment data and the processed head-up environment data among the processed sensor data) is acquired, the point cloud information can be mapped into the grid map to update it, and the probabilities corresponding to the grid cells are computed.

When the environment in the top-view direction of the electronic device is uniform, this embodiment can use the head-up grid map to remove duplicated top-view environment data, avoiding the low positioning efficiency caused by a uniform overhead environment. When the environment in the head-up direction of the electronic device has high pedestrian traffic, or changes dynamically, this embodiment can use the top-view grid map to improve the positioning accuracy.
S740. Position the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map.

After the top-view grid map and the head-up grid map are generated, the electronic device can be positioned based on the processed sensor data, the top-view grid map, and the head-up grid map.

In one embodiment, pose prediction can be performed based on the wheel odometer data in the processed sensor data, and then the global pose is determined based on the top-view grid map and the head-up grid map.
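A minimal dead-reckoning sketch of this pose prediction step, assuming a differential-drive platform; the wheel geometry and all names are assumptions:

```python
import numpy as np

def predict_pose(pose, d_left, d_right, wheel_base):
    """Predict (x, y, yaw) from wheel travel increments since the last update.

    d_left / d_right: distance travelled by each wheel in metres;
    wheel_base: lateral distance between the two wheels.
    """
    d = 0.5 * (d_left + d_right)              # forward motion of the body
    d_yaw = (d_right - d_left) / wheel_base   # heading change
    x, y, yaw = pose
    x += d * np.cos(yaw + 0.5 * d_yaw)        # integrate at the mid-heading
    y += d * np.sin(yaw + 0.5 * d_yaw)
    return np.array([x, y, yaw + d_yaw])
```

This predicted pose then serves as the initial value that the map matching (e.g. the CSM search sketched earlier) refines.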
When positioning the electronic device based on the top-view grid map and the head-up grid map, loop closure detection can first be performed on the head-up grid map and the top-view grid map, and then the global pose is output based on global pose graph optimization.

Exemplarily, the loop closure detection can compute the matching rate between the point cloud information corresponding to the sensor data and the grid map, with the following formula:
$$\mathrm{Score} = \frac{1}{k}\sum_{i=1}^{k} P_i, \qquad P_i = \exp\!\left(-\frac{(h_i - \mu_i)^2}{2\sigma_i^2}\right)$$

where k denotes the number of preprocessed laser points in the current frame; T denotes the pose of the current frame in the map; h_i is the height value of the i-th preprocessed laser point of the current frame after transformation by the pose T; μ_i, σ_i denote the parameters of the Gaussian distribution over the height values of all the laser points falling in the grid cell onto which the i-th preprocessed laser point of the current frame is projected according to the pose T.
Score represents the matching between the current point cloud information and the grid map, i.e. the matching rate. Score lies in the range 0-1; the larger the score, the better the match and the more likely it is a loop closure. Once Score is determined, the relative pose between two nodes can be obtained, and global pose graph optimization can then be performed based on the relative pose between the two nodes.
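A minimal sketch of this score, directly following the reconstructed formula above; the eps guard against zero variance is an added assumption:

```python
import numpy as np

def match_score(heights, mu, sigma, eps=1e-6):
    """Score in [0, 1] for k laser points already projected into the map.

    heights: h_i of each pose-transformed point; mu / sigma: Gaussian
    height parameters of the grid cell each point falls into.
    """
    p = np.exp(-0.5 * ((heights - mu) / (sigma + eps)) ** 2)
    return float(p.mean())   # (1/k) * sum of the per-point probabilities
```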
A node is an abstract concept representing the information encapsulated by one measurement, including point cloud information and pose information. The global pose graph can be built based on the relative poses between pairs of nodes obtained from the loop closure information; the global pose graph optimization adjusts the pose information of the nodes and finally outputs the optimized global pose result.
When optimizing the global pose graph, the cost function is as follows:

$$X^2 = \sum_{\langle i,j \rangle} e_{ij}^{T}\,\Lambda_{ij}\,e_{ij}, \qquad e_{ij} = Z_{ij} - h(c_i, c_j)$$

where the function h below is used to compute the relative pose between two nodes:

$$h(c_i, c_j) = \begin{bmatrix} R_i^{T}(t_j - t_i) \\ \theta_j - \theta_i \end{bmatrix}$$

In the formulas: c_i, c_j denote the pose information of nodes i and j, respectively; R_i denotes the rotation matrix of node i; t_i, t_j denote the pose translation vectors of nodes i and j, respectively; θ_i, θ_j denote the pose angle vectors of nodes i and j, respectively; Z_ij denotes the laser match between nodes i and j, i.e. the observed pose transformation between the two laser frames; X² is the pose graph residual; Λ_ij is the block of the information matrix corresponding to i and j, representing the amount of observed information between i and j, and is used as the weight in the global pose graph optimization.
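A minimal sketch of this global optimization using SciPy, reusing the h() and edge_residual() helpers sketched earlier; fixing the first node and the (i, j, Z_ij, sqrt_info) edge layout are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_pose_graph(poses, edges):
    """Adjust node poses so the weighted edge residuals shrink.

    poses: Nx3 array of (x, y, yaw); edges: list of (i, j, z_ij, sqrt_info),
    where sqrt_info is a square root of the information block Lambda_ij
    and weights the residual, as in the cost function above.
    """
    first = poses[0]   # hold the first node fixed to remove gauge freedom

    def residuals(flat):
        p = np.vstack([first, flat.reshape(-1, 3)])
        out = [sqrt_info @ edge_residual(p[i], p[j], z_ij)
               for i, j, z_ij, sqrt_info in edges]
        return np.concatenate(out)

    result = least_squares(residuals, poses[1:].ravel())
    return np.vstack([first, result.x.reshape(-1, 3)])
```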
In the positioning method provided by this embodiment of the present application, sensor data is first acquired, the sensor data including head-up environment data and top-view environment data; the sensor data is then processed; next, a head-up grid map and a top-view grid map are generated according to the processed sensor data; finally, the electronic device is positioned according to the processed sensor data, the top-view grid map, and the head-up grid map. With the above technical solution, since the environment above the device is not prone to change, combining the top-view grid map generated from the top-view environment data collected by multiple top-view sensors with the head-up grid map generated from the head-up environment data collected by the head-up sensor avoids the influence of the environment on the positioning of the electronic device and improves the robustness of the positioning.
On the basis of the above embodiments, modified embodiments of the above embodiments are proposed. It should be noted here that, for brevity, only the differences from the above embodiments are described in the modified embodiments.
In one embodiment, processing the sensor data includes: preprocessing the sensor data; transforming the preprocessed sensor data into the body coordinate system; optimizing the sensor data after the coordinate system transformation; and obtaining processed top-view environment data and processed head-up environment data from the optimized sensor data.

When processing the sensor data, the sensor data may first be preprocessed. The preprocessing means is not limited and can be determined based on the content of the sensor data. For example, the correspondence between the sensor data can be established based on time; the top-view environment data can also be stitched; and the point cloud information convenient for positioning in the top-view environment data, as well as the point cloud information in the head-up environment data, can be extracted.

Exemplarily, the preprocessing means for the head-up environment data in the sensor data include, but are not limited to: time stamp alignment, extracting the features in the aligned head-up sensor data, and segmenting the point cloud information of the surface edges in the head-up sensor data.
To realize positioning, the preprocessed sensor data can be transformed into the body coordinate system, so as to position the electronic device in the body coordinate system. The transformation means is not limited here.

After the coordinate system transformation, the coordinate-transformed sensor data can be optimized for better positioning. The optimization means is not limited, for example iterative closest point (ICP), ICP variants, and brute-force matching.

After optimizing the coordinate-transformed sensor data, the processed top-view environment data and the processed head-up environment data can be obtained from the sensor data; for example, the optimized top-view environment data in the sensor data is read as the processed top-view environment data, and the optimized head-up environment data in the sensor data is read as the processed head-up environment data.

This embodiment refines the operation of processing the sensor data, ensuring that the processed sensor data can support positioning.
In one embodiment, optimizing the sensor data after the coordinate system transformation includes: processing the coordinate-transformed sensor data by brute-force matching.

Processing the coordinate-transformed sensor data by brute-force matching can eliminate sensitivity to the initial value. The means of brute-force matching is not limited here; for example, the transformed data can be optimized by the correlation scan match (CSM) algorithm. CSM can compute the relative pose between the laser and the map, and optimizing the coordinate-transformed sensor data through CSM can make the positioning result more accurate.

This embodiment refines the specific technical means of optimizing the coordinate-transformed sensor data, so that the top-view grid map and the head-up grid map generated based on the optimized coordinate-transformed sensor data enable more precise positioning.
In one embodiment, preprocessing the sensor data includes: aligning the head-up environment data and the top-view environment data based on time stamps; stitching the aligned top-view environment data; and extracting the point cloud information of the surface edges in the top-view environment data.

This embodiment refines the technical means of preprocessing the sensor data. When preprocessing the sensor data, the top-view environment data and the head-up environment data may first be aligned based on the time stamps.

Since the top-view environment data is collected by at least two top-view sensors, the aligned top-view environment data can be stitched when preprocessing the sensor data, so that positioning is performed based on the stitched top-view environment data. The stitching means is not limited here.

This embodiment can effectively associate the sensor data based on time and effectively extracts the point cloud information of the surface edges in the top-view environment data for generating the top-view grid map.

After stitching the aligned top-view environment data, to facilitate generation of the top-view grid map, this embodiment can extract the point cloud information of the surface edges in the top-view environment data, so that the point cloud information can be mapped into the top-view grid map for mapping and positioning; to facilitate generation of the head-up grid map, the point cloud information of the surface edges in the aligned head-up environment data can be extracted, so that the point cloud information can be mapped into the head-up grid map for mapping and positioning.
FIG. 15 is a schematic flowchart of another positioning method provided in an embodiment of the present application. Referring to FIG. 15, the method includes the following steps:

S810. Acquire sensor data, the sensor data including head-up environment data collected by at least one head-up sensor and top-view environment data collected by at least two top-view sensors.
S820. Align the head-up environment data and the top-view environment data based on time stamps.

Aligning the head-up environment data and the top-view environment data based on time stamps can be regarded as establishing, based on time, the correspondence of at least one kind of sensor data, so that the aligned top-view environment data can be processed for positioning.

S830. Stitch the aligned top-view environment data.

The means of stitching the aligned top-view environment data is not limited. For example, the stitching can be realized based on the top-view environment data in the shared-view region, so as to stitch the top-view environment data collected by the multiple top-view sensors; when there is no shared-view region, the aligned top-view environment data can be stitched based on the positions of the multiple top-view sensors on the electronic device.
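A minimal sketch of stitching two top-view clouds via calibrated extrinsics; the 5 cm de-duplication voxel for the shared-view region is an illustrative assumption:

```python
import numpy as np

def stitch_topview(cloud_a, cloud_b, extrinsic_a, extrinsic_b, voxel=0.05):
    """Merge the clouds of two top-view depth cameras in the body frame.

    cloud_a / cloud_b: Nx3 points in each camera frame; extrinsic_*:
    4x4 camera-to-body matrices from calibration.
    """
    def to_body(cloud, ext):
        homo = np.hstack([cloud, np.ones((len(cloud), 1))])
        return (homo @ ext.T)[:, :3]

    merged = np.vstack([to_body(cloud_a, extrinsic_a),
                        to_body(cloud_b, extrinsic_b)])
    # drop near-duplicate points contributed twice by the shared-view region
    keys = np.round(merged / voxel).astype(int)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(keep)]
```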
S840. Extract the point cloud information of the surface edges in the stitched top-view environment data.

After stitching the aligned top-view environment data, this embodiment can extract the point cloud information of the surface edges in the stitched top-view environment data for positioning. The specific means of extracting the point cloud information of the surface edges is not limited here.

S850. Transform the preprocessed sensor data into the body coordinate system.

This example only shows the flow of processing the top-view environment data; the technical means of preprocessing the rest of the sensor data are not limited here.

S860. Optimize the sensor data after the coordinate system transformation, and obtain processed top-view environment data and processed head-up environment data from the sensor data.

S870. Generate a head-up grid map and a top-view grid map according to the processed sensor data.

S880. Position the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map.
In one embodiment, positioning the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map includes: performing loop closure detection on the head-up grid map to obtain a head-up matching rate; performing loop closure detection on the top-view grid map to obtain a top-view matching rate; and determining the global pose of the electronic device when the head-up matching rate is greater than a first set threshold and/or the top-view matching rate is greater than a second set threshold.

This embodiment refines the technical means of positioning: determining the global pose of the electronic device based on the head-up matching rate and the top-view matching rate ensures more accurate positioning results.

FIG. 16 is a schematic flowchart of another positioning method provided in an embodiment of the present application. Referring to FIG. 16, the method includes the following steps:
S910. Acquire sensor data, the sensor data including head-up environment data collected by at least one head-up sensor and top-view environment data collected by at least two top-view sensors.

S920. Process the sensor data.

S930. Generate a head-up grid map and a top-view grid map according to the processed sensor data.

S940. Perform loop closure detection on the head-up grid map to obtain a head-up matching rate.

The head-up matching rate can be regarded as the probability that the head-up grid map matches the head-up environment data. The technical means of loop closure detection is not limited here, as long as the head-up matching rate can be determined.

In this embodiment, based on the head-up grid map and the head-up environment data, loop closure detection can be performed on the head-up grid map to obtain the head-up matching rate.
S950. Perform loop closure detection on the top-view grid map to obtain a top-view matching rate.

The top-view matching rate can be regarded as the probability that the top-view grid map matches the top-view environment data. The technical means of loop closure detection is not limited here, as long as the top-view matching rate can be determined.

In this embodiment, based on the top-view grid map and the top-view environment data, loop closure detection can be performed on the top-view grid map to obtain the top-view matching rate.

When determining the head-up matching rate and the top-view matching rate, loop closure detection (also called loop-back detection) can be performed on the head-up grid map and the top-view grid map respectively to determine the corresponding matching rates. The execution order is not limited here: the head-up matching rate and the top-view matching rate can be determined in parallel or in sequence.

S960. Determine the global pose of the electronic device when the head-up matching rate is greater than a first set threshold and/or the top-view matching rate is greater than a second set threshold.

When the head-up matching rate is greater than the first set threshold and/or the top-view matching rate is greater than the second set threshold, this embodiment can determine the global pose of the electronic device based on the top-view grid map and the head-up grid map, for example by determining the pose graph residual through pose graph optimization. The first set threshold and the second set threshold are not limited, and the head-up matching rate and the top-view matching rate may have different set thresholds.
In one embodiment, generating the head-up grid map and the top-view grid map according to the processed sensor data includes: generating the head-up grid map based on the processed head-up environment data; and generating the top-view grid map based on the processed top-view environment data.

In this embodiment, the head-up grid map is generated based on the head-up environment data, and the top-view grid map is generated based on the top-view environment data.

This embodiment refines the technical means of generating the head-up grid map and the top-view grid map, and can effectively combine the head-up and top-view environments of the electronic device for precise positioning.

In the initial stage of mapping, an empty top-view grid map and an empty head-up grid map can first be created; after the sensor data is acquired, the head-up grid map can be updated based on the processed head-up environment data, and the top-view grid map can be updated based on the processed top-view environment data.
In this embodiment, the map is built by fusing the top-view environment data and the head-up environment data, i.e. mapping and positioning are performed based on the fusion of the top-view sensors and the head-up sensor. This solution comprehensively considers the characteristics of the measurement data and the application scenarios of the head-up lidar (i.e. the head-up sensor) and the top-view depth cameras (i.e. the top-view sensors), and makes full use of the advantages of the data collected by multiple sensors to achieve accurate real-time mapping and highly robust positioning.

FIG. 17 is a schematic flowchart of a multi-layer grid map mapping and positioning method provided in an embodiment of the present application; FIG. 17 takes two top-view sensors as an example. The method includes:
S1. Acquire sensor data.

The acquired sensor data includes lidar data, wheel odometer data, and depth camera data, i.e. top-view environment data and head-up environment data.

S2. Data preprocessing.

S2.1. Align the time stamps of the sensor data collected by the multiple sensors, and stitch the dual top-view depth camera data, i.e. the top-view environment data collected by the two top-view sensors;

S2.2. Extract the features of the top-view environment data and segment the point cloud information of the surface edges.

S3. Perform pose prediction through the wheel odometer.

S4. Coordinate system transformation.

Transfer the point cloud information of the lidar and the point cloud information of the surface edges of the top-view environment data into the body coordinate system to obtain the coordinate-transformed point cloud information, where the body coordinate system can be custom-defined or can be the coordinate system of the wheel odometer.
基于转换后的点云信息,通过CSM算法优化当前位姿,其中优化的位姿可以认为是坐标转换后的点云信息在当前机体坐标系下在栅格地图中的位置,当 前的点云信息在机体坐标系下,通过当前点云信息和地图的匹配,可以矫正坐标转换后的点云信息在机体坐标系下在地图中的位姿。Based on the converted point cloud information, the current pose is optimized through the CSM algorithm, where the optimized pose can be considered as the position of the coordinate transformed point cloud information in the grid map in the current body coordinate system, and the current point cloud information In the body coordinate system, through the matching of the current point cloud information and the map, the pose of the point cloud information after coordinate transformation in the body coordinate system can be corrected in the map.
S6.更新栅格地图中的栅格对应的概率。S6. Updating the probability corresponding to the grid in the grid map.
S6.1.基于平视激光数据更新平视栅格地图;S6.1. Update the head-up grid map based on the head-up laser data;
S6.2.基于顶视面边缘点云更新顶视栅格地图。S6.2. Update the top-view grid map based on the top-view edge point cloud.
S7.闭环检测。S7. Closed-loop detection.
S7.1.通过平视激光数据和平视栅格地图检测平视闭环;S7.1. Detect head-up closed loop through head-up laser data and head-up grid map;
S7.2.通过顶视的面边缘点云和顶视栅格地图检测顶视闭环。S7.2. Detect top-view closed loops through the top-view face edge point cloud and top-view grid map.
S8.全局位姿图优化,输出全局位姿。S8. Global pose graph optimization, output global pose.
平视激光数据可以认为是处理后的平视环境数据,顶视面边缘点云可以认为是处理后的顶视环境数据。The head-up laser data can be considered as the processed head-up environment data, and the top-view surface edge point cloud can be considered as the processed top-view environment data.
With the development of automation control technology, mobile service robots are gradually being applied in industrial production and commercial services. An important working premise of a mobile service robot is accurate acquisition of location information. A common way of acquiring location information in the related art is for the mobile service robot to rely on a lidar sensor. Although lidar offers positioning accuracy and anti-interference capability, mobile service robots often run in relatively complex scenes, which limits the application of lidar. For example, in crowded scenes the features identified by the lidar change greatly between frames, so the pose determined by the mobile service robot has a large error, and the field of view of the lidar is constrained by the crowd, limiting the data the lidar can collect. A mobile service robot running in such crowded scenes needs a high-precision positioning method.

As described above, the service quality of a robot mainly depends on accurate location information. Mobile robots mostly measure location information with lidar sensors, but this is constrained by the scene; for example, when there are many people in the robot's environment, the features measured by the lidar vary greatly and precise positioning is impossible. The technical solution of the present application performs mapping and positioning based on depth data in a preset direction over a short period of time (for example, indoor ceiling features that are essentially unchanged), improving the accuracy of robot positioning.

FIG. 18 is a schematic flowchart of another positioning method provided by an embodiment of the present application. This embodiment is applicable to robot positioning in crowded scenes. The method can be performed by a positioning apparatus, which can be implemented in hardware and/or software. Referring to FIG. 18, the positioning method provided by this embodiment of the present application includes the following steps:
S1010. Collect the current pose of the robot and depth data in at least one preset direction.

The current pose can be information representing the position and state of the robot at the current moment, and can include the position coordinates in the world coordinate system and the angle between the robot's heading and the X-axis of the world coordinate system.

The at least one preset direction can be at least one of the horizontal direction and the vertical direction.

The depth data in the at least one preset direction can be collected based on sensors preset on the robot.
S1020. Determine a feature point cloud based on the depth data.

The feature point cloud is a point cloud characterizing features in the depth data.

After the depth data in the at least one preset direction is collected, the feature point cloud can be determined based on the depth data, for example by extracting the outer contour points of at least one plane from the depth data and forming the feature point cloud from the outer contour points of the at least one plane.
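As one possible (and simplified) realization of extracting the outer contour points of a plane, the dominant horizontal plane could be taken at the most common height and its 2D convex hull used as the contour; a full RANSAC plane fit would be the more robust alternative, and every threshold here is an assumption:

```python
import numpy as np
from scipy.spatial import ConvexHull

def plane_contour_points(points, tol=0.03):
    """Return the outer contour of the dominant horizontal plane
    (e.g. a ceiling panel) as the feature point cloud.

    points: Nx3 cloud from the upward-looking depth camera.
    """
    z = points[:, 2]
    counts, edges = np.histogram(z, bins=50)
    z0 = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    on_plane = points[np.abs(z - z0) < tol]     # points near the plane height
    hull = ConvexHull(on_plane[:, :2])          # outer boundary in the x-y plane
    return on_plane[hull.vertices]              # contour points in 3D
```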
S1030. Determine an obstacle score of the feature point cloud in a preset global grid map, where the preset global grid map is formed based on historical point clouds.

The preset global grid map may include one or more grid cells, and each cell may include a probability value that the robot is located in the cell.

The preset global grid map is divided into an obstacle region, an obstacle-free region, and an unknown region.

Exemplarily, each point in the feature point cloud can be mapped to one grid cell in the preset global grid map, the probability corresponding to that cell is taken as the obstacle score of the point, and the sum of the obstacle scores of all the points in the feature point cloud that are mapped into the obstacle region is taken as the obstacle score of the feature point cloud.
S1040. Optimize the current pose according to the obstacle score to determine positioning information of the robot.
The positioning information may reflect the position of the robot at the current moment, and may include the coordinates of the robot in the world coordinate system and the angle between the robot's heading and the X-axis of the world coordinate system.
Exemplarily, the obstacle score of the feature point cloud may be used as a constraint on the current pose, the current pose may be optimized on the basis of the obstacle score, and the optimized current pose may be taken as the positioning information of the robot. It can be understood that the current pose may be optimized by, for example, nonlinear least squares optimization or Lagrange multiplier optimization.
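As a minimal sketch of this optimization step (not the disclosed implementation), the following Python code searches for the 2D pose (x, y, yaw) that maximizes the feature point cloud's obstacle score in a grid map; the map lookup `score_in_map`, the point array, and the initial pose are all hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def transform(pose, points):
    """Apply a 2D pose (x, y, yaw) to an Nx2 array of robot-frame points."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([x, y])

def negative_obstacle_score(pose, points, score_in_map):
    """Negated sum of per-point map probabilities; minimizing it maximizes the match."""
    return -sum(score_in_map(p) for p in transform(pose, points))

# points: Nx2 feature point cloud; score_in_map: callable returning a cell probability.
# result = minimize(negative_obstacle_score, x0=np.zeros(3),
#                   args=(points, score_in_map), method="Nelder-Mead")
```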
The technical solution of the present application performs mapping and positioning based on depth data in a preset direction whose features change little over a short period, improving the accuracy of robot positioning.
Fig. 19 is a schematic flowchart of another positioning method provided by an embodiment of the present application. Referring to Fig. 19, the method provided by this embodiment of the present application includes the following steps.
S1110. Collect the current pose of the robot, depth data in at least one preset direction, and infrared data in at least one preset direction.
The current pose may be information representing the position and state of the robot at the current moment, and may include position coordinates in the world coordinate system and the angle between the robot's heading and the X-axis of the world coordinate system. Fig. 20 is an example diagram of a pose provided by an embodiment of the present application. Referring to Fig. 20, the current pose of the robot in this embodiment may contain three unknowns, namely x, y, and yaw, where x represents the robot's abscissa in the world coordinate system, y represents its ordinate in the world coordinate system, and yaw represents the angle between the robot's heading and the X-axis of the world coordinate system. The preset direction may be a preset data collection direction, which may be set by a user or a service provider, and may be any direction in space, for example, the robot's vertical direction or a direction 45 degrees above the horizontal. The depth data may be data reflecting the distance from an object to the robot, and may be collected by a sensor arranged on the robot. The infrared data may be data collected by an infrared sensor; it may be generated by the infrared sensor sensing objects in the preset direction of the robot, and may indicate how far the objects are from the robot.
In this embodiment of the present application, the current pose may be acquired using sensors arranged on the robot; for example, inertial navigation or a displacement sensor may be used to measure the distance moved by the robot and thereby determine its current pose. The depth data and infrared data may be collected in a preset direction of the robot; for example, a Time of Flight (TOF) camera may collect depth data of obstacles above the top of the robot. The robot may collect depth data and infrared data in multiple preset directions to further improve positioning accuracy.
S1120. Extract at least one edge point from the infrared data, and convert the at least one edge point into a feature point cloud according to the depth data.
An edge point may be a point located at an edge position in the image formed by the infrared data; edge points may be detected in the infrared data by, for example, differential edge detection, Roberts operator edge detection, Sobel edge detection, Laplacian edge detection, or Prewitt operator edge detection.
Exemplarily, edge points may be extracted from the infrared data by an edge detection method, and the coordinates of the at least one edge point may be converted from two-dimensional to three-dimensional coordinates according to the depth values in the depth data; for example, the depth value corresponding to each edge point may be extracted from the depth data and used as the third dimension of that edge point's three-dimensional coordinates. The coordinate-converted edge points may then form the feature point cloud.
S1130. Determine the obstacle score of the feature point cloud in the preset global grid map, where the preset global grid map is formed based on historical point clouds.
The obstacle score may be, when all edge points in the feature point cloud are mapped to the preset global grid map, the total probability score of the grid cells to which the edge points falling in the obstacle area are mapped. The preset global grid map may be formed from historical point clouds and may reflect the conditions of the space in which the robot is located; it may include one or more grid cells, and each cell may store a probability value that the robot is located in that cell. The preset global grid map may be gradually refined as the robot moves. The obstacle score may be the sum of the probability values that the edge points lie at obstacle positions in the preset global grid map. Fig. 21 is an example diagram of a preset global grid map provided by an embodiment of the present application. Referring to Fig. 21, the preset global grid map may consist of three parts: an unknown area, an obstacle area, and an obstacle-free area; the obstacle score may be determined by summing the probability values of the obstacle-area cells to which the feature point cloud is mapped.
In this embodiment of the present application, all edge points in the feature point cloud may be mapped one by one to the preset global grid map, the probability value of the cell in which each edge point falls may be determined, and the sum of the probability values of the obstacle-area cells in which the edge points lie may be taken as the obstacle score of the feature point cloud in the preset global grid map.
S1140. Optimize the current pose according to the obstacle score to determine the positioning information of the robot.
The positioning information may reflect the position of the robot at the current moment, and may include the coordinates of the robot in the world coordinate system and the angle between the robot's heading and the X-axis of the world coordinate system.
The obstacle score of the feature point cloud may be used as a constraint on the current pose, the current pose may be optimized on the basis of the obstacle score, and the optimized current pose may be taken as the positioning information of the robot. The current pose may be optimized by, for example, nonlinear least squares optimization or Lagrange multiplier optimization.
In this embodiment of the present application, the current pose of the robot and the depth data and infrared data in the preset direction are acquired, at least one edge point is extracted from the infrared data, the at least one edge point is processed according to the depth data to form a feature point cloud, the obstacle score of the feature point cloud in the preset global grid map is determined, and the obstacle score is used to optimize the current pose to obtain the positioning information of the robot. This achieves accurate acquisition of the robot's position information, reduces the influence of complex environments on pose determination, can enhance the robot's service quality, and helps improve the user experience.
Fig. 22 is a schematic flowchart of another positioning method provided by an embodiment of the present application; this embodiment is described on the basis of the foregoing embodiments. Referring to Fig. 22, the method provided by this embodiment of the present application includes the following steps.
S1210. Obtain the current pose of the robot in the world coordinate system, where the current pose includes at least an abscissa, an ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system.
The world coordinate system may be the absolute coordinate system of the robot, and its origin may be determined when the robot is initialized.
In this embodiment of the present application, the current pose may include three elements: the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system; it may be acquired by sensors arranged in the robot. Data collected by the robot's sensors may be obtained, and the robot's abscissa and ordinate in the world coordinate system at the current moment and the angle between the robot's heading and the X-axis of the world coordinate system may be determined from that data as the current pose.
S1220. Use at least one depth data sensor preset on the robot to collect depth data in at least one preset direction and at least one infrared data sensor to collect infrared data in at least one preset direction, where the preset direction includes at least one of a horizontal direction and a vertical direction.
A depth data sensor may be a device that collects depth data; it can measure the distance from the robot to a sensed object and perceive the depth of objects in space. Depth data sensors may include structured-light depth sensors, camera-array depth sensors, and time-of-flight depth sensors. An infrared data sensor may be a device that generates a thermal image of a sensed object; infrared data sensors may include infrared imagers and time-of-flight cameras. The depth data sensor and the infrared data sensor on the robot may be an integrated data collection device, for example a TOF camera, which can be directly controlled to collect both the depth data and the infrared data of the sensed object. The at least one preset direction may be at least one of the robot's horizontal direction and vertical direction.
In this embodiment of the present application, a depth data sensor and an infrared data sensor are pre-installed on the robot and may be used to collect data in the robot's horizontal or vertical direction, respectively. Multiple depth data sensors and multiple infrared data sensors may be preset on the robot, and the preset data-collection directions of the individual sensors may differ. The preset direction may be set by a user or a service provider; for example, it may be a direction convenient for capturing indoor ceiling features, such as vertically upward or 45 degrees above the horizontal.
S1230. Filter out noise in the infrared data.
In this embodiment of the present application, because the infrared data is affected by the environment during collection, it contains noise, which reduces the accuracy of edge point extraction. To improve the accuracy of robot positioning, the noise in the infrared data may be filtered out. Filtering methods may include Gaussian filtering, bilateral filtering, median filtering, and mean filtering. Gaussian filtering is a linear smoothing filter that can remove Gaussian noise during image processing and thereby denoise the image data.
Exemplarily, taking Gaussian filtering of the collected infrared data to remove its noise as an example, a template (convolution kernel or mask) may be used to scan each pixel of the infrared data, and the weighted average gray value of the pixels in the neighborhood determined by the template may replace the value of the pixel at the template's center, thereby filtering the noise out of the infrared data.
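A minimal illustration of this denoising step, assuming the infrared frame is available as a single-channel image file (the file name and 5x5 kernel size are illustrative, not from the original disclosure):

```python
import cv2

# Load the infrared frame as a single-channel image (path is a placeholder).
ir = cv2.imread("infrared_frame.png", cv2.IMREAD_GRAYSCALE)

# 5x5 Gaussian kernel; sigma=0 lets OpenCV derive sigma from the kernel size.
ir_denoised = cv2.GaussianBlur(ir, (5, 5), 0)
```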
S1240. Extract at least one edge point from the noise-filtered infrared data.
In this embodiment of the present application, after the noise is filtered out of the infrared data, edge points may be extracted from the image formed by the infrared data. All edge points in the infrared data may be extracted, or one edge point may be extracted at intervals to further improve positioning efficiency. Edge points may be extracted by image recognition; for example, points in the image formed by the infrared data whose color differs greatly from that of the surrounding pixels may be taken as edge points.
In an exemplary embodiment, the infrared data may be processed by the Canny edge detection algorithm: Gaussian filtering is applied to reduce noise, the magnitude and direction of the gradient are computed by finite differences of the first-order partial derivatives, non-maximum suppression is applied to the gradient magnitude, and edges are detected and connected with a double-threshold algorithm, thereby obtaining the edge points in the infrared data. The Canny edge detection algorithm is an algorithm for detecting image edges and may include the above steps. Edge points in the infrared data may also be detected by methods such as Sobel edge detection, Prewitt edge detection, Roberts edge detection, or Marr-Hildreth edge detection.
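Continuing the sketch above, one hedged way to realize this edge-extraction step is OpenCV's Canny detector; the two thresholds are illustrative and would need tuning for real sensor data.

```python
import cv2
import numpy as np

# Double-threshold Canny on the denoised infrared image from the previous sketch.
edges = cv2.Canny(ir_denoised, 50, 150)

# (v, u) pixel coordinates of the detected edge points.
edge_points = np.argwhere(edges > 0)
```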
S1250. Convert the two-dimensional coordinates of the at least one edge point into three-dimensional coordinates using the robot's camera model and the depth data to form a feature point cloud.
The camera model may be a camera model established by the robot for three-dimensional conversion, and may be used together with the depth data to correct the coordinates of the edge points so as to obtain a feature point cloud with little distortion. The depth data may include depth information of the at least one edge point, and this depth information may serve as the third dimension of the corresponding edge point's three-dimensional coordinates. The camera model may include one or more of an Euler camera model, a UVN camera model, a pinhole camera model, a fisheye camera model, and a wide-angle camera model.
The coordinates of each edge point may be converted using the preset camera model as the reference frame, so that the two-dimensional coordinates of the edge points and the coordinates of the world coordinate system share the same reference frame; the depth information corresponding to the at least one edge point may be determined from the depth data; the two-dimensional coordinates and the depth information of each edge point may be combined into three-dimensional coordinates; and the multiple edge points with three-dimensional coordinates may form the feature point cloud. Exemplarily, the conversion of the two-dimensional edge points into a three-dimensional point cloud may be implemented as follows:
Z · [u, v, 1]^T = K · P
where Z is the depth information of the edge point, (u, v) are the two-dimensional coordinates of the edge point, K is the camera intrinsic matrix, which may be determined from the camera model, and P is the coordinate of the three-dimensional point cloud (feature point cloud).
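A minimal sketch of this back-projection, assuming a pinhole camera model: each edge pixel (u, v) with depth Z is lifted to a 3D point P = Z · K^(-1) · [u, v, 1]^T. The intrinsic values below are placeholders, not calibration data from the original disclosure.

```python
import numpy as np

# Placeholder pinhole intrinsics: fx, fy focal lengths, (cx, cy) principal point.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

def backproject(u, v, Z):
    """Lift pixel (u, v) with depth Z to a 3D point in the camera frame."""
    return Z * (K_inv @ np.array([u, v, 1.0]))

# Hypothetical usage with a depth map aligned to the infrared image:
# cloud = np.array([backproject(u, v, depth[v, u]) for (v, u) in edge_points])
```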
S1260. Transform the coordinates of the at least one edge point in the feature point cloud into the world coordinate system, and map each coordinate-transformed edge point to a target cell of the preset global grid map.
The coordinates of the at least one edge point in the feature point cloud may be transformed so that they are referenced to the world coordinate system. Fig. 23 is an example diagram of a coordinate transformation provided by an embodiment of the present application. Referring to Fig. 23, the collected edge points lie in the robot coordinate system, and the camera model corresponds to its own coordinate system; the depth data may be added to the edge points in the robot coordinate system according to the camera model's coordinate system, achieving an undistorted or low-distortion coordinate conversion, after which the three-dimensional coordinates are converted into world coordinates. Exemplarily, the conversion may be expressed as w_i = Proj(T * p_i), where w_i denotes the coordinates of the i-th edge point in the world coordinate system, p_i denotes the coordinates of the i-th edge point in the robot coordinate system, the Proj() function maps three-dimensional coordinates to two-dimensional coordinates, and T is the current pose of the robot, represented by x, y, and yaw, which may be written as:
T = | cos(yaw)  -sin(yaw)  x |
    | sin(yaw)   cos(yaw)  y |
    |     0          0     1 |
In this embodiment of the present application, after the coordinates of the at least one edge point in the world coordinate system are determined, the at least one edge point may be mapped, one by one and according to its coordinates, to target cells of the preset global grid map. For example, different cells in the preset global grid map cover different coordinate ranges; the at least one edge point may be mapped into the corresponding cell according to its coordinates, and a cell to which an edge point is mapped may be recorded as a target cell.
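The sketch below applies such a pose T to a point and projects the result onto the 2D map plane; treating Proj() as simply dropping the height coordinate is an assumption made here for illustration, as is applying T to the planar part of the point.

```python
import numpy as np

def to_world(pose, p):
    """w_i = Proj(T * p_i) for pose (x, y, yaw); Proj() assumed to keep (x, y)."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.array([[c, -s, x],
                  [s,  c, y],
                  [0,  0, 1]])
    world = T @ np.array([p[0], p[1], 1.0])  # homogeneous planar part of p_i
    return world[:2]
```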
S1270. If the target cell to which an edge point is mapped is a cell within the obstacle area of the preset global grid map, obtain the probability value of the target cell as the obstacle score of that edge point.
The cells within the obstacle area may be identified by marker information stored in the map.
The target cells of the preset global grid map to which edge points are mapped may be checked; if a target cell is a cell within the obstacle area, the probability value stored in that target cell is taken as the obstacle score of the corresponding edge point.
If the target cell to which an edge point is mapped is not a cell within the obstacle area, for example if it lies in an obstacle-free area or an unknown area of the preset global grid map, the probability value corresponding to that edge point may be excluded from the statistics: the edge point may be deleted, or the probability value stored in its cell simply not obtained.
S1280. Take the sum of the obstacle scores of all edge points in the feature point cloud as the obstacle score of the feature point cloud.
In this embodiment of the present application, the obstacle scores of all edge points in the feature point cloud may be accumulated, and the resulting sum may be taken as the obstacle score of the feature point cloud.
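A compact sketch of S1260-S1280 under stated assumptions: the map is a dense array `prob` of cell probabilities with a boolean `is_obstacle` mask, a fixed resolution, and a known origin; all of these names and the indexing convention are illustrative.

```python
import numpy as np

def cloud_obstacle_score(world_points, prob, is_obstacle, resolution, origin):
    """Sum the probabilities of obstacle-area cells hit by the point cloud."""
    score = 0.0
    for wx, wy in world_points:
        gx = int((wx - origin[0]) / resolution)  # world -> grid index
        gy = int((wy - origin[1]) / resolution)
        if 0 <= gx < prob.shape[0] and 0 <= gy < prob.shape[1]:
            if is_obstacle[gx, gy]:  # only obstacle-area cells contribute
                score += prob[gx, gy]
    return score
```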
S1290. Construct a residual function according to the current pose and the obstacle score of the feature point cloud.
In this embodiment of the present application, a residual function for optimizing the current pose may be constructed according to the current pose and the obstacle score of the feature point cloud. The residual function may be a functional relationship for optimizing the robot's pose, representing the association between the robot's current pose and the obstacle score; it may be an optimization objective formulated as a nonlinear least squares problem, for example:
e_1 = Σ_{k=1}^{n} (1 - M(T, p_k))^2
where e_1 is the residual, p_k is the k-th edge point in the feature point cloud, M(T, p_k) is the obstacle score computed by projecting the k-th edge point p_k onto the preset global grid map when the robot pose is T, and n is the number of edge points in the feature point cloud.
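A hedged sketch of this residual, reusing the hypothetical `to_world` and `score_in_map` helpers from the earlier sketches; the squared form (1 - M)^2 is a reconstruction consistent with minimizing the cost, since M is an occupancy probability in [0, 1].

```python
import numpy as np
from scipy.optimize import least_squares

def residuals_e1(pose, points, score_in_map):
    """Per-point residuals 1 - M(T, p_k); least_squares minimizes their squares."""
    # to_world as defined in the earlier sketch.
    return np.array([1.0 - score_in_map(to_world(pose, p)) for p in points])

# result = least_squares(residuals_e1, x0=initial_pose, args=(points, score_in_map))
# optimized_pose = result.x  # (x, y, yaw) minimizing e_1
```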
S12100. Adjust the parameter information in the residual function so that the result value of the residual function is minimized, where the parameter information in the residual function is the value of at least one of the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system in the current pose.
The value of the current pose in the residual function may be adjusted so as to minimize the result value of the residual function; the current pose may be adjusted by methods including the gradient descent method, Newton's method, and quasi-Newton methods.
S12110. Take the abscissa and ordinate of the current pose when the result value is minimal, together with the angle between the robot's heading and the X-axis of the world coordinate system, as the positioning pose information.
The positioning pose information may be pose information used for robot positioning, and may represent the state in which the robot is currently most likely to be.
In this embodiment of the present application, when the result value of the residual function is minimal, the current pose may be deemed optimally adjusted, and the abscissa and ordinate of the adjusted current pose and the angle between the robot's heading and the X-axis of the world coordinate system at that moment may be output as the positioning pose information.
In this embodiment of the present application, the current pose of the robot in the world coordinate system is obtained, depth data and infrared data in a preset direction are collected using sensors arranged on the robot, the noise contained in the infrared data is filtered out, the infrared data is processed into a feature point cloud using the robot's camera model and the depth data, the coordinates of the at least one edge point in the feature point cloud are transformed into the world coordinate system, each edge point is mapped to a target cell of the preset global grid map, the probability value of the target cell is obtained as the obstacle score of the corresponding edge point when the target cell is at an obstacle position, the sum of the obstacle scores of the edge points is taken as the obstacle score of the feature point cloud, a residual function is constructed using the obstacle score and the current pose, the value of the current pose in the residual function is adjusted so that the result value of the residual function is minimized, and the current pose at the minimal result value is taken as the positioning information of the robot. By selecting the edge points corresponding to the infrared data to form the feature point cloud, this embodiment reduces the amount of data computation without degrading the accuracy of the positioning information and improves the efficiency of determining it; using the obstacle score to optimize the current pose improves the accuracy of the robot's position information, can enhance the robot's service quality, and helps improve the user experience.
On the basis of the foregoing embodiments, the method further includes: optimizing the positioning information according to the moving speed of the robot and the obstacle score.
On the basis of this embodiment of the present application, the moving speed of the robot may also be acquired, and the moving speed and the obstacle score may jointly be used to optimize the current pose to obtain the positioning information, further improving the accuracy of robot positioning. For example, the robot's moving speed may be collected and used to form a constraint; based on this constraint, a nonlinear least squares problem may be constructed for the positioning information obtained by optimizing the current pose with the obstacle score, and the problem may be solved by gradient descent; when the result value of the nonlinear least squares problem is minimal, the final positioning information is obtained.
Fig. 24 is a flowchart of another positioning method provided by an embodiment of the present application; this embodiment is described on the basis of the foregoing embodiments. Referring to Fig. 24, the method provided by this embodiment of the present application includes the following steps.
S1310. Collect the current pose of the robot, depth data in at least one preset direction, and infrared data in at least one preset direction.
S1320. Extract at least one edge point from the infrared data, and convert the at least one edge point into a feature point cloud according to the depth data.
S1330. Determine the obstacle score of the feature point cloud in the preset global grid map, where the preset global grid map is formed based on historical point clouds.
S1340. Construct a first residual term according to the current pose and the obstacle score of the feature point cloud.
In this embodiment of the present application, the first residual term corresponding to the nonlinear least squares problem may be constructed based on the current pose and the obstacle score of the feature point cloud, for example:
e_1 = Σ_{k=1}^{n} (1 - M(T, p_k))^2
where e_1 is the residual, p_k is the k-th edge point in the feature point cloud, M(T, p_k) is the obstacle score computed by projecting the k-th edge point p_k onto the grid map when the robot pose is T, and n is the number of edge points in the feature point cloud.
S1350. Determine a predicted pose based on the moving speed, and take the difference between the predicted pose and the historical pose at the previous moment as a second residual term.
The predicted pose may be a robot pose determined from the moving speed; for example, the position the robot has moved to may be determined from the moving speed, and the robot pose determined from that position may be taken as the predicted pose.
The moving speed of the robot may be collected and used to generate the robot's position, from which the predicted pose may be determined; the difference between the predicted pose and the historical pose at the previous moment is taken as the second residual term, where the historical pose may be the pose information determined by the robot at the previous moment and may include coordinates and the angle between the robot's heading and the abscissa axis of the world coordinate system.
S1360. Adjust the parameter information of the predicted pose and/or the parameter information of the current pose in the first residual term and the second residual term so that the sum of the first residual term and the second residual term is minimized, where the parameter information includes the value of at least one of the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system.
In this embodiment of the present application, at least one of the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system in the predicted pose and/or the current pose may be adjusted so that the sum of the first residual term and the second residual term is minimized; adjustment methods may include the gradient descent method, Newton's method, and quasi-Newton methods.
S1370. Take the abscissa and ordinate of the predicted pose and/or the current pose when the sum of the first residual term and the second residual term is minimal, together with the angle between the robot's heading and the X-axis of the world coordinate system, as the positioning pose information.
When the sum of the first residual term and the second residual term is minimal, the optimization of the robot's current pose is complete, and the adjusted predicted pose and/or the relevant information of the current pose at that moment may be taken as the robot's positioning pose information, which may serve as the positioning information used in the robot's positioning process; the relevant information includes the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system.
S1380. Update the robot pose corresponding to the positioning information into the preset global grid map.
The final pose of the robot may be determined from the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system in the positioning information; based on this final pose, the probability values of multiple cells in the preset global grid map may be determined and added to the corresponding cells, thereby updating the preset global grid map.
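One plausible form of this update step is sketched below: the feature point cloud is transformed with the optimized pose and the occupancy probability of each cell it hits is raised. The fixed increment and clamping are illustrative assumptions, not the disclosed update rule.

```python
import numpy as np

def update_grid(prob, cloud_world, resolution, origin, hit_inc=0.1):
    """Raise the occupancy probability of cells hit by the point cloud."""
    for wx, wy in cloud_world:
        gx = int((wx - origin[0]) / resolution)
        gy = int((wy - origin[1]) / resolution)
        if 0 <= gx < prob.shape[0] and 0 <= gy < prob.shape[1]:
            prob[gx, gy] = min(1.0, prob[gx, gy] + hit_inc)  # clamp to [0, 1]
```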
In this embodiment of the present application, the current pose of the robot and the depth data and infrared data in the preset direction are collected, at least one edge point is extracted from the infrared data, the at least one edge point is processed according to the depth data to form a feature point cloud, the obstacle score of the feature point cloud in the preset global grid map is determined, a first residual term is constructed based on the obstacle score and the current pose, a second residual term is constructed based on the predicted pose determined from the movement and the historical pose, the current pose and the predicted pose are adjusted so that the sum of the first residual term and the second residual term is minimized, and the abscissa and ordinate of the current pose and the predicted pose at the minimal sum, together with the angle between the robot's heading and the X-axis of the world coordinate system, are taken as the robot's positioning pose information, which is used as the positioning information; the pose corresponding to the positioning information is then updated into the preset global grid map. This achieves accurate acquisition of the robot's positioning information, reduces the influence of complex environments on pose determination, can enhance the robot's service quality, and helps improve the user experience.
In an exemplary embodiment, Fig. 25 is an example diagram of a positioning method provided by an embodiment of the present application. Referring to Fig. 25, the robot positioning and mapping method based on a top-view TOF camera may include the following steps:
Step 1: Obtain the top-view feature point cloud:
1. Apply Gaussian filtering to the collected infrared data for noise reduction.
2. Apply Canny edge detection to the infrared data to obtain edge points, mainly including the steps of computing the gradient magnitude and direction by finite differences, non-maximum suppression, and detecting and connecting edges with a double-threshold algorithm.
3. The edge points extracted in 2 are coordinate points at the two-dimensional image level; the depth information of the at least one edge point in the depth data and the camera model are used to obtain the three-dimensional feature point cloud, converted as follows:
Z · [u, v, 1]^T = K · P
where Z is the depth information of the edge point, (u, v) are the two-dimensional coordinates of the edge point, K is the camera intrinsic matrix, which may be determined from the depth camera model, and P is the coordinate of the three-dimensional point cloud.
Step 2: Point cloud matching:
When the robot performs positioning and mapping, the historical feature point clouds transformed into the world coordinate system may be used to construct a grid map, and the newly generated feature point cloud is matched against the grid map. The matching process may include:
1. Transform the extracted feature point cloud P into the world coordinate system through the robot pose T, using the formula w_i = Proj(T * p_i), where w_i denotes the coordinates of the i-th edge point in the world coordinate system, p_i denotes the coordinates of the i-th edge point in the robot coordinate system, the Proj() function maps three-dimensional coordinates to two-dimensional coordinates, and T is the current pose of the robot, represented by x, y, and yaw, which may be written as:
T = | cos(yaw)  -sin(yaw)  x |
    | sin(yaw)   cos(yaw)  y |
    |     0          0     1 |
2. Each edge point in the feature point cloud is mapped into the grid map; the probability that the cell to which the edge point is mapped is an obstacle is taken as the matching score (obstacle score) of that edge point, and the sum of the matching scores of all points is recorded as the score of the feature point cloud:
s = 1 · p_cell
where s is the matching score of an edge point and p_cell is the occupancy probability of the cell to which the edge point is mapped.
3. Construct a cost function model to optimize the robot pose T, minimizing the cost function by adjusting the pose T, with the equation:
e_1 = Σ_{k=1}^{n} (1 - M(T, p_k))^2
where e_1 is the residual, p_k is the k-th edge point of the feature point cloud, M(T, p_k) is the obstacle score computed by projecting the k-th edge point p_k onto the grid map when the robot pose is T, and n is the number of edge points in the feature point cloud.
Step 3: Robot pose optimization
The feature point cloud from Step 2 and the vehicle speed (the robot's moving speed) are used together as constraints to construct a nonlinear least squares problem and jointly optimize the robot's pose, including the following steps:
1. Construct a cost function model to optimize the robot pose T; the error term minimized by adjusting the pose T is:
e_1 = Σ_{k=1}^{n} (1 - M(T, p_k))^2
where e_1 is the residual, p_k is the k-th edge point of the feature point cloud, M(T, p_k) is the obstacle score computed by projecting the k-th edge point p_k onto the grid map when the robot pose is T, and n is the number of edge points in the feature point cloud.
2. Construct the error term e_3 derived from the vehicle speed:
e_3 = L - L_last
where L is the pose of the robot at the current moment predicted from the vehicle speed, and L_last is the robot pose obtained at the previous moment.
3. For all the error terms, construct the following optimization problem and solve it with an optimization library (Google Ceres) to obtain the pose at the current moment:
(x, y, yaw) = argmin Σ(|e_1| + |e_3|)
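As a hedged, self-contained analogue of this joint optimization (the patent names Google Ceres, a C++ library; scipy is substituted here purely for illustration), the following sketch stacks the map-matching residuals with a velocity term that penalizes deviation of the optimized pose from the speed-predicted pose, which is one consistent reading of e_3; `to_world` and `score_in_map` are the hypothetical helpers from the earlier sketches.

```python
import numpy as np
from scipy.optimize import least_squares

def joint_residuals(pose, points, score_in_map, pose_pred):
    """Stack the e_1 map-matching residuals with an e_3-style velocity term."""
    e1 = [1.0 - score_in_map(to_world(pose, p)) for p in points]
    e3 = (np.asarray(pose) - np.asarray(pose_pred)).tolist()  # deviation from prediction
    return np.array(e1 + e3)

# pose_pred: pose predicted from wheel speed; points: feature point cloud.
# sol = least_squares(joint_residuals, x0=pose_pred,
#                     args=(points, score_in_map, pose_pred))
# x, y, yaw = sol.x
```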
4. Transform the feature point cloud obtained in Step 2 into the world coordinate system through the optimized pose, and update the grid map based on the feature point cloud.
Fig. 26 is a schematic flowchart of another positioning method provided by an embodiment of the present application. This embodiment is applicable to robot positioning in crowded scenes. The method may be performed by a positioning apparatus, which may be implemented in hardware and/or software. Referring to Fig. 26, the positioning method provided by this embodiment of the present application includes the following steps.
S1410. Collect the current pose of the robot and depth data in at least one preset direction.
The current pose may be information representing the position and state of the robot at the current moment, and may include position coordinates in the world coordinate system and the angle between the robot's heading and the X-axis of the world coordinate system. Referring to Fig. 20, the current pose of the robot in this embodiment may contain three unknowns, namely x, y, and yaw, where x represents the robot's abscissa in the world coordinate system, y represents its ordinate in the world coordinate system, and yaw represents the angle between the robot's heading and the X-axis of the world coordinate system. The preset direction may be a preset data collection direction, which may be set by a user or a service provider, and may be any direction in space, for example, the robot's vertical direction or a direction 45 degrees above the horizontal. The depth data may be data reflecting the distance from an object to the robot; it may include the position information of the object in space and its distance from the depth data collection device, and may be collected by a sensor arranged on the robot.
In this embodiment of the present application, the current pose may be collected using sensors arranged on the robot; for example, inertial navigation or a displacement sensor may be used to measure the distance moved and thereby determine the robot's current pose. Depth data may be collected in at least one preset direction of the robot; for example, depth data of obstacles may be collected above the top of the robot or in a direction 45 degrees above the horizontal. It can be understood that the robot may collect depth data in multiple preset directions to further improve positioning accuracy.
S1420. Extract outer contour points of at least one plane from the depth data, and form the extracted outer contour points of the at least one plane into a feature point cloud.
A plane may be one of one or more planes included in the point cloud formed by the depth data; planes may be generated by dividing the point cloud formed by the depth data along the vertical or horizontal direction. The outer contour points may be the set of position points constituting the outer contour of a plane; representing all position points of a plane by its outer contour points reduces the amount of data used for robot positioning. The feature point cloud may be a set of position points reflecting the features of the depth data.
The depth data may be divided into one or more planes along different directions; the position points on the outer contour of each plane may be extracted, and the extracted position points may form the feature point cloud, through which the features of the depth data can be reflected, as sketched below.
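The disclosure leaves the contour-extraction method open; one simple realization, sketched here as an assumption, is to take the 2D convex hull of each plane's points and keep only the hull vertices as that plane's outer contour.

```python
import numpy as np
from scipy.spatial import ConvexHull

def outer_contour(plane_points_2d):
    """Return the convex-hull vertices of one plane's Nx2 points as its contour."""
    hull = ConvexHull(plane_points_2d)
    return plane_points_2d[hull.vertices]

# Hypothetical usage: planes is a list of Nx2 arrays, one per extracted plane.
# feature_cloud = np.vstack([outer_contour(p) for p in planes])
```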
S1430. Determine the obstacle score of the feature point cloud in the preset global grid map, where the preset global grid map is formed based on historical point clouds.
The obstacle score may be, when all position points in the feature point cloud are mapped to the preset global grid map, the total probability score of the grid cells to which the position points falling in the obstacle area are mapped. The preset global grid map may be formed from historical point clouds and may reflect the conditions of the space in which the robot is located; it may include one or more grid cells, and each cell may store a probability value that the robot is located in that cell. The preset global grid map may be gradually refined as the robot moves. The obstacle score may be the sum of the probability values that the position points lie at obstacle positions in the preset global grid map. Referring to Fig. 21, the preset global grid map may consist of three parts: an unknown area, an obstacle area, and an obstacle-free area; the obstacle score may be determined by summing the probability values of the obstacle-area cells to which the feature point cloud is mapped.
In this embodiment of the present application, all position points in the feature point cloud may be mapped one by one to the preset global grid map, the probability value of the cell in which each position point falls may be determined, and the sum of the probability values of the obstacle-area cells in which the position points lie may be taken as the obstacle score of the feature point cloud in the preset global grid map.
S1440. Optimize the current pose according to the obstacle score to determine the positioning information of the robot.
The positioning information may reflect the position of the robot at the current moment, and may include the coordinates of the robot in the world coordinate system and the angle between the robot's heading and the X-axis of the world coordinate system.
The obstacle score of the feature point cloud may be used as a constraint on the current pose, the current pose may be optimized on the basis of the obstacle score, and the optimized current pose may be taken as the positioning information of the robot. The current pose may be optimized by, for example, nonlinear least squares optimization or Lagrange multiplier optimization.
In this embodiment of the present application, the current pose of the robot and the depth data in the preset direction are acquired, the outer contour points of at least one plane are extracted from the depth data and used to form a feature point cloud, the obstacle score of the feature point cloud in the preset global grid map is determined, and the obstacle score is used to optimize the current pose to obtain the positioning information of the robot. This achieves accurate acquisition of the robot's position information, reduces the influence of complex environments on pose determination, can enhance the robot's service quality, and helps improve the user experience.
Fig. 27 is a flowchart of another positioning method provided by an embodiment of the present application; this embodiment is described on the basis of the foregoing embodiments. Referring to Fig. 27, the method provided by this embodiment of the present application includes the following steps:
S1510. Obtain the current pose of the robot in the world coordinate system, where the current pose includes at least an abscissa, an ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system.
The world coordinate system may be the absolute coordinate system of the robot, and its origin may be determined when the robot is initialized.
In this embodiment of the present application, the current pose may include three elements: the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system; it may be acquired by sensors arranged in the robot. Data collected by the robot's sensors may be obtained, and the robot's abscissa and ordinate in the world coordinate system at the current moment and the angle between the robot's heading and the X-axis of the world coordinate system may be determined from that data as the current pose.
S1520. Use at least one depth data sensor preset on the robot to collect the depth data in the at least one preset direction, where the at least one preset direction includes at least one of a horizontal direction and a vertical direction.
A depth data sensor may be a device that collects depth data; it can measure the distance from the robot to a sensed object and perceive the depth of objects in space. Depth data sensors may include structured-light depth sensors, camera-array depth sensors, and time-of-flight depth sensors. The at least one preset direction may be at least one of the robot's horizontal direction and vertical direction.
In this embodiment of the present application, a depth camera is pre-installed on the robot; its data collection direction may be the robot's horizontal or vertical direction and may be a preset direction set by a user or a service provider. The robot may use the depth camera to collect depth data of objects in the corresponding preset direction. Multiple depth data sensors may be preset on the robot, and their preset data-collection directions may differ; the preset direction may be one convenient for capturing indoor ceiling features, such as vertically upward or 45 degrees above the horizontal. Collecting from multiple directions and multiple data sources further improves the reliability of the depth data and thereby the accuracy of the positioning information.
S1530. Filter out noise in the depth data.
In this embodiment of the present application, because the depth data is affected by the environment during collection, it contains noise, which reduces the accuracy of feature point extraction. To improve the accuracy of robot positioning, the noise in the depth data may be filtered out. Filtering methods may include Gaussian filtering, bilateral filtering, median filtering, and mean filtering. Gaussian filtering is a linear smoothing filter that can remove Gaussian noise during image processing and thereby denoise the image data.
示例性的,以对采集到的红外数据通过高斯滤波来消除红外数据中的噪声为例,可以使用一个模板(卷积或掩膜)扫描红外数据中每个像素,使用模板确定邻域内像素的加权平均灰度值替代模板中心像素点的值,实现红外数据的噪声滤除。Exemplarily, taking Gaussian filtering on the collected infrared data as an example to eliminate noise in the infrared data, a template (convolution or mask) can be used to scan each pixel in the infrared data, and the template can be used to determine the The weighted average gray value replaces the value of the center pixel of the template to achieve noise filtering of infrared data.
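As an illustration of this step, the sketch below smooths a raw depth image with OpenCV's Gaussian filter. OpenCV itself, the 5x5 kernel, and the sigma value are illustrative choices and are not fixed by this embodiment.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Sketch: suppress sensor noise in a raw 16-bit depth image by replacing
// each pixel with the Gaussian-weighted average of its neighborhood,
// the template-scanning scheme described above. Kernel size and sigma
// are illustrative assumptions.
cv::Mat denoiseDepth(const cv::Mat& rawDepth) {
    cv::Mat smoothed;
    cv::GaussianBlur(rawDepth, smoothed, cv::Size(5, 5), /*sigmaX=*/1.0);
    return smoothed;
}
```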
S1540. Convert the depth data into a three-dimensional point cloud based on the camera model of the robot, where the depth data includes at least position point information and depth information.

The depth data consists of position point information and depth information: the position points form an image in a plane, and the depth information is the distance from the position corresponding to each point to the acquisition device. The depth data may therefore be a depth image, where the three dimensions of each pixel are the abscissa, the ordinate, and the depth. The camera model is a model established by the robot for three-dimensional conversion; together with the depth data, it can be used to correct point coordinates and obtain a feature point cloud with less distortion. The depth data may include the depth of multiple points, and this depth serves as the third dimension of the corresponding three-dimensional coordinates. The camera model may include one or more of a Euler camera model, a UVN camera model, a pinhole camera model, a fisheye camera model, and a wide-angle camera model.

In the embodiment of the present application, the position point information and depth information are extracted from the depth data, and one or more three-dimensional position points are determined in space based on them. The robot's preset camera model converts these three-dimensional position points, reducing the distortion introduced by the depth acquisition device and improving positioning accuracy, and the converted points are used to construct the three-dimensional point cloud. For example, the conversion from depth data to three-dimensional point cloud can be realized as follows:
Z * [u, v, 1]^T = K * P
where Z is the depth of the position point, (u, v) is the position point information in the depth data, K is the camera intrinsic matrix, which can be determined from the camera model, and P is the coordinate of the point in the three-dimensional point cloud.
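A minimal sketch of this back-projection, assuming a pinhole camera model whose intrinsic matrix K has entries fx, fy, cx, cy, and a dense depth image indexed by (u, v); the function name and container types are illustrative.

```cpp
#include <vector>

struct Point3 { double x, y, z; };

// Back-project a depth image into a 3D point cloud: P = Z * K^-1 * [u, v, 1]^T
// for a pinhole camera with intrinsics fx, fy, cx, cy. depth[v][u] is Z.
std::vector<Point3> depthToCloud(const std::vector<std::vector<double>>& depth,
                                 double fx, double fy, double cx, double cy) {
    std::vector<Point3> cloud;
    for (std::size_t v = 0; v < depth.size(); ++v) {
        for (std::size_t u = 0; u < depth[v].size(); ++u) {
            const double Z = depth[v][u];
            if (Z <= 0.0) continue;  // skip invalid or missing measurements
            cloud.push_back({(u - cx) * Z / fx, (v - cy) * Z / fy, Z});
        }
    }
    return cloud;
}
```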
S1550. Divide the three-dimensional point cloud into at least one plane according to the depth information and the preset normal vector information.

Here the depth information is the depth included in the depth data used during plane segmentation, and the preset normal vector information is the normal and normal vector information used for segmenting the three-dimensional point cloud; it may be configured inside the system in advance or input by the user.

The depth information and preset normal vector information are obtained, and the three-dimensional point cloud is divided into multiple planes according to them. The segmentation can be based on the normal vector information, with the maximum iteration depth of the segmentation not exceeding the acquired depth. For example, the SACMODEL_PLANE model of the Point Cloud Library (PCL) can be used: the depth and normal vector information and the three-dimensional point cloud are input to the SACMODEL_PLANE model to obtain multiple planes, as in the sketch below.
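The sketch below shows one way such a segmentation might be realized with PCL's SACMODEL_PLANE, as the text suggests. The distance threshold and iteration count are assumed values, and extracting several planes would repeat the call after removing each plane's inliers from the cloud.

```cpp
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit one plane to the cloud with RANSAC using PCL's SACMODEL_PLANE.
// inliers receives the indices of points on the plane; plane receives
// the coefficients of ax + by + cz + d = 0.
void segmentPlane(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                  pcl::PointIndices& inliers,
                  pcl::ModelCoefficients& plane) {
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.02);  // max point-to-plane distance in meters
    seg.setMaxIterations(100);       // illustrative iteration budget
    seg.setInputCloud(cloud);
    seg.segment(inliers, plane);
}
```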
S1560. Extract the outer contour points of each plane as the feature point cloud.

In the embodiment of the present application, in order to reduce the amount of computation, outer contour points are selected in each plane to represent that plane, and the set of all extracted outer contour points is used as the feature point cloud.
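The text does not fix how the outer contour points are obtained. One plausible realization, sketched below, computes the convex hull of each plane's inlier points with PCL; a concave hull would be an equally valid choice for non-convex plane patches.

```cpp
#include <pcl/point_types.h>
#include <pcl/surface/convex_hull.h>

// One possible way to pick the outer contour points of a segmented plane:
// the convex hull of the plane's inlier points. setDimension(2) tells PCL
// the points lie (nearly) on a plane, so the hull is a 2D polygon.
pcl::PointCloud<pcl::PointXYZ>::Ptr
planeContour(const pcl::PointCloud<pcl::PointXYZ>::Ptr& planePoints) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr hull(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::ConvexHull<pcl::PointXYZ> chull;
    chull.setInputCloud(planePoints);
    chull.setDimension(2);
    chull.reconstruct(*hull);  // hull now holds the contour vertices
    return hull;
}
```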
S1570. Convert the coordinates of each position point in the feature point cloud to the world coordinate system, and map each converted position point to a target grid of the preset global grid map.

The coordinates of all position points in the feature point cloud undergo a coordinate system conversion so that they are referenced to the world coordinate system. Referring to FIG. 23, the extracted outer contour points lie in the robot coordinate system, and the camera model corresponds to its own coordinate system; the depth data can be attached to the points in the robot coordinate system according to the camera model's coordinate system, achieving a distortion-free or low-distortion coordinate conversion, after which the three-dimensional coordinates are converted to world coordinates. For example, the conversion process can be expressed by the following formula:

w_i = Proj(T * p_i), where w_i denotes the coordinates of the i-th position point in the world coordinate system, p_i denotes its coordinates in the robot coordinate system, the Proj() function maps three-dimensional coordinates to two-dimensional coordinates, and T is the current pose of the robot, represented by x, y, and yaw. T can be written as:
T = | cos(yaw)  -sin(yaw)  x |
    | sin(yaw)   cos(yaw)  y |
    |    0          0      1 |
In this embodiment of the application, after the coordinates of each position point in the world coordinate system are determined, all position points are mapped in turn, according to their coordinates, to target grids of the preset global grid map. For example, different grids of the preset grid map cover different coordinate ranges, so every position point can be mapped to the corresponding grid according to its coordinates, and a grid that receives a mapped position point is recorded as a target grid.
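A minimal sketch of this projection and grid mapping, assuming the pose T is given as (x, y, yaw) and the grid map is described by a resolution and an origin; both map parameters are assumptions, as the text does not specify the grid layout.

```cpp
#include <cmath>

struct Pose2 { double x, y, yaw; };  // the pose T as (x, y, yaw)
struct Cell  { int ix, iy; };        // grid indices in the global map

// w_i = Proj(T * p_i): rotate the point (px, py) by yaw, translate by
// (x, y), and map the resulting world coordinate onto a grid cell.
Cell mapToGrid(const Pose2& T, double px, double py,
               double resolution, double originX, double originY) {
    const double c = std::cos(T.yaw), s = std::sin(T.yaw);
    const double wx = c * px - s * py + T.x;  // world abscissa
    const double wy = s * px + c * py + T.y;  // world ordinate
    return { static_cast<int>(std::floor((wx - originX) / resolution)),
             static_cast<int>(std::floor((wy - originY) / resolution)) };
}
```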
S1580. If the target grid to which a position point is mapped is a grid within an obstacle area of the preset global grid map, obtain the probability value of the target grid as the obstacle score of that position point.

Grids within an obstacle area can be identified by marker information.

Each target grid of the preset global grid map that has a position point mapped to it is checked: if the target grid is a grid within an obstacle area, the probability value stored in the target grid is taken as the obstacle score of the corresponding position point.

If the target grid is not within an obstacle area, for example, if the grid to which the position point is mapped belongs to a free area or an unknown area of the preset global grid map, the probability value corresponding to that position point need not be counted; the position point may be deleted, or the probability value stored in its grid simply not retrieved.
S1590. Take the sum of the obstacle scores of all position points in the feature point cloud as the obstacle score of the feature point cloud.

In the embodiment of the present application, the obstacle scores of all position points in the feature point cloud are accumulated, and their sum is used as the obstacle score of the feature point cloud.
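A sketch of this accumulation, where occupancyAt is a hypothetical lookup into the preset global grid map that returns the stored probability for a target grid inside an obstacle area and a negative value for free or unknown grids, which are skipped as described above.

```cpp
#include <vector>

struct Cell { int ix, iy; };  // grid indices, as in the previous sketch

// Sum the per-point obstacle scores over the whole feature point cloud.
double cloudObstacleScore(const std::vector<Cell>& targetCells,
                          double (*occupancyAt)(const Cell&)) {
    double total = 0.0;
    for (const Cell& cell : targetCells) {
        const double p = occupancyAt(cell);
        if (p >= 0.0) total += p;  // obstacle score of one position point
    }
    return total;                  // obstacle score of the feature point cloud
}
```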
S15100. Construct a residual function according to the current pose and the obstacle score of the feature point cloud.

In the embodiment of the present application, a residual function for optimizing the current pose can be constructed from the current pose and the obstacle score of the feature point cloud. The residual function is a functional relationship used to optimize the robot pose and represents the association between the robot's current pose and the obstacle score; it may be the optimization objective of a nonlinear least-squares problem, for example:
e_1 = Σ_{k=1}^{m} (1 - M(T, p_k))^2
where e_1 is the residual, p_k is the k-th point in the feature point cloud, M(T, p_k) is the obstacle score obtained by projecting p_k onto the preset global grid map when the robot pose is T, and m is the number of position points in the feature point cloud.
S15110. Adjust the parameter information in the residual function so that the result value of the residual function is minimized, where the parameter information is the value of at least one of the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system in the current pose.

The values of the current pose in the residual function can be adjusted so that the result value of the residual formula is minimized; applicable adjustment methods include gradient descent, Newton's method, and quasi-Newton methods.

S15120. Take the abscissa and ordinate of the current pose, and the angle between the robot's heading and the X-axis of the world coordinate system, at the minimum result value as the positioning pose information.

The positioning pose information is pose information used for robot positioning and represents the state the robot is currently most likely to be in.

In the embodiment of the present application, when the result value of the residual function is minimized, the current pose can be considered optimally adjusted, and the abscissa, the ordinate, and the heading angle relative to the X-axis of the world coordinate system of the adjusted current pose are taken as the positioning pose information.
In the embodiment of the present application, the robot's current pose in the world coordinate system is acquired, and a depth sensor mounted on the robot collects depth data in a preset direction. The noise contained in the depth data is filtered out, a three-dimensional point cloud is generated using the robot's camera model and the depth information, and the point cloud is divided into multiple planes according to the depth and normal vector information. The outer contour points of each plane are extracted as a feature point cloud, the coordinates of all its position points are converted to the world coordinate system, and the points are mapped to target grids of the preset global grid map. When a target grid lies in an obstacle area, its probability value is taken as the obstacle score of the position point, and the sum of the obstacle scores of all position points is taken as the obstacle score of the feature point cloud. A residual function is constructed from this obstacle score and the current pose, and the parameter values of the current pose in the residual function are adjusted to minimize its result value; the current pose at the minimum is taken as the robot's positioning pose information. By selecting the contour points of the planes in the three-dimensional point cloud to form the feature point cloud, this embodiment reduces the amount of data computation without degrading the accuracy of the positioning information and improves the efficiency of determining it; using the obstacle score to optimize the current pose improves the accuracy of the robot's position information, which can enhance the robot's service quality and help improve the user experience.
On the basis of the above embodiments, the method further includes: optimizing the positioning information according to the moving speed of the robot and the obstacle score.

On the basis of the embodiments of the present application, the moving speed of the robot can also be acquired, and the moving speed and the obstacle score can jointly optimize the current pose to obtain the positioning information, further improving the positioning accuracy. For example, the robot's moving speed is collected and used to form a constraint; based on this constraint, a nonlinear least-squares problem is constructed over the positioning information obtained after optimization with the obstacle score and solved by gradient descent, and the final positioning information is obtained when the result value of the nonlinear least-squares problem is minimized.
FIG. 28 is a flowchart of another positioning method provided by an embodiment of the present application, described on the basis of the above embodiments. Referring to FIG. 28, the method provided by this embodiment includes the following steps.
S1610. Collect the current pose of the robot and depth data in at least one preset direction.

S1620. Extract the outer contour points of at least one plane from the depth data, and form the feature point cloud from these outer contour points.

S1630. Determine the obstacle score of the feature point cloud in the preset global grid map, where the preset global grid map is formed based on historical point clouds.

S1640. Construct a first residual term according to the current pose and the obstacle score of the feature point cloud.
In the embodiment of the present application, the first residual term of the corresponding nonlinear least-squares problem can be constructed from the current pose and the obstacle score of the feature point cloud, for example:

e_1 = Σ_{k=1}^{m} (1 - M(T, p_k))^2

where e_1 is the residual, p_k is the k-th position point in the feature point cloud, M(T, p_k) is the obstacle score obtained by projecting p_k onto the grid map when the robot pose is T, and m is the number of position points in the feature point cloud.
S1650. Determine the predicted pose based on the moving speed, and use the difference between the predicted pose and the historical pose at the previous moment as a second residual term.

The predicted pose is the robot pose determined from the moving speed: for example, the robot's displaced position can be computed from the moving speed, and the robot pose determined from that position is used as the predicted pose.

The moving speed of the robot can be collected and used to generate the robot's position, from which the predicted pose is determined; the difference between the predicted pose and the historical pose at the previous moment then serves as the second residual term. Here the historical pose is the pose information determined by the robot at the previous moment and may include the coordinates and the angle between the robot's heading and the abscissa axis.
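A minimal sketch of such a prediction under an assumed constant-velocity motion model. The linear velocity v, angular velocity w, and interval dt are assumptions; the text only states that the predicted pose is derived from the moving speed.

```cpp
#include <cmath>

struct Pose2 { double x, y, yaw; };

// Propagate the last optimized pose forward by dt using the measured
// velocities; the second residual term is the difference between this
// prediction and the historical pose at the previous moment.
Pose2 predictPose(const Pose2& last, double v, double w, double dt) {
    Pose2 p;
    p.x   = last.x + v * dt * std::cos(last.yaw);
    p.y   = last.y + v * dt * std::sin(last.yaw);
    p.yaw = last.yaw + w * dt;
    return p;
}
```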
S1660. Adjust the parameter information of the predicted pose and/or of the current pose in the first residual term and the second residual term so that the sum of the first residual term and the second residual term is minimized, where the parameter information includes the value of at least one of the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system.

In the embodiment of the present application, at least one of the abscissa, the ordinate, and the heading angle relative to the X-axis of the world coordinate system in the speed-predicted pose and/or the current pose can be adjusted so that the sum of the first residual term and the second residual term is minimized; applicable adjustment methods include gradient descent, Newton's method, and quasi-Newton methods.

S1670. Take the abscissa and ordinate of the predicted pose and/or the current pose, and the angle between the robot's heading and the X-axis of the world coordinate system, at the minimum of the sum of the first residual term and the second residual term as the positioning pose information.

When the sum of the first residual term and the second residual term is minimized, the optimization of the robot's current pose is complete, and the related information of the adjusted predicted pose and/or current pose is taken as the robot's positioning pose information, which can be used as the positioning information in the robot's positioning process; the related information includes the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system.
S1680. Update the robot pose corresponding to the positioning information to the preset global grid map.

The final pose of the robot is determined from the abscissa, the ordinate, and the heading angle relative to the X-axis of the world coordinate system in the positioning information. Based on this final pose, the probability values of multiple grids of the preset global grid map are determined and written into the corresponding grids, thereby updating the preset global grid map.
In the embodiment of the present application, the robot's current pose and depth data in a preset direction are collected, the outer contour points of the different planes are extracted from the depth data to form a feature point cloud, and the obstacle score of the feature point cloud in the preset global grid map is determined. A first residual term is constructed from the obstacle score and the current pose, a second residual term is constructed from the predicted pose determined from the motion and the historical pose, and the current pose and predicted pose are adjusted so that the sum of the two residual terms is minimized. The abscissa and ordinate, and the heading angle relative to the X-axis of the world coordinate system, of the current and predicted poses at the minimum are used as the robot's positioning information, and the pose corresponding to the positioning information is updated to the preset global grid map. This achieves accurate acquisition of the robot's positioning information and reduces the influence of complex environments on pose determination, which can enhance the robot's service quality and help improve the user experience.
In an exemplary implementation, FIG. 29 is an example diagram of another positioning method provided by an embodiment of the present application. Referring to FIG. 29, a robot positioning and mapping method based on a top-view TOF camera may include the following steps:

Step 1. Obtain the top-view feature point cloud:

1. Apply Gaussian filtering to the collected depth data for noise reduction.

2. Generate a three-dimensional point cloud from the depth data, using the depth information and the camera model. The conversion is as follows:
Z * [u, v, 1]^T = K * P
where Z is the depth of the position point, (u, v) is the two-dimensional coordinate of the depth data (the position point information), K is the camera intrinsic matrix, which can be determined from the camera model, and P is the coordinate of the point in the three-dimensional point cloud.

3. Segment the three-dimensional point cloud of item 2 into planes according to the depth and normal vector information, and obtain the equation of each plane. The segmentation and plane-equation computation are not detailed here; an example is the random-sample-consensus-based point cloud plane segmentation in the PCL library. Take the outer contour points of each plane as the feature point cloud.
Step 2. Point cloud matching:

During positioning and mapping, the robot can build a grid map from the historical feature point clouds converted to the world coordinate system and match each newly generated feature point cloud against this grid map. The matching process may include:

1. Convert the extracted feature point cloud P to the world coordinate system through the robot pose T. The conversion formula is w_i = Proj(T * p_i), where w_i denotes the coordinates of a position point in the world coordinate system, p_i denotes its coordinates in the robot coordinate system, the Proj() function maps three-dimensional coordinates to two-dimensional coordinates, and T is the current pose of the robot, represented by x, y, and yaw. T can be written as:
T = | cos(yaw)  -sin(yaw)  x |
    | sin(yaw)   cos(yaw)  y |
    |    0          0      1 |
2. Each position point of the feature point cloud is mapped into the grid map; the probability that the grid a point maps to is an obstacle is taken as the matching score (obstacle score) of that point, and the sum of the matching scores of all points is recorded as the score of the feature point cloud:
s = 1 * p_cell
where s is the matching score of one position point and p_cell is the occupancy probability of the grid to which the point is mapped.

3. Build a cost function model to optimize the robot pose T, minimizing the cost function by adjusting T. The equation is as follows:
e_1 = Σ_{k=1}^{m} (1 - M(T, p_k))^2
where e_1 is the residual, p_k is the k-th position point of the feature point cloud, M(T, p_k) is the obstacle score obtained by projecting p_k onto the grid map when the robot pose is T, and m is the number of position points in the feature point cloud.
Step 3. Robot pose optimization:

The feature point cloud of Step 2 and the vehicle speed (the robot's moving speed) are used together as constraints to construct a nonlinear least-squares problem that jointly optimizes the robot pose, including the following steps:

1. Build a cost function model to optimize the robot pose T, adjusting T to minimize the cost function's error term. The error term is as follows:
e_1 = Σ_{k=1}^{m} (1 - M(T, p_k))^2
where e_1 is the residual, p_k is the k-th position point of the feature point cloud, M(T, p_k) is the obstacle score obtained by projecting p_k onto the grid map when the robot pose is T, and m is the number of position points in the feature point cloud.

2. Construct the error term e_3 obtained from the vehicle speed:
e_3 = L - L_last
where L is the robot pose at the current moment predicted from the vehicle speed, and L_last is the optimized robot pose from the previous moment.

3. For all error terms, construct the following optimization problem and solve it with an optimization library (Google Ceres) to obtain the pose at the current moment; a hedged sketch follows this list:
(x, y, yaw) = argmin Σ|e_1| + |e_3|
4. Convert the feature point cloud obtained in Step 2 to the world coordinate system using the optimized pose, and update the grid map based on the feature point cloud.
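As a hedged sketch of this joint optimization with Ceres: the GridMap lookup, the point container, and the functor names below are assumptions rather than an interface given in the text, and Ceres minimizes squared residuals whereas the formula above writes absolute values. Numeric differentiation is used for the map term because a grid lookup is not analytically differentiable.

```cpp
#include <ceres/ceres.h>
#include <cmath>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical map interface: M(wx, wy), the obstacle score at a world
// coordinate, supplied by the caller (e.g. a smoothed grid lookup).
struct GridMap {
    std::function<double(double, double)> scoreAt;
};

// e1 term: one residual 1 - M(T, p_k) per feature point, so a higher
// obstacle score yields a smaller residual.
struct PointCost {
    PointCost(const GridMap& m, double px, double py)
        : map(m), px(px), py(py) {}
    bool operator()(const double* pose, double* residual) const {
        const double c = std::cos(pose[2]), s = std::sin(pose[2]);
        const double wx = c * px - s * py + pose[0];  // Proj(T * p_k)
        const double wy = s * px + c * py + pose[1];
        residual[0] = 1.0 - map.scoreAt(wx, wy);
        return true;
    }
    const GridMap& map;
    double px, py;
};

// e3 term: deviation of the pose from the speed-predicted pose L.
struct SpeedCost {
    SpeedCost(double lx, double ly, double lyaw)
        : lx(lx), ly(ly), lyaw(lyaw) {}
    template <typename T>
    bool operator()(const T* pose, T* residual) const {
        residual[0] = pose[0] - T(lx);
        residual[1] = pose[1] - T(ly);
        residual[2] = pose[2] - T(lyaw);
        return true;
    }
    double lx, ly, lyaw;
};

// Jointly optimize pose = (x, y, yaw) over both residual terms.
void optimizePose(const GridMap& map,
                  const std::vector<std::pair<double, double>>& points,
                  const double predicted[3], double pose[3]) {
    ceres::Problem problem;
    for (const auto& p : points) {
        problem.AddResidualBlock(
            new ceres::NumericDiffCostFunction<PointCost, ceres::CENTRAL, 1, 3>(
                new PointCost(map, p.first, p.second)),
            nullptr, pose);
    }
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<SpeedCost, 3, 3>(
            new SpeedCost(predicted[0], predicted[1], predicted[2])),
        nullptr, pose);
    ceres::Solver::Options options;  // default trust-region solver
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);  // pose now holds (x, y, yaw)
}
```

In practice a smoothed or interpolated map lookup, as in grid-based scan matching, keeps the map term well conditioned for the solver.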
FIG. 30 is a schematic structural diagram of a positioning device provided by an embodiment of the present application. The device is applicable to positioning an electronic device and is configured in the electronic device. As shown in FIG. 30, the device includes: an acquisition module 21, configured to acquire sensor data collected by at least one top-view sensor; a processing module 22, configured to process the sensor data; and a positioning module 23, configured to position the electronic device according to the processed sensor data.

The positioning device provided by this embodiment positions the electronic device based on data collected by top-view sensors. Since the environment above the electronic device does not change easily, the positioning method provided by this embodiment effectively improves the positioning accuracy of the electronic device.
Optionally, the electronic device includes at least two top-view sensors, and the positioning module 23 is configured to, upon acquiring a positioning instruction, match the processed sensor data with local map data in a global map to determine the pose of the electronic device in the global map.

In this embodiment, the device first acquires the sensor data through the acquisition module 21; next, it processes the sensor data through the processing module 22; then, upon acquiring a positioning instruction, the positioning module 23 matches the processed sensor data with the local map data in the global map to determine the pose of the electronic device in the global map.

This embodiment provides a positioning device that, during positioning, effectively avoids the technical problem of poor positioning accuracy when the environment changes, and effectively improves positioning accuracy.
In one embodiment, the processing module 22 is configured to: align the top-view environment data collected by the at least two top-view sensors based on timestamps, and preprocess the aligned top-view environment data.

In one embodiment, the processing module 22 is configured to preprocess the aligned top-view environment data by: removing noise from the aligned top-view environment data; stitching the denoised top-view environment data; and extracting point cloud information of surface edges from the stitched top-view environment data.

In one embodiment, the processing module 22 is configured to stitch the denoised top-view environment data by converting it to the coordinate system of a target top-view sensor, where the target top-view sensor is one of the at least two top-view sensors.
In one embodiment, the positioning module 23 is configured to: convert the point cloud information of surface edges included in the processed sensor data into grid data; determine the local map data in the global map that matches the grid data; and determine the pose of the electronic device in the global map according to the local map data and the grid data.

In one embodiment, the positioning module 23 is configured to determine the pose of the electronic device in the global map according to the local map data and the grid data by: determining the pose of the electronic device in the global map according to the pose relationship between the local map data and the global map and the pose relationship between the grid data and the local map data.

In one embodiment, the device further includes a mapping module configured to: if a mapping instruction is acquired, add the point cloud information of surface edges included in the processed sensor data to the local map data that matches the processed sensor data, and update the local map data with the added sensor data into the global map.
Optionally, the sensor data includes head-up environment data collected by at least one head-up sensor and top-view environment data collected by one top-view sensor, and the positioning module is configured to generate a head-up grid map and a top-view grid map according to the processed sensor data and to position the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map.

In this embodiment, the device first acquires the sensor data, which includes head-up environment data and top-view environment data, through the acquisition module 21; next, it processes the sensor data through the processing module 22; then the positioning module 23 generates a head-up grid map and a top-view grid map from the processed sensor data and positions the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map.

This embodiment provides a positioning device that, by combining the top-view grid map and the head-up grid map, avoids the influence of the environment on positioning the electronic device and improves the robustness of positioning.
Referring to FIG. 31, in one embodiment, the processing module 22 includes: a preprocessing unit 221, configured to preprocess the sensor data; a conversion unit 222, configured to convert the preprocessed sensor data into the body coordinate system; and an optimization unit 223, configured to optimize the coordinate-converted sensor data and obtain the processed top-view environment data and the processed head-up environment data from the sensor data.

In one embodiment, the preprocessing unit 221 is configured to: align the head-up environment data and the top-view environment data based on timestamps, and extract point cloud information of surface edges from the aligned top-view environment data.

In one embodiment, the optimization unit 223 is configured to process the coordinate-converted sensor data through brute-force matching.
In one embodiment, the positioning module 23 is configured to position the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map by: performing loop-closure detection on the head-up grid map using the processed head-up data set to obtain a head-up matching rate; performing loop-closure detection on the top-view grid map using the processed top-view data set to obtain a top-view matching rate; and determining the global pose of the electronic device when the head-up matching rate is greater than a first set threshold and/or the top-view matching rate is greater than a second set threshold.

In one embodiment, the positioning module 23 is configured to generate the head-up grid map and the top-view grid map from the processed sensor data by: generating the head-up grid map based on the processed head-up environment data, and generating the top-view grid map based on the processed top-view environment data.
Optionally, the sensor data includes head-up environment data collected by at least one head-up sensor and top-view environment data collected by at least two top-view sensors; the positioning module 23 is configured to generate a head-up grid map and a top-view grid map according to the processed sensor data and to position the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map.

In this embodiment, the device first acquires the sensor data, which includes head-up environment data and top-view environment data, through the acquisition module 21; next, it processes the sensor data through the processing module 22; then the positioning module 23 generates a head-up grid map and a top-view grid map from the processed sensor data and positions the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map.

This embodiment provides a positioning device that, by combining the top-view grid map and the head-up grid map, avoids the influence of the environment on positioning the electronic device and improves the robustness of positioning.
In one embodiment, the processing module 22 includes: a preprocessing unit 221, configured to preprocess the sensor data; a conversion unit 222, configured to convert the preprocessed sensor data into the body coordinate system; and an optimization unit 223, configured to optimize the coordinate-converted sensor data and obtain the processed top-view environment data and the processed head-up environment data from the sensor data.

In one embodiment, the preprocessing unit 221 is configured to: align the head-up environment data and the top-view environment data based on timestamps; stitch the aligned top-view environment data; and extract point cloud information of surface edges from the stitched top-view environment data.

In one embodiment, the optimization unit 223 is configured to process the coordinate-converted sensor data through brute-force matching.
In one embodiment, the positioning module 23 is configured to position the electronic device according to the processed sensor data, the top-view grid map, and the head-up grid map by: performing loop-closure detection on the head-up grid map using the processed head-up data set to obtain a head-up matching rate; performing loop-closure detection on the top-view grid map using the processed top-view data set to obtain a top-view matching rate; and determining the global pose of the electronic device when the head-up matching rate is greater than a first set threshold and/or the top-view matching rate is greater than a second set threshold.

In one embodiment, the positioning module is configured to generate the head-up grid map and the top-view grid map from the processed sensor data by: generating the head-up grid map based on the processed head-up environment data, and generating the top-view grid map based on the processed top-view environment data.
The above positioning device can execute the positioning method provided by any embodiment of the present application and has the functional modules corresponding to the executed method.
FIG. 32 is a schematic structural diagram of another positioning device provided by an embodiment of the present application; it can execute the positioning method provided by any embodiment of the present application and has the functional modules corresponding to the executed method. The device may be implemented by software and/or hardware and includes: a data collection module 31, a feature point cloud determination module 32, an obstacle score determination module 33, and a positioning determination module 34.

The data collection module 31 is configured to collect the current pose of the robot and depth data in at least one preset direction; the feature point cloud determination module 32 is configured to determine the feature point cloud based on the depth data; the obstacle score determination module 33 is configured to determine the obstacle score of the feature point cloud in a preset global grid map, where the preset global grid map is formed based on historical point clouds; and the positioning determination module 34 is configured to optimize the current pose according to the obstacle score to determine the positioning information of the robot.
Optionally, the data collection module 31 is configured to collect the current pose of the robot and depth data and infrared data in at least one preset direction, and the feature point cloud determination module 32 is configured to extract at least one edge point from the infrared data and convert the at least one edge point into the feature point cloud according to the depth data.

In the embodiment of the present application, the data collection module acquires the robot's current pose and the depth data and infrared data in the preset direction; the feature point cloud determination module extracts at least one edge point from the infrared data and, after processing the edge points according to the depth data, forms the feature point cloud; the obstacle score determination module determines the obstacle score of the feature point cloud in the preset global grid map; and the positioning determination module uses this obstacle score to optimize the current pose and obtain the robot's positioning information. This achieves accurate acquisition of the robot's position information and reduces the influence of complex environments on pose determination, which can enhance the robot's service quality and help improve the user experience.
Optionally, on the basis of the above embodiments, the positioning determination module 34 includes: a comprehensive optimization unit configured to optimize the positioning information according to the moving speed of the robot and the obstacle score.

Optionally, on the basis of the above embodiments, the data collection module 31 includes: a pose collection unit, configured to acquire the current pose in the robot's world coordinate system, where the current pose includes at least the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system; and a data collection unit, configured to collect depth data in the preset direction using at least one depth data sensor preset on the robot and infrared data in the preset direction using at least one infrared data sensor, where the preset direction includes at least one of a horizontal direction and a vertical direction.
Optionally, on the basis of the above application embodiments, the feature point cloud determination module 32 includes: a noise processing unit, configured to filter out noise in the infrared data; an edge extraction unit, configured to extract at least one edge point from the denoised infrared data; and a point cloud generation unit, configured to convert the at least one edge point into three-dimensional coordinates using the robot's camera model and the depth data to form the feature point cloud.

Optionally, on the basis of the above embodiments, the obstacle score determination module 33 includes: a position mapping unit, configured to convert the coordinates of at least one edge point in the feature point cloud to the world coordinate system and map each converted edge point to a target grid of the preset global grid map; a score determination unit, configured to, when the target grid to which an edge point is mapped is a grid within an obstacle area of the preset global grid map, obtain the probability value of the target grid as the obstacle score of that edge point; and a score statistics unit, configured to take the sum of the obstacle scores of the at least one edge point in the feature point cloud as the obstacle score of the feature point cloud.
Optionally, on the basis of the above embodiments, the positioning determination module 34 further includes: a first residual unit, configured to construct a residual function according to the current pose and the obstacle score of the feature point cloud; a parameter adjustment unit, configured to adjust the parameter information in the residual function so that the result value of the residual function is minimized, where the parameter information is the value of at least one of the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system in the current pose; and a positioning determination unit, configured to take the abscissa and ordinate of the current pose and the angle between the robot's heading and the X-axis of the world coordinate system at the minimum result value as the positioning pose information.

Optionally, on the basis of the above embodiments, the comprehensive optimization unit is configured to: construct a first residual term according to the current pose and the obstacle score of the feature point cloud; determine the predicted pose based on the moving speed and use the difference between the predicted pose and the historical pose at the previous moment as a second residual term; adjust the parameter information of the speed-predicted pose and/or of the current pose in the first and second residual terms so that the sum of the first and second residual terms is minimized, where the parameter information includes the value of at least one of the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system; and take the abscissa and ordinate of the speed-predicted pose and/or the current pose and the angle between the robot's heading and the X-axis of the world coordinate system at the minimum of that sum as the optimized positioning pose information.
Optionally, on the basis of the above embodiments, the device further includes: a map update module, configured to update the robot pose corresponding to the positioning information to the preset global grid map. Optionally, on the basis of the above embodiments, the data collection module 31 is configured to collect the current pose of the robot and depth data in at least one preset direction, and the feature point cloud determination module 32 is configured to extract the outer contour points of at least one plane from the depth data and form the feature point cloud from these outer contour points.

In the embodiment of the present application, the data collection module acquires the robot's current pose and depth data in at least one preset direction; the feature point cloud determination module extracts the outer contour points of at least one plane from the depth data and uses them to form the feature point cloud; the obstacle score determination module determines the obstacle score of the feature point cloud in the preset global grid map; and the positioning determination module uses the obstacle score to optimize the current pose and obtain the robot's positioning information. This achieves accurate acquisition of the robot's position information and reduces the influence of complex environments on pose determination, which can enhance the robot's service quality and help improve the user experience.
Optionally, on the basis of the above embodiments, the positioning determination module 34 includes: a comprehensive optimization unit configured to optimize the positioning information according to the moving speed of the robot and the obstacle score.

Optionally, on the basis of the above embodiments, the data collection module 31 includes: a pose collection unit, configured to acquire the current pose in the robot's world coordinate system, where the current pose includes at least the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system; a depth collection unit, configured to collect depth data in the at least one preset direction using at least one depth data sensor preset on the robot, where the at least one preset direction includes at least one of a horizontal direction and a vertical direction; and a depth processing unit, configured to filter out noise in the depth data.

Optionally, on the basis of the above embodiments, the feature point cloud determination module 32 includes: a point cloud generation unit, configured to convert the depth data into a three-dimensional point cloud based on the robot's camera model, where the depth data includes at least position point information and depth information; a plane division unit, configured to divide the three-dimensional point cloud into at least one plane according to the depth information and the preset normal vector information; and a feature extraction unit, configured to extract the outer contour points of each plane as the feature point cloud.
Optionally, on the basis of the above embodiments, the obstacle score determination module 33 includes: a position mapping unit, configured to convert the coordinates of at least one position point in the feature point cloud to the world coordinate system and map each converted position point to a target grid of the preset global grid map; a score determination unit, configured to, when the target grid to which a position point is mapped is a grid within an obstacle area of the preset global grid map, obtain the probability value of the target grid as the obstacle score of that position point; and a score statistics unit, configured to take the sum of the obstacle scores of all position points in the feature point cloud as the obstacle score of the feature point cloud.

Optionally, on the basis of the above embodiments, the positioning determination module 34 further includes: a first residual unit, configured to construct a residual function according to the current pose and the obstacle score of the feature point cloud; a parameter adjustment unit, configured to adjust the parameter information in the residual function so that the result value of the residual function is minimized, where the parameter information is the value of at least one of the abscissa, the ordinate, and the angle between the robot's heading and the X-axis of the world coordinate system in the current pose; and a positioning determination unit, configured to take the abscissa and ordinate of the current pose and the angle between the robot's heading and the X-axis of the world coordinate system at the minimum result value as the output positioning pose information.
可选的,在上述实施例的基础上,所述综合优化单元是设置为:根据所述当前位姿以及所述特征点云的障碍得分构建第一残差项;基于所述移动速度确定预测位姿,并将预测位姿与上一时刻的历史位姿的差值作为第二残差项;调整所述第一残差项和所述第二残差项中预测位姿中参数信息和/或所述当前位姿中参数信息使得所述第一残差项和所述第二残差项的和取值最小,所述参数信息包括横坐标、纵坐标以及机器人方向与世界坐标系的X轴的夹角中至少一个参数的取值;将所述第一残差项和所述第二残差项的和取值最小时的速度预测位姿和/或所述当前位姿的横坐标、纵坐标以及机器人方向与世界坐标系的X轴的夹角作为定位位姿信息。Optionally, on the basis of the above embodiment, the comprehensive optimization unit is configured to: construct a first residual item according to the current pose and the obstacle score of the feature point cloud; determine a prediction based on the moving speed pose, and the difference between the predicted pose and the previous moment’s historical pose as the second residual item; adjust the parameter information and the parameter information in the predicted pose in the first residual item and the second residual item /or the parameter information in the current pose makes the sum of the first residual term and the second residual term the smallest value, the parameter information includes the abscissa, ordinate, and the robot direction and the world coordinate system The value of at least one parameter in the included angle of the X axis; the velocity prediction pose and/or the transverse direction of the current pose when the sum of the first residual term and the second residual term is minimized Coordinates, ordinates, and the angle between the robot direction and the X-axis of the world coordinate system are used as positioning pose information.
Optionally, on the basis of the above embodiments, the apparatus further includes a map update module configured to update the robot pose corresponding to the positioning information to the preset global grid map.
FIG. 33 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. As shown in FIG. 33, the electronic device provided by the embodiment of the present application includes one or more processors 41 and a storage device 42. There may be one or more processors 41 in the electronic device; one processor 41 is taken as an example in FIG. 33. The storage device 42 is configured to store one or more programs; the one or more programs are executed by the one or more processors 41, so that the one or more processors 41 implement the positioning method described in any one of the embodiments of the present application.
The electronic device may further include an input device 43 and an output device 44.
In one embodiment, the electronic device further includes sensors, where the sensors include at least two top-view sensors and a wheel encoder. The fields of view of the at least two top-view sensors have a common-view area; the at least two top-view sensors are configured to collect top-view environment data above the electronic device, and the wheel encoder is configured to determine the speed and distance of travel of the electronic device.
The speed and distance collected by the wheel encoder can be used during positioning. The top-view environment data collected by the top-view sensors can be used for mapping and positioning.
The position of the top-view sensors on the electronic device is not limited here, as long as the top-view sensors can collect top-view environment data and the fields of view of the multiple top-view sensors have a common-view area.
The technical means by which the fields of view of the multiple top-view sensors have a common-view area is not limited. For example, every two adjacent top-view sensors of the at least two top-view sensors may have a common-view area; or one of the at least two top-view sensors may serve as a target top-view sensor, and each of the remaining top-view sensors has a common-view area with the field of view of the target top-view sensor.
Since the fields of view of the at least two top-view sensors have a common-view area, extrinsic calibration of the top-view sensors can be performed, so that the top-view environment data collected by the at least two top-view sensors can be stitched.
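As an illustration of what calibrated extrinsics make possible, the following sketch stitches two top-view point clouds by transforming one into the frame of a target top-view sensor. The 4×4 extrinsic matrix `T_target_from_other` is assumed to come from a prior calibration step; this is a sketch under that assumption, not the patented stitching procedure.

```python
import numpy as np

def stitch_top_view_clouds(cloud_target, cloud_other, T_target_from_other):
    """Stitch two top-view point clouds into the target sensor's frame.

    cloud_target        : (N, 3) points already in the target sensor frame
    cloud_other         : (M, 3) points in the other sensor's frame
    T_target_from_other : (4, 4) extrinsic transform from calibration
    """
    # Homogenize, transform, then drop the homogeneous coordinate.
    ones = np.ones((cloud_other.shape[0], 1))
    other_h = np.hstack([cloud_other, ones])
    other_in_target = (T_target_from_other @ other_h.T).T[:, :3]
    return np.vstack([cloud_target, other_in_target])
```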
In one embodiment, the electronic device further includes a top-view sensor and a head-up sensor; the head-up sensor is configured to collect head-up environment data in the traveling direction of the electronic device, and the top-view sensor is configured to collect top-view environment data above the electronic device.
In one embodiment, there are at least two top-view sensors, the at least two top-view sensors form different angles with the horizontal direction, and the fields of view of every two adjacent top-view sensors have a common-view area of a set ratio.
The top-view sensors in this embodiment can collect top-view environment data of the same area to improve collection accuracy. The multiple top-view sensors in this embodiment can also collect top-view environment data of different areas above the electronic device, so as to improve positioning accuracy by expanding the collection range.
In the present application, the multiple top-view sensors may have no common-view area, so as to collect more top-view environment data. In this embodiment, the fields of view of adjacent top-view sensors may have a common-view area of a set ratio, so that the top-view environment data can be stitched effectively. The set ratio is not limited; it may be, for example, one third or one quarter.
In one embodiment, the electronic device further includes a top-view sensor and a head-up sensor; the head-up sensor is configured to collect head-up environment data in the traveling direction of the electronic device, and the top-view sensor is configured to collect top-view environment data above the electronic device.
In one embodiment, there is one top-view sensor.
The position of the top-view sensor on the electronic device in this embodiment is not limited, as long as it can collect top-view environment data. Since the number of top-view sensors in this embodiment is one, the cost of the electronic device is reduced.
Optionally, the electronic device further includes a depth data sensor and an infrared data sensor; the depth data sensor is configured to collect depth data in a preset direction, and the infrared data sensor is configured to collect infrared data in the same preset direction.
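To illustrate how this depth-and-infrared sensor pair feeds the method described earlier (extracting edge points from the infrared data and converting them into a feature point cloud via the depth data), here is a minimal sketch. The pinhole intrinsics `fx, fy, cx, cy` and the choice of a Gaussian filter plus Canny detector are illustrative assumptions, not the specific camera model or filter of the application.

```python
import numpy as np
import cv2

def edge_points_to_cloud(ir_image, depth_image, fx, fy, cx, cy):
    """Extract edge points from IR data and lift them to 3D via depth.

    ir_image       : (H, W) uint8 infrared intensity image
    depth_image    : (H, W) depth in meters, aligned with the IR image
    fx, fy, cx, cy : pinhole intrinsics (assumed known from calibration)
    """
    denoised = cv2.GaussianBlur(ir_image, (5, 5), 0)  # filter out noise
    edges = cv2.Canny(denoised, 50, 150)              # 2D edge points
    vs, us = np.nonzero(edges)
    zs = depth_image[vs, us]
    valid = zs > 0                                    # drop pixels with no depth
    us, vs, zs = us[valid], vs[valid], zs[valid]
    # Pinhole back-projection: (u, v, z) -> (x, y, z) in the camera frame.
    xs = (us - cx) * zs / fx
    ys = (vs - cy) * zs / fy
    return np.column_stack([xs, ys, zs])
```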
Optionally, the electronic device further includes a depth data sensor configured to collect depth data in a preset direction.
The processor 41, the storage device 42, the input device 43 and the output device 44 in the electronic device may be connected by a bus or in other ways; in FIG. 33, connection by a bus is taken as an example.
The storage device 42 in the electronic device, as a computer-readable storage medium, may be configured to store one or more programs, which may be software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the positioning method provided by the embodiments of the present application (for example, the modules in the positioning apparatus shown in FIG. 30, including the acquisition module 21, the processing module 22, the generation module 23 and the positioning module 24, or the data acquisition module 31, the feature point cloud determination module 32, the obstacle score determination module 33 and the positioning determination module 34 in FIG. 31). The processor 41 runs the software programs, instructions and modules stored in the storage device 42 to execute the various functional applications and data processing of the electronic device, that is, to implement the positioning method in the above method embodiments.
The storage device 42 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the electronic device, and the like. In addition, the storage device 42 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some examples, the storage device 42 may include memories located remotely with respect to the processor 41, and these remote memories may be connected to the device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 43 may be configured to receive input digit or character information and to generate key signal inputs related to user settings and function control of the electronic device. The output device 44 may include a display device such as a display screen.
In an exemplary embodiment, the electronic device may specifically be a robot, or a positioning and navigation device installed on a robot. The robot or the positioning and navigation device can accurately determine the position of the robot through the positioning method provided by the embodiments of the present application.
When the one or more programs included in the above electronic device are executed by the one or more processors 41, the programs perform the following operations:
acquiring sensor data collected by the at least one top-view sensor; processing the sensor data; and positioning the electronic device according to the processed sensor data; or
collecting the current pose of the robot and depth data in at least one preset direction; determining a feature point cloud based on the depth data; determining an obstacle score of the feature point cloud in a preset global grid map, where the preset global grid map is constructed based on historical point clouds; and optimizing the current pose according to the obstacle score to determine positioning information of the robot.
An embodiment of the present application provides a computer-readable storage medium. FIG. 34 is a schematic structural diagram of a storage medium provided by an embodiment of the present application. A computer program 53 is stored on the computer-readable storage medium 51, and when executed by a processor 52, the program is used to execute a positioning method, which includes:
acquiring sensor data collected by the at least one top-view sensor; processing the sensor data; and positioning the electronic device according to the processed sensor data; or
collecting the current pose of the robot and depth data in at least one preset direction; determining a feature point cloud based on the depth data; determining an obstacle score of the feature point cloud in a preset global grid map, where the preset global grid map is constructed based on historical point clouds; and optimizing the current pose according to the obstacle score to determine positioning information of the robot.
Optionally, when executed by the processor 52, the computer program 53 may also be used to execute the positioning method provided by any embodiment of the present application.
The computer-readable storage medium 51 of the embodiments of the present application may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium 51. The computer-readable storage medium 51 may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer-readable storage medium 51 include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. The computer-readable storage medium 51 may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium 51; such a medium can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device.
Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical cable, radio frequency (RF) and the like, or any suitable combination of the above.
Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, via the Internet using an Internet service provider).

Claims (44)

  1. A positioning method applied to an electronic device, wherein the electronic device comprises at least one top-view sensor, the method comprising:
    acquiring sensor data collected by the at least one top-view sensor;
    processing the sensor data;
    positioning the electronic device according to the processed sensor data.
  2. The positioning method according to claim 1, wherein the electronic device comprises at least two top-view sensors; and
    positioning the electronic device according to the processed sensor data comprises: in a case where a positioning instruction is acquired, matching the processed sensor data with local map data in a global map to determine the pose of the electronic device in the global map.
  3. The method according to claim 2, wherein the sensor data comprises top-view environment data; and
    processing the sensor data comprises: aligning the top-view environment data of the at least two top-view sensors based on timestamps, and preprocessing the aligned top-view environment data.
  4. The method according to claim 3, wherein preprocessing the aligned top-view environment data comprises:
    removing noise from the aligned top-view environment data;
    stitching the denoised top-view environment data;
    extracting point cloud information of surface edges from the stitched top-view environment data.
  5. The method according to claim 4, wherein stitching the denoised top-view environment data comprises:
    transforming the denoised top-view environment data into the coordinate system of a target top-view sensor, wherein the target top-view sensor is one of the at least two top-view sensors.
  6. The method according to claim 2, wherein matching the processed sensor data with the local map data in the global map to determine the pose of the electronic device in the global map comprises:
    converting the point cloud information of surface edges included in the processed sensor data into grid data;
    determining local map data in the global map that matches the grid data;
    determining the pose of the electronic device in the global map according to the local map data and the grid data.
  7. The method according to claim 6, wherein determining the pose of the electronic device in the global map according to the local map data and the grid data comprises:
    determining the pose of the electronic device in the global map according to the pose relationship between the local map data and the global map and the pose relationship between the grid data and the local map data.
  8. The method according to claim 1, further comprising:
    in a case where a mapping instruction is acquired, adding the point cloud information of surface edges included in the processed sensor data to local map data that matches the processed sensor data;
    updating the local map data to which the point cloud information of surface edges has been added into the global map.
  9. The positioning method according to claim 1, wherein the electronic device comprises one top-view sensor and at least one head-up sensor, and the sensor data comprises head-up environment data collected by the at least one head-up sensor and top-view environment data collected by the one top-view sensor; and
    positioning the electronic device according to the processed sensor data comprises: generating a head-up grid map and a top-view grid map according to the processed sensor data, and positioning the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map.
  10. The method according to claim 9, wherein processing the sensor data comprises:
    preprocessing the sensor data;
    transforming the preprocessed sensor data into a body coordinate system;
    optimizing the sensor data after the coordinate transformation;
    obtaining processed top-view environment data and processed head-up environment data from the optimized sensor data.
  11. The method according to claim 10, wherein preprocessing the sensor data comprises:
    aligning the head-up environment data and the top-view environment data based on timestamps;
    extracting point cloud information of surface edges from the aligned top-view environment data.
  12. The method according to claim 10, wherein optimizing the sensor data after the coordinate transformation comprises:
    processing the coordinate-transformed sensor data through brute-force matching.
  13. The method according to claim 9, wherein positioning the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map comprises:
    performing loop-closure detection on the head-up grid map according to the processed head-up environment data to obtain a head-up matching rate;
    performing loop-closure detection on the top-view grid map according to the processed top-view environment data to obtain a top-view matching rate;
    in a case where at least one of the following is satisfied: the head-up matching rate is greater than a first set threshold, or the top-view matching rate is greater than a second set threshold, determining a global pose of the electronic device.
  14. The method according to claim 9, wherein generating the head-up grid map and the top-view grid map according to the processed sensor data comprises:
    generating the head-up grid map based on the processed head-up environment data;
    generating the top-view grid map based on the processed top-view environment data.
  15. The positioning method according to claim 1, wherein the electronic device comprises at least two top-view sensors and at least one head-up sensor, and the sensor data comprises head-up environment data collected by the at least one head-up sensor and top-view environment data collected by the at least two top-view sensors; and
    positioning the electronic device according to the processed sensor data comprises: generating a head-up grid map and a top-view grid map according to the processed sensor data, and positioning the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map.
  16. The method according to claim 15, wherein processing the sensor data comprises:
    preprocessing the sensor data;
    transforming the preprocessed sensor data into a body coordinate system;
    optimizing the sensor data after the coordinate transformation;
    obtaining processed top-view environment data and processed head-up environment data from the optimized sensor data.
  17. The method according to claim 16, wherein preprocessing the sensor data comprises:
    aligning the head-up environment data and the top-view environment data based on timestamps;
    stitching the aligned top-view environment data;
    extracting point cloud information of surface edges from the stitched top-view environment data.
  18. The method according to claim 16, wherein optimizing the sensor data after the coordinate transformation comprises:
    processing the coordinate-transformed sensor data through brute-force matching.
  19. The method according to claim 15, wherein positioning the electronic device according to the processed sensor data, the top-view grid map and the head-up grid map comprises:
    performing loop-closure detection on the head-up grid map according to the processed head-up environment data to obtain a head-up matching rate;
    performing loop-closure detection on the top-view grid map according to the processed top-view environment data to obtain a top-view matching rate;
    in a case where at least one of the following is satisfied: the head-up matching rate is greater than a first set threshold, or the top-view matching rate is greater than a second set threshold, determining a global pose of the electronic device.
  20. The method according to claim 15, wherein generating the head-up grid map and the top-view grid map according to the processed sensor data comprises:
    generating the head-up grid map based on the processed head-up environment data;
    generating the top-view grid map based on the processed top-view environment data.
  21. A positioning method, comprising:
    collecting a current pose of a robot and depth data in at least one preset direction;
    determining a feature point cloud based on the depth data;
    determining an obstacle score of the feature point cloud in a preset global grid map, wherein the preset global grid map is constructed based on historical point clouds;
    optimizing the current pose according to the obstacle score to determine positioning information of the robot.
  22. The positioning method according to claim 21, wherein collecting the current pose of the robot and the depth data in the at least one preset direction comprises: collecting the current pose of the robot, the depth data in the at least one preset direction, and infrared data in the at least one preset direction; and
    determining the feature point cloud based on the depth data comprises: extracting at least one edge point from the infrared data, and converting the at least one edge point into the feature point cloud according to the depth data.
  23. The method according to claim 22, wherein optimizing the current pose according to the obstacle score to determine the positioning information of the robot comprises:
    optimizing the current pose according to a moving speed of the robot and the obstacle score to determine the positioning information of the robot.
  24. The method according to claim 22, wherein collecting the current pose of the robot, the depth data in the at least one preset direction and the infrared data in the at least one preset direction comprises:
    acquiring the current pose of the robot in a world coordinate system, wherein the current pose comprises at least an abscissa, an ordinate, and an angle between the robot direction and the X-axis of the world coordinate system;
    collecting the depth data in the at least one preset direction using at least one depth data sensor preset on the robot, and collecting the infrared data in the at least one preset direction using at least one infrared data sensor preset on the robot, wherein the at least one preset direction comprises at least one of a horizontal direction or a vertical direction.
  25. The method according to claim 22, wherein extracting the at least one edge point from the infrared data and converting the at least one edge point into the feature point cloud according to the depth data comprises:
    filtering out noise from the infrared data;
    extracting the at least one edge point from the noise-filtered infrared data;
    converting two-dimensional coordinates of the at least one edge point into three-dimensional coordinates using a camera model of the robot and the depth data to form the feature point cloud.
  26. The method according to claim 22, wherein determining the obstacle score of the feature point cloud in the preset global grid map comprises:
    transforming the coordinates of the at least one edge point in the feature point cloud into the world coordinate system, and mapping each transformed edge point to a target grid of the preset global grid map;
    in a case where the target grid to which each edge point is mapped is a grid in an obstacle area of the preset global grid map, obtaining the probability value of the target grid to which that edge point is mapped as the obstacle score of that edge point;
    taking the sum of the obstacle scores of the at least one edge point in the feature point cloud as the obstacle score of the feature point cloud.
  27. The method according to claim 26, wherein the positioning information comprises positioning pose information, and the positioning pose information is pose information used by the robot for positioning; and
    optimizing the current pose according to the obstacle score to determine the positioning information of the robot comprises:
    constructing a residual function according to the current pose and the obstacle score of the feature point cloud;
    adjusting parameter information in the residual function so that the result value of the residual function is minimized, wherein the parameter information in the residual function is the value of at least one of the abscissa of the current pose, the ordinate of the current pose, and the angle between the robot direction and the X-axis of the world coordinate system;
    taking the abscissa and ordinate of the current pose and the angle between the robot direction and the X-axis of the world coordinate system when the result value is minimized as the positioning pose information.
  28. The method according to claim 23, wherein the positioning information comprises positioning pose information, and the positioning pose information is pose information used by the robot for positioning; and
    optimizing the current pose according to the moving speed of the robot and the obstacle score to determine the positioning information of the robot comprises:
    constructing a first residual term according to the current pose and the obstacle score;
    determining a predicted pose based on the moving speed, and taking the difference between the predicted pose and the historical pose at the previous moment as a second residual term;
    adjusting at least one of parameter information in the predicted pose or parameter information in the current pose within the first residual term and the second residual term so that the sum of the first residual term and the second residual term is minimized, wherein the parameter information comprises the value of at least one of the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system;
    taking the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system of at least one of the predicted pose or the current pose when the sum of the first residual term and the second residual term is minimized as the positioning pose information.
  29. The positioning method according to claim 21, wherein
    determining the feature point cloud based on the depth data comprises: extracting outer contour points of at least one plane from the depth data, and forming the feature point cloud from the outer contour points of the at least one plane.
  30. The method according to claim 29, wherein optimizing the current pose according to the obstacle score to determine the positioning information of the robot comprises:
    optimizing the current pose according to a moving speed of the robot and the obstacle score to determine the positioning information of the robot.
  31. The method according to claim 29, wherein collecting the current pose of the robot and the depth data in the at least one preset direction comprises:
    acquiring the current pose of the robot in a world coordinate system, wherein the current pose comprises at least an abscissa, an ordinate, and an angle between the robot direction and the X-axis of the world coordinate system;
    collecting the depth data in the at least one preset direction using at least one depth data sensor preset on the robot, wherein the at least one preset direction comprises at least one of a horizontal direction or a vertical direction;
    filtering out noise from the depth data in the at least one preset direction.
  32. The method according to claim 29, wherein extracting the outer contour points of the at least one plane from the depth data and forming the feature point cloud from the outer contour points of the at least one plane comprises:
    converting the depth data into a three-dimensional point cloud based on a camera model of the robot, wherein the depth data comprises at least position point information and depth information;
    segmenting the three-dimensional point cloud into at least one plane according to the depth information and preset normal vector information;
    extracting the outer contour points of the at least one plane as the feature point cloud.
  33. The method according to claim 29, wherein determining the obstacle score of the feature point cloud in the preset global grid map comprises:
    transforming the coordinates of a plurality of position points in the feature point cloud into the world coordinate system, and mapping each transformed position point to a target grid of the preset global grid map;
    in a case where the target grid to which each position point is mapped is a grid in an obstacle area of the preset global grid map, obtaining the probability value of the target grid to which that position point is mapped as the obstacle score of that position point;
    taking the sum of the obstacle scores of the plurality of position points in the feature point cloud as the obstacle score of the feature point cloud.
  34. The method according to claim 33, wherein the positioning information comprises positioning pose information, and the positioning pose information is pose information used by the robot for positioning; and
    optimizing the current pose according to the obstacle score to determine the positioning information of the robot comprises:
    constructing a residual function according to the current pose and the obstacle score of the feature point cloud;
    adjusting parameter information in the residual function so that the result value of the residual function is minimized, wherein the parameter information in the residual function is the value of at least one of the abscissa of the current pose, the ordinate of the current pose, and the angle between the robot direction and the X-axis of the world coordinate system;
    taking the abscissa and ordinate of the current pose and the angle between the robot direction and the X-axis of the world coordinate system when the result value is minimized as the positioning pose information.
  35. The method according to claim 30, wherein the positioning information comprises positioning pose information, and the positioning pose information is pose information used by the robot for positioning; and
    optimizing the current pose according to the moving speed of the robot and the obstacle score to determine the positioning information of the robot comprises:
    constructing a first residual term according to the current pose and the obstacle score;
    determining a predicted pose based on the moving speed, and taking the difference between the predicted pose and the historical pose at the previous moment as a second residual term;
    adjusting at least one of parameter information in the predicted pose or parameter information in the current pose within the first residual term and the second residual term so that the sum of the first residual term and the second residual term is minimized, wherein the parameter information comprises the value of at least one of the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system;
    taking the abscissa, the ordinate, and the angle between the robot direction and the X-axis of the world coordinate system of at least one of the predicted pose or the current pose when the sum of the first residual term and the second residual term is minimized as the positioning pose information.
  36. The method according to any one of claims 22-35, further comprising:
    updating the robot pose corresponding to the positioning information to the preset global grid map.
  37. A positioning apparatus configured in an electronic device, the apparatus comprising:
    an acquisition module configured to acquire sensor data collected by at least one top-view sensor;
    a processing module configured to process the sensor data;
    a positioning module configured to position the electronic device according to the processed sensor data.
  38. A positioning apparatus, comprising:
    a data acquisition module configured to collect a current pose of a robot and depth data in at least one preset direction;
    a feature point cloud determination module configured to determine a feature point cloud based on the depth data;
    an obstacle score determination module configured to determine an obstacle score of the feature point cloud in a preset global grid map, wherein the preset global grid map is constructed based on historical point clouds;
    a positioning determination module configured to optimize the current pose according to the obstacle score to determine positioning information of the robot.
  39. An electronic device, comprising:
    at least one processor; and
    a storage device configured to store one or more programs;
    wherein, when the one or more programs are executed by the at least one processor, the at least one processor implements the positioning method according to any one of claims 1-36.
  40. The electronic device according to claim 39, further comprising at least two top-view sensors, wherein the fields of view of the at least two top-view sensors have a common-view area, and the at least two top-view sensors are configured to collect top-view environment data.
  41. The electronic device according to claim 39, further comprising one top-view sensor and at least one head-up sensor, wherein
    the at least one head-up sensor is configured to collect head-up environment data;
    the one top-view sensor is configured to collect top-view environment data.
  42. The electronic device according to claim 39, further comprising at least two top-view sensors and at least one head-up sensor, wherein
    the at least one head-up sensor is configured to collect head-up environment data;
    the at least two top-view sensors are configured to collect top-view environment data.
  43. The electronic device according to claim 42, wherein the at least two top-view sensors form different angles with the horizontal direction, and the fields of view of every two adjacent top-view sensors have a common-view area of a set ratio.
  44. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the positioning method according to any one of claims 1-36.
PCT/CN2021/096828 2021-05-28 2021-05-28 Positioning method and apparatus, electronic device, and storage medium WO2022246812A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/096828 WO2022246812A1 (en) 2021-05-28 2021-05-28 Positioning method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/096828 WO2022246812A1 (en) 2021-05-28 2021-05-28 Positioning method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022246812A1 (en)

Family

ID=84228396

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/096828 WO2022246812A1 (en) 2021-05-28 2021-05-28 Positioning method and apparatus, electronic device, and storage medium

Country Status (1)

Country Link
WO (1) WO2022246812A1 (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130138247A1 (en) * 2005-03-25 2013-05-30 Jens-Steffen Gutmann Re-localization of a robot for slam
US20210089040A1 (en) * 2016-02-29 2021-03-25 AI Incorporated Obstacle recognition method for autonomous robots
CN110530368A (en) * 2019-08-22 2019-12-03 浙江大华技术股份有限公司 A kind of robot localization method and apparatus
CN111590595A (en) * 2020-06-30 2020-08-28 深圳市银星智能科技股份有限公司 Positioning method and device, mobile robot and storage medium
CN111862214A (en) * 2020-07-29 2020-10-30 上海高仙自动化科技发展有限公司 Computer equipment positioning method and device, computer equipment and storage medium
CN111862215A (en) * 2020-07-29 2020-10-30 上海高仙自动化科技发展有限公司 Computer equipment positioning method and device, computer equipment and storage medium
CN111862216A (en) * 2020-07-29 2020-10-30 上海高仙自动化科技发展有限公司 Computer equipment positioning method and device, computer equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115601434A (en) * 2022-12-12 2023-01-13 安徽蔚来智驾科技有限公司(Cn) Loop detection method, computer device, computer-readable storage medium and vehicle
CN116148823A (en) * 2023-04-12 2023-05-23 北京集度科技有限公司 External parameter calibration method, device, vehicle and computer program product
CN116148823B (en) * 2023-04-12 2023-09-19 北京集度科技有限公司 External parameter calibration method, device, vehicle and computer program product

Similar Documents

Publication Publication Date Title
US11900536B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous tracking
CN107179086B (en) Drawing method, device and system based on laser radar
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN109084746B (en) Monocular mode for autonomous platform guidance system with auxiliary sensor
US10794710B1 (en) High-precision multi-layer visual and semantic map by autonomous units
CN111612760B (en) Method and device for detecting obstacles
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN112525202A (en) SLAM positioning and navigation method and system based on multi-sensor fusion
CN109461208B (en) Three-dimensional map processing method, device, medium and computing equipment
CN111968229A (en) High-precision map making method and device
KR20220028042A (en) Pose determination method, apparatus, electronic device, storage medium and program
JP7422105B2 (en) Obtaining method, device, electronic device, computer-readable storage medium, and computer program for obtaining three-dimensional position of an obstacle for use in roadside computing device
WO2022246812A1 (en) Positioning method and apparatus, electronic device, and storage medium
JP7440005B2 (en) High-definition map creation method, apparatus, device and computer program
WO2022262160A1 (en) Sensor calibration method and apparatus, electronic device, and storage medium
CN113048980B (en) Pose optimization method and device, electronic equipment and storage medium
CN108537844B (en) Visual SLAM loop detection method fusing geometric information
CN110728751A (en) Construction method of indoor 3D point cloud semantic map
CN111338383A (en) Autonomous flight method and system based on GAAS and storage medium
CN112833892B (en) Semantic mapping method based on track alignment
CN111862214A (en) Computer equipment positioning method and device, computer equipment and storage medium
KR20200143228A (en) Method and Apparatus for localization in real space using 3D virtual space model
CN117036300A (en) Road surface crack identification method based on point cloud-RGB heterogeneous image multistage registration mapping
CN112241718A (en) Vehicle information detection method, detection model training method and device
CN113313765B (en) Positioning method, positioning device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21942386

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE