CN115585802A - Map creation method and device, electronic equipment and readable storage medium - Google Patents

Map creation method and device, electronic equipment and readable storage medium

Info

Publication number
CN115585802A
Authority
CN
China
Prior art keywords
centroid
point
subset
normal
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211073910.3A
Other languages
Chinese (zh)
Inventor
齐奇
邝嘉隆
周佳
王轶丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202211073910.3A priority Critical patent/CN115585802A/en
Publication of CN115585802A publication Critical patent/CN115585802A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
      • G01 - MEASURING; TESTING
        • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
          • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
            • G01C 21/38 - Electronic maps specially adapted for navigation; Updating thereof
              • G01C 21/3804 - Creation or updating of map data
                • G01C 21/3833 - Creation or updating of map data characterised by the source of data
                  • G01C 21/3841 - Data obtained from two or more sources, e.g. probe vehicles
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 11/00 - 2D [Two Dimensional] image generation
            • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
              • G06T 11/206 - Drawing of charts or graphs
          • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
            • G06T 17/05 - Geographic models
          • G06T 7/00 - Image analysis
            • G06T 7/60 - Analysis of geometric attributes
              • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
              • G06T 7/66 - Analysis of geometric attributes of image moments or centre of gravity
          • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 - Image acquisition modality
              • G06T 2207/10028 - Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Length Measuring Devices With Unspecified Measuring Means (AREA)

Abstract

The embodiment of the application provides a map creating method and device, electronic equipment and a readable storage medium, and relates to the technical field of robots. The method comprises the following steps: obtaining initial point cloud data of a target environment, wherein the initial point cloud data comprises coordinates of each measuring point in a target global coordinate system; obtaining a centroid information set from the initial point cloud data, wherein the centroid information set comprises coordinates of each centroid, and the number of the centroids in the centroid information set is smaller than that of the measuring points in the initial point cloud data; obtaining boundary information of the target environment according to the centroid information set, wherein the boundary information comprises a boundary line or a boundary plane; and obtaining an environment map of the target environment according to the boundary information. Therefore, the centroid can be determined from the point cloud, the boundary line or the boundary plane of the environment is determined based on the centroid, and the environment map is obtained based on the boundary line or the boundary plane.

Description

Map creation method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of robotics, and in particular, to a map creation method, apparatus, electronic device, and readable storage medium.
Background
Map creation is a fundamental and important problem in mobile robotics, with wide application in related fields such as mobile robot navigation and positioning and global path planning. Usually, point cloud data is collected first, and a map is then constructed based on the point cloud data. However, existing map creation methods require a large amount of computation and are slow.
Disclosure of Invention
The embodiment of the application provides a map creation method and device, electronic equipment and a readable storage medium, which can create a map with a small amount of computation and at high speed.
The embodiment of the application can be realized as follows:
in a first aspect, an embodiment of the present application provides a map creation method, where the method includes:
obtaining initial point cloud data of a target environment, wherein the initial point cloud data comprises coordinates of each measuring point in a target global coordinate system;
obtaining a centroid information set from the initial point cloud data, wherein the centroid information set comprises coordinates of centroids, and the number of centroids in the centroid information set is smaller than the number of measuring points in the initial point cloud data;
obtaining boundary information of the target environment according to the centroid information set, wherein the boundary information comprises a boundary line or a boundary plane;
and obtaining an environment map of the target environment according to the boundary information.
In a second aspect, an embodiment of the present application provides a map creation apparatus, where the apparatus includes:
a point cloud data obtaining module, configured to obtain initial point cloud data of a target environment, wherein the initial point cloud data comprises coordinates of each measuring point in a target global coordinate system;
the centroid calculation module is used for obtaining a centroid information set from the initial point cloud data, wherein the centroid information set comprises coordinates of each centroid, and the number of the centroids in the centroid information set is smaller than that of the measuring points in the initial point cloud data;
a boundary calculation module, configured to obtain boundary information of the target environment according to the centroid information set, where the boundary information includes a boundary line or a boundary plane;
and the processing module is used for obtaining an environment map of the target environment according to the boundary information.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory stores machine executable instructions that can be executed by the processor, and the processor can execute the machine executable instructions to implement the map creation method described in the foregoing embodiment.
In a fourth aspect, the present application provides a readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the map creation method according to the foregoing embodiment.
According to the map creation method, the map creation device, the electronic equipment and the readable storage medium provided by the embodiments of the application, the centroid is determined from the point cloud, the boundary line or the boundary plane of the environment is then determined based on the centroid, and the environment map is obtained, so that the environment map can be created with a small amount of computation and at high speed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a second schematic block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 3 is a third block schematic diagram of an electronic apparatus according to an embodiment of the present application;
fig. 4 is one of operation diagrams of the electronic device for 2D environment sensing according to the embodiment of the present application;
fig. 5 is a second schematic diagram illustrating the operation of the electronic device for 2D environment sensing according to the embodiment of the present application;
fig. 6 is a schematic diagram illustrating the operation of the electronic device for 3D environment sensing according to the embodiment of the present application;
fig. 7 is a schematic flowchart of a map creating method according to an embodiment of the present application;
FIG. 8 is a flowchart illustrating one of the sub-steps included in step S200 of FIG. 7;
FIG. 9 is a second schematic flowchart of the sub-steps included in step S200 in FIG. 7;
FIG. 10 is a flowchart illustrating one of the sub-steps included in step S300 of FIG. 7;
FIG. 11 is a schematic flow chart of the substeps involved in substep S330 of FIG. 10;
FIG. 12 is a third schematic flowchart illustrating the sub-steps included in step S200 in FIG. 7;
FIG. 13 is a second schematic flowchart of the sub-steps included in step S300 in FIG. 7;
FIG. 14 is a schematic flow chart of the substeps involved in substep S380 of FIG. 13;
fig. 15 is a block diagram illustrating a map creation apparatus according to an embodiment of the present application.
Reference numerals: 100 - electronic device; 110 - memory; 120 - processor; 130 - communication unit; 140 - moving unit; 150 - single-point ranging unit; 161 - pitch rotation subunit; 162 - yaw rotation subunit; 200 - map creation apparatus; 210 - point cloud data obtaining module; 220 - centroid calculation module; 230 - boundary calculation module; 240 - processing module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
Referring to fig. 1, fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 100 may be, but is not limited to, a computer, a server, a robot, etc. The electronic device 100 may include a memory 110, a processor 120, and a communication unit 130. The elements of the memory 110, the processor 120 and the communication unit 130 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 110 is used to store programs or data. The memory 110 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 120 is used to read/write data or programs stored in the memory 110 and perform corresponding functions. For example, the memory 110 stores therein a map creating apparatus 200, and the map creating apparatus 200 includes at least one software functional module that can be stored in the memory 110 in the form of software or firmware (firmware). The processor 120 executes various functional applications and data processing by running software programs and modules stored in the memory 110, such as the map creation apparatus 200 in the embodiment of the present application, so as to implement the map creation method in the embodiment of the present application.
The communication unit 130 is used for establishing a communication connection between the electronic apparatus 100 and another communication terminal via a network, and for transceiving data via the network.
It should be understood that the structure shown in fig. 1 is only a schematic structural diagram of the electronic device 100, and the electronic device 100 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Alternatively, the electronic device 100 may be a robot or the like that can autonomously move and complete mapping.
At present, the following two methods are generally adopted for map building.
Mode 1: the autonomous mobile robot scans the environment through 360° using an onboard single-line or multi-line lidar to acquire two-dimensional/three-dimensional information of a structured or unstructured environment. However, current single-line/multi-line laser ranging sensor technology is complex to implement, requires a high-performance computing unit, and is costly.
Mode 2: the autonomous mobile robot acquires three-dimensional information of a structured or unstructured environment from the parallax of captured images (by extracting feature points in the images and calculating the positional deviation between corresponding points) using an RGB binocular camera. However, this method places requirements on light intensity: the ambient illumination cannot exceed the maximum/minimum values supported by the camera, and a certain contrast between light and dark in the environment is required. Moreover, a specific algorithm has to be adopted to identify the feature points in the image (i.e., in the environment) and calculate the positional deviation between corresponding image points, which performs poorly in texture-free or weakly textured environments. The method also involves a large amount of computation, needs a high-performance computing unit, and is costly.
In this embodiment, the electronic device 100 may obtain the point cloud data as described in the manner 1, and then quickly create a map through the map creation manner provided by the embodiment of the present application.
In this embodiment, the electronic device 100 may also obtain the point cloud data in a single-point ranging manner, and then quickly create a map in the map creation manner provided by the embodiment of the present application. Thus, the map can be created, the device cost of the electronic device 100 can be reduced, and no requirements are imposed on the ambient illumination and the contrast.
Referring to fig. 2, fig. 2 is a second block diagram of the electronic device 100 according to the embodiment of the present disclosure. In this embodiment, the electronic device 100 can move autonomously, and may further include a moving unit 140, a single-point ranging unit 150, a rotating unit, and a pose acquisition unit. The moving unit 140 may be an autonomous moving mechanism configured to carry the single-point ranging unit 150, the rotating unit, the pose acquisition unit, the memory 110, the processor 120, the communication unit 130, and the like, so as to drive the electronic device 100 to move.
The single-point ranging unit 150 is configured to measure the distance between the single-point ranging unit 150 and a measurement point; that is, the single-point ranging unit 150 can determine only one measurement point at a time. The single-point ranging unit 150 may emit a modulated light wave or sound wave of a specific wavelength and calculate the distance between the single-point ranging unit 150 and an obstacle based on the time difference. The single-point ranging unit 150 may be fixed under the rotating unit and driven by the rotating unit to rotate, so as to measure in different directions.
The rotating unit is used for driving the single-point distance measuring unit 150 to rotate. The rotating unit may be located between the single point ranging unit 150 and the moving unit 140. The rotation unit may comprise a pitch rotation subunit 161 and/or a yaw rotation subunit 162. When the rotating unit includes a pitch rotating subunit 161 and a yaw rotating subunit 162, the single-point ranging unit 150 may be fixed to the pitch rotating subunit 161 as shown in fig. 2, the pitch rotating subunit 161 is fixed to the yaw rotating subunit 162, and the yaw rotating subunit 162 is fixed to the moving unit 140; alternatively, the single-point ranging unit 150 may be fixed to the yaw rotation subunit 162, the yaw rotation subunit 162 is fixed to the pitch rotation subunit 161, and the pitch rotation subunit 161 is fixed to the moving unit 140, and the specific arrangement may be set according to actual requirements.
As shown in fig. 2, the pitch rotation subunit 161 pitches in the vertical plane under electrical control. Since the single-point ranging unit 150 is fixed on it, the single-point ranging unit 150 is driven to the corresponding pitch angle. The yaw rotation subunit 162 rotates in the horizontal plane under electrical control. Since the pitch rotation subunit 161 is fixed on it, its horizontal rotation causes the pitch rotation subunit 161 to rotate in the same direction and by the same angle.
The pose acquisition unit is used for acquiring pose description information of the electronic device 100. The pose description information may include the position coordinates and pose of the electronic device 100 in the target global coordinate system during environment sensing; it may also include mileage information, attitude information, and the like of the electronic device 100 during environment sensing, which can be used to calculate the coordinates of the electronic device 100 in the target global coordinate system. The target global coordinate system may be a coordinate system established with the environment sensing starting point as its origin.
Optionally, the pose acquisition unit may include an odometer and an attitude measurement subunit. The odometer is used to obtain mileage information, and the attitude measurement subunit is used to obtain attitude information. Optionally, the attitude measurement subunit may include a geomagnetic sensor and an inertial measurement unit, where the geomagnetic sensor is configured to obtain geomagnetic sensor information and the inertial measurement unit is configured to obtain attitude information. Correspondingly, the pose description information comprises the geomagnetic sensor information, the attitude information, and the mileage information, which may be used to determine the coordinates of the electronic device 100 in the target global coordinate system.
The processor 120 is electrically connected to the moving unit 140, the single-point ranging unit 150, the rotating unit, and the pose acquisition unit. It controls the moving unit 140 to move so as to perform environment detection, obtains the initial point cloud data according to the received ranging information, the rotation angle information of the rotating unit, and the pose description information, and completes the map creation of the structured environment based on the initial point cloud data.
Referring to fig. 3, fig. 3 is a third block diagram of an electronic device 100 according to an embodiment of the present disclosure. In this embodiment, the electronic device 100 may further include an obstacle avoidance sensor. The electronic device 100 can move autonomously through its own moving mechanism and sense environmental obstacles through the obstacle avoidance sensor, so as to complete the exploration of a structured environment.
Optionally, in this embodiment, the electronic device 100 may further include a camera or a detection unit, configured to collect information to determine whether the endpoint of the environment detection has been reached, and end the environment sensing when the endpoint is reached. For example, a sign may be set up at the exit of the environment to be detected; when the sign is detected in an image obtained by the camera and the pixel size and shape of the sign in the image satisfy the corresponding preset requirements, it can be determined that the endpoint has been reached. For another example, an NFC (Near Field Communication) tag may be disposed at the exit of the environment to be detected, and when the detection unit detects the NFC tag, it may be determined that the endpoint has been reached.
As shown in fig. 4 and 5, when performing 2D environment sensing, the electronic device 100 may first perform environment sensing at the entrance X, and then perform measurement again after traveling a certain distance to obtain environment information; and so on until it reaches the exit Y. When the electronic device 100 stays at a position to sense the environment, the single-point ranging unit 150 may be driven by the yaw rotation subunit 162 to rotate in the horizontal plane, for example at the interval Δθ, so as to obtain measurement point data in different directions. It can be understood that the height H of the single-point ranging unit 150 does not change when 2D environment sensing is performed.
As shown in fig. 6, when performing 3D environment sensing, similarly to 2D environment sensing, the electronic device 100 may perform environment sensing at the entrance X, and then perform measurement again after traveling a certain distance to obtain environment information; and so on until it reaches the exit Y. When the electronic device 100 stays at a position to sense the environment, the single-point ranging unit 150 may rotate in the horizontal plane under the driving of the yaw rotation subunit 162 and also rotate in the vertical plane under the driving of the pitch rotation subunit 161, for example at the horizontal interval Δθ and the vertical interval Δβ, so as to obtain measurement point data in different directions. It can be understood that the height H of the single-point ranging unit 150 does not change when 3D environment sensing is performed.
Optionally, the electronic device 100 may stop and perform environment detection after traveling a certain distance (e.g., 30 cm).
Referring to fig. 7, fig. 7 is a flowchart illustrating a map creation method according to an embodiment of the present disclosure. The method may be applied to the electronic device 100 described above. The specific flow of the map creation method is explained in detail below. In this embodiment, the method may include steps S100 to S400.
Step S100, obtaining initial point cloud data of a target environment.
In this embodiment, the target environment is the environment for which map construction is to be completed, and may be determined according to actual requirements. The electronic device 100 may obtain the initial point cloud data from other devices; or obtain original point cloud data from other devices and then process it to obtain the initial point cloud data; or perform environment sensing itself to obtain the initial point cloud data. The way in which the initial point cloud data is acquired may be determined according to actual requirements and is not specifically limited here. The initial point cloud data comprises the coordinates of each measuring point in a target global coordinate system. The origin of the target global coordinate system (i.e. the target origin) may be determined according to actual conditions; for example, it may be the position where detection of the target environment starts, that is, the reference point X in fig. 4 may be used as the target origin of the target global coordinate system.
And step S200, obtaining a centroid information set from the initial point cloud data.
After the initial point cloud data is obtained, it may be processed to obtain a centroid information set. The centroid information set comprises the coordinates of each centroid, and the number of centroids in the centroid information set is smaller than the number of measurement points in the initial point cloud data.
And step S300, obtaining boundary information of the target environment according to the centroid information set.
Wherein the boundary information includes a boundary line or a boundary plane. It is understood that, in the case where a 2D map needs to be created, a boundary line is obtained based on the centroid information set; in case a 3D map needs to be created, a bounding plane is obtained based on the set of centroid information.
And step S400, obtaining an environment map of the target environment according to the boundary information.
According to the method and the device, the centroid is determined from the point cloud, the boundary line or the boundary plane of the environment is then determined based on the centroid, and the environment map is obtained, so that the environment map can be created with a small amount of computation and at high speed.
As a possible implementation manner, the initial point cloud data may be obtained by rotating a single-point ranging unit. The single-point ranging unit is used for obtaining information of one measuring point through one-time measurement.
In this embodiment, the entrance of the structured terrain (i.e., the target environment) that needs to be mapped may be specified manually. With the entrance as the reference point, the electronic apparatus 100 is placed at the reference point to start its operation. The endpoint at which the electronic device 100 ends its exploration is confirmed manually and may be marked with a visual tag. When the electronic device 100 recognizes the visual tag within a certain range, it completes the point cloud data collection of the structured environment, taking the visual tag as the endpoint.
After the environmental point cloud data has been collected at the current acquisition point, the electronic device 100 determines the next acquisition point according to the acquired obstacle information (for example, the distance and direction between itself and an obstacle) and its own pose information, stops after reaching that point, and then begins collecting the environmental point cloud data of that acquisition point.
The environmental point cloud data may be divided into 2D point cloud data and 3D point cloud data. When a 2D map needs to be created, 2D point cloud data can be obtained; when a 3D map needs to be created, 3D point cloud data may be obtained.
The 2D point cloud data may be obtained as follows. Based on the local coordinate system of the electronic device 100, the processor controls the yaw rotation subunit to perform a 360° rotation traversal, starting from the front of the single-point ranging unit, with Δθ as the stepping interval angle. The local coordinate system is the coordinate system of the electronic device, and the origin of this coordinate system may be a certain point on the electronic device 100.
Within the set effective measurement range of the single-point ranging unit, the distance and yaw angle data [D(n), n·Δθ] of each measurement point are recorded and converted into coordinate data in the local coordinate system of the electronic device 100.
The 3D point cloud data may be obtained as follows. Taking the local coordinate system of the electronic device 100 as a reference, the processor controls the yaw rotation subunit to perform a 360° rotation traversal in the horizontal plane with Δθ as the stepping interval, and controls the pitch rotation subunit to rotate and traverse within the controllable pitch angle range with Δβ as the stepping interval. The measurement point distances, yaw angles and pitch angles [L(n), n·Δθ, m·Δβ] are recorded, and the recorded data is converted into coordinate data in the local coordinate system of the electronic device 100.
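For illustration only, the following minimal Python sketch (not part of the patent) converts a 2D measurement [D(n), n·Δθ] and a 3D measurement [L(n), n·Δθ, m·Δβ] into Cartesian coordinates in the local coordinate system, under the assumption that the yaw angle is measured in the horizontal plane from the local x-axis and the pitch angle is measured from the horizontal plane; the function names are illustrative.

```python
import math

def measurement_to_local_2d(distance, yaw):
    """Convert a 2D measurement [D(n), n*dtheta] into local (x, y) coordinates."""
    return (distance * math.cos(yaw), distance * math.sin(yaw))

def measurement_to_local_3d(distance, yaw, pitch):
    """Convert a 3D measurement [L(n), n*dtheta, m*dbeta] into local (x, y, z) coordinates."""
    horizontal = distance * math.cos(pitch)  # projection onto the horizontal plane
    return (horizontal * math.cos(yaw),
            horizontal * math.sin(yaw),
            distance * math.sin(pitch))

# Example: a point measured 2.0 m away at yaw 30 deg and pitch 10 deg
print(measurement_to_local_3d(2.0, math.radians(30), math.radians(10)))
```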
Alternatively, to ensure the accuracy of the recorded distance of the measuring point, the distance may be specially marked when the measured distance exceeds the measuring range.
Assume that the maximum measurement distance of the single-point ranging unit is Lmax and the measurement error is ±Δs%; the absolute error at Lmax is then Lmax·Δs%. In order to reduce the measurement error, the effective measurement range of the single-point ranging unit can be set to Lef (Lef < Lmax), and when the distance measured by the single-point ranging unit is greater than or equal to Lef, the data can be marked as invalid. That is, the specially marked data is invalid data, and invalid data need not be saved.
After the coordinate data in the local coordinate system of the current acquisition point has been obtained, the transformation between the coordinate system of the current acquisition point and the coordinate system of the reference point X (namely, the target global coordinate system) can be determined from the information recorded by sensors such as the odometer, the inertial measurement unit, and the geomagnetic sensor, and the coordinate data of the current acquisition point can be converted into coordinate data in the reference point X coordinate system, so that the coordinate data in the target global coordinate system is obtained.
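The conversion into the reference point X coordinate system is a rigid-body transform. The sketch below illustrates the planar case only, under the assumption that the acquisition point's position (tx, ty) and heading phi in the target global coordinate system have already been derived from the odometer, inertial measurement unit and geomagnetic sensor readings; it is not the patent's implementation.

```python
import math

def local_to_global_2d(points_local, tx, ty, phi):
    """Rigidly transform local (x, y) points into the reference point X frame.

    (tx, ty) is the acquisition point's position and phi its heading,
    both expressed in the target global coordinate system.
    """
    cos_p, sin_p = math.cos(phi), math.sin(phi)
    return [(tx + cos_p * x - sin_p * y,
             ty + sin_p * x + cos_p * y) for x, y in points_local]
```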
By analogy, when the detection is stopped, the initial point cloud data under the 2D environment or the initial point cloud data under the 3D environment can be obtained. Then, data processing can be performed based on the point cloud data set with the reference point X as a coordinate system, and a 2D or 3D map is obtained through reconstruction.
First, how to obtain a 2D environment map in the case of obtaining initial point cloud data in a 2D environment will be explained below.
Referring to fig. 8, fig. 8 is a flowchart illustrating one of sub-steps included in step S200 in fig. 7. The set of centroid information may include coordinates and weights for each centroid. In this embodiment, step S200 may include substeps S230 and step S240.
And a substep S230, performing grid division on the measuring points corresponding to the initial point cloud data by taking the target origin of the target global coordinate system as an origin and taking the first preset length as a side length.
And a substep S240, for each grid, obtaining coordinates of the centroid of the grid according to the coordinates of each measurement point in the grid, and obtaining a weight corresponding to the centroid of the grid according to the number of measurement points in the grid.
In this embodiment, the target origin of the target global coordinate system may be an origin, and a first preset length may be a side length, which are divided into a plurality of two-dimensional grids, and a point cloud included in each grid is determined. Then, aiming at each grid, taking the point cloud data in the grid as a subset to obtain two-dimensional coordinates of the centroid of the grid; and determining the weight corresponding to the centroid of the grid according to the number of the measuring points in the grid. After each grid is processed, a set of centroid parameters can be obtained, which is a 2D weighted point cloud (xn, yn, wn), and the set of centroid parameters can be used as a first set of centroid information corresponding to the 2D environment. And (xn, yn) is the coordinate of the first centroid under the target global coordinate system, wn is the weight corresponding to the first centroid, and the first centroid is the centroid of the two-dimensional grid. The first preset length may be set in combination with actual requirements.
Optionally, as a possible implementation manner, the first preset length is Lef·Δs%, where Lef is the effective measurement range set when measuring the distance and Δs% is the measurement error of the sensor used when measuring the distance, for example the measurement error of the single-point ranging sensor. When the grids are divided based on the first preset length and the measurement points in each grid are determined, for each grid, the average of the x values of the coordinates of the measurement points in the grid is used as the x value of the centroid coordinate, the average of the y values is used as the y value of the centroid coordinate, and the number of measurement points in the grid is used as the weight corresponding to the centroid of the grid.
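A minimal Python sketch of this grid filtering step is given below. It assumes an axis-aligned grid anchored at the target origin with cell side length Lef·Δs%, and the function and variable names are illustrative rather than taken from the patent.

```python
from collections import defaultdict

def grid_centroids_2d(points, cell):
    """Group 2D points into square cells of side `cell` anchored at the origin and
    return one weighted centroid (x, y, w) per non-empty cell, where w is the
    number of measurement points falling in that cell."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell), int(y // cell))].append((x, y))
    centroids = []
    for pts in cells.values():
        n = len(pts)
        cx = sum(p[0] for p in pts) / n  # mean of x values in the cell
        cy = sum(p[1] for p in pts) / n  # mean of y values in the cell
        centroids.append((cx, cy, n))
    return centroids

# Example: Lef = 8.0 m and a 2 % error give a cell side of 0.16 m
first_centroids = grid_centroids_2d([(0.01, 0.02), (0.03, 0.05), (1.0, 1.0)], 8.0 * 0.02)
```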
In order to reduce interference, before the centroids are determined, discrete points in the point cloud data may be removed to obtain target point cloud data, and the data grid filtering shown in fig. 8 is then performed on the target point cloud data to obtain the centroid information set. Referring to fig. 9, fig. 9 is a second flowchart illustrating the sub-steps included in step S200 in fig. 7. In this embodiment, before substep S230, step S200 may further include substep S211 and substep S212.
And a substep S211, obtaining, for each measurement point in the initial point cloud data, the number of measurement points in a circle with the measurement point as a center and a second preset length as a radius.
And a substep S212, determining whether to eliminate the measuring points according to the number of the measuring points in the circle corresponding to the measuring points so as to obtain first target point cloud data.
In this embodiment, when the 2D initial point cloud data is obtained, the number of measurement points in a circle with the measurement point as a center and a second preset length as a radius may be obtained for each measurement point in the initial point cloud data. The second preset length may be set in combination with an actual requirement, for example, may be the same as the first preset length, and is set as Lef × Δ s%.
Then, the number of measurement points may be compared with a preset number; if the number is smaller than the preset number, the measurement point is deleted, otherwise the measurement point may be retained. The preset number may be set according to actual requirements and is at least greater than 1. In this way, radius filtering can be performed with the measuring point as the circle center and the second preset length as the radius, and outliers are eliminated to obtain the 2D first target point cloud data. Then, the data grid filtering shown in substep S230 to substep S240 may be performed on the first target point cloud data; that is, in a 2D environment, the objects targeted by the grid division are the measurement points corresponding to the first target point cloud data, so as to obtain the first centroid information set corresponding to the 2D environment.
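The radius filtering of these substeps can be sketched as follows; this is a brute-force O(n²) illustration that assumes the neighbour count includes the query point itself, and the names are not from the patent.

```python
def radius_filter_2d(points, radius, min_count):
    """Keep a measurement point only if at least `min_count` points (including
    itself) lie within `radius` of it; otherwise treat it as an outlier."""
    kept = []
    r2 = radius * radius
    for px, py in points:
        count = sum(1 for qx, qy in points if (px - qx) ** 2 + (py - qy) ** 2 <= r2)
        if count >= min_count:
            kept.append((px, py))
    return kept
```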
Referring to fig. 10, fig. 10 is a flowchart illustrating one of sub-steps included in step S300 in fig. 7. In this embodiment, in the case where the environment map is a 2D map, step S300 may include sub-steps S310 to S340.
And a substep S310, taking the first centroid closest to the target origin of the target global coordinate system in the first centroid information set as a starting point, determining a next first centroid closest to the starting point, taking the starting point and the next first centroid as a first centroid pair, updating the next first centroid as the starting point, and repeating the step of determining the first centroid pair until the first centroid pair corresponding to the last first centroid is determined.
And a substep S320, for each first centroid pair, obtaining a first equilibrium point of the first centroid pair according to the coordinates and the weight of each first centroid in the first centroid pair.
As a possible implementation, the first equalization point may be obtained by the following first preset formula:
Hx = Qx_s + (Qx_e - Qx_s) × [Qw_e / (Qw_s + Qw_e)];
Hy = Qy_s + (Qy_e - Qy_s) × [Qw_e / (Qw_s + Qw_e)];
wherein (Hx, Hy) represents the coordinates of the first equilibrium point; (Qx_s, Qy_s) represents the coordinates of the first centroid serving as the starting point in the first centroid pair, and Qw_s represents the weight of that centroid; (Qx_e, Qy_e) represents the coordinates of the first centroid that is the next first centroid in the first centroid pair, and Qw_e represents the weight of that centroid.
In the above process, the first centroid Q (0) closest to the target origin may be used as a starting point, the next first centroid Q (1) closest to Q (0) is found, and the first equilibrium point H (x 0, y 0) is obtained according to the relevant parameters of Q (0) and Q (1). Wherein:
Hx0=Qx0+(Qx1-Qx0)*[Qw1/(Qw0+Qw1)];
Hy0=Qy0+(Qy1-Qy0)*[Qw1/(Qw0+Qw1)];
the coordinates of the first centroid Q (0) are (Qx 0, qy 0), the corresponding weight is Qw0, the coordinates of the next first centroid Q (1) are (Qx 1, qy 1), and the corresponding weight is Qw1.
By analogy, the first centroid information set is traversed to obtain a first equilibrium point set H1. The first equilibrium point set H1 includes the two-dimensional coordinates of the first equilibrium point of each first centroid pair.
For example, assuming that there are 6 first centroids Q(0) to Q(5), Q(0) is found first, then the first centroid Q(1) closest to Q(0) is found from among the first centroids not yet assigned to a first centroid pair, and a first equilibrium point H1(0) is calculated based on the information of Q(0) and Q(1); and so on until the first equilibrium point H1(4) is calculated based on the information of Q(4) and Q(5).
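A compact Python sketch of substeps S310 to S320 is given below. It assumes the first centroids are supplied as (x, y, w) triples; the nearest-neighbour chaining and the weighted equilibrium point follow the first preset formula above, while the function name and data layout are illustrative.

```python
import math

def equilibrium_points_2d(centroids):
    """Chain first centroids by nearest-neighbour order, starting from the one
    closest to the target origin, and compute one weighted equilibrium point per
    consecutive pair. `centroids` is a list of (x, y, w) triples."""
    remaining = list(centroids)
    start = min(remaining, key=lambda c: math.hypot(c[0], c[1]))  # closest to origin
    remaining.remove(start)
    points = []
    while remaining:
        nxt = min(remaining, key=lambda c: math.hypot(c[0] - start[0], c[1] - start[1]))
        remaining.remove(nxt)
        sx, sy, sw = start
        ex, ey, ew = nxt
        ratio = ew / (sw + ew)
        points.append((sx + (ex - sx) * ratio, sy + (ey - sy) * ratio))
        start = nxt
    return points
```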
And a substep S330, dividing the first set of equalization points into a plurality of first subsets of equalization points based on the normal turning points corresponding to the first set of equalization points.
In this embodiment, the first set of equalization points may be analyzed to obtain turning points of the first set of equalization points connecting lines, which are used as normal turning points, and then the first set of equalization points is divided into a plurality of first subsets of equalization points based on the normal turning points. The normal turning point is a first equalization point in the first equalization point set.
Referring to fig. 11, fig. 11 is a flowchart illustrating the sub-steps included in sub-step S330 in fig. 10. In this embodiment, the first equalization points in the first equalization point set H1 are sorted according to the obtaining order of the corresponding first centroid pair, and step S330 may include sub-step S331 to sub-step S333.
And a substep S331, sequentially connecting adjacent first equalization points in the first equalization point set to obtain a first normal set.
In this embodiment, the first equilibrium points in the first equilibrium point set H1 are sorted according to the obtaining order of the corresponding first centroid pair, two adjacent points in the first equilibrium point set H1 may be connected by a straight line to form a 2D weighted point cloud normal set N1 (0, 1, 2 … … N), and the 2D weighted point cloud normal set N1 is used as the first normal set. It is worth noting that, due to the simplified calculation, the first normals in the first set of normals are actually approximate normals.
The first normal line is a line determined by connecting two adjacent equilibrium points, that is, the first normal line in the first normal line set is a straight line connected by the adjacent first equilibrium points. The first normals in the first normal set are sorted according to the connection order of the first equilibrium points. For example, there are first equalization points H1 (0) to H1 (4), and by connecting two adjacent points with a straight line, 4 lines can be obtained.
And a substep S332, traversing the first normal set; among 4 sequentially adjacent first normals, when the included angle between the first and the fourth first normal is less than or equal to a first preset value and the included angle between the second and the third first normal is less than or equal to a second preset value, the first equilibrium point shared by the second and the third first normal is taken as a normal turning point.
The first set of normals is traversed and a normal turning point is determined when a second predetermined formula is satisfied. Wherein the second predetermined formula is:
∠A≥K[N1(n+1),N1(n+2)];
∠B≥K[N1(n),N1(n+3)];
wherein K[·,·] denotes the included angle between two straight lines, ∠B represents the first preset value, and ∠A represents the second preset value.
In the case where the above second preset formula is satisfied, the first equilibrium point H1(n+2) between the normal segment N1(n+1) and the normal segment N1(n+2) may be determined as a normal turning point.
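Substeps S331 and S332 can be sketched as follows. The turning-point condition follows the second preset formula above, while the angle function (segment orientation ignored, angles from 0 to 90 degrees) and the parameter names angle_a and angle_b are assumptions made for illustration.

```python
import math

def segment_angle(seg1, seg2):
    """Included angle in degrees (0 to 90) between the directions of two
    segments, each given as ((x1, y1), (x2, y2))."""
    (a1, b1), (a2, b2) = seg1
    (c1, d1), (c2, d2) = seg2
    u = (a2 - a1, b2 - b1)
    v = (c2 - c1, d2 - d1)
    cos_t = abs(u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def find_turning_points(eq_points, angle_a, angle_b):
    """Return indices of first equilibrium points taken as normal turning points.

    The first normals are the segments joining consecutive equilibrium points.
    For each window of 4 consecutive normals N1(n)..N1(n+3), the point H1(n+2)
    shared by N1(n+1) and N1(n+2) is a turning point when
        angle(N1(n+1), N1(n+2)) <= angle_a  and  angle(N1(n), N1(n+3)) <= angle_b.
    """
    normals = [(eq_points[i], eq_points[i + 1]) for i in range(len(eq_points) - 1)]
    turning = []
    for n in range(len(normals) - 3):
        if (segment_angle(normals[n + 1], normals[n + 2]) <= angle_a and
                segment_angle(normals[n], normals[n + 3]) <= angle_b):
            turning.append(n + 2)
    return turning
```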
And a substep S333 dividing the first set of equalization points into a plurality of first subsets of equalization points based on the obtained normal turning points.
Based on the determined normal turning points, the first equalization point set may be divided into a plurality of subsets to obtain a plurality of first equalization point subsets H1[m]. The first point of each subsequent first equalization point subset is the last point of the preceding subset, and that shared first equalization point is the normal turning point.
In substep S340, for each first subset of equalization points, a boundary line corresponding to the first subset of equalization points is determined.
In the case of obtaining the first subset of equalization points, a boundary line may be calculated from each of the first subset of equalization points for each first subset of equalization points. Wherein the sum of the squares of the vertical distances to the borderline for all points in the first subset of equilibrium points is minimal.
That is, a boundary line L[0] is obtained from the first equalization point subset H1[0] such that the sum of the squares of the vertical distances to the straight line L[0] is minimized over all the first equalization points in H1[0]. In the same way, L[1], L[2], …, L[m] are obtained in turn.
It should be noted that the boundary line expressions L[0], L[1], …, L[m] are straight lines, not line segments. Corresponding line segments can be determined from the boundary lines, and the map boundary positions can be determined using the start and end coordinates of the first equilibrium point set H1 (i.e., the first equilibrium point corresponding to the starting reference point X and the first equilibrium point corresponding to the end reference point Y of the environment sensing), so as to obtain the 2D map. The boundary lines corresponding to adjacent first equalization point subsets intersect, and these intersections determine the line segments. For example, with boundary lines L[0], L[1], L[2], etc. obtained as described above, L[0] intersects L[1] and L[1] intersects L[2]; a line segment can then be determined on L[1] from its two intersection points, and so on. For the segment cut from the first boundary line L[0], the end point of the segment is the intersection of L[0] and L[1], and the start point is the orthogonal projection onto L[0] of the initial point of the first equalization point subset H1[0] corresponding to L[0]. Similarly, for the segment cut from the last boundary line L[m], the start point of the segment is the intersection of L[m-1] and L[m], and the end point is the orthogonal projection onto L[m] of the end point of the first equalization point subset H1[m] corresponding to L[m]. In this way, a 2D map can be obtained based on the line segments cut from the boundary lines.
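For the line fitting of substep S340, the sketch below fits a straight line that minimises the sum of squared perpendicular distances (orthogonal least squares) using the closed-form principal direction of the 2x2 covariance matrix; representing the line by a point and a unit direction vector is an illustrative choice, not the patent's.

```python
import math

def fit_boundary_line(points):
    """Fit a line to 2D points minimising the sum of squared perpendicular
    distances. Returns a point on the line (the mean) and a unit direction."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    # orientation of the major axis of the covariance matrix
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (mx, my), (math.cos(theta), math.sin(theta))
```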
In this embodiment, when a 2D map needs to be created, discrete points are eliminated from the initial point cloud data and grid data filtering is then performed, from which the first equilibrium point set is obtained; a first normal set is then obtained based on the first equilibrium point set; the first equilibrium points serving as normal turning points are determined, the first equilibrium point set is divided at those turning points to obtain the first equalization point subsets, and the boundary reference lines of the 2D map are drawn based on these subsets to obtain the 2D map. Thus, the amount of computation in the 2D map creation process can be reduced and the speed increased.
How to obtain a 3D environment map in the case of obtaining initial point cloud data in a 3D environment will be explained below.
First, a second centroid information set is obtained based on 3D initial point cloud data.
Referring to fig. 12, fig. 12 is a third schematic flowchart illustrating sub-steps included in step S200 in fig. 7. The second set of centroids is obtained in a similar manner to the first set of centroid information, and in this embodiment, step S200 may include sub-step S230 and sub-step S240.
And a substep S230, performing grid division on the measuring points corresponding to the initial point cloud data by taking the target origin of the target global coordinate system as an origin and taking the first preset length as a side length.
And a substep S240, for each grid, obtaining coordinates of the centroid of the grid according to the coordinates of each measurement point in the grid, and obtaining a weight corresponding to the centroid of the grid according to the number of measurement points in the grid.
In this embodiment, the target origin of the target global coordinate system may be an origin, and a first preset length may be a side length, which are divided into a plurality of three-dimensional grids, and a point cloud included in each grid is determined. Then, aiming at each grid, taking the point cloud data in the grid as a subset to obtain the three-dimensional coordinates of the centroid of the grid; and determining the weight corresponding to the centroid of the grid according to the number of the measuring points in the grid. In this manner, a second set of centroid information may be obtained. The centroid of each three-dimensional grid is the second centroid, and the second centroid information set comprises the three-dimensional coordinates and the weight of the second centroid.
Optionally, the 3D space of the dataset may be divided into cubic grid cells with the target origin as the origin and Lef·Δs% as the side length. The point cloud data in each grid cube is taken as a subset to compute a centroid, and the centroid is given a weight according to the number of measuring points in the grid cube. After each grid cube is processed, a centroid parameter set is obtained; the centroid parameter set is a 3D weighted point cloud Q(xn, yn, zn, wn), and it can be used as the second centroid information set corresponding to the 3D environment. Here (xn, yn, zn) is the coordinate of the second centroid in the target global coordinate system, and wn is the weight corresponding to the second centroid.
In order to reduce interference, before the second centroids are determined, discrete points in the point cloud data may be removed to obtain second target point cloud data, and the data grid filtering described in substep S230 and substep S240 is then performed on the second target point cloud data to obtain the second centroid information set. Referring to fig. 12 again, in the present embodiment, before substep S230, step S200 may further include substep S213 and substep S223.
And a substep S213 of obtaining, for each measurement point in the initial point cloud data, the number of measurement points in the sphere with the measurement point as the center of the sphere and the second preset length as the radius.
And a substep S223 of determining whether to eliminate the measuring points according to the number of the measuring points in the sphere corresponding to the measuring points so as to obtain second target point cloud data.
In this embodiment, when the 3D initial point cloud data is obtained, the number of measurement points in the sphere with the measurement point as the center and the second preset length as the radius may be obtained for each measurement point in the initial point cloud data. The second preset length may be set to Lef Δ s%.
The measurement points may be deleted in the case that the number of measurement points in the sphere is less than a preset number. Otherwise, the measurement point may be saved. The preset number can be set by combining with actual requirements, and the preset number is at least greater than 1. Therefore, radius filtering can be carried out by taking the measuring point as the sphere center and the second preset length as the radius, and outliers are eliminated so as to obtain the 3D second target point cloud data. Then, the second target point cloud data may be subjected to data grid filtering as shown in substep S230 to substep S240, that is, in a 3D environment, the object targeted by grid division is the measurement point corresponding to the second target point cloud data, so as to obtain a second centroid information set corresponding to the 3D environment.
Referring to fig. 13, fig. 13 is a second flowchart illustrating the sub-steps included in step S300 in fig. 7. In this embodiment, in the case where the environment map is a 3D map, step S300 may include sub-steps S350 to S390.
And a substep S350, taking the second centroid closest to the target origin of the target global coordinate system in the second centroid information set as a starting point, determining the next second centroid closest to the starting point, taking the starting point and the next second centroid as a second centroid pair, and updating the next second centroid as the starting point to repeatedly execute the step of determining a second centroid pair until the second centroid pair corresponding to the last second centroid is determined.
And a substep S360, aiming at each second centroid pair, obtaining a second equilibrium point of the second centroid pair according to the coordinates and the weight of each second centroid in the second centroid pair.
The detailed description of substep S350 to substep S360 can refer to the description of substep S310 to substep S320 above.
The process of finding the second equalization point may be as follows. And taking a second centroid Q (0) closest to the target origin as a starting point, searching a next second centroid Q (1) closest to Q (0), and calculating a second equilibrium point H (x 0, y0, z 0) by using Q (0) and Q (1) related parameters. Wherein:
Hx0=Qx0+(Qx1-Qx0)*[Qw1/(Qw0+Qw1)];
Hy0=Qy0+(Qy1-Qy0)*[Qw1/(Qw0+Qw1)];
Hz0=Qz0+(Qz1-Qz0)*[Qw1/(Qw0+Qw1)];
the coordinates of the second centroid Q (0) are (Qx 0, qy0, qz 0), the corresponding weight is Qw0, the coordinates of the next second centroid Q (1) are (Qx 1, qy1, qz 1), and the corresponding weight is Qw1.
And by analogy, traversing the second centroid information set to obtain a second equilibrium point set H2. The second set of equilibrium points H2 includes the three-dimensional coordinates of the second equilibrium points of each second centroid pair.
And a substep S370, obtaining a plurality of reference planes from the second set of equalized points based on a manner that any nearest neighboring three points in the second set of equalized points form a plane.
And forming a plane by using any nearest adjacent three points in the second equilibrium point set H2, and traversing the second equilibrium point set H2 to obtain a reference plane set P consisting of a plurality of reference planes. Wherein each reference plane is a plane determined by the three second equalization points in the above manner.
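A plane through three second equilibrium points can be represented, for instance, by one of the points together with the (unnormalised) normal given by the cross product of two edge vectors, as in the following illustrative sketch.

```python
def plane_from_points(p0, p1, p2):
    """Return (normal, point) for the plane through three 3D points; `normal`
    is the cross product of the two edge vectors and is not normalised."""
    u = (p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2])
    v = (p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2])
    normal = (u[1] * v[2] - u[2] * v[1],
              u[2] * v[0] - u[0] * v[2],
              u[0] * v[1] - u[1] * v[0])
    return normal, p0
```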
And a substep S380 of dividing the reference plane set into a plurality of reference plane subsets based on the included angles between the reference planes.
In this embodiment, the planes with small included angles can be classified into one class based on the size of the included angle between the reference planes, so that a plurality of reference plane subsets are obtained based on the reference plane set.
Referring to fig. 14, fig. 14 is a flowchart illustrating sub-steps included in sub-step S380 in fig. 13. In the present embodiment, the substep S380 may include substeps S381 to substep S389.
And a substep S381 of calculating second normals of the respective reference planes to obtain a second set of normals.
A second normal may be computed for each reference plane in the reference plane set, resulting in a second normal set. The second normal set includes the second normal of each reference plane in the reference plane set.
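As an illustrative sketch, the second normal of a reference plane spanned by three points can be obtained from a cross product; the function name and the reuse of the planes list from the previous sketch are assumptions.

```python
import numpy as np

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three non-collinear 3D points."""
    n = np.cross(np.asarray(p1, float) - np.asarray(p0, float),
                 np.asarray(p2, float) - np.asarray(p0, float))
    return n / np.linalg.norm(n)

# Second normal set: one unit normal per reference plane (triangle), e.g.
# second_normals = [plane_normal(*tri) for tri in planes]
```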
Sub-step S382, selecting, from the second normals not yet assigned to any second normal subset, one second normal as an element of the current second normal subset.
When a new reference plane subset needs to be obtained, an empty subset may be established first. Then, from the second normals that have not yet been assigned to any second normal subset, one second normal is selected and added to the subset to obtain the current second normal subset. It should be understood that when the division of the reference planes is performed for the first time, all second normals are unassigned.
Substep S383, calculating an average normal vector angle from the elements in the current second normal subset, and determining a current reference plane subset corresponding to the current second normal subset.
Then, the average of the vector angles of all the second normals included in the current second normal subset may be calculated as the average normal vector angle. At the same time, since the current second normal subset is determined, the current reference plane subset corresponding to it may be determined; the current reference plane subset includes the reference plane corresponding to each second normal in the current second normal subset.
In sub-step S384, a neighboring plane is found from the reference planes not yet assigned to any reference plane subset, based on the second equilibrium points included in the elements of the current reference plane subset.
Once the current reference plane subset is determined, the second equilibrium points included in its elements can be determined, and a reference plane containing one of these second equilibrium points is selected as a neighboring plane from among the reference planes not yet allocated to any reference plane subset.
And a substep S385, calculating a vector angle of the second normal of the adjacent plane, and calculating an absolute included angle between the average normal vector angle and the vector angle corresponding to the adjacent plane.
When a neighboring plane is found, the vector angle of its second normal can be calculated. Then, the absolute included angle between the average normal vector angle and the vector angle corresponding to the neighboring plane is calculated.
And a substep S386, judging whether the absolute included angle is larger than or equal to a third preset value.
The third preset value can be set according to actual requirements. If the absolute included angle is less than the third preset value, sub-steps S387 to S388 may be performed; if it is greater than or equal to the third preset value, sub-step S3810 may be performed.
Substep S387 adds the neighboring plane to the current reference plane subset and adds the second normal of the neighboring plane to the current second normal subset.
In the substep S388, it is determined whether the search for neighboring planes is complete.
In this way, one update of the current reference plane subset and the current second normal subset is completed based on the found neighboring plane. After an update is completed, it can be determined whether the search for neighboring planes of the current reference plane subset is complete. If it is complete, that is, none of the unassigned reference planes is a neighboring plane of the current reference plane subset, the current reference plane subset may be taken as a reference plane subset whose division is finished; in other words, the classification of the current reference plane subset is complete. In this manner, one reference plane subset is obtained.
If the search is not complete, the process jumps back to sub-step S383 to obtain a new average normal vector angle, so as to continue adding elements to the current reference plane subset.
When there is no longer any reference plane that has not been allocated to a reference plane subset, all reference planes are considered to have been searched; at this point no neighboring plane exists, and the reference plane subsets obtained so far are the result of completing the division of the reference plane set.
And substep S3810, not adding the neighboring plane to the current reference plane subset.
When the absolute included angle is greater than or equal to the third preset value, the neighboring plane is not added to the current reference plane subset, and its second normal is not added to the current second normal subset. Then, sub-step S388 is performed.
If there are still reference planes that have not been assigned to any reference plane subset, execution may be repeated from sub-step S382; by repeating the step of obtaining a reference plane subset, a plurality of reference plane subsets are obtained, and a plurality of second normal subsets are obtained at the same time.
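Sub-steps S382 to S3810 amount to a region-growing classification. The following is a simplified sketch only: it averages normal vectors instead of vector angles, treats two reference planes sharing a vertex as neighbors, and uses an angle threshold max_angle_deg standing in for the third preset value; all of these are assumptions of the sketch.

```python
import numpy as np

def split_by_normal_angle(planes, normals, max_angle_deg):
    """Greedy region growing: start a subset from an unassigned reference
    plane and keep absorbing neighboring planes (planes sharing a second
    equilibrium point) whose normal deviates from the subset's average
    normal by less than `max_angle_deg`."""
    unassigned = set(range(len(planes)))
    subsets = []

    def shares_point(a, b):
        # Two reference planes are neighbors if they share a vertex.
        return any(np.allclose(p, q) for p in planes[a] for q in planes[b])

    while unassigned:
        seed = unassigned.pop()
        subset, subset_normals = [seed], [normals[seed]]
        grown = True
        while grown:
            grown = False
            avg = np.mean(subset_normals, axis=0)
            avg /= np.linalg.norm(avg)
            for j in list(unassigned):
                if not any(shares_point(i, j) for i in subset):
                    continue
                # abs() ignores the sign ambiguity of triangle normals.
                angle = np.degrees(np.arccos(
                    np.clip(abs(np.dot(avg, normals[j])), -1.0, 1.0)))
                if angle < max_angle_deg:
                    subset.append(j)
                    subset_normals.append(normals[j])
                    unassigned.remove(j)
                    grown = True
        subsets.append(subset)       # indices of one reference plane subset
    return subsets
```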
In sub-step S390, for each reference plane subset, a boundary plane corresponding to the reference plane subset is determined.
As a possible implementation, a second normal subset corresponding to each reference plane subset may be obtained. The second normal subset includes the second normals of the reference planes in the corresponding reference plane subset. For each second normal subset, a target normal is obtained from the second normal subset as its representative, and a boundary plane corresponding to the reference plane subset corresponding to that second normal subset is obtained based on the target normal, wherein the sum of squared distances from all second equilibrium points in the reference plane subset to the boundary plane is minimized.
Alternatively, for a second normal subset M(m), the target normal Mav(m) may be obtained by averaging all elements of the second normal subset. Next, a plane O(m) may be determined based on the target normal Mav(m), such that the sum of squared distances from the second equilibrium points included in the reference planes corresponding to the second normal subset M(m) to the plane O(m) is minimized. In this way, the coordinate information of the O(m) plane is obtained. O(m) is one boundary plane of the 3D map, and the coordinate information of the O(m) plane may be the three-dimensional linear equation of the O(m) plane.
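A hedged sketch of this fit follows: for a fixed unit normal n, the plane n·x = d that minimizes the sum of squared point-to-plane distances has d equal to the mean projection of the points onto n. The function and parameter names are illustrative assumptions.

```python
import numpy as np

def boundary_plane(subset_normals, subset_points):
    """Average the second normals of one subset to get the target normal
    Mav(m), then choose the offset d of the plane n.x = d so that the sum
    of squared distances from the second equilibrium points to the plane
    is minimal (d is the mean projection onto n)."""
    n = np.mean(np.asarray(subset_normals, dtype=float), axis=0)
    n /= np.linalg.norm(n)
    pts = np.asarray(subset_points, dtype=float)
    d = float(np.mean(pts @ n))   # least-squares offset for a fixed normal
    return n, d                   # plane O(m): n[0]*x + n[1]*y + n[2]*z = d
```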
The 3D map boundary planes O(m) are determined in turn based on all the second normal subsets M(m). Then the intersections of the remaining boundary planes can be confirmed, and the open area of the 3D map can be confirmed by the start and end boundary points (the second equilibrium point corresponding to the start reference point X and the second equilibrium point corresponding to the end reference point Y during environment sensing), so as to obtain the 3D map.
When there are a plurality of second normal subsets, a plurality of planes can be obtained from them; for example, plane O(1) may be determined based on second normal subset M(1) and plane O(2) based on second normal subset M(2). Since the angle between the target normals of adjacent subsets exceeds the third preset value, two nearby planes intersect. From the plane equations of the planes O(m), the expression X(m) of the intersection line of adjacent planes can be determined. The intersection line contains the turning boundary information of the 3D map; X(m) is clipped by the planar closed area O(m)s, and the clipped line segment is the turning connection between the planes of the 3D map.
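For illustration, with two boundary planes given in the form n·x = d, the intersection line X(m) can be obtained from the cross product of the two normals and one common point; the (n, d) plane representation follows the previous sketch and is an assumption.

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of the planes n1.x = d1 and n2.x = d2.
    Returns (point, unit direction) or None if the planes are parallel."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-9:
        return None
    # Add the constraint direction.x = 0 to pick one point on the line.
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)
```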
After the O(m) plane is obtained, the boundary points of the second equilibrium point subset H2(m) are projected onto the O(m) plane to obtain a set of projection points, and the projection points are connected to form a planar closed area O(m)s, which is an effective representation of the measured 3D map. The second equilibrium points used in obtaining the O(m) plane constitute the second equilibrium point subset H2(m) corresponding to the O(m) plane; for example, the O(1) plane is determined based on all the second equilibrium points in the subset H2(1). The boundary points of H2(m) are a part of the second equilibrium points in H2(m). For example, the boundary points of H2(1) may include points shared with the subset H2(2), as well as second equilibrium points that represent a boundary rather than a turning, where H2(2) is the set of second equilibrium points used to determine the O(2) plane.
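The projection of the boundary points onto the O(m) plane can be sketched as follows, assuming the plane is given as a unit normal n and offset d as above: each point x is mapped to x − (n·x − d)·n.

```python
import numpy as np

def project_onto_plane(points, n, d):
    """Orthogonal projection of boundary points onto the plane n.x = d
    (n is assumed to be a unit normal)."""
    pts = np.asarray(points, dtype=float)
    n = np.asarray(n, dtype=float)
    dist = pts @ n - d                  # signed point-to-plane distances
    return pts - np.outer(dist, n)      # projected points, same shape

# Connecting the projected boundary points in order yields the planar
# closed area O(m)s used to clip the intersection lines X(m).
```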
In this embodiment, when a 3D map needs to be created, discrete points are first eliminated from the initial point cloud data, and grid data filtering is then performed to obtain the second equilibrium point set; a reference plane set is then determined based on the second equilibrium point set, the classification of the reference plane set is completed, and the boundary reference planes of the 3D map are drawn based on the classification result, so as to obtain the 3D map. Thus, the amount of calculation in the 3D map creation process can be reduced and the speed improved.
The map creation method provided by the embodiments of the application can be applied to scientific literacy education for primary and secondary school students, intelligent toys, and the like. For example, when applied to scientific literacy education, a structured environment model can be built from simple materials (such as foam boards, paperboards and wood boards), and a teacher can have students complete the construction of a digital map of the agreed actual environment by hand using the method, so that the students gain a deeper understanding of the scientific knowledge involved in automatic navigation. When applied to an intelligent toy, the toy can sense the environment through the method when it runs for the first time and complete the construction of a digital map; when operated again, the intelligent toy can present to the user the intelligent behavior of remembering the surrounding environment information.
In order to execute the corresponding steps in the above embodiments and their various possible manners, an implementation of the map creation apparatus 200 is given below; optionally, the map creation apparatus 200 may adopt the device structure of the electronic device 100 shown in fig. 1. Further, referring to fig. 15, fig. 15 is a block diagram of a map creation apparatus 200 according to an embodiment of the present disclosure. It should be noted that the map creation apparatus 200 provided in this embodiment has the same basic principle and technical effect as the above embodiments; for brevity, parts not mentioned in this embodiment may refer to the corresponding contents in the above embodiments. In this embodiment, the map creation apparatus 200 may include: a point cloud data obtaining module 210, a centroid calculating module 220, a boundary calculating module 230, and a processing module 240.
The point cloud data obtaining module 210 is configured to obtain initial point cloud data of a target environment. And the initial point cloud data comprises the coordinates of each measuring point in a target global coordinate system.
The centroid calculating module 220 is configured to obtain a centroid information set from the initial point cloud data. And the centroid information set comprises coordinates of centroids, and the number of centroids in the centroid information set is less than that of the measuring points in the initial point cloud data.
The boundary calculation module 230 is configured to obtain boundary information of the target environment according to the centroid information set. Wherein the boundary information includes a boundary line or a boundary plane.
The processing module 240 is configured to obtain an environment map of the target environment according to the boundary information.
Alternatively, the modules may be stored in the memory 110 shown in fig. 1 in the form of software or firmware, or may be fixed in an Operating System (OS) of the electronic device 100, and may be executed by the processor 120 in fig. 1. Meanwhile, the data, program code, and the like required to execute the above modules may be stored in the memory 110.
An embodiment of the application also provides a readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the map creation method described above.
In summary, the embodiments of the present application provide a map creation method, an apparatus, an electronic device, and a readable storage medium. Centroids are determined from the point cloud, and a boundary line or boundary plane of the environment is then determined based on the centroids to obtain the environment map; in this way, the environment map can be created with a small amount of calculation and at high speed.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The foregoing is illustrative of only alternative embodiments of the present application and is not intended to limit the present application, which may be modified or varied by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (17)

1. A map creation method, the method comprising:
obtaining initial point cloud data of a target environment, wherein the initial point cloud data comprises coordinates of each measuring point in a target global coordinate system;
obtaining a centroid information set from the initial point cloud data, wherein the centroid information set comprises coordinates of centroids, and the number of centroids in the centroid information set is less than the number of measurement points in the initial point cloud data;
obtaining boundary information of the target environment according to the centroid information set, wherein the boundary information comprises a boundary line or a boundary plane;
and obtaining an environment map of the target environment according to the boundary information.
2. The method of claim 1, wherein the origin of the target global coordinate system is a starting point at the time of environment detection, and the obtaining a centroid information set from the initial point cloud data comprises:
performing grid division on the measuring points corresponding to the initial point cloud data by taking a target origin of the target global coordinate system as an origin and taking a first preset length as a side length;
and aiming at each grid, obtaining the coordinate of the centroid of the grid according to the coordinate of each measuring point in the grid, and obtaining the weight corresponding to the centroid of the grid according to the number of the measuring points in the grid, wherein the centroid information set comprises the coordinate and the weight of each centroid.
3. The method of claim 2, wherein, in the case that the environment map is a 2D map, before the grid division, the obtaining a centroid information set from the initial point cloud data further comprises:
aiming at each measuring point in the initial point cloud data, obtaining the number of measuring points in a circle which takes the measuring point as a circle center and a second preset length as a radius;
and determining whether the measuring points are removed or not according to the number of the measuring points in the circle corresponding to the measuring points to obtain first target point cloud data, wherein the object aimed at by the grid division is the measuring points corresponding to the first target point cloud data.
4. The method of claim 2, wherein in the case that the environment map is a 2D map, the set of centroid information comprises a first set of centroid information corresponding to a 2D environment, and wherein obtaining boundary information of the target environment from the set of centroid information comprises:
taking a first centroid which is closest to a target origin of the target global coordinate system in the first centroid information set as a starting point, determining a next first centroid which is closest to the starting point, taking the starting point and the next first centroid as a first centroid pair, updating the next first centroid as the starting point, and repeatedly executing the step of determining the first centroid pair until a first centroid pair corresponding to the last first centroid is determined;
aiming at each first centroid pair, obtaining a first balance point of the first centroid pair according to the coordinate and the weight of each first centroid in the first centroid pair;
dividing a first equilibrium point set into a plurality of first equilibrium point subsets based on normal turning points corresponding to the first equilibrium point set, wherein the first equilibrium point set comprises first equilibrium points of each first centroid pair;
for each first subset of equalization points, a boundary line corresponding to the first subset of equalization points is determined.
5. The method of claim 4, wherein the first equalization point is obtained by a first predetermined formula:
Hx = Qxs + (Qxe - Qxs)*[Qwe/(Qws+Qwe)];
Hy = Qys + (Qye - Qys)*[Qwe/(Qws+Qwe)];
wherein (Hx, Hy) represents the coordinates of the first equalization point, (Qxs, Qys) represents the coordinates of the first centroid serving as the starting point in the first centroid pair, and Qws represents the weight of that first centroid; (Qxe, Qye) represents the coordinates of the first centroid that is the next first centroid in the first centroid pair, and Qwe represents its weight.
6. The method of claim 4, wherein the first equalization points in the first equalization point set are sorted in the order of acquisition of the corresponding first centroid pairs, and wherein the dividing the first equalization point set into a plurality of first equalization point subsets based on the normal turning points corresponding to the first equalization point set comprises:
sequentially connecting adjacent first equalization points in the first equalization point set to obtain a first normal set, wherein first normals in the first normal set are straight lines connected by the adjacent first equalization points, and the first normals in the first normal set are sorted according to the connection sequence of the first equalization points;
traversing the first normal set, and taking a first balance point between a second first normal and a third first normal as a normal turning point under the condition that, in 4 adjacent first normals in sequence, an included angle between the first first normal and the fourth first normal is smaller than or equal to a first preset value and an included angle between the second first normal and the third first normal is smaller than or equal to a second preset value;
and dividing the first equalization point set into a plurality of first equalization point subsets based on the obtained normal turning points, wherein the first equalization point of a subsequent first equalization point subset is the last equalization point of the previous first equalization point subset, and that first equalization point is a normal turning point.
7. The method according to any of claims 4-6, wherein determining, for each first subset of equalization points, a boundary line corresponding to the first subset of equalization points comprises:
and calculating to obtain a boundary line according to each point in the first balanced point subset, wherein the sum of squares of vertical distances from all points in the first balanced point subset to the boundary line is minimum.
8. The method of claim 2, wherein, in the case that the environment map is a 3D map, the obtaining a set of centroid information from the initial point cloud data prior to the gridding, further comprises:
aiming at each measuring point in the initial point cloud data, obtaining the number of measuring points in a sphere with the measuring point as a sphere center and a second preset length as a radius;
and determining whether the measuring points are removed or not according to the number of the measuring points in the sphere corresponding to the measuring points so as to obtain second target point cloud data, wherein the object aimed at by grid division is the measuring point corresponding to the second target point cloud data.
9. The method of claim 2, wherein in the case that the environment map is a 3D map, the set of centroid information includes a second set of centroid information corresponding to the 3D environment, and wherein obtaining boundary information of the target environment from the set of centroid information comprises:
taking a second centroid which is closest to a target origin of the target global coordinate system in the second centroid information set as a starting point, determining a next second centroid which is closest to the starting point, taking the starting point and the next second centroid as a second centroid pair, updating the next second centroid as the starting point, and repeatedly executing the step of determining a second centroid pair until a second centroid pair corresponding to a last second centroid is determined;
aiming at each second centroid pair, obtaining a second balance point of the second centroid pair according to the coordinate and the weight of each second centroid in the second centroid pair;
obtaining a plurality of reference planes according to a second equilibrium point set based on a mode that any nearest adjacent three points in the second equilibrium point set form a plane, wherein the second equilibrium point set comprises a plurality of second equilibrium points;
dividing a reference plane set into a plurality of reference plane subsets based on included angles between reference planes, wherein the plurality of reference planes are included in the reference plane set;
for each reference plane subset, a boundary plane corresponding to the reference plane subset is determined.
10. The method of claim 9, wherein dividing the set of reference planes into a plurality of reference plane subsets based on angles between the reference planes comprises:
calculating a second normal of each reference plane to obtain a second normal set;
selecting a second normal from second normals which are not allocated to a second normal subset as an element in a current second normal subset, calculating to obtain an average normal vector angle according to the element in the current second normal subset, and determining a current reference plane subset corresponding to the current second normal subset;
finding out a neighboring plane from the reference planes of the unassigned reference plane subset according to a second equalization point included in an element of the current reference plane subset;
calculating to obtain a vector angle of a second normal of the adjacent plane, and calculating to obtain an absolute included angle between the average normal vector angle and a vector angle corresponding to the adjacent plane;
judging whether the absolute included angle is larger than or equal to a third preset value or not;
if the absolute included angle is smaller than the third preset value, adding the adjacent plane into the current reference plane subset, adding the second normal of the adjacent plane into the current second normal subset, and, under the condition that the adjacent planes are not searched completely, jumping to the step of calculating the average normal vector angle until the adjacent planes are searched completely, so as to obtain a reference plane subset;
and repeating the step of obtaining the reference plane subsets to obtain a plurality of reference plane subsets.
11. The method of claim 10, wherein the dividing the set of reference planes into a plurality of reference plane subsets based on angles between the reference planes further comprises:
if the value is greater than or equal to the third preset value, the adjacent plane is not added into the current reference plane subset, and whether the adjacent plane is searched completely is judged;
if the adjacent plane searching is finished, taking the current reference plane subset as a reference plane subset;
and if the adjacent planes are not searched completely, skipping to the step of calculating the average normal vector angle until the adjacent planes are searched completely to obtain a reference plane subset.
12. The method according to any one of claims 9-11, wherein determining, for each subset of reference planes, the boundary plane corresponding to the subset of reference planes comprises:
obtaining a second normal subset corresponding to each reference plane subset, wherein the second normal subset comprises second normals of each reference plane in the corresponding reference plane subset;
and aiming at each second normal subset, obtaining a target normal according to the second normal subset, and obtaining a boundary plane corresponding to the reference plane subset corresponding to the second normal subset based on the target normal, wherein the sum of squared distances from all second equilibrium points in the reference plane subset to the boundary plane is minimum.
13. The method of claim 1, wherein the obtaining initial point cloud data of a target environment comprises:
and obtaining the initial point cloud data by rotating a single-point ranging unit, wherein the single-point ranging unit is used for obtaining the information of one measuring point through one-time measurement.
14. A map creation apparatus, characterized in that the apparatus comprises:
the system comprises a point cloud data acquisition module, a target global coordinate system acquisition module and a data processing module, wherein the point cloud data acquisition module is used for acquiring initial point cloud data of a target environment, and the initial point cloud data comprises coordinates of each measuring point in the target global coordinate system;
the centroid calculation module is used for obtaining a centroid information set from the initial point cloud data, wherein the centroid information set comprises coordinates of each centroid, and the number of the centroids in the centroid information set is smaller than that of the measuring points in the initial point cloud data;
a boundary calculation module, configured to obtain boundary information of the target environment according to the centroid information set, where the boundary information includes a boundary line or a boundary plane;
and the processing module is used for obtaining an environment map of the target environment according to the boundary information.
15. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the map creation method of any one of claims 1-13.
16. The electronic device according to claim 15, further comprising a mobile unit, a single-point ranging unit, a rotating unit, and a pose acquisition unit,
the mobile unit is used for driving the electronic equipment to move;
the single-point ranging unit is used for measuring the distance between the single-point ranging unit and a measuring point;
the rotating unit is used for driving the single-point distance measuring unit to rotate;
the pose acquisition unit is used for acquiring pose description information of the electronic equipment;
the processor is electrically connected with the mobile unit, the single-point distance measuring unit, the rotating unit and the pose acquisition unit, and is used for realizing movement through controlling the mobile unit and obtaining the initial point cloud data according to the received distance measuring information, the received rotation angle information and the received pose description information.
17. A readable storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out a map creation method according to any one of claims 1 to 13.
CN202211073910.3A 2022-09-02 2022-09-02 Map creation method and device, electronic equipment and readable storage medium Pending CN115585802A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211073910.3A CN115585802A (en) 2022-09-02 2022-09-02 Map creation method and device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN115585802A (en) 2023-01-10

Family

ID=84771520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211073910.3A Pending CN115585802A (en) 2022-09-02 2022-09-02 Map creation method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115585802A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058358A (en) * 2023-10-12 2023-11-14 之江实验室 Scene boundary detection method and mobile platform
CN117058358B (en) * 2023-10-12 2024-03-22 之江实验室 Scene boundary detection method and mobile platform


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination