CN115164897A - Method and device for determining point cloud data, related equipment and storage medium thereof - Google Patents


Info

Publication number
CN115164897A
Authority
CN
China
Prior art keywords
target
point cloud
cloud data
data
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210752720.8A
Other languages
Chinese (zh)
Inventor
高鸣岐
成慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202210752720.8A
Publication of CN115164897A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Abstract

The application discloses a method and an apparatus for determining point cloud data, and a related device and storage medium, wherein the method comprises the following steps: acquiring initial point cloud data belonging to a target object in a target space region, wherein the initial point cloud data is determined based on acquired data acquired by an acquisition component for the target space region and comprises a plurality of target object points belonging to the target object; determining the confidence of each target object point based on the spatial distribution condition of each target object point and/or the data condition of each target object point in the acquired data; and removing target object points from the initial point cloud data based on the confidence of each target object point to obtain target point cloud data belonging to the target object. By means of the method, the credibility of the point cloud data of the target object can be improved.

Description

Method and device for determining point cloud data, related equipment and storage medium thereof
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for determining point cloud data, and a related device and a storage medium thereof.
Background
In the field of robotics, building a map of an area to be processed has always been an important task for a robot. The main goal is to distinguish obstacles from walkable areas in the scene and generate a map, so that the robot can conveniently perform related operations, obstacle avoidance and other tasks.
At present, point cloud data of the area to be processed is mainly acquired, and a robot map of the area is constructed based on the point cloud data. Generally, the acquired point cloud data contains large-scale noise, is not smooth enough, or contains too much useless point cloud information, so the credibility of the point cloud data is low and it cannot be used directly in processes such as robot mapping and obstacle avoidance.
Disclosure of Invention
The present application at least provides a method and an apparatus for determining point cloud data, and a related device and storage medium.
A first aspect of the present application provides a method for determining point cloud data, the method including: acquiring initial point cloud data belonging to a target object in a target space region, wherein the initial point cloud data is determined based on acquired data acquired by an acquisition component for the target space region and comprises a plurality of target object points belonging to the target object; determining the confidence of each target object point based on the spatial distribution condition of each target object point and/or the data condition of each target object point in the acquired data; and removing target object points from the initial point cloud data based on the confidence of each target object point to obtain target point cloud data belonging to the target object.
Therefore, noise information or useless point cloud information can be removed from the initial point cloud data of the target object through the confidence of each target object point, so that effective point cloud information belonging to the target object, namely the target point cloud data, is extracted from the initial point cloud data. When a map of the robot about the target space region is subsequently constructed based on the target point cloud data, the target object can be accurately responded to and accurately displayed on the constructed map, so that the map divides the target object area and the walkable area more accurately. In addition, filtering out the noise information or invalid point cloud information through the confidence of each target object point improves the stability and credibility of the point cloud data belonging to the target object, reduces the point cloud data volume, and improves the efficiency of subsequently constructing the map of the robot about the target space region.
The determining of the confidence of each target object point based on the spatial distribution condition of each target object point and/or the data condition of each target object point in the acquired data includes: determining at least one attribute parameter of the target object point, wherein the at least one attribute parameter comprises at least one of a distribution attribute parameter characterizing the spatial distribution condition and a data attribute parameter characterizing the data condition; acquiring an attribute confidence corresponding to each attribute parameter of the target object point; and obtaining the confidence of the target object point based on the attribute confidence corresponding to each attribute parameter of the target object point.
Therefore, the confidence of the target point can be determined according to the confidence corresponding to the attribute parameters of the target point.
The distribution attribute parameter of a target object point is the number of target object points in the subspace region where the point is located, the subspace region being obtained by dividing the target space region according to a preset division strategy, and the distribution attribute parameter is positively correlated with the corresponding attribute confidence; and/or the data attribute parameter of a target object point is the gradient of the data point corresponding to the target object point in the acquired data, and the data attribute parameter is negatively correlated with the corresponding attribute confidence.
Therefore, the distribution attribute parameters and the data attribute parameters of the target object points can be flexibly set.
The removing of target object points from the initial point cloud data based on the confidence of each target object point to obtain target point cloud data belonging to the target object includes: removing, from the initial point cloud data, target object points whose confidence does not meet a confidence requirement to obtain the target point cloud data. The target space region comprises a plurality of subspace regions, and the confidence requirement comprises at least one of the following: the confidence of the subspace region is greater than or equal to a first confidence threshold; the projection region to which the subspace region belongs satisfies a first preset condition, where the first preset condition comprises that the number of subspace regions belonging to the projection region is greater than a first number and the confidence of the projection region is greater than or equal to a second confidence threshold, or only that the confidence of the projection region is greater than or equal to the second confidence threshold. The projection region to which a subspace region belongs is the projection of the subspace region on a preset plane; the confidence of a subspace region is obtained based on the confidence of each target object point in the subspace region, and the confidence of a projection region is obtained based on the confidences of the subspace regions belonging to the projection region.
Therefore, whether the target object points included in a subspace region are removed from the target point cloud data is determined by judging whether the confidence corresponding to the subspace region meets the confidence requirement, so that the credibility of the determined point cloud data belonging to the target object is higher, i.e., the accuracy of the point cloud data of the target object is improved; in addition, the confidence requirement can be set flexibly. Judging at the region level also accounts for the fact that the confidence of a single target object point may be coincidentally high or low.
The confidence of the subspace region is the sum of the confidences of all target object points in the subspace region, and the confidence of the projection region is the sum of the confidences of the subspace regions belonging to the projection region; and/or the plurality of subspace regions comprise at least one region group, each region group comprising at least one subspace region arranged along the vertical direction, and the preset plane is a horizontal plane.
Therefore, the manner of determining the confidence of the subspace region and the confidence of the projection region can be set flexibly.
The method for determining point cloud data further comprises: after target point cloud data corresponding to a second number of frames of acquired data are obtained, selecting target object points located at the same position in the world coordinate system across the second number of frames of target point cloud data to obtain new target point cloud data.
Therefore, the accuracy of the target point cloud data of the target object can be improved.
Before the determining of the confidence of each target object point based on the spatial distribution condition of each target object point and/or the data condition of each target object point in the acquired data, the method further comprises: performing spatial filtering processing on the initial point cloud data to obtain processed point cloud data belonging to the target object. In this case, the removing of target object points from the initial point cloud data based on the confidence of each target object point to obtain target point cloud data belonging to the target object comprises: removing target object points from the processed point cloud data based on the confidence of each target object point to obtain the target point cloud data.
Therefore, the initial point cloud data belonging to the target object is subjected to spatial filtering processing to filter noise information or invalid point cloud information in the initial point cloud data, so that the stability and the credibility of the point cloud data belonging to the target object are improved.
The acquiring of the initial point cloud data belonging to the target object in the target space region comprises: acquiring area point cloud data of the target space region determined based on the acquired data; and selecting, from the area point cloud data, data of points whose height information meets a first preset requirement as the initial point cloud data of the target object.
Therefore, the point cloud data belonging to the target object can be accurately extracted from the area point cloud data by the height information.
The first preset requirement is that the point does not belong to the ground and its height above the ground is less than the height of the robot; and/or the acquired data is a target depth image; and/or, before the acquiring of the area point cloud data of the target space region determined based on the acquired data, the method further comprises: deleting data points whose gradient is greater than a preset gradient value from the acquired data.
Therefore, starting from the height of each point in the area point cloud data, the point cloud data belonging to effective obstacles is screened out, which reduces the volume of point cloud data belonging to obstacles and improves the efficiency of subsequently constructing the map of the robot about the target region. In addition, preprocessing the acquired data filters out noise information and invalid information included in the acquired data, improving the stability and credibility of the initial point cloud data determined based on the acquired data.
After the target object points are removed from the initial point cloud data based on the confidence of each target object point to obtain the target point cloud data belonging to the target object, the method further comprises: determining position information of the target object and a walkable area in the target space region based on the target point cloud data; and constructing a map of the robot about the target space region using the position information of the target object and the walkable area.
Therefore, because the position information of the target object is determined based on the target point cloud data belonging to the target object, and the target point cloud data is obtained by removing points from the initial point cloud data based on the confidence of each target object point, the target object can be accurately responded to when the map of the robot about the target space region is constructed, so that the target object is displayed more accurately on the constructed map; in addition, the amount of calculation is reduced, and the efficiency of constructing the map of the robot about the target space region is improved.
A second aspect of the present application provides an apparatus for determining point cloud data, which comprises an acquisition module, a determination module and a removal module. The acquisition module is configured to acquire initial point cloud data belonging to a target object in a target space region, wherein the initial point cloud data is determined based on acquired data acquired by an acquisition component for the target space region and comprises a plurality of target object points belonging to the target object. The determination module is configured to determine the confidence of each target object point based on the spatial distribution condition of each target object point and/or the data condition of each target object point in the acquired data. The removal module is configured to remove target object points from the initial point cloud data based on the confidence of each target object point to obtain target point cloud data belonging to the target object, the target point cloud data being used for constructing a map of the robot about the target space region.
A third aspect of the present application provides an electronic device, which includes a memory and a processor, wherein the memory stores program instructions, and the processor is configured to execute the program instructions to implement the above-mentioned method for determining point cloud data.
A fourth aspect of the present application provides a computer-readable storage medium for storing program instructions that can be executed to implement the above-described method for determining point cloud data.
Drawings
Fig. 1 is a schematic flowchart of an embodiment of a method for determining point cloud data provided in the present application;
FIG. 2 is a schematic flowchart of an embodiment of step S11 shown in FIG. 1;
FIG. 3 is a schematic view of an embodiment of a robot and obstacle provided herein;
FIG. 4 is a flowchart illustrating an embodiment of step S12 shown in FIG. 1;
FIG. 5 is a schematic flow chart diagram illustrating another embodiment of a method for determining point cloud data provided by the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a device for determining point cloud data provided in the present application;
FIG. 7 is a schematic structural diagram of an embodiment of an electronic device provided in the present application;
FIG. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two. Additionally, the term "at least one" herein means any one of a variety or any combination of at least two of a variety, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a method for determining point cloud data according to the present application. It should be noted that the embodiments of the present application are not limited to the flow sequence shown in fig. 1 if substantially the same results are obtained. As shown in fig. 1, the present embodiment includes:
step S11: and acquiring initial point cloud data belonging to a target object in the target space area.
It should be noted that the method for determining point cloud data provided by the present application is executed by an electronic device with processing capability, for example, a robot that needs to build a map based on point cloud data (e.g., a cleaning robot or a logistics robot), or another execution device that can be communicatively connected to the robot; the method may also be implemented by a processor executing program code.
The method is used for denoising the point cloud data belonging to a target object in a target space region to extract effective point cloud data belonging to the target object, thereby improving the credibility of that point cloud data. In this way, when a map of the robot about the target space region is subsequently constructed based on the point cloud data of the target object, the target object can be accurately responded to and accurately displayed on the constructed map, and the map divides the target object area and the walkable area more accurately. The target space region described herein includes, but is not limited to, public areas (e.g., parks, squares, offices, etc.), residential areas (e.g., bedrooms, living rooms, kitchens, etc.), and the like, and is not specifically limited herein.
In an embodiment of the application, initial point cloud data belonging to a target object in a target space region is obtained, where the initial point cloud data is determined based on acquired data acquired by an acquisition component for the target space region. In one embodiment, the acquired data directly corresponds to the target object, and the initial point cloud data of the target object can be generated directly from it. In other embodiments, the acquired data corresponds to the whole target space region; in this case, area point cloud data corresponding to the target space region is first generated based on the acquired data, and the initial point cloud data belonging to the target object is then selected from the area point cloud data.
The target object may be an obstacle or other object that needs to be displayed on a map to be constructed subsequently, and is not limited in this respect.
In one embodiment, the acquisition component may be a depth camera. Since a depth camera can acquire an image of the target space region (e.g., an image covering the entire region or an image of a target object within it) and autonomously generate a corresponding depth image, the acquired data is a depth image of the target space region; point cloud data can then be determined from the depth information in the depth image. A depth camera with structured light can acquire images under dim conditions and generate corresponding depth images, i.e., it works in environments with poor lighting such as at night. Moreover, the frame rate of a depth camera is high, so the required depth images can be acquired in real time and target point cloud data belonging to the target object can be determined in real time; the constructed map of the robot about the target space region can therefore be updated in real time, allowing the robot to respond more quickly to environmental changes. In addition, the ranging precision of a depth camera is high, so the subsequently constructed map of the robot about the target space region can have a higher degree of fineness, i.e., a higher resolution. The specific type of the depth camera is not limited and can be set according to actual use requirements; for example, the depth camera may be a Structured Light depth camera, a Time of Flight (TOF) depth camera, or the like.
It is to be understood that, in other embodiments, the acquisition component may also be a binocular camera, the binocular camera acquires two images about the target space region from different positions, and the acquired data acquired by the binocular camera is the two images about the target space region at this time; further, the binocular camera transmits the two collected images about the target space area to the processing device, and the processing device calculates the position deviation between corresponding points of the two images to obtain the required three-dimensional geometric information of the points, namely point cloud data.
In other embodiments, the acquisition component may also be a laser scanner, and the laser scanner may restore various data such as a three-dimensional model, a line, a surface, and a body of the measured object by recording information such as three-dimensional coordinates, reflectivity, and texture of a large number of dense points on the surface of the measured object by using the principle of laser ranging.
In one embodiment, the acquired initial point cloud data of the target object in the target space region is point cloud data under a coordinate system of the acquisition component. It is to be understood that, in other embodiments, the obtained initial point cloud data belonging to the target object in the target space region may also be point cloud data in a robot coordinate system or point cloud data in a world coordinate system, and is not limited specifically herein.
Step S12: determining the confidence of each target object point based on the spatial distribution condition of each target object point and/or the data condition of each target object point in the acquired data.
The initial point cloud data of the target object may include noise information, making it unstable and of low credibility, so the response to the target object when the map of the robot about the target space region is subsequently constructed would not be accurate enough: the target object could not be accurately displayed on the constructed map, and the division into the target object area and the walkable area would not be accurate enough. Therefore, in the embodiment of the present application, the initial point cloud data includes a plurality of target object points belonging to the target object, and the confidence of each target object point is determined based on the spatial distribution condition of the target object points and/or the data condition of the target object points in the acquired data. The confidence of a target object point represents the credibility that the point is actually a point on the target object rather than a noise point, so whether each target object point is actually a noise point can conveniently be determined from its confidence. On the one hand, filtering the noise out of the initial point cloud data extracts effective point cloud information belonging to the target object and improves the credibility of the point cloud data, so the target object can be accurately responded to and displayed when the map is subsequently constructed, and the influence of environmental information such as dust on the accuracy of the map is reduced; on the other hand, filtering the noise reduces the point cloud data volume and improves the efficiency of subsequently constructing the map of the robot about the target space region.
In an embodiment, the confidence of a target object point may be determined according to the confidence corresponding to at least one attribute parameter of the target object point, so as to determine the confidence based on the spatial distribution condition of the point and/or the data condition of the point in the acquired data. For example, the confidence may be determined according to the confidence corresponding to the distribution attribute parameter characterizing the spatial distribution condition, i.e., from the perspective of the spatial distribution of the target object points. For another example, the confidence may be determined according to the confidence corresponding to the data attribute parameter characterizing the data condition, i.e., from the perspective of the reliability of the data acquired by the acquisition component. For another example, the confidence may be determined according to both the confidence corresponding to the distribution attribute parameter and the confidence corresponding to the data attribute parameter; determined from both perspectives, the confidence of the target object point is more reliable.
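As an illustration of this step, the following Python sketch computes per-point confidences from the two attribute parameters described above. The min-max normalization, the weighted-sum fusion, and all parameter names are illustrative assumptions of this sketch, not details prescribed by the application:

```python
import numpy as np

def point_confidences(voxel_counts, gradients, w_dist=0.5, w_grad=0.5):
    """Per-point confidence from the two attribute parameters.

    voxel_counts : (N,) number of target object points in the subspace
                   region containing each point (distribution attribute,
                   positively correlated with confidence)
    gradients    : (N,) gradient of the corresponding data point in the
                   acquired data (data attribute, negatively correlated)
    """
    # Attribute confidence from the distribution attribute parameter.
    c_dist = voxel_counts / (voxel_counts.max() + 1e-9)
    # Attribute confidence from the data attribute parameter.
    c_grad = 1.0 - gradients / (gradients.max() + 1e-9)
    # Fuse the per-attribute confidences into the point's confidence.
    return w_dist * c_dist + w_grad * c_grad
```

The equal weighting of the two attribute confidences is one possible fusion strategy; the application leaves the combination open.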
Step S13: removing target object points from the initial point cloud data based on the confidence of each target object point to obtain target point cloud data belonging to the target object.
In the embodiment of the application, target object points are removed from the initial point cloud data based on the confidence of each target object point, so that the target point cloud data belonging to the target object is obtained. Since the confidence of a target object point indicates the credibility that the point belongs to the target object, points with small confidence are most likely noise points. Removing the target object points with low confidence from the initial point cloud data removes noise information or invalid point cloud information, extracts the effective point cloud information belonging to the target object, and improves the stability and credibility of the target point cloud data, so that the target object can be accurately responded to and displayed when the map of the robot about the target space region is subsequently constructed, and the constructed map can divide the target object area and the walkable area more accurately.
In one embodiment, the target point cloud data belonging to the target object is obtained by removing, from the initial point cloud data, target object points whose confidence does not meet the confidence requirement; the confidence requirement itself is not limited. Removing the target object points that do not meet the confidence requirement filters out noise information or invalid point cloud information, extracts the effective point cloud information belonging to the target object, and improves the stability and credibility of the target point cloud data. In one embodiment, whether the confidence of a target object point meets the confidence requirement can be judged directly, so as to decide whether to remove that point from the initial point cloud data. In other embodiments, whether to remove the target object points included in a space region is decided by judging whether the confidence corresponding to that space region meets the confidence requirement, so that the determined point cloud data belonging to the target object is more credible, i.e., the accuracy of the target point cloud data is improved.
To facilitate determining the confidence of a space region, the target space region may be divided into a plurality of subspace regions. In an embodiment, the confidence requirement is that the confidence of the subspace region is greater than or equal to a first confidence threshold, where the first confidence threshold is not limited and may be set according to actual use needs. Exemplarily, the subspace region where target object points a1 and a2 are located is A; since the confidence of subspace region A is smaller than the first confidence threshold, the probability that a1 and a2 are noise points is relatively high, so a1 and a2 are removed from the initial point cloud data of the target object.
The confidence of a subspace region is derived from the confidences of the target object points in it. In one embodiment, the confidence of the subspace region is the sum of the confidences of the target object points in the region. Illustratively, the subspace region where target object points a1 and a2 are located is A, the confidence of a1 is α, and the confidence of a2 is β, so the confidence of subspace region A is α + β. It is understood that in other embodiments, the confidences of the target object points in the subspace region may be summed in a weighted manner to obtain the confidence of the subspace region; or the maximum, minimum, median or mean of the confidences of the target object points in the subspace region may be selected as the confidence of the subspace region.
In other embodiments, the confidence requirement may also be that the projection region to which the subspace region belongs satisfies a first preset condition, where the first preset condition includes that the number of subspace regions belonging to the projection region is greater than a first number and the confidence of the projection region is greater than or equal to a second confidence threshold. The projection region to which a subspace region belongs is the projection of the subspace region on a preset plane; that is, each subspace region is projected onto the preset plane, which compresses the target object points in each subspace region from three dimensions to two by discarding their height coordinates. It should be noted that, when the target object is relatively stable, the target object points belonging to it are generally continuous in space, so when the target space region is divided into a plurality of subspace regions, isolated subspace regions, or fewer subspace regions than the first number, rarely appear. Therefore, when the number of subspace regions corresponding to a projection region exceeds the first number and the confidence of the projection region is greater than or equal to the second confidence threshold, the points in those subspace regions are indeed target object points; when the number of subspace regions corresponding to the projection region is smaller than the first number, the points in those subspace regions are likely noise points and need to be removed.
The first number, the second confidence threshold and the preset plane are not limited, and may be set according to actual use requirements. For example, the first number is 1, the second confidence threshold is 15, and the preset plane is a horizontal plane. Exemplarily, the subspace region where target object points a and b are located is A; the subspace region where target object points c and d are located is B; both subspace regions A and B belong to projection region α. Since the number of subspace regions belonging to projection region α is greater than the first number but the confidence of the projection region is less than the second confidence threshold, the probability that points a and b in region A and points c and d in region B are noise points is relatively high, so points a, b, c and d are removed from the initial point cloud data of the target object.
In other embodiments, the first preset condition may also only include that the confidence of the projection region is greater than or equal to the second confidence threshold, which is not specifically limited herein.
In one embodiment, the plurality of subspace regions includes at least one group of regions, each group of regions including at least one subspace region arranged along the vertical direction. That is, the target spatial region is vertically divided into a plurality of region groups, and each region group is further divided laterally to obtain the subspace region.
The confidence of a projection region is derived from the confidences of the subspace regions belonging to it. In a specific embodiment, the confidence of the projection region is the sum of the confidences of the subspace regions belonging to the projection region. Exemplarily, the confidence of subspace region A is a, the confidence of subspace region B is b, and both belong to projection region α; therefore, the confidence of projection region α is a + b. It is understood that in other embodiments, the confidences of the subspace regions belonging to the projection region may also be weighted and summed to obtain the confidence of the projection region; or the maximum, minimum, median or mean of the confidences of the subspace regions belonging to the projection region may be selected as the confidence of the projection region.
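A minimal Python sketch of this subspace/projection-region screening is given below, reusing the example values from the text (first number 1, second confidence threshold 15); the voxel size and the dictionary-based aggregation are assumptions of the sketch:

```python
import numpy as np

def filter_by_projection_region(points, confidences, voxel_size=0.05,
                                first_number=1, second_threshold=15.0):
    """Remove target object points whose projection region fails the
    first preset condition described above."""
    # Assign each point to a subspace region (voxel); (x, y) indexes the
    # projection region on the horizontal plane, z stacks regions vertically.
    voxels = np.floor(points / voxel_size).astype(np.int64)

    # Confidence of a subspace region: sum of its points' confidences.
    region_conf = {}
    for key, c in zip(map(tuple, voxels), confidences):
        region_conf[key] = region_conf.get(key, 0.0) + c

    # Confidence of a projection region: sum over the subspace regions
    # projecting onto it; also count those subspace regions.
    proj_conf, proj_count = {}, {}
    for (x, y, _z), c in region_conf.items():
        proj_conf[(x, y)] = proj_conf.get((x, y), 0.0) + c
        proj_count[(x, y)] = proj_count.get((x, y), 0) + 1

    # Keep a point only if its projection region has more subspace regions
    # than the first number and enough accumulated confidence.
    keep = np.array([proj_count[(x, y)] > first_number
                     and proj_conf[(x, y)] >= second_threshold
                     for x, y, _z in voxels])
    return points[keep]
```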
In a possible implementation manner, after target point cloud data corresponding to a second number of frames of acquired data are obtained, the target object points located at the same position in the world coordinate system across the second number of frames of target point cloud data can be selected to obtain new target point cloud data. That is, if a target object point stays at the same position in the world coordinate system for a consecutive second number of frames, the point is considered to belong to the target object, and the data of such points is selected from the target point cloud data as the new target point cloud data of the target object, which improves the accuracy of the target point cloud data. The second number is not limited and may be set according to actual use requirements; for example, the second number is 3, 4, or 5.
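A hypothetical sketch of this multi-frame consistency check follows; comparing positions at a quantized (voxel) resolution is an assumption introduced here to make "same position" robust to sensor noise:

```python
import numpy as np

def temporally_consistent(frames, voxel_size=0.05, second_number=3):
    """Keep only the points of the latest frame whose (quantized) world
    position is occupied in all of the last `second_number` frames."""
    recent = frames[-second_number:]
    # Quantize world coordinates so "same position" tolerates sensor noise.
    keys = [set(map(tuple, np.floor(f / voxel_size).astype(np.int64)))
            for f in recent]
    stable = set.intersection(*keys)
    last = recent[-1]
    quant = np.floor(last / voxel_size).astype(np.int64)
    mask = np.array([tuple(k) in stable for k in quant])
    return last[mask]
```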
In the above embodiment, the target point cloud data belonging to the target object is selected from the initial point cloud data through the confidences of the plurality of target object points. Therefore, noise information or useless point cloud information can be removed from the initial point cloud data through the confidence of each target object point, so that the effective point cloud information belonging to the target object, namely the target point cloud data, is extracted; when a map of the robot about the target space region is subsequently constructed based on the target point cloud data, the target object can be accurately responded to, and the constructed map can accurately divide the target object area and the walkable area. In addition, filtering out the noise information or invalid point cloud information through the confidence of each target object point improves the stability and credibility of the point cloud data belonging to the target object, reduces the point cloud data volume, and improves the efficiency of subsequently constructing the map of the robot about the target space region.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an embodiment of step S11 shown in fig. 1, and it should be noted that, if substantially the same result is obtained, the embodiment is not limited to the flowchart shown in fig. 2. As shown in fig. 2, in the embodiment of the present application, obtaining initial point cloud data belonging to a target object from area point cloud data of a determined target space area specifically includes:
step S111: acquiring regional point cloud data of a target space region determined based on the acquired data.
In an embodiment of the application, area point cloud data of the target space region determined based on the acquired data is acquired. For example, the acquired data is a depth image: first, the acquisition component captures an image of the target space region and generates the corresponding depth image; then, according to the depth information in the depth image, coordinate system conversion is performed on each pixel point in the depth image to obtain the area point cloud data of the target space region corresponding to the depth image.
In an embodiment, the area point cloud data of the target space region is point cloud data in the acquisition component coordinate system. In this case, each pixel point in the depth image can be converted into the acquisition component coordinate system directly from the depth information in the depth image and the intrinsic parameters of the acquisition component, to obtain the area point cloud data of the target space region in that coordinate system. The specific formula for converting each pixel point in the depth image into area point cloud data in the acquisition component coordinate system is as follows:
$$
\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}
= d \cdot
\begin{bmatrix} \frac{1}{f_x} & 0 & -\frac{c_x}{f_x} \\ 0 & \frac{1}{f_y} & -\frac{c_y}{f_y} \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
$$

where $(x_c, y_c, z_c)$ are the coordinates of the target space region in the acquisition component coordinate system; $(u, v)$ are the coordinates of each pixel point in the depth image (with $(c_x, c_y)$ the principal point); $d$ is the depth value; and $\frac{1}{f_x}$ and $\frac{1}{f_y}$ are the inverses of the focal lengths of the acquisition component, which, in general, are the same.
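A minimal sketch of this back-projection, assuming a standard pinhole model with intrinsics $f_x, f_y, c_x, c_y$ (the example intrinsic values in the comment are placeholders):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into the acquisition component
    (camera) coordinate system, following the formula above."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # lateral coordinate in the camera frame
    y = (v - cy) * z / fy   # vertical coordinate in the camera frame
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only pixels with a valid depth

# Hypothetical intrinsics (fx == fy, as the text notes is typical):
# pts = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```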
In one embodiment, the area point cloud data of the target space region is point cloud data in the robot coordinate system. In this case, each pixel point in the depth image is first converted into the acquisition component coordinate system and then converted into the robot coordinate system, to obtain the area point cloud data of the target space region in the robot coordinate system. The specific process of converting each pixel point in the depth image into point cloud data in the acquisition component coordinate system is as shown above and is not repeated here; the specific formula for converting the point cloud data of the target space region from the acquisition component coordinate system into the robot coordinate system is as follows:
$$
P_r = R \cdot P_c + T
$$

where $P_r$ represents the coordinates of the target space region in the robot coordinate system; $P_c$ represents the coordinates of the target space region in the acquisition component coordinate system; and $R$ and $T$ represent the extrinsic parameters of the acquisition component.
In one embodiment, the area point cloud data of the target space region is point cloud data in the world coordinate system. In this case, each pixel point in the depth image is first converted into the acquisition component coordinate system, then into the robot coordinate system, and finally into the world coordinate system, to obtain the area point cloud data of the target space region in the world coordinate system. The processes of converting each pixel point in the depth image into the acquisition component coordinate system and converting from the acquisition component coordinate system into the robot coordinate system are as described above and are not repeated here. The specific process of converting the point cloud data of the target space region from the robot coordinate system into the world coordinate system is as follows:
First, the position of the robot in the robot coordinate system, i.e., the coordinate $T_{robot}$ of the robot in the robot coordinate system, is obtained, and the position of the robot in the world coordinate system, i.e., the coordinate $T_0$ of the center of the robot chassis in the world coordinate system, is acquired; $T_0$ can be determined according to a positioning algorithm of the robot (such as a lidar positioning algorithm or a visual SLAM algorithm), and is a $1 \times 3$ coordinate. Then, a rotation matrix $R_0$ from the robot coordinate system to the world coordinate system is determined according to $T_{robot}$ and $T_0$, where $R_0$ is a $3 \times 3$ matrix. Finally, according to the rotation matrix $R_0$, the coordinates $T_R$ of the target space region in the robot coordinate system are converted into the world coordinate system, using the following formula:

$$
T_w = R_0 \cdot T_R + T_0
$$

where $T_w$ represents the coordinates of the target space region in the world coordinate system ($T_w$ is a $1 \times 3$ coordinate); $R_0$ represents the rotation matrix; $T_R$ represents the coordinates of the target space region in the robot coordinate system; and $T_0$ represents the coordinates of the robot in the world coordinate system.
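The two transforms can be chained as in the following sketch; the matrix names mirror the formulas above, and the row-vector convention is an assumption:

```python
import numpy as np

def to_world(pts_cam, R, T, R0, T0):
    """Chain the transforms described above: acquisition component
    frame -> robot frame (extrinsics R, T), then robot frame ->
    world frame (R0, T0 from the robot's localization)."""
    pts_robot = pts_cam @ R.T + T       # P_r = R * P_c + T
    pts_world = pts_robot @ R0.T + T0   # T_w = R0 * T_R + T0
    return pts_world
```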
The acquired data of the target space region may contain much noise and much invalid information, so the area point cloud data determined from it includes noise information or invalid point cloud information; the area point cloud data is then not smooth enough and too noisy, hence not stable enough and of low credibility, which in turn lowers the accuracy of the robot map of the target space region constructed based on the initial point cloud data selected from the area point cloud data. Therefore, in an embodiment, before the area point cloud data of the target space region is acquired, the acquired data is denoised by deleting the data points whose gradient is greater than a preset gradient value, which smoothly suppresses the noise in the acquired data and makes it smoother, more stable and more credible. The preset gradient value is not limited and may be set according to actual use requirements.
Illustratively, first, a 3 × 3 Sobel operator matrix is convolved with the acquired data to obtain the lateral and longitudinal gradient values of each data point. The specific formula is as follows:
$$
G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A,
\qquad
G_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} * A
$$

where $G_x$ represents the lateral gradient value of a data point; $G_y$ represents the longitudinal gradient value of a data point; $A$ represents the acquired data; and the two $3 \times 3$ matrices are the convolution factors of the Sobel operator in the lateral and longitudinal directions, respectively.
Second, for each data point in the acquired data, the lateral gradient value and the longitudinal gradient value are combined to obtain the gradient value of the data point. The specific formula is as follows:
$$
G = \sqrt{G_x^2 + G_y^2}
$$

where $G$ represents the gradient value of a data point; $G_x$ represents its lateral gradient value; and $G_y$ represents its longitudinal gradient value.
Finally, the gradient value of each data point in the acquired data is compared with the preset gradient value; if the gradient value of a data point is greater than the preset gradient value, the data point is considered a noise point and is removed from the acquired data, thereby filtering out the noise points in the acquired data. It is to be understood that, in other embodiments, the acquired data may also be denoised using a Canny operator or a Laplacian operator, and the like, which is not particularly limited herein.
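A sketch of this gradient-based denoising using SciPy's convolution is shown below; marking filtered points by zeroing their depth is an assumption of the sketch:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Sobel convolution factors (lateral and longitudinal).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def remove_high_gradient(depth, max_gradient):
    """Drop every data point whose Sobel gradient magnitude exceeds
    the preset gradient value (here: mark it invalid with 0)."""
    gx = convolve(depth, SOBEL_X)      # lateral gradient G_x
    gy = convolve(depth, SOBEL_Y)      # longitudinal gradient G_y
    g = np.sqrt(gx ** 2 + gy ** 2)     # G = sqrt(G_x^2 + G_y^2)
    cleaned = depth.astype(float).copy()
    cleaned[g > max_gradient] = 0.0    # treat as a noise point
    return cleaned
```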
In other embodiments, before the area point cloud data of the target space region is acquired, the acquired data may also be cropped, smoothed, and the like, which is not specifically limited herein. According to the imaging principle of the depth component, light superposition is weak at the edge data points of the acquired data: the regions where the edge data points lie are dim or poorly imaged, so the credibility of the edge data points is low. Cropping the acquired data therefore filters out these low-credibility data points in the edge region, improving the stability and credibility of the acquired data. Smoothing each data point while preserving the detail features of the acquired data suppresses its noise and improves its reliability.
Step S112: selecting data of points whose height information meets the first preset requirement from the area point cloud data as the initial point cloud data of the target object.
In the embodiment of the application, data of points whose height information meets a first preset requirement is selected from the area point cloud data as the initial point cloud data of the target object. That is, starting from the heights of the points in the area point cloud data of the target space region, the point cloud data belonging to the target object can be determined through the height information of the points, so that the target object can subsequently be responded to when a map of the robot about the target space region is constructed based on the point cloud data of the target object, and correspondingly displayed on the constructed map. The target object may specifically be an obstacle or another object that needs to be displayed on the map, which is not specifically limited herein.
In one embodiment, the target object is an obstacle, and data of points whose height information meets the first preset requirement is selected from the area point cloud data as the initial point cloud data of the obstacle. Since the height of the robot is fixed, objects (or parts of objects) higher than the robot may not obstruct its movement, while objects lower than the robot may; that is, an object, or the part of an object, lower than the robot is an effective obstacle that may affect the robot's movement. Selecting the points whose height information meets the first preset requirement therefore screens out the point cloud data belonging to effective obstacles from the area point cloud data; in other words, the effective obstacles that may block the robot during its movement are determined through the height information of the points. This improves the accuracy of obstacle recognition, so that the subsequently constructed map of the robot about the target space region divides the obstacle area and the walkable area more accurately; the robot can then effectively avoid obstacles while moving based on the map, reducing the possibility of collision. In addition, because the initial point cloud data only contains the point cloud data belonging to effective obstacles, the volume of point cloud data belonging to obstacles is reduced, which reduces the amount of calculation for constructing the map and improves the efficiency of constructing the map of the robot about the target space region.
In one embodiment, the first preset requirement is that the point does not belong to the ground and its height above the ground is less than the height of the robot. Since the robot moves on the ground, neither the ground nor an obstacle whose height above the ground is greater than the robot's height will hinder its movement; therefore, data of points that do not belong to the ground and whose height above the ground is less than the robot's height is selected from the area point cloud data as the initial point cloud data of the obstacle, i.e., the data of points belonging to effective obstacles is selected.
To further reduce the possibility of collision between the robot and an obstacle when it moves based on the constructed map, in other embodiments the first preset requirement may also be that the point does not belong to the ground and its height above the ground is within a first preset range, where the first preset range is not limited and may be set according to actual use requirements. Because the determination of the height information of each point in the area point cloud data may contain deviations, the data of points whose height above the ground is within the first preset range is used as the point cloud data of effective obstacles; that is, points lower than the robot's height and points slightly higher than the robot's height are both treated as point cloud data of effective obstacles.
Exemplarily, as shown in fig. 3, fig. 3 is a schematic diagram of an embodiment of a robot and an obstacle provided by the present application. Take a robot height $h_1$ of 10 cm and a first preset range of 0-13 cm as an example. An object A exists around the robot, and its point cloud data can be divided into group a, group b and group c. The group a point cloud data does not belong to the ground, and its height $h_2$ above the ground satisfies $0 \leq h_2 \leq 10$, which is within the first preset range, so the group a point cloud data belongs to part of the initial point cloud data of the obstacle. The group b point cloud data does not belong to the ground, and its height $h_3$ above the ground satisfies $10 \leq h_3 \leq 13$; although this exceeds the robot height $h_1$, it is still within the first preset range, so the group b point cloud data also belongs to part of the initial point cloud data of the obstacle. The group c point cloud data does not belong to the ground, and its height $h_4$ above the ground satisfies $h_4 > 13$; the object part corresponding to the group c point cloud data is not within the first preset range, so it may not block the robot during its movement. Therefore, although the group a, group b and group c point cloud data are all part of the point cloud data corresponding to object A, only the object parts corresponding to the group a and group b point cloud data may obstruct the movement of the robot; hence the group a and group b point cloud data are used as part of the initial point cloud data of the obstacle, i.e., only the parts of object A corresponding to the group a and group b point cloud data are treated as effective obstacles.
Exemplarily, take a robot height h1 of 10 cm and a first preset range of 0-13 cm as another example. An object A exists around the robot, none of whose point cloud data belongs to the ground, and at least part of whose point cloud data has a height h2 above the ground satisfying 0 ≤ h2 ≤ 10. Since at least part of the point cloud data belonging to object A lies within the first preset range, the parts corresponding to all of the point cloud data of object A can be treated as a valid obstacle.
In one embodiment, if the area point cloud data of the target space area is point cloud data in a robot coordinate system whose origin is a point at the center of the robot chassis, points at zero height in that coordinate system (i.e., points in the horizontal plane through the origin) can be regarded as point cloud data belonging to the ground. Since the ground may fluctuate, in an embodiment a point whose height above the ground is less than or equal to a first preset value is also taken as a point belonging to the ground. The first preset value is not limited and may be set according to actual use requirements, for example 1 cm or 1.5 cm.
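For illustration only, the following is a minimal sketch of this height-based selection, assuming the area point cloud is an (N, 3) array in the robot coordinate system with z as height; the function name and the concrete thresholds (a 1 cm ground tolerance and a 0-13 cm first preset range for a 10 cm robot, taken from the examples above) are assumptions, not a definitive implementation.

```python
import numpy as np

def select_initial_obstacle_points(points: np.ndarray,
                                   ground_tol: float = 0.01,
                                   range_min: float = 0.0,
                                   range_max: float = 0.13) -> np.ndarray:
    """Keep points that do not belong to the ground and whose height
    above the ground lies within the first preset range."""
    heights = points[:, 2]
    ground_mask = heights <= ground_tol              # ground and near-ground points
    in_range = (heights > range_min) & (heights <= range_max)
    return points[~ground_mask & in_range]
```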
In another embodiment, the target object may be another object that needs to be displayed on the map. Data of points whose height information meets the first preset requirement is selected from the area point cloud data as the initial point cloud data of the target object, so that when the robot subsequently builds the map of the target space area based on this initial point cloud data, it can respond to the target object and display it correspondingly on the constructed map.
Illustratively, the target object is a target cargo. Data of points whose height information meets the first preset requirement is selected from the area point cloud data as the initial point cloud data of the target cargo, so that the target cargo can be responded to when the robot's map of the target space area is subsequently constructed based on this initial point cloud data, and the target cargo is displayed correspondingly on the constructed map; a subsequent logistics robot can then carry the target cargo accurately. In this case the first preset requirement is that the height above the ground lies within a second preset range, for example greater than or equal to a first height threshold and less than or equal to a second height threshold.
Illustratively, the target object is a window. Data of points whose height information meets the first preset requirement is selected from the area point cloud data as the initial point cloud data of the window, so that the window can be responded to when the robot's map of the target space area is subsequently constructed based on this initial point cloud data, and the window is displayed correspondingly on the constructed map; a subsequent window-wiping robot can then wipe the window accurately.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of step S12 shown in fig. 1; it should be noted that the embodiment is not limited to the order of the flow shown in fig. 4 provided substantially the same result is obtained. As shown in fig. 4, in the embodiment of the present application, determining the confidence of a target point according to the attribute confidences corresponding to its attribute parameters specifically includes:
step S121: at least one property parameter of the target object point is determined.
In an embodiment of the application, at least one property parameter of the target object point is determined, wherein the at least one property parameter comprises at least one of a distribution property parameter characterizing a spatial distribution and a data property parameter characterizing a data situation.
In one embodiment, the distribution attribute parameter of the target object point may be the number of target object points in a subspace region where the target object point is located. In other embodiments, the distribution attribute parameter of the target object point may also be the density of points in the subspace region in which the target object point is located. The subspace area is obtained by dividing the target space area according to a preset division strategy. The preset partition strategy of the subspace area is not limited, and can be specifically set according to actual use requirements. For example, the preset dividing strategy may be to divide the target space region equally, or the preset dividing strategy may also be to divide the target space region randomly.
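As a hedged sketch of this distribution attribute parameter, the target space region can be divided into equal voxels and the target points falling in each counted; the voxel size and all identifiers below are illustrative assumptions rather than values fixed by the text.

```python
import numpy as np

def count_points_per_voxel(points: np.ndarray, voxel_size: float = 0.05) -> np.ndarray:
    """Divide the target space region into equal subspace regions (voxels)
    and return, for each point, the number of points in its voxel."""
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    return counts[inverse]  # one count (distribution attribute parameter) per point
```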
In one embodiment, the data property parameter of the target point may be a gradient of a data point in the collected data corresponding to the target point. It is to be understood that, in other embodiments, the data attribute parameter of the target point may also be a difference value between a median filtered value of data points corresponding to the target point in the collected data and a depth value of the data point, which is not specifically limited herein.
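Purely for illustration, both data attribute parameters mentioned above can be computed on the acquired depth image as follows; the gradient is taken as a per-pixel magnitude and the median window size is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def depth_gradient(depth: np.ndarray) -> np.ndarray:
    """Gradient magnitude of the depth image at each data point."""
    gy, gx = np.gradient(depth.astype(np.float64))
    return np.hypot(gx, gy)

def median_residual(depth: np.ndarray, size: int = 5) -> np.ndarray:
    """Difference between the median-filtered depth and the raw depth."""
    return np.abs(median_filter(depth.astype(np.float64), size=size) - depth)
```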
Step S122: and acquiring the attribute confidence corresponding to each attribute parameter of the target object point.
In the embodiment of the application, the attribute confidence corresponding to each attribute parameter of the target object point is obtained. In one embodiment, the distribution attribute parameter of the target object point is the number of target object points in the subspace region where the point is located, and the distribution attribute parameter is positively correlated with the corresponding attribute confidence. Exemplarily, the attribute confidence corresponding to the distribution attribute parameter is a density index of the target object point. For any solid target object, the target object points included in its corresponding initial point cloud data should be continuous and dense, so the target object points in the corresponding subspace regions should also be continuous and dense. Therefore, the greater the number of target object points in the subspace region where a point is located, the greater the spatial density index of that subspace region, and the higher the probability that each target object point in the subspace region belongs to the target object, i.e., the greater the confidence that the point belongs to the target object.
In one embodiment, the data attribute parameter of the target point is the gradient of the data point in the collected data corresponding to the target point, and the data attribute parameter is negatively correlated with the corresponding attribute confidence. Exemplarily, the attribute confidence corresponding to the data attribute parameter is the reciprocal of the gradient at the target object point. For any solid object, in relatively smooth regions of the object the gray (depth) values vary little, so the gradient is small; conversely, the larger the gradient of the data point corresponding to a target object point, the lower the possibility that the point belongs to the object, i.e., the lower the reliability of the point belonging to the object. It is understood that in other embodiments, the attribute confidence corresponding to the data attribute parameter may instead be one minus the gradient of the corresponding data point, or the like.
When the attribute confidence corresponding to the data attribute parameter is the reciprocal of the gradient at the target point, in a specific embodiment the reciprocal may be normalized to between 0 and 1, so as to facilitate determining the confidence of the target point from the attribute confidences of its attribute parameters.
Step S123: and obtaining the confidence coefficient of the target object point based on the attribute confidence coefficient corresponding to each attribute parameter of the target object point.
In an embodiment, when the target object point only includes one attribute parameter, for example, when the target object point only includes a distribution attribute parameter representing a spatial distribution condition or a data attribute parameter representing a data condition, an attribute confidence corresponding to the attribute parameter may be directly used as the confidence of the target object point.
In one embodiment, when the target object point includes two or more attribute parameters, such as a distribution attribute parameter representing the spatial distribution and a data attribute parameter representing the data condition, the attribute confidences corresponding to the attribute parameters of the target object point may be multiplied to obtain the confidence of the target object point. It is to be understood that, in other embodiments, the attribute confidences corresponding to the attribute parameters may also be weighted-summed or averaged to obtain the confidence of the target object point, which is not specifically limited herein.
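A minimal sketch of steps S122-S123 under the assumptions above: the distribution confidence is the (positively correlated) voxel count, the data confidence is the normalized reciprocal of the gradient, and the two are fused by multiplication; weighted summation would work analogously. All names and the normalization scheme are illustrative.

```python
import numpy as np

def point_confidence(voxel_counts: np.ndarray,
                     gradients: np.ndarray,
                     eps: float = 1e-6) -> np.ndarray:
    """Fuse the attribute confidences of each target point by multiplication."""
    dist_conf = voxel_counts / voxel_counts.max()   # density index, in [0, 1]
    inv_grad = 1.0 / (gradients + eps)              # reciprocal of the gradient
    data_conf = inv_grad / inv_grad.max()           # normalized to [0, 1]
    return dist_conf * data_conf                    # fused confidence per point
```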
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating a method for determining point cloud data according to another embodiment of the present disclosure. It should be noted that the embodiment of the present application is not limited to the order of the flow shown in fig. 5 provided substantially the same result is obtained. As shown in fig. 5, the present embodiment includes:
step S51: and acquiring initial point cloud data belonging to a target object in the target space area.
Step S51 is similar to step S11 and will not be described herein.
Step S52: and determining the confidence of each target point based on the spatial distribution condition of each target point and/or the data condition of each target point in the acquired data.
Step S52 is similar to step S12 and will not be described in detail herein.
Step S53: and removing the target object points in the initial point cloud data based on the confidence coefficient of each target object point to obtain target point cloud data belonging to the target object.
Step S53 is similar to step S13 and will not be described herein.
Step S54: based on the target point cloud data, position information of the target object and the walkable area in the target space area is determined.
Because the target point cloud data belonging to the target object is used for constructing the robot's map of the target space area, that map is constructed after the target object points have been removed from the initial point cloud data based on the confidence of each point and the target point cloud data has been obtained. Therefore, in the embodiment of the present application, the position information of the target object and of the walkable area in the target space area is first determined based on the target point cloud data belonging to the target object. That is, once the target point cloud data belonging to the target object within the area point cloud data has been determined, the point cloud data belonging to the walkable area can be determined from it, and thus the position information of both the target object and the walkable area in the target space area can be obtained.
Since the target object is composed of a plurality of target points, its position information can be determined from the position information of those points. In one embodiment, the target point cloud data includes the position information of a plurality of target points belonging to the target object, and the position information of the target object is determined from the position information of each such point: once the position of every constituent target point is known, the position of the target object they form is determined.
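The text does not fix a representation for the target object's position information; purely as an illustration, an axis-aligned bounding box of its target points is one simple choice.

```python
import numpy as np

def target_bbox(target_points: np.ndarray):
    """Axis-aligned bounding box (min corner, max corner) of the target points."""
    return target_points.min(axis=0), target_points.max(axis=0)
```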
In one embodiment, the point cloud data remaining after removing the target point cloud data belonging to the target object from the area point cloud data of the target space area may be directly used as the point cloud data belonging to the walkable area, so that the position information of the target object and the walkable area in the target space area is determined based on the target point cloud data belonging to the target object.
In other embodiments, considering that occlusion by the target object prevents the robot from moving in part of the target space area, the position information of the walkable area is determined based on the positional relationship between the robot and each target point. Specifically, since the target space area may be large, determining the walkable area directly from the positional relationship between every target point and the robot over the whole area could yield inaccurate position information; the target space area is therefore divided into a plurality of sub-target space areas, a walkable sub-area is determined within each, and the walkable area of the whole target space area is determined from these sub-areas.
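A hedged sketch of the simpler option above: after the target point cloud has been removed, the remaining area points are projected onto a horizontal grid, and cells containing target points are treated as blocked while the other observed cells are treated as walkable. The cell size and names are assumptions.

```python
import numpy as np

def walkable_cells(area_points: np.ndarray,
                   target_points: np.ndarray,
                   cell: float = 0.05) -> set:
    """Return the set of horizontal grid cells that are observed but not
    occupied by target points (a crude walkable-area estimate)."""
    def cells(pts):
        return set(map(tuple, np.floor(pts[:, :2] / cell).astype(np.int64)))
    return cells(area_points) - cells(target_points)
```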
Step S55: and constructing a map of the robot about the target space area by using the position information of the target object and the walkable area.
In the embodiment of the present application, the robot's map of the target space area is constructed using the position information of the target object and of the walkable area. That is, after this position information is determined, it is known which parts of the target space area are occupied by the target object and which parts the robot can move through, and the map of the target space area is constructed accordingly so that the target object area and the walkable area are each displayed on it. Because the position information of the target object is determined from the target point cloud data, which itself was obtained by removing points from the initial point cloud data based on the confidence of each target object point, the target object can be responded to accurately during map construction and displayed more accurately on the constructed map.
To improve the accuracy of the constructed map, in other embodiments the robot's map of the target space area is constructed based on target points located at the same position in the world coordinate system across a second number of consecutive frames of target point cloud data. That is, if a target point stays at the same world-coordinate position for the second number of consecutive frames, it is considered to belong to the target object; the data of such position-stable points is selected from the target point cloud data as new target point cloud data, and the map of the target space area is constructed from this new data, improving the accuracy of the constructed map. In one embodiment, when the target point cloud data belonging to the target object is point cloud data in the robot coordinate system, the position information of the target object and of the walkable area is likewise expressed in the robot coordinate system, so it must be converted to the world coordinate system before the map of the target space area is correspondingly constructed. In other embodiments, when the target point cloud data is point cloud data in the coordinate system of the image capturing component, it likewise needs to be converted to the world coordinate system for representation, which is not described again here.
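A minimal sketch of this consecutive-frame check, assuming "same position" is judged at voxel resolution in world coordinates; the voxel size and frame count are illustrative assumptions.

```python
import numpy as np

def temporally_stable_points(frames, n_frames: int = 3,
                             voxel_size: float = 0.05) -> np.ndarray:
    """frames: list of (N_i, 3) arrays of target points in world coordinates.
    Keep points of the latest frame whose voxel was occupied in each of the
    last n_frames frames."""
    recent = frames[-n_frames:]
    voxel_sets = [set(map(tuple, np.floor(f / voxel_size).astype(np.int64)))
                  for f in recent]
    stable = set.intersection(*voxel_sets)          # voxels occupied in every frame
    last = recent[-1]
    keys = map(tuple, np.floor(last / voxel_size).astype(np.int64))
    mask = np.fromiter((k in stable for k in keys), dtype=bool, count=len(last))
    return last[mask]
```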
In one embodiment, the steps from obtaining the initial point cloud data of the target object through constructing the robot's map of the target space area may be performed while the robot moves within the area to be processed: as it moves, the robot continuously obtains area point cloud data for different target space areas and thereby obtains maps of those areas. Since the target space area changes relative to the robot as it moves, the robot can construct a map of the current target space area by continuously acquiring its area point cloud data. In other words, during movement in the area to be processed, the constructed map of the target space area is updated in real time, making the robot's response to environmental changes faster.
In one embodiment, when the robot has not yet constructed a map of the area to be processed, each time a new map of a target space area is constructed, the current map of the area to be processed is built from all maps of target space areas constructed so far; this repeats until the robot has constructed maps for all target space areas in the area to be processed, yielding the final map of the area to be processed. That is, the series of steps relating to obtaining the initial point cloud data of the target object may be performed before the robot carries out its related operation (for example, before a cleaning robot starts cleaning), so that the final map of the area to be processed is constructed beforehand and is more accurate and detailed, and the map is constructed more efficiently. Of course, in other embodiments these steps may instead be performed during the robot's related operation (for example, while a cleaning robot is cleaning), so that the final map is constructed in the course of the operation, likewise improving the efficiency of constructing the map of the area to be processed.
It will be appreciated that in other embodiments, where the robot has already constructed a map of the area to be processed, the map of the area to be processed is updated with the new map of the target spatial area in response to constructing the new map of the target spatial area. That is, a series of steps related to acquiring initial point cloud data belonging to the target object in the target space region may be performed during operation of the robot (e.g., during cleaning of the cleaning robot) or before operation of the robot (e.g., before cleaning of the cleaning robot), so as to update the map of the region to be processed in real time, and update the position where the target object is displayed and the range of the region where the robot can travel on the map of the region to be processed in real time, so that the robot responds to changes in the environment more quickly.
In one embodiment, the target object is an obstacle and the robot may be a cleaning robot, with the series of steps relating to obtaining the initial point cloud data of the target object performed during or before the cleaning robot cleans the area to be processed. That is, while the cleaning robot moves and cleans within the area to be processed, the existing map of that area is updated with each newly constructed map of a target space area, so the positions of obstacles are updated and displayed in real time on the map of the area to be processed; the cleaning robot thus responds to environmental changes faster, avoiding obstacles in time and improving cleaning efficiency. It is to be understood that in other specific implementations the robot may also be a medical robot, an assistive robot, a greeting robot, a logistics robot, an early-education robot, or the like, which is not limited herein.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a device for determining point cloud data according to the present application. The device 60 for determining point cloud data comprises an acquisition module 61, a determination module 62 and a rejection module 63. The acquisition module 61 is configured to acquire initial point cloud data belonging to a target object in a target space region; the initial point cloud data is determined based on the acquisition data acquired by the acquisition component for the target space area, and comprises a plurality of target points belonging to a target; the determining module 62 is configured to determine a confidence level of each target point based on a spatial distribution condition of each target point and/or a data condition of each target point in the collected data; the eliminating module 63 is configured to eliminate the target point in the initial point cloud data based on the confidence of each target point to obtain target point cloud data belonging to a target.
The determining module 62 is configured to determine the confidence level of each target point based on the spatial distribution condition of each target point and/or the data condition of each target point in the acquired data, and specifically includes: determining at least one attribute parameter of the target object point; wherein the at least one attribute parameter comprises at least one of a distribution attribute parameter characterizing a spatial distribution profile and a data attribute parameter characterizing a data profile; acquiring an attribute confidence corresponding to each attribute parameter of a target object point; and obtaining the confidence coefficient of the target object point based on the attribute confidence coefficient corresponding to each attribute parameter of the target object point.
The distribution attribute parameters of the target object points are the number of the target object points in a subspace region where the target object points are located, the subspace region is obtained by dividing the target space region according to a preset division strategy, and the distribution attribute parameters are in positive correlation with the corresponding attribute confidence; and/or the data attribute parameter of the target object point is the gradient of a data point corresponding to the target object point in the collected data, and the data attribute parameter and the corresponding attribute confidence coefficient are in negative correlation.
The eliminating module 63 is configured to eliminate target object points in the initial point cloud data based on the confidence of each target object point to obtain target point cloud data belonging to the target object, and specifically to: remove from the initial point cloud data the target object points whose confidence does not meet a confidence requirement, obtaining the target point cloud data. The target space area comprises a plurality of subspace regions, and the confidence requirement comprises at least one of the following: the confidence of the subspace region where the point is located is greater than or equal to a first confidence threshold; or the projection region to which that subspace region belongs meets a first preset condition, where the first preset condition comprises the number of subspace regions belonging to the projection region being greater than a first number and the confidence of the projection region being greater than or equal to a second confidence threshold, or the confidence of the projection region being greater than or equal to the second confidence threshold. The projection region to which a subspace region belongs is the projection of that subspace region on a preset plane; the confidence of a subspace region is obtained from the confidences of the target points within it, and the confidence of a projection region is obtained from the confidences of the subspace regions belonging to it.
The confidence coefficient of the subspace region is the sum of the confidence coefficients of all target points in the subspace region, and the confidence coefficient of the projection region is the sum of the confidence coefficients of the subspace regions belonging to the projection region; and/or the plurality of subspace areas comprise at least one group of area groups, each group of area groups comprises at least one subspace area arranged along the vertical direction, and the preset plane is a horizontal plane.
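A hedged sketch of this rejection rule under the definitions above: the confidence of a subspace region (voxel) is the sum of its point confidences, vertical columns of voxels form projection regions on the horizontal plane, and a point is kept if its voxel confidence reaches the first threshold or its column satisfies the first preset condition. All thresholds and names are placeholders, not values from the text.

```python
import numpy as np

def reject_points(points: np.ndarray, conf: np.ndarray,
                  voxel_size: float = 0.05, t_voxel: float = 1.0,
                  t_column: float = 3.0, min_voxels: int = 2) -> np.ndarray:
    idx = np.floor(points / voxel_size).astype(np.int64)
    voxels, inv = np.unique(idx, axis=0, return_inverse=True)
    voxel_conf = np.bincount(inv, weights=conf)              # sum per subspace region

    # vertical columns: voxels sharing (x, y) project to the same region
    cols, col_of_voxel = np.unique(voxels[:, :2], axis=0, return_inverse=True)
    col_conf = np.bincount(col_of_voxel, weights=voxel_conf) # sum per projection region
    col_nvox = np.bincount(col_of_voxel)                     # voxels per projection region

    col_ok = (col_nvox > min_voxels) & (col_conf >= t_column)
    keep = (voxel_conf[inv] >= t_voxel) | col_ok[col_of_voxel[inv]]
    return points[keep]
```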
The device 60 for determining point cloud data further includes a selecting module 64, configured, after target point cloud data corresponding to a second number of frames of acquired data has been obtained, to: select the target object points located at the same position in the world coordinate system across the second number of frames of target point cloud data, obtaining new target point cloud data.
The eliminating module 63 is further configured, before the confidence of each target point is determined based on the spatial distribution of each target point and/or the data condition of each target point in the collected data, to: perform spatial filtering processing on the initial point cloud data to obtain processed point cloud data belonging to the target object. In that case, eliminating the target object points based on the confidence of each target object point specifically comprises: eliminating the target object points in the processed point cloud data based on the confidence of each point, to obtain the target point cloud data belonging to the target object.
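The "spatial filtering processing" is not specified in the text; one common choice, shown here purely as an assumption, is radius outlier removal with scipy's cKDTree: drop points with too few neighbors within a radius.

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_outlier_removal(points: np.ndarray, radius: float = 0.05,
                           min_neighbors: int = 4) -> np.ndarray:
    tree = cKDTree(points)
    counts = tree.query_ball_point(points, r=radius, return_length=True)
    # counts include the query point itself, so require min_neighbors others
    return points[np.asarray(counts) > min_neighbors]
```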
The obtaining module 61 is configured to obtain initial point cloud data belonging to a target object in a target space region, and specifically includes: acquiring regional point cloud data of a target space region determined based on the acquired data; and selecting data of points with height information meeting first preset requirements from the regional point cloud data as initial point cloud data of the target object.
The first preset requirement is that a point does not belong to the ground and that its height above the ground is less than the height of the robot; and/or the acquired data is a target depth image; and/or the device 60 for determining point cloud data further includes a deleting module 65, configured, before the area point cloud data of the target space area determined based on the acquired data is acquired, to: delete data points in the acquired data whose gradient is greater than a preset gradient value.
The determining apparatus 60 for point cloud data further includes a building module 66, where the building module 66 is configured to remove target points in the initial point cloud data based on confidence degrees of the target points, and after obtaining target point cloud data belonging to a target, the method specifically includes: determining position information of a target object and a walkable area in a target space area based on the target point cloud data; and constructing a map of the robot about the target space area by using the position information of the target object and the walkable area.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of an electronic device provided in the present application. The electronic device 70 comprises a memory 71 and a processor 72 coupled to each other, and the processor 72 is configured to execute program instructions stored in the memory 71 to implement the steps of any one of the above-mentioned embodiments of the method for determining point cloud data. In one particular implementation scenario, the electronic device 70 may include, but is not limited to: a microcomputer, a server, and the electronic device 70 may also include a mobile device such as a notebook computer, a tablet computer, and the like, which is not limited herein.
In particular, the processor 72 is configured to control itself and the memory 71 to implement the steps of any of the above-described embodiments of the method of determining point cloud data. The processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip having signal processing capabilities. The Processor 72 may also be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. Additionally, the processor 72 may be collectively implemented by an integrated circuit chip.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application. The computer readable storage medium 80 of the embodiments of the present application stores program instructions 81, and the program instructions 81 when executed implement the method provided by any embodiment of the method for determining point cloud data of the present application and any non-conflicting combination. The program instructions 81 may form a program file stored in the computer-readable storage medium 80 in the form of a software product, so that a computer device (which may be a personal computer, a server, or a network device) executes all or part of the steps of the method according to the embodiments of the present application. And the aforementioned computer-readable storage medium 80 includes: various media capable of storing program codes, such as a usb disk, a mobile hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or terminal devices, such as a computer, a server, a mobile phone, and a tablet.
If the technical solution of the application involves personal information, a product applying the technical solution clearly informs users of the personal-information processing rules and obtains their separate consent before processing personal information. If the technical solution involves sensitive personal information, the product obtains separate consent before processing it and additionally meets the requirement of "express consent". For example, at a personal-information collection device such as a camera, a clear and prominent sign informs people that they are entering the collection range and that personal information will be collected; a person who voluntarily enters that range is regarded as consenting to the collection. Alternatively, on a device that processes personal information, personal authorization is obtained, with the processing rules communicated via prominent signs or notices, through pop-up messages or by asking the person to upload their personal information themselves. The personal-information processing rules may include information such as the identity of the personal-information processor, the purpose and method of processing, and the types of personal information processed.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (13)

1. A method for determining point cloud data, the method comprising:
acquiring initial point cloud data belonging to a target object in a target space region; wherein the initial point cloud data is determined based on acquisition data acquired by an acquisition component for the target spatial region, the initial point cloud data comprising a number of target points belonging to the target;
determining the confidence of each target point based on the space distribution condition of each target point and/or the data condition of each target point in the acquired data;
and removing the target object points in the initial point cloud data based on the confidence coefficient of each target object point to obtain target point cloud data belonging to the target object.
2. The method according to claim 1, wherein the determining the confidence level of each of the object points based on the spatial distribution of each of the object points and/or the data condition of each of the object points in the acquired data comprises:
determining at least one attribute parameter of the target object point; wherein the at least one attribute parameter comprises at least one of a distribution attribute parameter characterizing the spatial distribution profile and a data attribute parameter characterizing the data profile;
obtaining attribute confidence corresponding to each attribute parameter of the target object point;
and obtaining the confidence coefficient of the target object point based on the attribute confidence coefficient corresponding to each attribute parameter of the target object point.
3. The method according to claim 2, wherein the distribution attribute parameters of the target object points are the number of the target object points in a subspace region where the target object points are located, the subspace region is obtained by dividing the target space region according to a preset division strategy, and the distribution attribute parameters are positively correlated with the corresponding attribute confidence degrees;
and/or the data attribute parameter of the target object point is the gradient of the data point corresponding to the target object point in the collected data, and the data attribute parameter and the corresponding attribute confidence coefficient are in negative correlation.
4. The method according to any one of claims 1 to 3, wherein the removing the target points from the initial point cloud data based on the confidence level of each target point to obtain target point cloud data belonging to the target comprises:
removing the target object points with the confidence coefficient not meeting the confidence coefficient requirement from the initial point cloud data to obtain target point cloud data; wherein the target spatial region comprises a number of sub-spatial regions, the confidence requirement comprising at least one of: the confidence degree of the located subspace region is greater than or equal to a first confidence degree threshold value, the projection region to which the located subspace region belongs meets a first preset condition, wherein the first preset condition comprises that the number of the subspace regions belonging to the projection region is greater than a first number, and the confidence degree of the projection region is greater than or equal to a second confidence degree threshold value, or the confidence degree of the projection region is greater than or equal to the second confidence degree threshold value; the projection area to which the subspace area belongs is a projection area of the subspace area on a preset plane, the confidence coefficient of the subspace area is obtained based on the confidence coefficient of each target point in the subspace area, and the confidence coefficient of the projection area is obtained based on the confidence coefficient of the subspace area belonging to the projection area.
5. The method of claim 4, wherein the confidence of the subspace region is a sum of the confidences of the object points in the subspace region, and the confidence of the projection region is a sum of the confidences of the subspace regions belonging to the projection region;
and/or the plurality of subspace areas comprise at least one group of area groups, each group of area groups comprise at least one subspace area arranged along the vertical direction, and the preset plane is a horizontal plane.
6. The method according to any one of claims 1 to 5, further comprising: after obtaining the target point cloud data corresponding to the second number of frames of collected data respectively,
and selecting the target object points at the same position on the world coordinate system in the target point cloud data of the second number of frames to obtain new target point cloud data.
7. The method of claim 1, wherein prior to determining the confidence level for each of the object points based on the spatial distribution of each of the object points and/or the data for each of the object points in the acquired data, the method further comprises:
performing spatial filtering processing on the initial point cloud data to obtain processed point cloud data belonging to the target object;
the method for eliminating the target point in the initial point cloud data based on the confidence coefficient of each target point to obtain the target point cloud data belonging to the target comprises the following steps:
and eliminating the target object points in the processed point cloud data based on the confidence coefficient of each target object point to obtain target point cloud data belonging to the target object.
8. The method of any one of claims 1 to 7, wherein the obtaining initial point cloud data pertaining to a target object in a target spatial region comprises:
acquiring regional point cloud data of the target space region determined based on the acquired data;
and selecting data of points with height information meeting first preset requirements from the regional point cloud data as initial point cloud data of the target object.
9. The method according to claim 8, characterized in that said first preset requirement is not belonging to the ground and the height to the ground is less than the height of said robot;
and/or the acquired data is a target depth image;
and/or, prior to said acquiring regional point cloud data for the target spatial region determined based on the acquisition data, the method further comprises:
and deleting data points with gradient larger than a preset gradient value in the collected data.
10. The method according to any one of claims 1 to 9, wherein after the target point in the initial point cloud data is eliminated based on the confidence of each target point to obtain target point cloud data belonging to the target, the method further comprises:
determining position information of a target object and a walkable region in the target space region based on the target point cloud data;
and constructing a map of the robot about the target space area by using the position information of the target object and the walkable area.
11. An apparatus for determining point cloud data, the apparatus comprising:
the acquisition module is used for acquiring initial point cloud data belonging to a target object in a target space region; wherein the initial point cloud data is determined based on acquisition data acquired by an acquisition component for the target spatial region, the initial point cloud data comprising a number of target points belonging to the target;
the determining module is used for determining the confidence of each target point based on the spatial distribution condition of each target point and/or the data condition of each target point in the acquired data;
and the removing module is used for removing the target points in the initial point cloud data based on the confidence coefficient of each target point to obtain target point cloud data belonging to the target, and the target point cloud data is used for constructing a map of the robot about the target space area.
12. An electronic device, comprising a memory storing program instructions and a processor for executing the program instructions to implement the method of determining point cloud data of any one of claims 1-10.
13. A computer-readable storage medium for storing program instructions executable to implement the method of determining point cloud data of any one of claims 1-10.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20221011)