Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The embodiment of the invention provides a point cloud processing method. The point cloud processing method provided by the embodiment of the invention can be applied to vehicles, such as unmanned vehicles or vehicles with an Advanced Driver Assistance System (ADAS). It can be understood that the point cloud processing method may also be applied to an unmanned aerial vehicle, for example one equipped with a detection device for acquiring point cloud data. The point cloud processing method provided by the embodiment of the invention can be applied, before a target object in a three-dimensional point cloud is marked, to determining a specific point cloud that may affect the accurate marking of the target object and removing that specific point cloud from the three-dimensional point cloud. The point cloud processing method provided by the embodiment of the invention is described below by taking a vehicle as an example.
Fig. 1 is a flowchart of a point cloud processing method according to an embodiment of the present invention. As shown in fig. 1, the method in this embodiment may include:
Step S101, acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud comprising the target area.
As shown in fig. 2, the vehicle 21 is provided with a photographing apparatus, which may be a digital camera, a video camera, or the like, and a detecting apparatus, which may in particular be a binocular stereo camera, a TOF camera, and/or a laser radar.
Optionally, the obtaining a two-dimensional image corresponding to the target area and a three-dimensional point cloud including the target area includes: acquiring a two-dimensional image corresponding to a target area around a carrier carrying the shooting equipment and shot by the shooting equipment; and acquiring a three-dimensional point cloud of a target area around the carrier, which is detected by a detection device carried on the carrier.
For example, the vehicle 21 is a carrier on which the photographing device and the detecting device are mounted, and the relative positional relationship between the photographing device and the detecting device on the vehicle 21 may be predetermined. While the vehicle 21 is running, the photographing device collects image information of the surroundings of the vehicle 21 in real time, for example image information of the area in front of the vehicle 21, which may be a two-dimensional image. Fig. 3 is a schematic diagram of a two-dimensional image of the area in front of the vehicle 21 captured by the photographing device of the vehicle 21. As shown in fig. 3, the two-dimensional image includes a vehicle in front of the vehicle 21, which may be the vehicle 22 shown in fig. 2.
In addition, while the vehicle 21 is traveling, the detection device detects a three-dimensional point cloud of the objects around the vehicle 21 in real time. The detection device may be a binocular stereo camera, a TOF camera, and/or a laser radar. Taking the laser radar as an example, when a laser beam emitted by the laser radar irradiates the surface of an object, the surface reflects the laser beam, and the laser radar can determine information such as the direction and distance of the object relative to the laser radar from the reflected beam. If the laser beam emitted by the laser radar scans along a certain trajectory, for example a 360-degree rotation, a large number of laser points are obtained, forming laser point cloud data of the objects, i.e., a three-dimensional point cloud. Fig. 4 shows a scan by the radar of the vehicle 21, in which the circles of scan lines represent the ground around the vehicle 21.
It should be noted that fig. 3 shows a two-dimensional image of only the area in front of the vehicle 21. As shown in fig. 4, however, the radar beam may scan along a certain trajectory, for example a 360-degree rotation, so that the three-dimensional point cloud shown in fig. 4 includes not only the area in front of the vehicle 21 but also the areas to its right, to its left, and behind it.
Step S102, determining a specific area in the two-dimensional image.
In some embodiments, the particular area is a ground area.
As shown in fig. 3, the area in front of the vehicle 21 includes not only other vehicles but also a ground area, buildings, trees, fences, pedestrians, and the like. In other embodiments, objects such as traffic signs may also be present in the area in front of the vehicle 21, and the bottom of a traffic sign is likewise adjacent to the ground. When objects such as the vehicle ahead and traffic signs are marked, the ground points at the bottom of the vehicle ahead and/or at the bottom of the traffic sign are therefore easily marked by mistake. Accordingly, when marking vehicles, traffic signs, buildings, trees, fences, pedestrians, and the like in a three-dimensional point cloud, the ground point cloud in the three-dimensional point cloud must first be identified, and the objects on the ground are marked only after the ground point cloud has been removed from the three-dimensional point cloud.
Before identifying the ground point cloud in the three-dimensional point cloud, this embodiment determines the ground area in the two-dimensional image. One possible implementation is as follows. First, a vehicle in the acquired two-dimensional image, such as the vehicle ahead in fig. 3, is detected by a Convolutional Neural Network (CNN) and marked with a box; for example, the vehicle in front of the vehicle 21 is marked with the box 31. Then, an area below the vehicle ahead that contains no other objects, for example the area 32, is used as a reference road surface area. In some embodiments, the reference road surface area may be a block of a preset size below the box 31 of the vehicle ahead, and the image information corresponding to this partial area of the two-dimensional image is taken to be road surface. Finally, the information of the reference road surface area is input into a Support Vector Machine (SVM) classifier and/or a neural network model for classification prediction, so as to determine the ground area in the two-dimensional image. The SVM classifier can be trained on a large amount of sample data and can perform linear or nonlinear classification. The sample data may be color information of the reference road surface, such as RGB information; in some embodiments, this is the RGB information of the image in the box 32 below the vehicle box 31 in fig. 3.
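By way of illustration, the following Python sketch trains an SVM on RGB samples taken from the reference road surface area and then classifies every pixel of the two-dimensional image. The use of the top rows of the image as non-road samples, the patch height, and all function names are assumptions made for this sketch only, not part of the embodiments above.

```python
import numpy as np
from sklearn.svm import SVC

def ground_mask(image, vehicle_box, patch_height=20):
    """image: HxWx3 uint8 RGB array; vehicle_box: (x0, y0, x1, y1) from the CNN."""
    x0, y0, x1, y1 = vehicle_box
    # Reference road surface patch: a block of preset size below the vehicle box.
    road = image[y1:y1 + patch_height, x0:x1].reshape(-1, 3).astype(float)
    # Assumed negative samples: pixels from the top rows of the image (non-road).
    non_road = image[:patch_height].reshape(-1, 3).astype(float)
    X = np.vstack([road, non_road])
    y = np.hstack([np.ones(len(road)), np.zeros(len(non_road))])
    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)  # nonlinear classification
    # Classify every pixel by its RGB value (subsampling would speed this up).
    flat = image.reshape(-1, 3).astype(float)
    return clf.predict(flat).reshape(image.shape[:2]).astype(bool)
```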
When determining the ground area in the two-dimensional image, the method may further include: calculating the horizon and classifying only the area below it. Specifically, the horizon in the two-dimensional image may be calculated from the information of an Inertial Measurement Unit (IMU) mounted on the vehicle, and road surface information may be considered to exist only below the horizon. As shown in fig. 3, if the upper left corner of fig. 3 is set as the origin of the two-dimensional image and the horizon is written as the straight line ax + by + c = 0, then the parameters of the horizon in the two-dimensional image are:
r = tan(pitch_angle) * focus_length
a = tan(roll_angle)
b = 1
c = -tan(roll_angle) * image_width / 2 + r * sin(roll_angle) * tan(roll_angle) - image_height / 2 + r * cos(roll_angle)
wherein pitch_angle represents the pitch angle output by the IMU, focus_length represents the focal length of the shooting device, roll_angle represents the roll angle output by the IMU, image_width represents the width of the two-dimensional image, and image_height represents the height of the two-dimensional image.
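The following Python sketch is a direct transcription of the horizon-line parameters above; the assumption that the angles are given in radians, and the helper name horizon_line, are illustrative only.

```python
import math

def horizon_line(pitch_angle, roll_angle, focus_length, image_width, image_height):
    """Return (a, b, c) of the horizon line a*x + b*y + c = 0 in image
    coordinates, with the origin at the upper-left corner as described above."""
    r = math.tan(pitch_angle) * focus_length
    a = math.tan(roll_angle)
    b = 1.0
    c = (-math.tan(roll_angle) * image_width / 2
         + r * math.sin(roll_angle) * math.tan(roll_angle)
         - image_height / 2
         + r * math.cos(roll_angle))
    return a, b, c
```

Since b = 1 and the image y axis grows downward, a pixel (x, y) satisfies a*x + b*y + c > 0 exactly when it lies below the horizon, so only that half-plane needs to be classified as possible road surface.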
Step S103, determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image.
For example, a specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud as shown in fig. 4 and the ground area in the two-dimensional image as shown in fig. 3. In some embodiments, the particular point cloud is a ground point cloud.
In some embodiments, the determining a particular point cloud of the three-dimensional point clouds from the three-dimensional point cloud and the particular region in the two-dimensional image comprises: determining a corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image; and determining a specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image.
Optionally, the determining a corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image includes: and projecting the three-dimensional point cloud to the two-dimensional image according to the position relationship between the three-dimensional point cloud acquisition equipment and the two-dimensional image acquisition equipment.
In this embodiment, the acquiring device of the three-dimensional point cloud is specifically a detecting device such as a laser radar, and the acquiring device of the two-dimensional image is specifically a shooting device such as a digital camera. According to the position relationship between the detecting device and the shooting device, each three-dimensional point in the three-dimensional point cloud shown in fig. 4 can be projected into the two-dimensional image shown in fig. 3. For example, let point i be a three-dimensional point in the three-dimensional point cloud, let P_i^l denote its position in the radar coordinate system, and let P_i^c denote its position after conversion into the camera coordinate system. The relationship between P_i^l and P_i^c is shown in the following formula (1):

P_i^c = R_l^c * P_i^l + t_l^c        (1)

wherein R_l^c represents the rotational relationship of the radar to the camera, and t_l^c represents the three-dimensional position of the radar in the camera coordinate system, i.e., the translation vector.

The projection point of point i in the two-dimensional image can then be calculated by the following formulas (2) and (3), and the position of the projection point in the two-dimensional image is denoted as p_i(μ, ν). Writing P_i^c = (X_c, Y_c, Z_c), and letting f_x and f_y denote the focal lengths of the shooting device and (c_x, c_y) its principal point:

μ = f_x * X_c / Z_c + c_x        (2)
ν = f_y * Y_c / Z_c + c_y        (3)
Similarly, the projection points of the three-dimensional points other than point i in the three-dimensional point cloud shown in fig. 4 can be determined, where each projection point is the corresponding two-dimensional point of that three-dimensional point in the two-dimensional image. The ground point cloud in the three-dimensional point cloud is then determined according to the projection point of each three-dimensional point in the two-dimensional image and the ground area in the two-dimensional image.
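As an illustration, the following Python sketch applies formulas (1) to (3) to a whole point cloud at once; the array-based interface and parameter names are assumptions made for the sketch.

```python
import numpy as np

def project_points(points_lidar, R, t, fx, fy, cx, cy):
    """points_lidar: Nx3 positions P_i^l in the radar frame; R (3x3) and t (3,)
    are the radar-to-camera rotation and translation of formula (1)."""
    # Formula (1): convert each point into the camera coordinate system.
    points_cam = points_lidar @ R.T + t
    Xc, Yc, Zc = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    # Formulas (2) and (3): pinhole projection onto the image plane.
    mu = fx * Xc / Zc + cx
    nu = fy * Yc / Zc + cy
    return np.stack([mu, nu], axis=1), Zc  # Zc > 0: point in front of the camera
```

Points with non-positive depth Z_c fall behind the camera and would be discarded before their projections are compared with the ground area.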
For example, according to the projection point of point i in the two-dimensional image, it is determined whether the projection point lies in the ground area of the two-dimensional image; if so, point i is marked as a reference point. Similarly, the other reference points in the three-dimensional point cloud can be determined, and the reference points are collected together to form the reference point cloud. Further, plane fitting is performed on the reference point cloud in the three-dimensional point cloud, and the three-dimensional points falling on the fitted plane are recorded as the ground point cloud.
Step S104, removing the specific point cloud in the three-dimensional point cloud.
After the ground point cloud is removed from the three-dimensional point cloud shown in fig. 4, the objects on the ground, such as vehicles, traffic signs, buildings, trees, fences, and pedestrians, are marked.
It should be noted that, the present embodiment is schematically illustrated by taking a ground area in a two-dimensional image and a ground point cloud in a three-dimensional point cloud as an example, and in other embodiments, the present embodiment is also applicable to other specific areas, such as a sky area, a sidewalk area, and the like.
In this embodiment, the specific area in the two-dimensional image corresponding to the target area is identified, the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, and the specific point cloud is removed from the three-dimensional point cloud. When an object in the three-dimensional point cloud is subsequently labeled, the influence of the specific point cloud on the object to be labeled is thereby avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified. Removing the specific point cloud from the three-dimensional point cloud therefore improves the accuracy of labeling objects in the three-dimensional point cloud and, in turn, the accuracy of object identification.
The embodiment of the invention provides a point cloud processing method. Fig. 5 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in fig. 5, on the basis of the embodiment shown in fig. 1, the determining a specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image may include:
step S501, taking the point cloud projected to the specific area in the two-dimensional image in the three-dimensional point cloud as a reference point cloud in the three-dimensional point cloud.
For example, according to the projection point of point i in the two-dimensional image, it is determined whether the projection point lies in the ground area of the two-dimensional image; if so, point i is marked as a reference point. Similarly, the other reference points in the three-dimensional point cloud can be determined, and the reference points are collected together to form the reference point cloud.
Step S502, determining specific point clouds in the three-dimensional point clouds according to reference point clouds in the three-dimensional point clouds.
Optionally, the determining a specific point cloud in the three-dimensional point cloud according to a reference point cloud in the three-dimensional point cloud includes: determining a target plane according to a reference point cloud in the three-dimensional point cloud; and taking the point in the three-dimensional point cloud, the distance of which relative to the target plane is less than a distance threshold value, as the specific point cloud in the three-dimensional point cloud.
In some embodiments, the determining a target plane from a reference point cloud of the three-dimensional point cloud comprises: and determining the target plane by adopting a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud.
For example, a plane fitting algorithm is used to fit a plane to the reference point cloud in the three-dimensional point cloud, and the fitted plane is recorded as the target plane. The distance between each three-dimensional point in the three-dimensional point cloud shown in fig. 4 and the target plane is then calculated; when the distance is smaller than a distance threshold, the three-dimensional point is taken as part of the ground point cloud, and when the distance is not smaller than the distance threshold, the three-dimensional point is determined not to belong to the ground point cloud.
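A minimal sketch of steps S501 and S502 is given below, reusing the projections from the project_points sketch and the boolean mask from the ground_mask sketch earlier; the least-squares fit stands in for the plane fitting algorithm, and the 0.2 threshold is an illustrative assumption.

```python
import numpy as np

def ground_flags(points_lidar, proj, depth, mask, dist_threshold=0.2):
    """proj: Nx2 projections from project_points; mask: HxW boolean image
    mask of the ground area; returns one boolean flag per 3D point."""
    h, w = mask.shape
    u = proj[:, 0].astype(int)
    v = proj[:, 1].astype(int)
    valid = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Step S501: points projecting into the ground area form the reference cloud.
    is_ref = np.zeros(len(points_lidar), dtype=bool)
    is_ref[valid] = mask[v[valid], u[valid]]
    ref = points_lidar[is_ref]
    # Step S502: fit the target plane z = p0*x + p1*y + p2 by least squares.
    A = np.c_[ref[:, :2], np.ones(len(ref))]
    p, *_ = np.linalg.lstsq(A, ref[:, 2], rcond=None)
    # Distance of every point to the plane p0*x + p1*y - z + p2 = 0.
    dist = np.abs(points_lidar[:, :2] @ p[:2] - points_lidar[:, 2] + p[2])
    dist /= np.sqrt(p[0] ** 2 + p[1] ** 2 + 1.0)
    return dist < dist_threshold  # True for points treated as ground point cloud
```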
Optionally, the determining the target plane by using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud includes: removing abnormal points in the reference point cloud to obtain a corrected reference point cloud; and determining the target plane by adopting a plane fitting algorithm according to the corrected reference point cloud.
For example, in order to improve the accuracy of the plane fitting, before the plane fitting is performed on the reference point cloud in the three-dimensional point cloud, whether abnormal points exist in the reference point cloud may further be detected; if so, the abnormal points are removed to obtain a corrected reference point cloud. For example, if the reference point cloud includes 10 three-dimensional points of which 3 are abnormal points, the 3 abnormal points are removed, the remaining 7 three-dimensional points are retained, and the target plane is determined by the plane fitting algorithm from these 7 points.
Optionally, before removing the abnormal point in the reference point cloud, the method further includes: determining a reference plane comprising partial points according to the partial points in the reference point cloud; and determining abnormal points in the reference point cloud according to the distance between the points except the partial points in the reference point cloud and the reference plane.
For example, one achievable way to detect abnormal points in the reference point cloud is as follows: randomly extract a plurality of three-dimensional points, for example 3, from the 10 three-dimensional points included in the reference point cloud; these 3 points determine a plane, which is taken as the reference plane. Then calculate the distances of the remaining 7 three-dimensional points to the reference plane; if the distances from most of the 7 points to the reference plane are greater than a preset distance, it is determined that an abnormal point exists among the 3 extracted points. By randomly extracting 3 three-dimensional points from the 10 points multiple times, the abnormal points in the reference point cloud can be determined.
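The repeated random sampling just described is essentially a RANSAC-style consensus search. The sketch below implements it under illustrative assumptions (100 iterations, a 0.1 inlier distance, and a fixed random seed).

```python
import numpy as np

def remove_outliers(ref, n_iters=100, inlier_dist=0.1):
    """ref: Nx3 reference point cloud; returns the corrected reference point
    cloud, i.e. the largest subset consistent with a single plane."""
    best_inliers = np.zeros(len(ref), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        sample = ref[rng.choice(len(ref), 3, replace=False)]
        # Reference plane through the 3 sampled points: unit normal n, offset d.
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # collinear sample, no unique plane: resample
            continue
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(ref @ n + d) < inlier_dist
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return ref[best_inliers]       # points outside the consensus are the outliers
```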
In the embodiment, abnormal points of a reference point cloud in the three-dimensional point cloud are removed to correct the reference point cloud, a target plane is determined by adopting a plane fitting algorithm according to the corrected reference point cloud, and points in the three-dimensional point cloud, the distance of which relative to the target plane is smaller than a distance threshold value, are used as the ground point cloud in the three-dimensional point cloud, so that the detection precision of the ground point cloud is improved.
The embodiment of the invention provides a point cloud processing method. Fig. 6 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in fig. 6, the method in this embodiment may include:
Step S601, acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud comprising the target area.
Fig. 7 shows a two-dimensional image of an intersection acquired by a shooting device during the driving process of the vehicle 21, and fig. 8 shows a three-dimensional point cloud of the intersection detected by a detection device.
Step S602, determining a specific area in the two-dimensional image.
Optionally, the specific area is a ground area. The method and principle for determining the ground area in the two-dimensional image shown in fig. 7 are consistent with the above embodiments and are not described again here.
Step S603, removing the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image.
Optionally, the specific point cloud is a ground point cloud.
According to the three-dimensional point cloud shown in fig. 8 and the ground area in the two-dimensional image shown in fig. 7, the ground point cloud in the three-dimensional point cloud can be determined, and the specific process and principle are consistent with those of the above embodiments, and are not described herein again.
Step S604, performing a labeling operation on the target object in the three-dimensional point cloud.
After the ground point cloud is removed from the three-dimensional point cloud, the labeling operation is performed on the target object in the three-dimensional point cloud.
Optionally, the labeling operation on the target object in the three-dimensional point cloud includes: converting the three-dimensional point cloud into a two-dimensional point cloud; and determining a labeling frame for labeling the target object according to the two-dimensional point cloud.
As shown in fig. 8, each three-dimensional point in the three-dimensional point cloud corresponds to a three-dimensional coordinate. By setting the coordinate value of each three-dimensional point in the Z-axis direction to a fixed value, for example 0, the three-dimensional point cloud can be converted into a two-dimensional point cloud, as shown in fig. 9. The target object is then labeled in the two-dimensional point cloud; one way of labeling is to frame-select the target object in the two-dimensional point cloud to obtain a labeling frame, such as the rectangular frame shown in fig. 9, and the object inside the labeling frame is the labeled target object.
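A minimal sketch of this conversion, assuming the three-dimensional point cloud is held as an N x 3 numpy array:

```python
import numpy as np

def flatten_to_2d(points_3d):
    """Set the Z coordinate of every three-dimensional point to the fixed
    value 0, converting the cloud into a two-dimensional point cloud."""
    points_2d = points_3d.copy()
    points_2d[:, 2] = 0.0
    return points_2d
```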
In some embodiments, the determining, from the two-dimensional point cloud, a labeling box for labeling the target object includes: and determining a labeling frame for labeling the target object according to the selection operation of the user on the target object in the plane where the two-dimensional point cloud is located.
For example, the two-dimensional point cloud shown in fig. 9 is displayed in a display component, where the display component may specifically be a touch screen, and a user may perform a selection operation on a target object to be labeled in the two-dimensional point cloud displayed by the display component.
In some embodiments, the labeling frame can be expanded and contracted in the X-axis and/or Y-axis directions.
As shown in fig. 10, the black dots represent a two-dimensional point cloud, and the user marks the point cloud by frame selection; for example, the dashed box is used as a labeling frame for the two-dimensional point cloud. The labeling frame may be a planar box, and it may be expanded and contracted in the X-axis and/or Y-axis direction, for example scaled simultaneously in the X-axis and the Y-axis according to the distribution of the two-dimensional point cloud, so as to obtain the solid box shown in fig. 10.
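One way to realize this expansion and contraction is to shrink the user-drawn dashed box to the tight X/Y bounds of the two-dimensional points it selects; the sketch below assumes at least one point falls inside the drawn box, and the function name is illustrative.

```python
import numpy as np

def fit_box(points_2d, box):
    """box: (xmin, ymin, xmax, ymax) drawn by the user; returns the box
    scaled in the X-axis and Y-axis to the distribution of the points."""
    xmin, ymin, xmax, ymax = box
    inside = ((points_2d[:, 0] >= xmin) & (points_2d[:, 0] <= xmax) &
              (points_2d[:, 1] >= ymin) & (points_2d[:, 1] <= ymax))
    sel = points_2d[inside]
    # Tight axis-aligned bounds of the selected points (the solid box in fig. 10).
    return sel[:, 0].min(), sel[:, 1].min(), sel[:, 0].max(), sel[:, 1].max()
```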
In some embodiments, after determining a labeling frame for labeling the target object according to the two-dimensional point cloud, the method further includes: determining a corresponding cylindrical frame of the labeling frame in the three-dimensional point cloud.
For example, after a labeling frame for labeling the target object is determined in the two-dimensional point cloud, the labeling frame may further be projected into the three-dimensional point cloud to obtain the corresponding cylindrical frame. The cylindrical frame shown in fig. 11 is the cylindrical frame corresponding to the labeling frame before the ground point cloud is removed from the three-dimensional point cloud, and the cylindrical frame shown in fig. 12 is the one obtained after the ground point cloud is removed. Comparing fig. 11 and fig. 12 shows that, after the ground point cloud is removed, the regions at the bottom of the cylindrical frame and in front of, behind, to the left of, and to the right of it may be empty.
Optionally, the determining a cylindrical frame corresponding to the labeling frame in the three-dimensional point cloud includes: stretching the labeling frame in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the cylindrical frame.
As shown in fig. 13, the black dots represent a three-dimensional point cloud, each of which may be regarded as a point in the three-dimensional coordinate system shown in fig. 13; the two-dimensional point cloud shown in fig. 10 may be regarded as the projection of the three-dimensional point cloud shown in fig. 13 onto the XY plane, and the direction perpendicular to the two-dimensional point cloud is the Z-axis direction shown in fig. 13. The cylindrical frame shown in fig. 13 can be obtained by stretching the selection frame shown in fig. 10, i.e., the labeling frame, along the Z-axis direction in the three-dimensional coordinate system shown in fig. 13.
The labeling frame shown in fig. 9 is a frame in a plane. One way to convert this planar frame into a cylindrical frame in three-dimensional space is to stretch it in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud, which may specifically be the Z-axis direction of the three-dimensional point cloud; stretching the labeling frame shown in fig. 9 along the Z-axis direction yields a cylindrical frame in three-dimensional space, such as the one shown in fig. 11 or fig. 12.
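Under the same assumptions as the earlier sketches, the stretching can be illustrated by taking the Z extent of the cylindrical frame from the three-dimensional points selected by the planar labeling frame:

```python
import numpy as np

def extrude_box(points_3d, box):
    """box: (xmin, ymin, xmax, ymax) labeling frame in the XY plane; returns
    (xmin, ymin, zmin, xmax, ymax, zmax), the cylindrical frame in 3D space."""
    xmin, ymin, xmax, ymax = box
    inside = ((points_3d[:, 0] >= xmin) & (points_3d[:, 0] <= xmax) &
              (points_3d[:, 1] >= ymin) & (points_3d[:, 1] <= ymax))
    z = points_3d[inside, 2]   # Z values of the points inside the frame
    return xmin, ymin, z.min(), xmax, ymax, z.max()
```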
In this embodiment, the specific area in the two-dimensional image corresponding to the target area is identified, the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, and the specific point cloud is removed from the three-dimensional point cloud. When an object in the three-dimensional point cloud is subsequently labeled, the influence of the specific point cloud on the object to be labeled is thereby avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified. Removing the specific point cloud from the three-dimensional point cloud therefore improves the accuracy of labeling objects in the three-dimensional point cloud and, in turn, the accuracy of object identification.
The embodiment of the invention provides a point cloud processing device. The embodiment of the invention does not limit the specific form of the point cloud processing device, which may be a vehicle-mounted terminal, a server, a computer, or the like. Fig. 14 is a structural diagram of a point cloud processing device according to an embodiment of the present invention. As shown in fig. 14, the point cloud processing device 120 includes: a memory 121, a processor 122, a shooting device 123, and a detection device 124. The shooting device 123 is configured to acquire a two-dimensional image corresponding to the target area; the detection device 124 is used for acquiring a three-dimensional point cloud of the target area; the memory 121 is used to store program code; and the processor 122 invokes the program code, which, when executed, performs the following operations: acquiring a two-dimensional image corresponding to the target area and a three-dimensional point cloud including the target area; determining a specific area in the two-dimensional image; determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image; and removing the specific point cloud in the three-dimensional point cloud.
Optionally, when determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image, the processor 122 is specifically configured to: determining a corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image; and determining a specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image.
Optionally, when the processor 122 determines the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image, it is specifically configured to: and projecting the three-dimensional point cloud to the two-dimensional image according to the position relationship between the three-dimensional point cloud acquisition equipment and the two-dimensional image acquisition equipment.
Optionally, when determining the specific point cloud in the three-dimensional point cloud according to the two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image, the processor 122 is specifically configured to: taking the point cloud projected to the specific area in the two-dimensional image in the three-dimensional point cloud as a reference point cloud in the three-dimensional point cloud; and determining a specific point cloud in the three-dimensional point cloud according to a reference point cloud in the three-dimensional point cloud.
Optionally, when determining a specific point cloud in the three-dimensional point cloud according to the reference point cloud in the three-dimensional point cloud, the processor 122 is specifically configured to: determining a target plane according to a reference point cloud in the three-dimensional point cloud; and taking the point in the three-dimensional point cloud, the distance of which relative to the target plane is less than a distance threshold value, as the specific point cloud in the three-dimensional point cloud.
Optionally, when the processor 122 determines the target plane according to the reference point cloud in the three-dimensional point cloud, the processor is specifically configured to: and determining the target plane by adopting a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud.
Optionally, when the processor 122 determines the target plane by using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud, the processor is specifically configured to: removing abnormal points in the reference point cloud to obtain a corrected reference point cloud; and determining the target plane by adopting a plane fitting algorithm according to the corrected reference point cloud.
Optionally, before removing the abnormal point in the reference point cloud, the processor 122 is further configured to: determining a reference plane comprising partial points according to the partial points in the reference point cloud; and determining abnormal points in the reference point cloud according to the distance between the points except the partial points in the reference point cloud and the reference plane.
Optionally, when the processor 122 acquires the two-dimensional image corresponding to the target area and the three-dimensional point cloud including the target area, the processor is specifically configured to: acquiring a two-dimensional image corresponding to a target area around a carrier carrying the shooting equipment and shot by the shooting equipment; and acquiring a three-dimensional point cloud of a target area around the carrier, which is detected by a detection device carried on the carrier.
Optionally, the detection device includes at least one of: binocular stereo cameras, TOF cameras and lidar.
Optionally, the specific area is a ground area, and the specific point cloud is a ground point cloud.
The specific principle and implementation manner of the point cloud processing device provided by the embodiment of the invention are similar to those of the embodiments shown in fig. 1 and 5, and are not described again here.
In this embodiment, the specific area in the two-dimensional image corresponding to the target area is identified, the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, and the specific point cloud is removed from the three-dimensional point cloud. When an object in the three-dimensional point cloud is subsequently labeled, the influence of the specific point cloud on the object to be labeled is thereby avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified. Removing the specific point cloud from the three-dimensional point cloud therefore improves the accuracy of labeling objects in the three-dimensional point cloud and, in turn, the accuracy of object identification.
The embodiment of the invention provides a point cloud processing device. The embodiment of the invention does not limit the specific form of the point cloud processing device, which may be a vehicle-mounted terminal, a server, a computer, or the like. Fig. 15 is a structural diagram of a point cloud processing device according to an embodiment of the present invention. As shown in fig. 15, the point cloud processing device 150 includes: a memory 151, a processor 152, and a display component 153. The display component 153 is used for displaying a two-dimensional image and/or a three-dimensional point cloud; the memory 151 is used to store program code; the processor 152 invokes the program code, which, when executed, performs the following operations: acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area; determining a specific area in the two-dimensional image; removing a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image; and performing a labeling operation on the target object in the three-dimensional point cloud.
Optionally, the point cloud processing device 150 may further include a communication interface, through which the processor 152 receives the two-dimensional image and the three-dimensional point cloud.
Optionally, when performing the labeling operation on the target object in the three-dimensional point cloud, the processor 152 is specifically configured to: converting the three-dimensional point cloud into a two-dimensional point cloud; and determining a labeling frame for labeling the target object according to the two-dimensional point cloud.
Optionally, when determining a labeling frame for labeling the target object according to the two-dimensional point cloud, the processor 152 is specifically configured to: determining the labeling frame according to the selection operation of the user on the target object in the plane where the two-dimensional point cloud is located.
Optionally, after determining a labeling frame for labeling the target object according to the two-dimensional point cloud, the processor 152 is further configured to: determining a corresponding cylindrical frame of the labeling frame in the three-dimensional point cloud.
Optionally, when determining the corresponding cylindrical frame of the labeling frame in the three-dimensional point cloud, the processor 152 is specifically configured to: stretching the labeling frame in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the cylindrical frame.
Optionally, the labeling frame can be expanded and contracted in the X-axis and/or Y-axis directions.
Optionally, the specific area is a ground area, and the specific point cloud is a ground point cloud.
The specific principle and implementation of the point cloud processing device provided by the embodiment of the invention are similar to those of the embodiment shown in fig. 6, and are not described herein again.
In this embodiment, the specific area in the two-dimensional image corresponding to the target area is identified, the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, and the specific point cloud is removed from the three-dimensional point cloud. When an object in the three-dimensional point cloud is subsequently labeled, the influence of the specific point cloud on the object to be labeled is thereby avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified. Removing the specific point cloud from the three-dimensional point cloud therefore improves the accuracy of labeling objects in the three-dimensional point cloud and, in turn, the accuracy of object identification.
It is understood that the point cloud processing devices provided by the embodiments of the invention can be combined; for example, a point cloud processing device can simultaneously have a memory, a processor, a shooting device, a detection device, and a display component. Its specific form is likewise not limited: it may be a vehicle-mounted terminal, a server, a computer, or the like. The shooting device is used for acquiring a two-dimensional image corresponding to a target area; the detection device is used for acquiring a three-dimensional point cloud of the target area; the display component is used for displaying the two-dimensional image and/or the three-dimensional point cloud. The memory is used for storing program code; the processor calls the program code, and when the program code is executed, the processor performs the operations described in the foregoing embodiments, which are not described again here.
In addition, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the point cloud processing method described in the above embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.