CN110869974A - Point cloud processing method, point cloud processing device and storage medium - Google Patents


Info

Publication number
CN110869974A
CN110869974A
Authority
CN
China
Prior art keywords
point cloud
dimensional
dimensional point
determining
specific
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201880041553.8A
Other languages
Chinese (zh)
Other versions
CN110869974B (en)
Inventor
周游
蔡剑钊
武志远
Current Assignee
Shenzhen Zhuoyu Technology Co ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN110869974A
Application granted
Publication of CN110869974B
Legal status: Active

Classifications

    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G01S 17/89: Lidar systems specially adapted for mapping or imaging
    • G06V 10/10: Image acquisition
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V 20/64: Scenes; scene-specific elements; three-dimensional objects
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/10032: Satellite or aerial image; remote sensing
    • G06T 2207/10044: Radar image
    • G06T 2207/30252: Vehicle exterior; vicinity of vehicle
    • G06V 2201/07: Target detection
    • G06V 2201/08: Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

A point cloud processing method, a point cloud processing device, and a storage medium are provided. The method comprises: acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area (S101); determining a specific region in the two-dimensional image (S102); determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image (S103); and removing the specific point cloud from the three-dimensional point cloud (S104). By identifying the specific region in the two-dimensional image corresponding to the target area, determining the specific point cloud from the three-dimensional point cloud including the target area and the specific region in the two-dimensional image, and removing that specific point cloud, the influence of the specific point cloud on objects to be labeled is avoided when labeling objects in the three-dimensional point cloud, and the specific point cloud is prevented from being mistakenly labeled as an object to be identified. This improves the accuracy of labeling objects in the three-dimensional point cloud and, in turn, the accuracy of identifying them.

Description

Point cloud processing method, point cloud processing device and storage medium
Technical Field
Embodiments of the invention relate to the field of point cloud processing, and in particular to a point cloud processing method, a point cloud processing device, and a storage medium.
Background
Deep learning neural networks are now widely applied, and deep learning depends on the acquisition and labeling of sample data for training. The accuracy and volume of the sample data directly influence the accuracy of the neural network.
For sample data such as a three-dimensional sample point cloud acquired by a laser radar, the target object in the sample point cloud needs to be labeled. However, the labeling result may include not only the target object but also objects other than the target object, so the labeling of the target object is inaccurate, which in turn affects the accuracy with which the neural network identifies the target object.
Disclosure of Invention
Embodiments of the invention provide a point cloud processing method, a point cloud processing device, and a storage medium, which are used to improve the accuracy of labeling an object in a three-dimensional point cloud and the accuracy of identifying the object.
A first aspect of an embodiment of the present invention provides a point cloud processing method, including:
acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud comprising the target area;
determining a specific region in the two-dimensional image;
determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image;
removing the specific point cloud in the three-dimensional point cloud.
A second aspect of an embodiment of the present invention provides a point cloud processing method, including:
acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud comprising the target area;
determining a specific region in the two-dimensional image;
removing a specific point cloud from the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image;
and marking the target object in the three-dimensional point cloud.
A third aspect of embodiments of the present invention provides a point cloud processing apparatus, including: a memory, a processor, a photographing apparatus, and a detecting apparatus;
the shooting equipment is used for acquiring a two-dimensional image corresponding to a target area;
the detection equipment is used for acquiring a three-dimensional point cloud of the target area;
the memory is used for storing program codes;
the processor, invoking the program code, when executed, is configured to:
acquiring a two-dimensional image corresponding to the target area and a three-dimensional point cloud comprising the target area;
determining a specific region in the two-dimensional image;
determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image;
removing the specific point cloud in the three-dimensional point cloud.
A fourth aspect of an embodiment of the present invention is to provide a point cloud processing apparatus, including: a memory, a processor, and a display component;
the display component is used for displaying a two-dimensional image and/or a three-dimensional point cloud;
the memory is used for storing program codes;
the processor, invoking the program code, when executed, is configured to:
acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud comprising the target area;
determining a specific region in the two-dimensional image;
removing a specific point cloud from the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image;
and marking the target object in the three-dimensional point cloud.
A fifth aspect of embodiments of the present invention is to provide a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method according to the first or second aspect.
According to the point cloud processing method, the point cloud processing device, and the storage medium provided by the embodiments, the specific region in the two-dimensional image corresponding to the target area is identified, the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific region in the two-dimensional image, and the specific point cloud is removed from the three-dimensional point cloud. When an object in the three-dimensional point cloud is labeled, the influence of the specific point cloud on the object to be labeled can thus be avoided, and the specific point cloud is prevented from being mistakenly labeled as an object to be identified. Removing the specific point cloud therefore improves the accuracy of labeling objects in the three-dimensional point cloud and, in turn, the accuracy of identifying them.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without inventive effort.
Fig. 1 is a flowchart of a point cloud processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a two-dimensional image provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the invention;
FIG. 5 is a flowchart of a point cloud processing method according to another embodiment of the present invention;
FIG. 6 is a flowchart of a point cloud processing method according to another embodiment of the present invention;
FIG. 7 is a schematic illustration of another two-dimensional image provided by an embodiment of the invention;
FIG. 8 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the invention;
FIG. 9 is a schematic diagram of a two-dimensional point cloud provided by an embodiment of the invention;
FIG. 10 is a schematic diagram illustrating the expansion and contraction of a label box according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the invention;
FIG. 12 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the invention;
FIG. 13 is a schematic diagram illustrating the expansion and contraction of a label box according to an embodiment of the present invention;
FIG. 14 is a block diagram of a point cloud processing apparatus according to an embodiment of the present invention;
fig. 15 is a structural diagram of a point cloud processing apparatus according to an embodiment of the present invention.
Reference numerals:
21: a vehicle; 22: a vehicle; 120: a point cloud processing device;
121: a memory; 122: a processor; 123: a photographing device;
31: a box; 32: a box; 124: a detection device;
150: a point cloud processing device; 151: a memory; 152: a processor;
153: a display component.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Embodiments of the invention provide a point cloud processing method. The method can be applied to vehicles, such as unmanned vehicles or vehicles equipped with an Advanced Driver Assistance System (ADAS). It can be understood that the method may also be applied to an unmanned aerial vehicle, for example one carrying a detection device for acquiring point cloud data. The method can be used, before labeling a target object in a three-dimensional point cloud, to determine the specific point cloud that may interfere with accurately labeling the target object and to remove that specific point cloud from the three-dimensional point cloud. Below, the method is described taking a vehicle as an example.
Fig. 1 is a flowchart of a point cloud processing method according to an embodiment of the present invention. As shown in fig. 1, the method in this embodiment may include:
s101, acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud comprising the target area.
As shown in fig. 2, the vehicle 21 is provided with a photographing apparatus, which may be a digital camera, a video camera, or the like, and a detecting apparatus, which may specifically be a binocular stereo camera, a TOF camera, and/or a laser radar.
Optionally, the obtaining a two-dimensional image corresponding to the target area and a three-dimensional point cloud including the target area includes: acquiring a two-dimensional image corresponding to a target area around a carrier carrying the shooting equipment and shot by the shooting equipment; and acquiring a three-dimensional point cloud of a target area around the carrier, which is detected by a detection device carried on the carrier.
For example, the vehicle 21 is a carrier on which the imaging device and the detection device are mounted, and the relative positional relationship between the imaging device and the detection device on the vehicle 21 may be predetermined. During the running of the vehicle 21, the photographing device collects image information of the surroundings of the vehicle 21 in real time, for example, image information of an area in front of the vehicle 21, which may be a two-dimensional image. Fig. 3 is a schematic diagram of a two-dimensional image of an area in front of the vehicle 21 captured by a capturing device of the vehicle 21, as shown in fig. 3, the two-dimensional image includes a vehicle in front of the vehicle 21, and the vehicle in front of the vehicle 21 may be the vehicle 22 shown in fig. 2.
In addition, while the vehicle 21 is traveling, the detection device detects a three-dimensional point cloud of the objects around the vehicle 21 in real time. The detection device may be a binocular stereo camera, a TOF camera, and/or a laser radar. Taking the laser radar as an example, when a laser beam emitted by the laser radar irradiates the surface of an object, the surface reflects the beam, and the laser radar can determine information such as the direction and distance of the object relative to the radar from the reflected beam. If the emitted laser beam scans along a certain trajectory, for example a 360-degree rotating scan, a large number of laser points are obtained, forming laser point cloud data of the objects, i.e., a three-dimensional point cloud. Fig. 4 shows such a scan by the radar of the vehicle 21, in which the rings of textured lines represent the ground around the vehicle 21.
Note that fig. 3 shows only a two-dimensional image of the area in front of the vehicle 21, whereas, as shown in fig. 4, the radar beam may scan along a certain trajectory, for example a 360-degree rotation, so that the three-dimensional point cloud in fig. 4 covers not only the area in front of the vehicle 21 but also the areas to its right, left, and rear.
And step S102, determining a specific area in the two-dimensional image.
In some embodiments, the particular area is a ground area.
As shown in fig. 3, the area in front of the vehicle 21 includes not only other vehicles but also a ground area, buildings, trees, fences, pedestrians, and the like. In other embodiments, objects such as traffic signs may also be present in front of the vehicle 21, and the bottom of a traffic sign likewise adjoins the ground. When labeling objects such as vehicles and traffic signs in front of the vehicle 21, it is therefore easy to mistakenly label the ground points at the bottom of the vehicle ahead and/or at the bottom of the traffic sign. Consequently, when labeling vehicles, traffic signs, buildings, trees, fences, pedestrians, and the like in a three-dimensional point cloud, the ground point cloud in the three-dimensional point cloud should first be identified and rejected, and the objects on the ground labeled only afterwards.
Before identifying the ground point cloud in the three-dimensional point cloud, this embodiment determines the ground area in the two-dimensional image. One possible implementation is as follows. First, a vehicle in the acquired two-dimensional image, such as the vehicle ahead in fig. 3, is detected by a Convolutional Neural Network (CNN) and marked with a box; for example, the vehicle in front of the vehicle 21 is marked with the box 31. Then, an area below the vehicle ahead that contains no other objects, for example the area 32, is used as a reference road surface area. In some embodiments, the reference road surface area may be a block of a preset size below the box 31 of the vehicle ahead; when the reference road surface area is determined, the corresponding part of the two-dimensional image is assumed to be road surface. Finally, the information of the reference road surface area is input into a Support Vector Machine (SVM) classifier and/or a neural network model for classification prediction, so as to determine the full ground area in the two-dimensional image. The SVM classifier can be trained on a large amount of sample data and can perform linear or nonlinear classification. The sample data may be color information of the reference road surface, such as RGB information; in some embodiments this is the RGB information of the image inside the box 32 below the vehicle box 31 in fig. 3.
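The classification step above can be sketched in simplified form. The snippet below is a minimal numpy stand-in for the SVM/neural-network step described in the text: it models the RGB colors of the reference road patch with a mean and covariance and labels pixels as ground by Mahalanobis distance. The function names and the distance threshold are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def fit_road_color_model(ref_patch):
    """Fit a simple color model (mean + regularized covariance) to the RGB
    pixels of the reference road-surface patch (an HxWx3 array). A stand-in
    for training the SVM classifier on reference-road RGB samples."""
    rgb = ref_patch.reshape(-1, 3).astype(float)
    mean = rgb.mean(axis=0)
    cov = np.cov(rgb, rowvar=False) + 1e-6 * np.eye(3)  # regularize
    return mean, np.linalg.inv(cov)

def classify_ground(image, mean, cov_inv, max_dist=3.0):
    """Label each pixel of an HxWx3 image as ground when its Mahalanobis
    distance to the reference color model is below max_dist.
    Returns an HxW boolean mask."""
    diff = image.reshape(-1, 3).astype(float) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return (np.sqrt(d2) < max_dist).reshape(image.shape[:2])
```

An actual embodiment would substitute a trained SVM or neural network for the color model, but the input (reference-patch RGB) and output (a per-pixel ground mask) are the same.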
When determining the ground area in the two-dimensional image, the method may further include: calculating the horizon and classifying only the area below it. Specifically, the horizon in the two-dimensional image may be calculated from the information of an Inertial Measurement Unit (IMU) mounted on the vehicle, and road surface information can be assumed to exist only below the horizon. As shown in fig. 3, if the upper left corner of fig. 3 is set as the origin of the two-dimensional image and the horizon is the straight line ax + by + c = 0, then the parameters of the horizon in the two-dimensional image are:

r = tan(pitch_angle) * focus_length
a = tan(roll_angle)
b = 1
c = -tan(roll_angle) * image_width / 2 + r * sin(roll_angle) * tan(roll_angle) - image_height / 2 + r * cos(roll_angle)

where pitch_angle denotes the pitch angle output by the IMU, focus_length the focal length of the shooting device, roll_angle the roll angle output by the IMU, image_width the width of the two-dimensional image, and image_height the height of the two-dimensional image.
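Under the stated conventions (image origin at the top-left corner, IMU angles in radians), the horizon parameters can be computed directly from the formulas above; the function name is an illustrative assumption:

```python
import math

def horizon_line(pitch_angle, roll_angle, focus_length, image_width, image_height):
    """Compute the parameters (a, b, c) of the horizon line a*x + b*y + c = 0
    in image coordinates, following the formulas in the text."""
    r = math.tan(pitch_angle) * focus_length
    a = math.tan(roll_angle)
    b = 1.0
    c = (-math.tan(roll_angle) * image_width / 2
         + r * math.sin(roll_angle) * math.tan(roll_angle)
         - image_height / 2
         + r * math.cos(roll_angle))
    return a, b, c
```

For zero pitch and zero roll the parameters reduce to a = 0, b = 1, c = -image_height / 2, i.e. a horizontal line through the center of the image, as expected for a level camera.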
Step S103, determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image.
For example, a specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud as shown in fig. 4 and the ground area in the two-dimensional image as shown in fig. 3. In some embodiments, the particular point cloud is a ground point cloud.
In some embodiments, the determining a particular point cloud of the three-dimensional point clouds from the three-dimensional point cloud and the particular region in the two-dimensional image comprises: determining a corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image; and determining a specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image.
Optionally, the determining a corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image includes: and projecting the three-dimensional point cloud to the two-dimensional image according to the position relationship between the three-dimensional point cloud acquisition equipment and the two-dimensional image acquisition equipment.
In this embodiment, the acquisition device of the three-dimensional point cloud is specifically a detection device such as a laser radar, and the acquisition device of the two-dimensional image is specifically a shooting device such as a digital camera. According to the positional relationship between the detection device and the shooting device, each three-dimensional point in the three-dimensional point cloud shown in fig. 4 can be projected into the two-dimensional image shown in fig. 3. For example, let point i be a three-dimensional point in the three-dimensional point cloud, let its position in the radar coordinate system be denoted P_i^l, and let its position converted into the camera coordinate system be denoted P_i^c. The relationship between P_i^l and P_i^c is shown in formula (1):

P_i^c = R_l^c * P_i^l + t_l^c    (1)

where R_l^c denotes the rotation from the radar coordinate system to the camera coordinate system, and t_l^c denotes the three-dimensional position of the radar in the camera coordinate system, i.e. the translation vector.

The projection point of point i in the two-dimensional image, whose position is denoted p_i(μ, ν), can then be calculated by formulas (2) and (3):

μ = f_x * X_i^c / Z_i^c + c_x    (2)
ν = f_y * Y_i^c / Z_i^c + c_y    (3)

where (X_i^c, Y_i^c, Z_i^c) are the three-dimensional coordinates of point i in the camera coordinate system, f_x and f_y are the focal lengths of the shooting device in pixels, and (c_x, c_y) is the principal point of the image.
Similarly, the projection points of the other three-dimensional points in the three-dimensional point cloud shown in fig. 4 may be determined; each projection point is the corresponding two-dimensional point of that three-dimensional point in the two-dimensional image. The ground point cloud in the three-dimensional point cloud is then determined according to the projection point of each three-dimensional point in the two-dimensional image and the ground area in the two-dimensional image.
For example, according to the projection point of the point i in the two-dimensional image, whether the projection point is in the ground area in the two-dimensional image is determined, and if the projection point is in the ground area in the two-dimensional image, the point i is marked as the reference point. Similarly, other reference points in the three-dimensional point cloud can be determined, and the reference points are collected together to form the reference point cloud. And further, performing plane fitting according to the reference point cloud in the three-dimensional point cloud, and recording the three-dimensional point cloud falling on the plane as ground point cloud.
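The projection and selection steps above can be sketched as follows, assuming the extrinsics (R_l^c, t_l^c) from formula (1) and pinhole intrinsics (f_x, f_y, c_x, c_y) are known, and that the ground area is given as a boolean mask; all function names are illustrative:

```python
import numpy as np

def project_to_image(points_lidar, R, t, fx, fy, cx, cy):
    """Project Nx3 radar points into pixel coordinates following
    formulas (1)-(3): rotate/translate into the camera frame, then
    apply the pinhole model. Returns Nx2 pixels and an in-front mask."""
    pts_cam = points_lidar @ R.T + t        # formula (1)
    z = pts_cam[:, 2]
    in_front = z > 1e-6                     # points behind the camera are invalid
    u = fx * pts_cam[:, 0] / z + cx         # formula (2)
    v = fy * pts_cam[:, 1] / z + cy         # formula (3)
    return np.stack([u, v], axis=1), in_front

def select_reference_points(points_lidar, pixels, in_front, ground_mask):
    """Keep the radar points whose projection falls inside the ground
    area of the image (ground_mask: HxW boolean, True = ground)."""
    h, w = ground_mask.shape
    u = np.round(pixels[:, 0]).astype(int)
    v = np.round(pixels[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(points_lidar), dtype=bool)
    keep[valid] = ground_mask[v[valid], u[valid]]
    return points_lidar[keep]
```

The kept points form the reference point cloud; the subsequent plane fit then extends this set to the full ground point cloud.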
And step S104, removing the specific point cloud in the three-dimensional point cloud.
After removing the ground point cloud in the three-dimensional point cloud shown in fig. 4, the objects on the ground such as vehicles, traffic signs, buildings, trees, fences, pedestrians and the like are marked.
It should be noted that, the present embodiment is schematically illustrated by taking a ground area in a two-dimensional image and a ground point cloud in a three-dimensional point cloud as an example, and in other embodiments, the present embodiment is also applicable to other specific areas, such as a sky area, a sidewalk area, and the like.
By identifying the specific region in the two-dimensional image corresponding to the target area, determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud including the target area and the specific region in the two-dimensional image, and removing the specific point cloud from the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can be avoided when labeling objects in the three-dimensional point cloud, and the specific point cloud is prevented from being mistakenly labeled as an object to be identified. Removing the specific point cloud thus improves the accuracy of labeling objects in the three-dimensional point cloud and, in turn, the accuracy of identifying them.
The embodiment of the invention provides a point cloud processing method. Fig. 5 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in fig. 5, on the basis of the embodiment shown in fig. 1, the determining a specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image may include:
step S501, taking the point cloud projected to the specific area in the two-dimensional image in the three-dimensional point cloud as a reference point cloud in the three-dimensional point cloud.
For example, according to the projection point of the point i in the two-dimensional image, whether the projection point is in the ground area in the two-dimensional image is determined, and if the projection point is in the ground area in the two-dimensional image, the point i is marked as the reference point. Similarly, other reference points in the three-dimensional point cloud can be determined, and the reference points are collected together to form the reference point cloud.
Step S502, determining specific point clouds in the three-dimensional point clouds according to reference point clouds in the three-dimensional point clouds.
Optionally, the determining a specific point cloud in the three-dimensional point cloud according to a reference point cloud in the three-dimensional point cloud includes: determining a target plane according to a reference point cloud in the three-dimensional point cloud; and taking the point in the three-dimensional point cloud, the distance of which relative to the target plane is less than a distance threshold value, as the specific point cloud in the three-dimensional point cloud.
In some embodiments, the determining a target plane from a reference point cloud of the three-dimensional point cloud comprises: and determining the target plane by adopting a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud.
For example, a plane fitting algorithm is used to fit a plane to the reference point cloud in the three-dimensional point cloud, and the fitted plane is recorded as the target plane. The distance between each three-dimensional point in the three-dimensional point cloud shown in fig. 4 and the target plane is then calculated; when the distance is smaller than the distance threshold, the three-dimensional point is taken as part of the ground point cloud, and when the distance is not smaller than the distance threshold, it is determined that the three-dimensional point does not belong to the ground point cloud.
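The plane-fitting and distance-threshold test described above can be sketched as follows. The least-squares fit of z = a·x + b·y + c, the 0.1 threshold, and the function names are illustrative assumptions rather than the claimed algorithm.

```python
# Sketch: fit z = a*x + b*y + c to the reference cloud by least squares,
# then keep points whose distance to that plane is below a threshold.
import math

def fit_plane(ref):
    """Least-squares fit of z = a*x + b*y + c via 3x3 normal equations."""
    sxx = sum(x * x for x, y, z in ref); sxy = sum(x * y for x, y, z in ref)
    syy = sum(y * y for x, y, z in ref); sx = sum(x for x, y, z in ref)
    sy = sum(y for x, y, z in ref);      n = len(ref)
    sxz = sum(x * z for x, y, z in ref); syz = sum(y * z for x, y, z in ref)
    sz = sum(z for x, y, z in ref)
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]
    det = lambda mm: (mm[0][0] * (mm[1][1] * mm[2][2] - mm[1][2] * mm[2][1])
                      - mm[0][1] * (mm[1][0] * mm[2][2] - mm[1][2] * mm[2][0])
                      + mm[0][2] * (mm[1][0] * mm[2][1] - mm[1][1] * mm[2][0]))
    d = det(m)
    def col(i):                      # Cramer's rule for the i-th unknown
        mc = [row[:] for row in m]
        for j in range(3):
            mc[j][i] = r[j]
        return det(mc) / d
    return col(0), col(1), col(2)    # a, b, c

def ground_points(cloud, ref, thresh=0.1):
    """Points of `cloud` closer than `thresh` to the plane fitted on `ref`."""
    a, b, c = fit_plane(ref)
    norm = math.sqrt(a * a + b * b + 1.0)   # plane a*x + b*y - z + c = 0
    return [p for p in cloud
            if abs(a * p[0] + b * p[1] - p[2] + c) / norm < thresh]

ref = [(0, 0, 0.0), (1, 0, 0.0), (0, 1, 0.0), (1, 1, 0.0)]
cloud = ref + [(0.5, 0.5, 1.5)]     # last point lies well above the ground
print(ground_points(cloud, ref))    # the point at height 1.5 is excluded
```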
Optionally, the determining the target plane by using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud includes: removing abnormal points in the reference point cloud to obtain a corrected reference point cloud; and determining the target plane by adopting a plane fitting algorithm according to the corrected reference point cloud.
For example, to improve the accuracy of plane fitting, before performing plane fitting on the reference point cloud in the three-dimensional point cloud, it may further be detected whether abnormal points exist in the reference point cloud; if so, the abnormal points are removed to obtain a corrected reference point cloud. For instance, if the reference point cloud includes 10 three-dimensional points of which 3 are abnormal points, those 3 points are removed, the remaining 7 three-dimensional points are retained, and the target plane is determined by the plane fitting algorithm from the remaining 7 points.
Optionally, before removing the abnormal point in the reference point cloud, the method further includes: determining a reference plane comprising partial points according to the partial points in the reference point cloud; and determining abnormal points in the reference point cloud according to the distance between the points except the partial points in the reference point cloud and the reference plane.
For example, one achievable way to detect outliers in the reference point cloud is as follows: randomly extract several three-dimensional points, for example 3, from the 10 three-dimensional points included in the reference point cloud; the 3 three-dimensional points determine a plane, which is recorded as a reference plane. The distances of the remaining 7 three-dimensional points to the reference plane are then calculated, and if the distances from most of the 7 three-dimensional points to the reference plane are greater than a preset distance, it is determined that an outlier exists among the 3 sampled points. By randomly extracting 3 three-dimensional points from the 10 three-dimensional points multiple times, the outliers in the reference point cloud can be determined.
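The random-sampling test described above closely resembles a random sample consensus (RANSAC) scheme: repeatedly draw 3 points, form their plane, and treat points that are not consistent with the best-supported plane as outliers. The sketch below is an illustration under assumed thresholds and iteration counts; none of these values come from the embodiment.

```python
# RANSAC-style sketch of the outlier test: the plane with the largest
# consensus set is kept, and points outside that set are the outliers.
import random

def plane_from_3(p1, p2, p3):
    """Normal (a, b, c) and offset d of the plane through three points."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def dist(p, n, d):
    norm = sum(c * c for c in n) ** 0.5
    return abs(sum(n[i] * p[i] for i in range(3)) + d) / norm

def find_outliers(ref, thresh=0.05, iters=200, seed=0):
    rng = random.Random(seed)
    best_inliers = set()
    for _ in range(iters):
        sample = rng.sample(range(len(ref)), 3)
        n, d = plane_from_3(*(ref[i] for i in sample))
        if all(abs(c) < 1e-9 for c in n):
            continue                  # degenerate (collinear) sample
        inliers = {i for i, p in enumerate(ref) if dist(p, n, d) < thresh}
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return [ref[i] for i in range(len(ref)) if i not in best_inliers]

ref = [(x * 0.1, y * 0.1, 0.0) for x in range(3) for y in range(3)]
ref.append((0.1, 0.1, 2.0))           # one point far off the ground plane
print(find_outliers(ref))             # -> [(0.1, 0.1, 2.0)]
```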
In the embodiment, abnormal points of a reference point cloud in the three-dimensional point cloud are removed to correct the reference point cloud, a target plane is determined by adopting a plane fitting algorithm according to the corrected reference point cloud, and points in the three-dimensional point cloud, the distance of which relative to the target plane is smaller than a distance threshold value, are used as the ground point cloud in the three-dimensional point cloud, so that the detection precision of the ground point cloud is improved.
The embodiment of the invention provides a point cloud processing method. Fig. 6 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in fig. 6, the method in this embodiment may include:
step S601, acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud comprising the target area.
Fig. 7 shows a two-dimensional image of an intersection acquired by a shooting device during the driving process of the vehicle 21, and fig. 8 shows a three-dimensional point cloud of the intersection detected by a detection device.
Step S602, determining a specific area in the two-dimensional image.
Optionally, the specific area is a ground area. The method and principle for determining the ground area in the two-dimensional image shown in fig. 7 are consistent with the above embodiments and are not described herein again.
Step S603, removing the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image.
Optionally, the specific point cloud is a ground point cloud.
According to the three-dimensional point cloud shown in fig. 8 and the ground area in the two-dimensional image shown in fig. 7, the ground point cloud in the three-dimensional point cloud can be determined, and the specific process and principle are consistent with those of the above embodiments, and are not described herein again.
Step S604, performing a labeling operation on the target object in the three-dimensional point cloud.
After removing the ground point cloud from the three-dimensional point cloud, a labeling operation is performed on the target object in the three-dimensional point cloud.
Optionally, the labeling operation on the target object in the three-dimensional point cloud includes: converting the three-dimensional point cloud into a two-dimensional point cloud; and determining a labeling frame for labeling the target object according to the two-dimensional point cloud.
As shown in fig. 8, each three-dimensional point in the three-dimensional point cloud corresponds to a three-dimensional coordinate. By setting the coordinate value of each three-dimensional point in the Z-axis direction to a fixed value, for example 0, the three-dimensional point cloud can be converted into a two-dimensional point cloud, as shown in fig. 9. The target object is labeled in the two-dimensional point cloud; one way of labeling is to frame-select the target object in the two-dimensional point cloud to obtain a labeling frame, such as the rectangular frame shown in fig. 9, where the object inside the labeling frame is the labeled target object.
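The conversion described above, setting every Z coordinate to a fixed value, can be sketched in a few lines; the function name and sample values are illustrative assumptions.

```python
# Flattening step: fixing Z projects the 3D point cloud onto the XY plane,
# producing the 2D point cloud used for frame selection.
def flatten(cloud, z_value=0.0):
    """Convert a 3D point cloud to its 2D counterpart by fixing Z."""
    return [(x, y, z_value) for x, y, z in cloud]

cloud3d = [(1.0, 2.0, 0.7), (1.1, 2.2, 1.4)]
print(flatten(cloud3d))   # -> [(1.0, 2.0, 0.0), (1.1, 2.2, 0.0)]
```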
In some embodiments, the determining, from the two-dimensional point cloud, a labeling box for labeling the target object includes: and determining a labeling frame for labeling the target object according to the selection operation of the user on the target object in the plane where the two-dimensional point cloud is located.
For example, the two-dimensional point cloud shown in fig. 9 is displayed in a display component, where the display component may specifically be a touch screen, and a user may perform a selection operation on a target object to be labeled in the two-dimensional point cloud displayed by the display component.
In some embodiments, the labeling frame can be expanded and contracted in the X-axis and/or Y-axis direction.
As shown in fig. 10, the black dots represent the two-dimensional point cloud, and the user labels the two-dimensional point cloud by frame selection; for example, the dashed box is used as the labeling frame for labeling the two-dimensional point cloud. The labeling frame may be a planar frame and can be expanded or contracted in the X-axis and/or Y-axis direction; for example, it can be scaled in both the X-axis and the Y-axis according to the distribution of the two-dimensional point cloud to obtain the solid box shown in fig. 10.
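One hypothetical way to realize the X/Y expansion and contraction of the labeling frame is to shrink an initial frame-selection rectangle to the tight axis-aligned bounds of the two-dimensional points it contains. The sketch below is an assumption for illustration, not the claimed implementation.

```python
# Shrink a drag-selected rectangle to the tight bounds of the 2D points
# inside it (the "telescoping" of the labeling frame along X and Y).
def shrink_box(points, box):
    """box = (xmin, ymin, xmax, ymax); return the tight box around the
    points inside it, or the original box if it contains no points."""
    xmin, ymin, xmax, ymax = box
    inside = [(x, y) for x, y in points
              if xmin <= x <= xmax and ymin <= y <= ymax]
    if not inside:
        return box
    xs = [x for x, _ in inside]
    ys = [y for _, y in inside]
    return (min(xs), min(ys), max(xs), max(ys))

pts = [(1.0, 1.0), (2.0, 3.0), (9.0, 9.0)]
print(shrink_box(pts, (0.0, 0.0, 5.0, 5.0)))   # -> (1.0, 1.0, 2.0, 3.0)
```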
In some embodiments, after determining a labeling frame for labeling the target object according to the two-dimensional point cloud, the method further includes: determining a corresponding cylindrical frame of the labeling frame in the three-dimensional point cloud.
For example, after the labeling frame for labeling the target object is determined in the two-dimensional point cloud, the labeling frame may further be projected into the three-dimensional point cloud to obtain the corresponding cylindrical frame in the three-dimensional point cloud. The cylindrical frame shown in fig. 11 is the cylindrical frame corresponding to the determined labeling frame in the three-dimensional point cloud before the ground point cloud is removed; the cylindrical frame shown in fig. 12 is the corresponding cylindrical frame after the ground point cloud is removed. Comparing fig. 11 and fig. 12 shows that, after the ground point cloud is removed, the bottom, front, back, left, and right sides of the cylindrical frame may be empty.
Optionally, the determining a cylindrical frame corresponding to the labeling frame in the three-dimensional point cloud includes: stretching the labeling frame in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the cylindrical frame.
As shown in fig. 13, the black dots represent a three-dimensional point cloud, and each dot may be regarded as a point in the three-dimensional coordinate system shown in fig. 13. The two-dimensional point cloud shown in fig. 10 may be regarded as the projection of the three-dimensional point cloud shown in fig. 13 onto the XY plane, and the direction perpendicular to the two-dimensional point cloud is the Z-axis direction shown in fig. 13. The cylindrical frame shown in fig. 13 can be obtained by stretching the selection frame shown in fig. 10, i.e., the labeling frame, along the Z-axis direction in the three-dimensional coordinate system of fig. 13.
The labeling frame shown in fig. 9 is a frame on a plane. One way to convert this planar frame into a cylindrical frame in three-dimensional space is to stretch the labeling frame in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud, which may specifically be the Z-axis direction of the three-dimensional point cloud; stretching the labeling frame shown in fig. 9 along the Z-axis direction yields the cylindrical frame in three-dimensional space, for example the cylindrical frame shown in fig. 11 or fig. 12.
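The stretching of the planar labeling frame along the Z axis into a cylindrical frame can be sketched as follows, taking the Z extent from the points that fall inside the frame; the function name and the choice of Z extent are illustrative assumptions.

```python
# Extrude a planar labeling frame along Z into a cylindrical (prismatic)
# frame whose height spans the Z values of the enclosed 3D points.
def extrude_box(cloud, box):
    """box = (xmin, ymin, xmax, ymax);
    returns (xmin, ymin, zmin, xmax, ymax, zmax), or None if empty."""
    xmin, ymin, xmax, ymax = box
    zs = [z for x, y, z in cloud
          if xmin <= x <= xmax and ymin <= y <= ymax]
    if not zs:
        return None
    return (xmin, ymin, min(zs), xmax, ymax, max(zs))

cloud = [(1.0, 1.0, 0.2), (1.5, 1.2, 1.8), (9.0, 9.0, 0.5)]
print(extrude_box(cloud, (0.0, 0.0, 2.0, 2.0)))
# -> (0.0, 0.0, 0.2, 2.0, 2.0, 1.8)
```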
By identifying the specific area in the two-dimensional image corresponding to the target area, determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, and removing the specific point cloud from the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can be avoided when objects in the three-dimensional point cloud are labeled, and the specific point cloud is prevented from being mistakenly labeled as an object to be identified. Removing the specific point cloud from the three-dimensional point cloud therefore improves the accuracy of labeling objects in the three-dimensional point cloud and, in turn, the accuracy of object identification.
The embodiment of the invention provides a point cloud processing device. The embodiment does not limit the specific form of the point cloud processing device, which may be a vehicle-mounted terminal, or a device such as a server or a computer. Fig. 14 is a structural diagram of a point cloud processing device according to an embodiment of the present invention. As shown in fig. 14, the point cloud processing device 120 includes: a memory 121, a processor 122, a shooting device 123, and a detection device 124. The shooting device 123 is configured to acquire a two-dimensional image corresponding to the target area; the detection device 124 is used to acquire a three-dimensional point cloud of the target area; the memory 121 is used to store program code; and the processor 122 invokes the program code, which, when executed, performs the following operations: acquiring a two-dimensional image corresponding to the target area and a three-dimensional point cloud including the target area; determining a specific area in the two-dimensional image; determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image; and removing the specific point cloud from the three-dimensional point cloud.
Optionally, when determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image, the processor 122 is specifically configured to: determining a corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image; and determining a specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image.
Optionally, when the processor 122 determines the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image, it is specifically configured to: and projecting the three-dimensional point cloud to the two-dimensional image according to the position relationship between the three-dimensional point cloud acquisition equipment and the two-dimensional image acquisition equipment.
Optionally, when determining the specific point cloud in the three-dimensional point cloud according to the two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image, the processor 122 is specifically configured to: taking the point cloud projected to the specific area in the two-dimensional image in the three-dimensional point cloud as a reference point cloud in the three-dimensional point cloud; and determining a specific point cloud in the three-dimensional point cloud according to a reference point cloud in the three-dimensional point cloud.
Optionally, when determining a specific point cloud in the three-dimensional point cloud according to the reference point cloud in the three-dimensional point cloud, the processor 122 is specifically configured to: determining a target plane according to a reference point cloud in the three-dimensional point cloud; and taking the point in the three-dimensional point cloud, the distance of which relative to the target plane is less than a distance threshold value, as the specific point cloud in the three-dimensional point cloud.
Optionally, when the processor 122 determines the target plane according to the reference point cloud in the three-dimensional point cloud, the processor is specifically configured to: and determining the target plane by adopting a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud.
Optionally, when the processor 122 determines the target plane by using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud, the processor is specifically configured to: removing abnormal points in the reference point cloud to obtain a corrected reference point cloud; and determining the target plane by adopting a plane fitting algorithm according to the corrected reference point cloud.
Optionally, before removing the abnormal point in the reference point cloud, the processor 122 is further configured to: determining a reference plane comprising partial points according to the partial points in the reference point cloud; and determining abnormal points in the reference point cloud according to the distance between the points except the partial points in the reference point cloud and the reference plane.
Optionally, when the processor 122 acquires the two-dimensional image corresponding to the target area and the three-dimensional point cloud including the target area, the processor is specifically configured to: acquiring a two-dimensional image corresponding to a target area around a carrier carrying the shooting equipment and shot by the shooting equipment; and acquiring a three-dimensional point cloud of a target area around the carrier, which is detected by a detection device carried on the carrier.
Optionally, the detection device includes at least one of: binocular stereo cameras, TOF cameras and lidar.
Optionally, the specific area is a ground area, and the specific point cloud is a ground point cloud.
The specific principle and implementation manner of the point cloud processing device provided by the embodiment of the invention are similar to those of the embodiments shown in fig. 1 and 5, and are not described again here.
By identifying the specific area in the two-dimensional image corresponding to the target area, determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, and removing the specific point cloud from the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can be avoided when objects in the three-dimensional point cloud are labeled, and the specific point cloud is prevented from being mistakenly labeled as an object to be identified. Removing the specific point cloud from the three-dimensional point cloud therefore improves the accuracy of labeling objects in the three-dimensional point cloud and, in turn, the accuracy of object identification.
The embodiment of the invention provides a point cloud processing device. The embodiment does not limit the specific form of the point cloud processing device, which may be a vehicle-mounted terminal, or a device such as a server or a computer. Fig. 15 is a structural diagram of a point cloud processing device according to an embodiment of the present invention. As shown in fig. 15, the point cloud processing device 150 includes: a memory 151, a processor 152, and a display component 153. The display component 153 is used for displaying a two-dimensional image and/or a three-dimensional point cloud; the memory 151 is used to store program code; and the processor 152 invokes the program code, which, when executed, performs the following operations: acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area; determining a specific area in the two-dimensional image; removing a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image; and performing a labeling operation on the target object in the three-dimensional point cloud.
Optionally, the point cloud processing device 150 may further include a communication interface, through which the processor 152 receives the two-dimensional image and the three-dimensional point cloud.
Optionally, when performing the labeling operation on the target object in the three-dimensional point cloud, the processor 152 is specifically configured to: converting the three-dimensional point cloud into a two-dimensional point cloud; and determining a labeling frame for labeling the target object according to the two-dimensional point cloud.
Optionally, when determining the labeling frame for labeling the target object according to the two-dimensional point cloud, the processor 152 is specifically configured to: determining the labeling frame according to a selection operation performed by the user on the target object in the plane where the two-dimensional point cloud is located.
Optionally, after determining the labeling frame for labeling the target object according to the two-dimensional point cloud, the processor 152 is further configured to: determining a corresponding cylindrical frame of the labeling frame in the three-dimensional point cloud.
Optionally, when determining the corresponding cylindrical frame of the labeling frame in the three-dimensional point cloud, the processor 152 is specifically configured to: stretching the labeling frame in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the cylindrical frame.
Optionally, the labeling frame can be expanded and contracted in the X-axis and/or Y-axis direction.
Optionally, the specific area is a ground area, and the specific point cloud is a ground point cloud.
The specific principle and implementation of the point cloud processing device provided by the embodiment of the invention are similar to those of the embodiment shown in fig. 6, and are not described herein again.
By identifying the specific area in the two-dimensional image corresponding to the target area, determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, and removing the specific point cloud from the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can be avoided when objects in the three-dimensional point cloud are labeled, and the specific point cloud is prevented from being mistakenly labeled as an object to be identified. Removing the specific point cloud from the three-dimensional point cloud therefore improves the accuracy of labeling objects in the three-dimensional point cloud and, in turn, the accuracy of object identification.
It is understood that the point cloud processing devices provided by the embodiments of the invention can be combined; for example, a point cloud processing device may simultaneously have a memory, a processor, a shooting device, a detection device, and a display component. The form of the device is not limited, and the point cloud processing device may be a vehicle-mounted terminal, a server, a computer, or the like. The shooting device is used for acquiring a two-dimensional image corresponding to a target area; the detection device is used for acquiring a three-dimensional point cloud of the target area; and the display component is used for displaying the two-dimensional image and/or the three-dimensional point cloud. The memory is used for storing program code; the processor calls the program code, and when the program code is executed, the processor performs the operations described in the foregoing embodiments, which are not described herein again.
In addition, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the point cloud processing method described in the above embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (37)

1. A point cloud processing method, comprising:
acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud comprising the target area;
determining a specific region in the two-dimensional image;
determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image;
removing the specific point cloud in the three-dimensional point cloud.
2. The method of claim 1, wherein the determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image comprises:
determining a corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image;
and determining a specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image.
3. The method of claim 2, wherein said determining the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image comprises:
and projecting the three-dimensional point cloud to the two-dimensional image according to the position relationship between the three-dimensional point cloud acquisition equipment and the two-dimensional image acquisition equipment.
4. The method of claim 2 or 3, wherein determining a specific point cloud of the three-dimensional point clouds according to the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image comprises:
taking the point cloud projected to the specific area in the two-dimensional image in the three-dimensional point cloud as a reference point cloud in the three-dimensional point cloud;
and determining a specific point cloud in the three-dimensional point cloud according to a reference point cloud in the three-dimensional point cloud.
5. The method of claim 4, wherein the determining a specific point cloud in the three-dimensional point cloud according to a reference point cloud in the three-dimensional point cloud comprises:
determining a target plane according to a reference point cloud in the three-dimensional point cloud;
and taking the point in the three-dimensional point cloud, the distance of which relative to the target plane is less than a distance threshold value, as the specific point cloud in the three-dimensional point cloud.
6. The method of claim 5, wherein determining a target plane from a reference point cloud of the three-dimensional point cloud comprises:
and determining the target plane by adopting a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud.
7. The method of claim 6, wherein determining the target plane from the reference point cloud in the three-dimensional point cloud using a plane fitting algorithm comprises:
removing abnormal points in the reference point cloud to obtain a corrected reference point cloud;
and determining the target plane by adopting a plane fitting algorithm according to the corrected reference point cloud.
8. The method of claim 7, wherein prior to said removing outliers in said reference point cloud, further comprising:
determining a reference plane comprising partial points according to the partial points in the reference point cloud;
and determining abnormal points in the reference point cloud according to the distance between the points except the partial points in the reference point cloud and the reference plane.
9. The method according to any one of claims 1-8, wherein the obtaining a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area comprises:
acquiring a two-dimensional image corresponding to a target area around a carrier carrying the shooting equipment and shot by the shooting equipment;
and acquiring a three-dimensional point cloud of a target area around the carrier, which is detected by a detection device carried on the carrier.
10. The method of claim 9, wherein the detection device comprises at least one of:
binocular stereo cameras, TOF cameras and lidar.
11. The method of any one of claims 1-10, wherein the specific area is a ground area and the specific point cloud is a ground point cloud.
12. A point cloud processing method, comprising:
acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud comprising the target area;
determining a specific region in the two-dimensional image;
removing the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image;
and performing a labeling operation on the target object in the three-dimensional point cloud.
13. The method of claim 12, wherein the labeling of the target object in the three-dimensional point cloud comprises:
converting the three-dimensional point cloud into a two-dimensional point cloud;
and determining a labeling box for labeling the target object according to the two-dimensional point cloud.
14. The method of claim 13, wherein determining a labeling box for labeling the target object from the two-dimensional point cloud comprises:
and determining the labeling box for labeling the target object according to a user's selection of the target object in the plane of the two-dimensional point cloud.
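One way to turn the user's selection into a labeling box is to enclose the selected points in an axis-aligned rectangle. A minimal sketch assuming NumPy; the (x_min, y_min, x_max, y_max) tuple layout is an assumption, not the patented format:

```python
import numpy as np

def labeling_box(selected_2d):
    """Return the axis-aligned labeling box (x_min, y_min, x_max, y_max)
    enclosing the 2D points selected in the plane of the two-dimensional
    point cloud."""
    x_min, y_min = selected_2d.min(axis=0)
    x_max, y_max = selected_2d.max(axis=0)
    return (x_min, y_min, x_max, y_max)
```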
15. The method of claim 13, wherein, after the determining of the labeling box for labeling the target object according to the two-dimensional point cloud, the method further comprises:
determining a columnar frame corresponding to the labeling box in the three-dimensional point cloud.
16. The method of claim 15, wherein the determining of the columnar frame corresponding to the labeling box in the three-dimensional point cloud comprises:
stretching the labeling box in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the columnar frame.
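The stretching operation of claim 16 can be sketched as extruding the 2D box along the axis perpendicular to the bird's-eye-view plane. This assumes NumPy, z as the height axis, and an (x_min, y_min, x_max, y_max) box layout, all of which are illustrative choices:

```python
import numpy as np

def extrude_box(box_2d, z_min, z_max):
    """Stretch a 2D labeling box along the direction perpendicular to the
    two-dimensional point cloud, returning the 8 corners of the resulting
    columnar frame in the 3D point cloud."""
    x0, y0, x1, y1 = box_2d
    return np.array([(x, y, z)
                     for z in (z_min, z_max)   # bottom face, then top face
                     for x in (x0, x1)
                     for y in (y0, y1)])
```

In practice z_min and z_max can be taken from the height range of the points falling inside the 2D box.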
17. The method of any one of claims 13-16, wherein the labeling box is stretchable along the X-axis and/or Y-axis direction.
18. The method of any one of claims 12-17, wherein the specific area is a ground area and the specific point cloud is a ground point cloud.
19. A point cloud processing apparatus, comprising: a memory, a processor, a photographing apparatus, and a detecting apparatus;
the photographing apparatus is used for acquiring a two-dimensional image corresponding to a target area;
the detecting apparatus is used for acquiring a three-dimensional point cloud of the target area;
the memory is used for storing program code; and the processor invokes the program code and, when the program code is executed, is configured to:
acquiring a two-dimensional image corresponding to the target area and a three-dimensional point cloud comprising the target area;
determining a specific region in the two-dimensional image;
determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image;
removing the specific point cloud from the three-dimensional point cloud.
20. The point cloud processing apparatus of claim 19, wherein, when determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image, the processor is specifically configured to:
determining a corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image;
and determining a specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image.
21. The point cloud processing apparatus of claim 20, wherein, when determining the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image, the processor is specifically configured to:
projecting the three-dimensional point cloud onto the two-dimensional image according to the positional relationship between the device acquiring the three-dimensional point cloud and the device acquiring the two-dimensional image.
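The projection in claim 21 is the usual pinhole mapping combined with the detector-to-camera extrinsics. A sketch assuming NumPy, where R and t denote the rotation and translation between the two devices and K the camera intrinsic matrix (all illustrative names):

```python
import numpy as np

def project_to_image(points_3d, R, t, K):
    """Project an (N, 3) point cloud in the detection device's frame into
    pixel coordinates, using extrinsics (R, t) and intrinsic matrix K."""
    cam = points_3d @ R.T + t          # detector frame -> camera frame
    uvw = cam @ K.T                    # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide -> (u, v)
```

Points whose (u, v) falls inside the specific region of the two-dimensional image form the reference point cloud.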
22. The point cloud processing apparatus of claim 20 or 21, wherein, when determining the specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional point of the three-dimensional point cloud in the two-dimensional image and the specific region in the two-dimensional image, the processor is specifically configured to:
taking the point cloud projected to the specific area in the two-dimensional image in the three-dimensional point cloud as a reference point cloud in the three-dimensional point cloud;
and determining a specific point cloud in the three-dimensional point cloud according to a reference point cloud in the three-dimensional point cloud.
23. The point cloud processing apparatus of claim 22, wherein, when determining the specific point cloud in the three-dimensional point cloud according to the reference point cloud in the three-dimensional point cloud, the processor is specifically configured to:
determining a target plane according to a reference point cloud in the three-dimensional point cloud;
and taking the points in the three-dimensional point cloud whose distance to the target plane is less than a distance threshold as the specific point cloud in the three-dimensional point cloud.
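The thresholding step of claim 23 can be sketched as a point-to-plane distance test; a hedged illustration assuming NumPy and a plane given in the form n . x + d = 0:

```python
import numpy as np

def select_ground(points_3d, normal, d, dist_threshold=0.2):
    """Return the points whose unsigned distance to the target plane
    n . x + d = 0 is below dist_threshold (the 'specific point cloud')."""
    normal = normal / np.linalg.norm(normal)  # ensure a unit normal
    dist = np.abs(points_3d @ normal + d)
    return points_3d[dist < dist_threshold]
```

The complement of this selection is what remains after the specific (e.g. ground) point cloud is removed.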
24. The point cloud processing apparatus of claim 23, wherein, when determining the target plane according to the reference point cloud in the three-dimensional point cloud, the processor is specifically configured to:
and determining the target plane from the reference point cloud in the three-dimensional point cloud by using a plane fitting algorithm.
25. The point cloud processing apparatus of claim 24, wherein, when determining the target plane by using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud, the processor is specifically configured to:
removing outliers from the reference point cloud to obtain a corrected reference point cloud;
and determining the target plane from the corrected reference point cloud by using a plane fitting algorithm.
26. The point cloud processing apparatus of claim 25, wherein, before removing the outliers from the reference point cloud, the processor is further configured to:
determining, according to a subset of the points in the reference point cloud, a reference plane containing the subset;
and determining the outliers in the reference point cloud according to the distances between the points in the reference point cloud other than the subset and the reference plane.
27. The point cloud processing apparatus of any one of claims 19 to 26, wherein the processor, when acquiring the two-dimensional image corresponding to the target area and the three-dimensional point cloud including the target area, is specifically configured to:
acquiring the two-dimensional image of the target area around a carrier, captured by the photographing apparatus carried on the carrier;
and acquiring the three-dimensional point cloud of the target area around the carrier, detected by the detecting apparatus carried on the carrier.
28. The point cloud processing apparatus of claim 27, wherein the detecting apparatus comprises at least one of:
a binocular stereo camera, a time-of-flight (TOF) camera, and a lidar.
29. The point cloud processing apparatus of any of claims 19-28, wherein the specific area is a ground area and the specific point cloud is a ground point cloud.
30. A point cloud processing apparatus, comprising: a memory, a processor, and a display component;
the display component is used for displaying a two-dimensional image and/or a three-dimensional point cloud;
the memory is used for storing program code;
and the processor invokes the program code and, when the program code is executed, is configured to:
acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud comprising the target area;
determining a specific region in the two-dimensional image;
removing a specific point cloud from the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image;
and labeling a target object in the three-dimensional point cloud.
31. The point cloud processing apparatus of claim 30, wherein, when labeling the target object in the three-dimensional point cloud, the processor is specifically configured to:
converting the three-dimensional point cloud into a two-dimensional point cloud;
and determining a labeling box for labeling the target object according to the two-dimensional point cloud.
32. The point cloud processing apparatus of claim 31, wherein, when determining the labeling box for labeling the target object according to the two-dimensional point cloud, the processor is specifically configured to:
determining the labeling box for labeling the target object according to a user's selection of the target object in the plane of the two-dimensional point cloud.
33. The point cloud processing apparatus of claim 31, wherein, after determining the labeling box for labeling the target object according to the two-dimensional point cloud, the processor is further configured to:
determining a columnar frame corresponding to the labeling box in the three-dimensional point cloud.
34. The point cloud processing apparatus of claim 33, wherein, when determining the columnar frame corresponding to the labeling box in the three-dimensional point cloud, the processor is specifically configured to:
stretching the labeling box in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the columnar frame.
35. The point cloud processing apparatus of any of claims 31-34, wherein the labeling box is stretchable along the X-axis and/or Y-axis direction.
36. The point cloud processing apparatus of any of claims 30-35, wherein the particular area is a ground area and the particular point cloud is a ground point cloud.
37. A computer-readable storage medium, having stored thereon a computer program for execution by a processor to perform the method of any one of claims 1-18.
CN201880041553.8A 2018-11-19 2018-11-19 Point cloud processing method, equipment and storage medium Active CN110869974B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/116232 WO2020102944A1 (en) 2018-11-19 2018-11-19 Point cloud processing method and device and storage medium

Publications (2)

Publication Number Publication Date
CN110869974A true CN110869974A (en) 2020-03-06
CN110869974B CN110869974B (en) 2024-06-11

Family

ID=69651835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880041553.8A Active CN110869974B (en) 2018-11-19 2018-11-19 Point cloud processing method, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110869974B (en)
WO (1) WO2020102944A1 (en)


Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111951330B (en) * 2020-08-27 2024-09-13 北京小马慧行科技有限公司 Labeling updating method, labeling updating device, storage medium, processor and carrier
CN112630793B (en) * 2020-11-30 2024-05-17 深圳集智数字科技有限公司 Method and related device for determining plane abnormal point
CN112597946A (en) * 2020-12-29 2021-04-02 广州极飞科技有限公司 Obstacle representation method and device, electronic equipment and readable storage medium
CN112785714A (en) * 2021-01-29 2021-05-11 北京百度网讯科技有限公司 Point cloud instance labeling method and device, electronic equipment and medium
CN112991455B (en) * 2021-02-01 2022-06-17 武汉光庭信息技术股份有限公司 Method and system for fusing and labeling point cloud and picture
CN113808186B (en) * 2021-03-04 2024-01-16 京东鲲鹏(江苏)科技有限公司 Training data generation method and device and electronic equipment
US11796670B2 (en) * 2021-05-20 2023-10-24 Beijing Baidu Netcom Science And Technology Co., Ltd. Radar point cloud data processing method and device, apparatus, and storage medium
CN113344866B (en) * 2021-05-26 2024-06-14 长江水利委员会水文局长江上游水文水资源勘测局 Point cloud comprehensive precision evaluation method
CN113744323B (en) * 2021-08-11 2023-12-19 深圳蓝因机器人科技有限公司 Point cloud data processing method and device
CN114529610B (en) * 2022-01-11 2024-08-13 浙江零跑科技股份有限公司 Millimeter wave radar data labeling method based on RGB-D camera
CN114937144A (en) * 2022-05-17 2022-08-23 苏州思卡智能科技有限公司 Projection classification method for 3D contour of vehicle
CN117670986A (en) * 2022-08-31 2024-03-08 北京三快在线科技有限公司 Point cloud labeling method
CN115661215B (en) * 2022-10-17 2023-06-09 北京四维远见信息技术有限公司 Vehicle-mounted laser point cloud data registration method and device, electronic equipment and medium
CN115830262B (en) * 2023-02-14 2023-05-26 济南市勘察测绘研究院 Live-action three-dimensional model building method and device based on object segmentation

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100209013A1 (en) * 2009-02-13 2010-08-19 Harris Corporation Registration of 3d point cloud data to 2d electro-optical image data
CN104766058A (en) * 2015-03-31 2015-07-08 百度在线网络技术(北京)有限公司 Method and device for obtaining lane line
CN105512646A (en) * 2016-01-19 2016-04-20 腾讯科技(深圳)有限公司 Data processing method, data processing device and terminal
CN105719284A (en) * 2016-01-18 2016-06-29 腾讯科技(深圳)有限公司 Data processing method, device and terminal
CN106248003A (en) * 2016-08-24 2016-12-21 电子科技大学 Method for extracting a vegetation canopy aggregation index from a three-dimensional laser point cloud
CN106407947A (en) * 2016-09-29 2017-02-15 百度在线网络技术(北京)有限公司 Target object recognition method and device applied to unmanned vehicle
US20180074203A1 (en) * 2016-09-12 2018-03-15 Delphi Technologies, Inc. Lidar Object Detection System for Automated Vehicles
CN108389228A (en) * 2018-03-12 2018-08-10 海信集团有限公司 Ground detection method, apparatus and equipment
CN108734120A (en) * 2018-05-15 2018-11-02 百度在线网络技术(北京)有限公司 Method, device and equipment for labeling image and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9972120B2 (en) * 2012-03-22 2018-05-15 University Of Notre Dame Du Lac Systems and methods for geometrically mapping two-dimensional images to three-dimensional surfaces


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383355A (en) * 2020-04-03 2020-07-07 贝壳技术有限公司 Three-dimensional point cloud completion method and device and computer readable storage medium
CN113538605A (en) * 2020-04-13 2021-10-22 财团法人工业技术研究院 Electronic device and method for encoding and decoding point clouds
CN111476902A (en) * 2020-04-27 2020-07-31 北京小马慧行科技有限公司 Method and device for labeling object in 3D point cloud, storage medium and processor
CN111476902B (en) * 2020-04-27 2023-10-24 北京小马慧行科技有限公司 Labeling method and device for objects in 3D point cloud, storage medium and processor
CN111539361B (en) * 2020-04-28 2023-09-05 北京小马慧行科技有限公司 Noise identification method, device, storage medium, processor and carrier
CN111539361A (en) * 2020-04-28 2020-08-14 北京小马慧行科技有限公司 Noise point identification method and device, storage medium, processor and vehicle
CN113597568A (en) * 2020-10-12 2021-11-02 深圳市大疆创新科技有限公司 Data processing method, control device and storage medium
WO2022077190A1 (en) * 2020-10-12 2022-04-21 深圳市大疆创新科技有限公司 Data processing method, control device, and storage medium
CN112528781A (en) * 2020-11-30 2021-03-19 广州文远知行科技有限公司 Obstacle detection method, device, equipment and computer readable storage medium
CN112528781B (en) * 2020-11-30 2024-04-26 广州文远知行科技有限公司 Obstacle detection method, device, equipment and computer readable storage medium
CN112700552A (en) * 2020-12-31 2021-04-23 华为技术有限公司 Three-dimensional object detection method, three-dimensional object detection device, electronic apparatus, and medium
CN112818748A (en) * 2020-12-31 2021-05-18 北京字节跳动网络技术有限公司 Method and device for determining plane in video, storage medium and electronic equipment
CN115546749A (en) * 2022-09-14 2022-12-30 武汉理工大学 Road surface depression detection, cleaning and avoidance method based on camera and laser radar
CN116698842A (en) * 2023-03-31 2023-09-05 中国长江电力股份有限公司 System and processing method of hydraulic hoist piston rod rust detection device
CN116704125A (en) * 2023-06-02 2023-09-05 深圳市宗匠科技有限公司 Mapping method, device, chip and module equipment based on three-dimensional point cloud
CN116704125B (en) * 2023-06-02 2024-05-17 深圳市宗匠科技有限公司 Mapping method, device, chip and module equipment based on three-dimensional point cloud

Also Published As

Publication number Publication date
WO2020102944A1 (en) 2020-05-28
CN110869974B (en) 2024-06-11

Similar Documents

Publication Publication Date Title
CN110869974B (en) Point cloud processing method, equipment and storage medium
US11320833B2 (en) Data processing method, apparatus and terminal
Choi et al. KAIST multi-spectral day/night data set for autonomous and assisted driving
EP3876141A1 (en) Object detection method, related device and computer storage medium
US8867790B2 (en) Object detection device, object detection method, and program
CN111815707B (en) Point cloud determining method, point cloud screening method, point cloud determining device, point cloud screening device and computer equipment
CN110458112B (en) Vehicle detection method and device, computer equipment and readable storage medium
CN105512646B (en) Data processing method, device and terminal
CN111179358A (en) Calibration method, device, equipment and storage medium
US20160232410A1 (en) Vehicle speed detection
US20180293450A1 (en) Object detection apparatus
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
CN109741241B (en) Fisheye image processing method, device, equipment and storage medium
CN109583313B (en) Lane line extraction method, device and storage medium
WO2020258297A1 (en) Image semantic segmentation method, movable platform, and storage medium
KR102167835B1 (en) Apparatus and method of processing image
Gerke Using horizontal and vertical building structure to constrain indirect sensor orientation
CN111976601B (en) Automatic parking method, device, equipment and storage medium
CN114463308B (en) Visual inspection method, device and processing equipment for visual angle photovoltaic module of unmanned aerial vehicle
JP7389729B2 (en) Obstacle detection device, obstacle detection system and obstacle detection method
JP2011170599A (en) Outdoor structure measuring instrument and outdoor structure measuring method
US11715261B2 (en) Method for detecting and modeling of object on surface of road
CN112232275A (en) Obstacle detection method, system, equipment and storage medium based on binocular recognition
CN112907746A (en) Method and device for generating electronic map, electronic equipment and storage medium
JP2017181476A (en) Vehicle location detection device, vehicle location detection method and vehicle location detection-purpose computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240522

Address after: Building 3, Xunmei Science and Technology Plaza, No. 8 Keyuan Road, Science and Technology Park Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057, 1634

Applicant after: Shenzhen Zhuoyu Technology Co.,Ltd.

Country or region after: China

Address before: 518057, 6/F, Shenzhen Industry, Education and Research Building of Hong Kong University of Science and Technology, No. 9 Yuexingdao, South District, High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SZ DJI TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant