WO2020102944A1 - Point cloud processing method, device, and storage medium - Google Patents

Point cloud processing method, device, and storage medium

Info

Publication number
WO2020102944A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2018/116232
Other languages
English (en)
French (fr)
Inventor
周游
蔡剑钊
武志远
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201880041553.8A priority Critical patent/CN110869974B/zh
Priority to PCT/CN2018/116232 priority patent/WO2020102944A1/zh
Publication of WO2020102944A1 publication Critical patent/WO2020102944A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G06T2207/10044 - Radar image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 - Detecting or categorising vehicles

Definitions

  • Embodiments of the present invention relate to the field of point cloud processing, and in particular, to a point cloud processing method, device, and storage medium.
  • Deep learning depends on the collection, labeling, and training of sample data; the accuracy and amount of the sample data directly affect the accuracy of the neural network.
  • When processing sample data, for example a three-dimensional sample point cloud collected by a lidar, the target object in the sample point cloud needs to be labeled. However, the labeling result obtained after labeling the target object may include not only the target object but also objects other than the target object, resulting in inaccurate labeling of the target object and thereby affecting the accuracy with which the neural network identifies the target object.
  • Embodiments of the present invention provide a point cloud processing method, device, and storage medium to improve the accuracy of labeling an object in the three-dimensional point cloud and improve the accuracy of object recognition.
  • A first aspect of the embodiments of the present invention provides a point cloud processing method, including: acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area; determining a specific area in the two-dimensional image; determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image; and removing the specific point cloud from the three-dimensional point cloud.
  • A second aspect of the embodiments of the present invention provides a point cloud processing method, including: acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area; determining a specific area in the two-dimensional image; removing the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image; and performing a labeling operation on a target object in the three-dimensional point cloud.
  • A third aspect of the embodiments of the present invention provides a point cloud processing device, including: a memory, a processor, a shooting device, and a detection device;
  • the shooting device is used to collect a two-dimensional image corresponding to the target area;
  • the detection device is used to collect a three-dimensional point cloud of the target area;
  • the memory is used to store program code;
  • the processor calls the program code and, when the program code is executed, performs the following operations: acquiring the two-dimensional image corresponding to the target area and the three-dimensional point cloud including the target area; determining a specific area in the two-dimensional image; determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image; and removing the specific point cloud from the three-dimensional point cloud.
  • A fourth aspect of the embodiments of the present invention provides a point cloud processing device, including: a memory, a processor, and a display component;
  • the display component is used to display a two-dimensional image and/or a three-dimensional point cloud;
  • the memory is used to store program code;
  • the processor calls the program code and, when the program code is executed, performs the following operations: acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area; determining a specific area in the two-dimensional image; removing the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image; and performing a labeling operation on a target object in the three-dimensional point cloud.
  • a fifth aspect of the embodiments of the present invention is to provide a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the method according to the first aspect or the second aspect.
  • In the point cloud processing method, device, and storage medium provided by these embodiments, a specific area in the two-dimensional image corresponding to the target area is identified, and the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, so that the specific point cloud can be removed from the three-dimensional point cloud.
  • When labeling an object in the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can thus be avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified.
  • Removing the specific point cloud from the three-dimensional point cloud improves the accuracy of labeling objects in the three-dimensional point cloud and thereby the accuracy of recognizing those objects.
  • FIG. 1 is a flowchart of a point cloud processing method provided by an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a two-dimensional image provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the present invention.
  • FIG. 5 is a flowchart of a point cloud processing method according to another embodiment of the present invention.
  • FIG. 6 is a flowchart of a point cloud processing method according to another embodiment of the present invention.
  • FIG. 7 is a schematic diagram of another two-dimensional image provided by an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a two-dimensional point cloud provided by an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of the expansion and contraction of a labeling frame provided by an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of the expansion and contraction of a labeling frame provided by an embodiment of the present invention.
  • FIG. 14 is a structural diagram of a point cloud processing device provided by an embodiment of the present invention.
  • FIG. 15 is a structural diagram of a point cloud processing device provided by an embodiment of the present invention.
  • 21: vehicle; 22: vehicle; 120: point cloud processing device; 121: memory; 122: processor; 123: shooting device; 31: box; 32: box; 124: detection device;
  • 150: point cloud processing device; 151: memory; 152: processor; 153: display component.
  • When a component is said to be "fixed" to another component, it can be directly on the other component or an intervening component may also be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component or intervening components may be present at the same time.
  • An embodiment of the present invention provides a point cloud processing method.
  • The point cloud processing method provided by the embodiments of the present invention may be applied to vehicles, such as unmanned vehicles or vehicles equipped with Advanced Driver Assistance Systems (ADAS). It can be understood that the point cloud processing method can also be applied to drones, such as drones equipped with a detection device for acquiring point cloud data.
  • The point cloud processing method provided by the embodiments of the present invention can be applied before a target object in a 3D point cloud is labeled: the specific point cloud in the 3D point cloud that may affect the accurate labeling of the target object is first determined, and that specific point cloud is removed from the 3D point cloud.
  • the point cloud processing method provided by the embodiment of the present invention will be described below using a vehicle as an example.
  • FIG. 1 is a flowchart of a point cloud processing method according to an embodiment of the present invention. As shown in FIG. 1, the method in this embodiment may include:
  • Step S101 Acquire a two-dimensional image corresponding to the target area and a three-dimensional point cloud including the target area.
  • the vehicle 21 is provided with a shooting device and a detection device.
  • the shooting device may be a digital camera, a video camera, etc.
  • the detection device may specifically be a binocular stereo camera, a TOF camera, and / or a lidar.
  • Optionally, acquiring the two-dimensional image corresponding to the target area and the three-dimensional point cloud including the target area includes: acquiring the two-dimensional image, captured by the shooting device, corresponding to the target area around the carrier on which the shooting device is mounted; and acquiring the three-dimensional point cloud of the target area around the carrier detected by the detection device mounted on the carrier.
  • the vehicle 21 is a carrier equipped with a photographing device and a detection device, and the relative positional relationship of the photographing device and the detection device on the vehicle 21 may be predetermined.
  • the shooting device collects image information of the surrounding environment of the vehicle 21 in real time, for example, collects image information of the area in front of the vehicle 21, and the image information may specifically be a two-dimensional image.
  • FIG. 3 is a schematic diagram of a two-dimensional image of the area in front of the vehicle 21 collected by the shooting device of the vehicle 21. As shown in FIG. 3, the two-dimensional image includes the vehicle in front of the vehicle 21, which may be the vehicle 22 shown in FIG. 2.
  • the detection device detects the three-dimensional point cloud of objects around the vehicle 21 in real time.
  • the detection device may be a binocular stereo camera, a TOF camera and / or a lidar.
  • Taking a lidar as an example, when a laser beam emitted by the lidar irradiates the surface of an object, the surface reflects the beam; based on the reflected laser light, the lidar can determine information such as the orientation and distance of the object relative to the lidar. If the laser beam is scanned along a certain trajectory, for example a 360-degree rotating scan, a large number of laser points are obtained, forming the laser point cloud data of the object, that is, a three-dimensional point cloud.
  • FIG. 4 shows a radar scan of the vehicle 21, in which the concentric texture lines represent the ground around the vehicle 21.
  • FIG. 3 shows a two-dimensional image of the area in front of the vehicle 21.
  • As shown in FIG. 4, the radar beam can be scanned along a certain trajectory, such as a 360-degree rotating scan; therefore, the three-dimensional point cloud shown in FIG. 4 includes not only the area in front of the vehicle 21 but also the areas to its right, to its left, and behind it.
  • Step S102 Determine a specific area in the two-dimensional image.
  • the specific area is a ground area.
  • As shown in FIG. 3, the area in front of the vehicle 21 includes not only other vehicles but also the ground, buildings, trees, fences, pedestrians, and the like. The bottoms of the wheels of the vehicle in front of the vehicle 21 are close to the ground; in other embodiments, there may also be objects such as traffic signs in the area in front of the vehicle 21, whose bottoms are likewise close to the ground. When labeling the vehicle in front, traffic signs, and similar objects, it is easy to mislabel the ground points at the bottom of the vehicle in front and/or at the bottom of the traffic sign. Therefore, before vehicles, traffic signs, buildings, trees, fences, pedestrians, and so on are labeled in the three-dimensional point cloud, the ground point cloud needs to be identified and removed first.
  • Before identifying the ground point cloud in the three-dimensional point cloud, this embodiment first determines the ground area in the two-dimensional image.
  • A feasible implementation is as follows. First, vehicles in the acquired two-dimensional image can be detected by a Convolutional Neural Network (CNN); as shown in FIG. 3, the vehicle in front is detected and marked with a box, for example, the vehicle in front of the vehicle 21 is marked with the box 31.
  • Next, the area below the vehicle in front that contains no other objects, for example the area 32, is used as the reference road surface area. In some implementations, the reference road surface area may be a box of a preset size below the box 31 of the vehicle in front.
  • Once the reference road surface area is determined, the image information corresponding to this partial area of the two-dimensional image is considered to be road surface. Then, the information of the reference road surface area is input into a Support Vector Machine (SVM) classifier and/or a neural network model for classification prediction, so as to determine the ground area in the two-dimensional image.
  • The SVM classifier can be obtained after training with a large amount of sample data and can perform linear or non-linear classification.
  • The sample data may be color information of the reference road surface, such as RGB information; in some implementations, the RGB information is that of the image inside the box 32 below the vehicle box 31 in FIG. 3.
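  • As an illustration of the classification step above, the following is a minimal sketch in Python, assuming scikit-learn and NumPy are available and that per-pixel RGB values are the features; the function names and patch arguments are hypothetical, not taken from the patent.

```python
# Sketch: train an SVM on RGB samples from the reference road surface area
# (e.g. box 32 below the vehicle box 31) and classify every pixel.
import numpy as np
from sklearn.svm import SVC

def train_ground_classifier(reference_patch, non_ground_patch):
    """Fit an SVM on per-pixel RGB samples: positives from the reference
    road surface area, negatives from image regions known not to be road."""
    pos = reference_patch.reshape(-1, 3).astype(np.float32) / 255.0
    neg = non_ground_patch.reshape(-1, 3).astype(np.float32) / 255.0
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    clf = SVC(kernel="rbf")  # non-linear classification; "linear" also works
    return clf.fit(X, y)

def predict_ground_mask(clf, image):
    """Label every pixel of an (H, W, 3) RGB image as ground (1) or not (0)."""
    flat = image.reshape(-1, 3).astype(np.float32) / 255.0
    return clf.predict(flat).reshape(image.shape[:2])
```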
  • Determining the ground area in the two-dimensional image may further include calculating the horizon and restricting the classification to the area below the horizon. Specifically, the horizon in the two-dimensional image can be calculated from the information of the inertial measurement unit (IMU) mounted on the vehicle, and the road surface information is considered to exist only below the horizon. As shown in FIG. 3, if the upper-left corner of FIG. 3 is set as the origin of the two-dimensional image and the horizon line equation is ax + by + c = 0, the parameters of the horizon in the two-dimensional image are:
    r = tan(pitch_angle) * focus_length
    a = tan(roll_angle)
    b = 1
    c = -tan(roll_angle) * image_width / 2 + r * sin(roll_angle) * tan(roll_angle) - image_height / 2 + r * cos(roll_angle)
  • where pitch_angle represents the pitch angle output by the IMU, focus_length represents the focal length of the shooting device, roll_angle represents the roll angle output by the IMU, image_width represents the width of the two-dimensional image, and image_height represents the height of the two-dimensional image.
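  • The horizon parameters above translate directly into code. The sketch below assumes the IMU angles are in radians and that the image origin is at the top-left corner with y growing downward, as stated.

```python
# Sketch: horizon line a*x + b*y + c = 0 from IMU pitch/roll.
import math

def horizon_line(pitch_angle, roll_angle, focus_length, image_width, image_height):
    """Return (a, b, c) of the horizon line in image coordinates."""
    r = math.tan(pitch_angle) * focus_length
    a = math.tan(roll_angle)
    b = 1.0
    c = (-math.tan(roll_angle) * image_width / 2
         + r * math.sin(roll_angle) * math.tan(roll_angle)
         - image_height / 2
         + r * math.cos(roll_angle))
    return a, b, c

def below_horizon(x, y, a, b, c):
    """True if pixel (x, y) lies below the horizon (y grows downward)."""
    return a * x + b * y + c > 0
```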
  • Step S103 Determine a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image.
  • a specific point cloud in the three-dimensional point cloud is determined.
  • the specific point cloud is a ground point cloud.
  • In some embodiments, determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image includes: determining the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image; and determining the specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image.
  • Optionally, determining the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image includes: projecting the three-dimensional point cloud onto the two-dimensional image according to the positional relationship between the acquisition device of the three-dimensional point cloud and the acquisition device of the two-dimensional image.
  • the acquisition device of the three-dimensional point cloud is specifically a detection device such as a lidar
  • the acquisition device of a two-dimensional image is specifically a shooting device such as a digital camera.
  • According to the positional relationship between the detection device and the shooting device, each three-dimensional point in the three-dimensional point cloud shown in FIG. 4 can be projected into the two-dimensional image shown in FIG. 3.
  • For example, let point i denote a three-dimensional point in the three-dimensional point cloud, let P_i^radar denote its position in the radar coordinate system, and let P_i^cam denote its position converted into the camera coordinate system. The relationship between P_i^radar and P_i^cam is shown in formula (1):
    P_i^cam = R * P_i^radar + T    (1)
    where R denotes the rotation from the radar to the camera, and T denotes the three-dimensional position of the radar in the camera coordinate system, i.e., the translation vector.
  • The projection point of point i in the two-dimensional image can then be calculated by formulas (2) and (3), with the position of the projection point in the two-dimensional image recorded as p_i(μ, ν); written in the standard pinhole form, with f_x and f_y the focal lengths of the shooting device in pixels and (c_x, c_y) its principal point:
    μ = f_x * x_i^cam / z_i^cam + c_x    (2)
    ν = f_y * y_i^cam / z_i^cam + c_y    (3)
    where (x_i^cam, y_i^cam, z_i^cam) denotes the three-dimensional coordinates of point i in the camera coordinate system.
  • In the same way, the projection points in the two-dimensional image of the three-dimensional points other than point i in the three-dimensional point cloud shown in FIG. 4 can be determined; each projection point is the corresponding two-dimensional point of that three-dimensional point in the two-dimensional image.
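  • A minimal sketch of formulas (1) to (3), assuming R (a 3x3 rotation) and T (a 3-vector) are the radar-to-camera extrinsics and K is the 3x3 intrinsic matrix of the shooting device; the variable names are illustrative.

```python
# Sketch: project lidar points into the image plane.
import numpy as np

def project_points(points_radar, R, T, K):
    """Project N radar-frame points (N, 3) into pixel coordinates.
    Returns the (M, 2) pixels and the mask of points in front of the camera."""
    pts_cam = points_radar @ R.T + T            # formula (1): radar -> camera
    in_front = pts_cam[:, 2] > 0                # discard points behind the camera
    pts_cam = pts_cam[in_front]
    u = K[0, 0] * pts_cam[:, 0] / pts_cam[:, 2] + K[0, 2]   # formula (2)
    v = K[1, 1] * pts_cam[:, 1] / pts_cam[:, 2] + K[1, 2]   # formula (3)
    return np.stack([u, v], axis=1), in_front
```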
  • According to the projection point of each three-dimensional point in the two-dimensional image and the ground area in the two-dimensional image, the ground point cloud in the three-dimensional point cloud can be determined.
  • For example, according to the projection point of point i in the two-dimensional image, it is determined whether the projection point is within the ground area of the two-dimensional image; if it is, point i is recorded as a reference point.
  • other reference points in the three-dimensional point cloud can be determined, and these reference points are grouped together to form the reference point cloud.
  • Further, plane fitting is performed according to the reference point cloud in the three-dimensional point cloud, and the three-dimensional points falling on the fitted plane are recorded as the ground point cloud.
  • Step S104 Remove the specific point cloud from the three-dimensional point cloud.
  • After removing the ground point cloud from the three-dimensional point cloud shown in FIG. 4, objects on the ground such as vehicles, traffic signs, buildings, trees, fences, and pedestrians are then labeled.
  • It should be noted that this embodiment takes the ground area in the two-dimensional image and the ground point cloud in the three-dimensional point cloud as an illustrative example; in other embodiments, the same applies to other specific areas, for example, the sky area or the sidewalk area.
  • In this embodiment, a specific area in the two-dimensional image corresponding to the target area is identified, and the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, so that the specific point cloud can be removed from the three-dimensional point cloud. When labeling an object in the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can thus be avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified. Removing the specific point cloud from the three-dimensional point cloud improves the accuracy of labeling objects in the three-dimensional point cloud and thereby the accuracy of recognizing those objects.
  • FIG. 5 is a flowchart of a point cloud processing method according to another embodiment of the present invention.
  • As shown in FIG. 5, on the basis of the embodiment shown in FIG. 1, determining the specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image may include:
  • Step S501 Use a point cloud projected from the three-dimensional point cloud to the specific area in the two-dimensional image as a reference point cloud in the three-dimensional point cloud.
  • For example, according to the projection point of point i in the two-dimensional image, it is determined whether the projection point is within the ground area of the two-dimensional image; if it is, point i is recorded as a reference point.
  • other reference points in the three-dimensional point cloud can be determined, and these reference points are grouped together to form the reference point cloud.
  • Step S502 Determine a specific point cloud in the three-dimensional point cloud according to the reference point cloud in the three-dimensional point cloud.
  • the determining a specific point cloud in the three-dimensional point cloud according to the reference point cloud in the three-dimensional point cloud includes: determining a target plane based on the reference point cloud in the three-dimensional point cloud; A point in the three-dimensional point cloud whose distance to the target plane is less than a distance threshold is used as a specific point cloud in the three-dimensional point cloud.
  • the determining the target plane according to the reference point cloud in the three-dimensional point cloud includes: determining the target plane using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud.
  • For example, a plane fitting algorithm is used to perform plane fitting on the reference point cloud in the three-dimensional point cloud, and the fitted plane is recorded as the target plane. The distance between each three-dimensional point in the three-dimensional point cloud shown in FIG. 4 and the target plane is then calculated: when the distance is less than the distance threshold, the three-dimensional point can be taken as part of the ground point cloud; when the distance is greater than the distance threshold, the three-dimensional point is not a ground point.
  • Optionally, determining the target plane using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud includes: removing abnormal points from the reference point cloud to obtain a corrected reference point cloud; and determining the target plane using a plane fitting algorithm according to the corrected reference point cloud.
  • For example, suppose the reference point cloud in the three-dimensional point cloud includes 10 three-dimensional points, 3 of which are abnormal points. After removing the 3 abnormal points, 7 three-dimensional points remain, and a plane fitting algorithm is used on these remaining 7 points to determine the target plane.
  • Optionally, before removing the abnormal points in the reference point cloud, the method further includes: determining a reference plane including some of the points according to those points in the reference point cloud; and determining the abnormal points in the reference point cloud according to the distances of the remaining points of the reference point cloud relative to the reference plane.
  • For example, one feasible way to detect the abnormal points in the reference point cloud is to randomly sample several three-dimensional points, for example 3, from the 10 three-dimensional points included in the reference point cloud; these 3 points determine a plane, recorded as the reference plane. The distances of the remaining 7 of the 10 three-dimensional points relative to the reference plane are then calculated; if most of the 7 points are farther from the reference plane than a preset distance, there is an abnormal point among the 3 sampled points.
  • By repeatedly and randomly sampling 3 points from the 10 three-dimensional points, the abnormal points in the reference point cloud can be determined.
  • This embodiment corrects the reference point cloud by removing the abnormal points of the reference point cloud in the three-dimensional point cloud. Based on the corrected reference point cloud, a plane fitting algorithm is used to determine the target plane, and the points in the three-dimensional point cloud whose distance to the target plane is less than the distance threshold are taken as the ground point cloud, which improves the detection accuracy of the ground point cloud.
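  • The random-sampling and refitting procedure described above resembles a RANSAC-style loop; the following sketch reads it that way. The iteration count, the SVD-based fit, and the default threshold are illustrative choices, not values prescribed by the patent.

```python
# Sketch: fit the target plane to the reference point cloud while tolerating
# abnormal points, then extract the ground points from the full cloud.
import numpy as np

def fit_plane(points):
    """Least-squares plane through (N, 3) points: unit normal n and offset d
    such that n . p + d = 0 for points p on the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n @ centroid

def ground_points(cloud, reference, dist_thresh=0.2, iters=100):
    """Return the boolean mask of ground points in cloud (N, 3)."""
    rng = np.random.default_rng(0)
    best_inliers, best_mask = 0, None
    for _ in range(iters):
        sample = reference[rng.choice(len(reference), 3, replace=False)]
        n, d = fit_plane(sample)
        mask = np.abs(reference @ n + d) < dist_thresh
        if mask.sum() > best_inliers:
            best_inliers, best_mask = mask.sum(), mask
    n, d = fit_plane(reference[best_mask])        # refit on corrected reference cloud
    return np.abs(cloud @ n + d) < dist_thresh    # distance-threshold test
```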
  • An embodiment of the present invention provides a point cloud processing method.
  • 6 is a flowchart of a point cloud processing method according to another embodiment of the present invention. As shown in FIG. 6, the method in this embodiment may include:
  • Step S601 Acquire a two-dimensional image corresponding to the target area and a three-dimensional point cloud including the target area.
  • FIG. 7 shows a two-dimensional image of an intersection collected by the shooting device while the vehicle 21 is traveling.
  • FIG. 8 is a three-dimensional point cloud of the intersection detected by the detection device.
  • Step S602 Determine a specific area in the two-dimensional image.
  • the specific area is a ground area.
  • the method and principle for determining the ground area in the two-dimensional image shown in FIG. 7 are consistent with the above embodiment, and will not be repeated here.
  • Step S603 Remove the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image.
  • the specific point cloud is a ground point cloud.
  • the ground point cloud in the three-dimensional point cloud can be determined.
  • The specific processes and principles are consistent with the above embodiments and are not repeated here.
  • Step S604 Perform a labeling operation on the target object in the three-dimensional point cloud.
  • the target object in the three-dimensional point cloud is labeled.
  • the labeling operation of the target object in the three-dimensional point cloud includes: converting the three-dimensional point cloud into a two-dimensional point cloud; and determining to label the target according to the two-dimensional point cloud The callout frame of the object.
  • As shown in FIG. 8, each three-dimensional point in the three-dimensional point cloud corresponds to a three-dimensional coordinate. By setting the coordinate value of each three-dimensional point in the Z-axis direction to a fixed value, for example 0, the three-dimensional point cloud can be converted into a two-dimensional point cloud, as shown in FIG. 9.
  • One way to annotate the target object in the two-dimensional point cloud is to frame-select the target object to obtain an annotation frame, such as the rectangular frame shown in FIG. 9; the target object inside the annotation frame is the labeled target object.
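  • The conversion itself is straightforward in practice; a tiny sketch, assuming the cloud is an (N, 3) NumPy array:

```python
# Sketch: flatten the 3D cloud into the 2D top-down view of FIG. 9
# by setting every Z value to the fixed value 0.
import numpy as np

def flatten_to_2d(cloud):
    """cloud: (N, 3) array; returns the (N, 2) bird's-eye-view point cloud."""
    flat = cloud.copy()
    flat[:, 2] = 0.0          # set every Z coordinate to the fixed value 0
    return flat[:, :2]        # keep only X and Y for display and annotation
```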
  • In some embodiments, determining the labeling frame for labeling the target object according to the two-dimensional point cloud includes: determining the labeling frame for labeling the target object according to the user's selection operation on the target object in the plane where the two-dimensional point cloud is located.
  • a two-dimensional point cloud as shown in FIG. 9 is displayed in a display component.
  • the display component may specifically be a touch screen, and the user may select a target object to be marked in the two-dimensional point cloud displayed by the display component.
  • This embodiment does not limit the specific selection operation, which may be frame selection, point selection, or a sliding operation; according to the user's selection operation, the labeling frame for labeling the target object is determined, for example, the rectangular frame shown in FIG. 9.
  • the callout frame can expand and contract in the X-axis and / or Y-axis directions.
  • As shown in FIG. 10, the black dots represent the two-dimensional point cloud, and the user marks the two-dimensional point cloud by frame selection; for example, a dotted frame is used as the labeling frame for the two-dimensional point cloud. The labeling frame may be a planar frame that can expand and contract in the X-axis and/or Y-axis directions; for example, according to the distribution of the two-dimensional point cloud, it is scaled simultaneously along the X-axis and Y-axis to obtain the solid-line frame shown in FIG. 10.
  • the method further includes: determining a corresponding columnar frame of the label frame in the three-dimensional point cloud.
  • For example, after the labeling frame is determined in the two-dimensional point cloud, the labeling frame can further be projected into the three-dimensional point cloud to obtain the corresponding columnar frame of the labeling frame in the three-dimensional point cloud.
  • the columnar frame shown in FIG. 11 is the columnar frame corresponding to the label frame determined in the three-dimensional point cloud before the ground point cloud in the three-dimensional point cloud is removed.
  • the columnar frame shown in FIG. 12 is a columnar frame corresponding to the label frame determined in the three-dimensional point cloud after the ground point cloud in the three-dimensional point cloud is removed. Comparing Fig. 11 and Fig. 12, it can be seen that after removing the ground point cloud in the 3D point cloud, the bottom, front, back, left, and right of the columnar frame may all become empty.
  • Optionally, determining the corresponding columnar frame of the labeling frame in the three-dimensional point cloud includes: stretching the labeling frame in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the columnar frame.
  • As shown in FIG. 13, the black dots represent the three-dimensional point cloud. Each three-dimensional point can be regarded as a point in the three-dimensional coordinate system shown in FIG. 13, and the two-dimensional point cloud shown in FIG. 10 can be regarded as the projection of the three-dimensional point cloud shown in FIG. 13 onto the XY plane; the direction perpendicular to the two-dimensional point cloud is the Z-axis direction shown in FIG. 13. Stretching the selection frame of FIG. 10, i.e., the labeling frame, along the Z-axis direction in the three-dimensional coordinate system shown in FIG. 13 yields the columnar frame shown in FIG. 13.
  • The annotation frame shown in FIG. 9 is a frame on a plane. One way to convert a frame on a plane into a columnar frame in three-dimensional space is to stretch the annotation frame shown in FIG. 9 in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud; this direction may specifically be the Z-axis direction of the three-dimensional point cloud. That is, stretching the annotation frame shown in FIG. 9 along the Z-axis direction of the three-dimensional point cloud yields a columnar frame in three-dimensional space, for example, the columnar frame shown in FIG. 11 or FIG. 12.
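  • As a sketch of this stretching step, the following extrudes the planar frame along the Z-axis; taking the Z limits from the points that fall inside the 2D frame is one plausible choice, not something the patent specifies.

```python
# Sketch: turn a planar annotation frame into a columnar frame in 3D.
import numpy as np

def columnar_frame(cloud, xy_box):
    """cloud: (N, 3) array; xy_box: (xmin, ymin, xmax, ymax) in the flattened
    view. Returns (xmin, ymin, zmin, xmax, ymax, zmax); assumes the frame
    contains at least one point."""
    xmin, ymin, xmax, ymax = xy_box
    inside = ((cloud[:, 0] >= xmin) & (cloud[:, 0] <= xmax) &
              (cloud[:, 1] >= ymin) & (cloud[:, 1] <= ymax))
    z = cloud[inside, 2]
    return (xmin, ymin, z.min(), xmax, ymax, z.max())
```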
  • In this embodiment, a specific area in the two-dimensional image corresponding to the target area is identified, and the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, so that the specific point cloud can be removed from the three-dimensional point cloud. When labeling an object in the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can thus be avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified. Removing the specific point cloud from the three-dimensional point cloud improves the accuracy of labeling objects in the three-dimensional point cloud and thereby the accuracy of recognizing those objects.
  • An embodiment of the present invention provides a point cloud processing device.
  • the embodiment of the present invention does not limit the specific form of the point cloud processing device.
  • the point cloud processing device may be a vehicle-mounted terminal, or may be a server, a computer, and other devices.
  • FIG. 14 is a structural diagram of a point cloud processing device according to an embodiment of the present invention. As shown in FIG. 14, the point cloud processing device 120 includes: a memory 121, a processor 122, a shooting device 123, and a detection device 124. The shooting device 123 is used to collect the two-dimensional image corresponding to the target area; the detection device 124 is used to collect the three-dimensional point cloud of the target area; the memory 121 is used to store program code; the processor 122 calls the program code and, when the program code is executed, performs the following operations: acquiring the two-dimensional image corresponding to the target area and the three-dimensional point cloud including the target area; determining a specific area in the two-dimensional image; determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image; and removing the specific point cloud from the three-dimensional point cloud.
  • Optionally, when the processor 122 determines the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image, it is specifically used to: determine the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image; and determine the specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image.
  • Optionally, when the processor 122 determines the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image, it is specifically used to: project the three-dimensional point cloud onto the two-dimensional image according to the positional relationship between the acquisition device of the three-dimensional point cloud and the acquisition device of the two-dimensional image.
  • Optionally, when the processor 122 determines the specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image, it is specifically used to: take the point cloud projected from the three-dimensional point cloud onto the specific area in the two-dimensional image as the reference point cloud in the three-dimensional point cloud; and determine the specific point cloud in the three-dimensional point cloud according to the reference point cloud in the three-dimensional point cloud.
  • Optionally, when the processor 122 determines the specific point cloud in the three-dimensional point cloud according to the reference point cloud in the three-dimensional point cloud, it is specifically used to: determine the target plane according to the reference point cloud in the three-dimensional point cloud; and take the points in the three-dimensional point cloud whose distance to the target plane is less than the distance threshold as the specific point cloud in the three-dimensional point cloud.
  • Optionally, when the processor 122 determines the target plane according to the reference point cloud in the three-dimensional point cloud, it is specifically used to: determine the target plane using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud.
  • Optionally, when the processor 122 determines the target plane using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud, it is specifically used to: remove the abnormal points in the reference point cloud to obtain a corrected reference point cloud; and determine the target plane using a plane fitting algorithm based on the corrected reference point cloud.
  • Optionally, before removing the abnormal points in the reference point cloud, the processor 122 is also used to: determine a reference plane including some of the points according to those points in the reference point cloud; and determine the abnormal points in the reference point cloud according to the distances of the points of the reference point cloud other than those points relative to the reference plane.
  • Optionally, when the processor 122 acquires the two-dimensional image corresponding to the target area and the three-dimensional point cloud including the target area, it is specifically used to: acquire the two-dimensional image, captured by the shooting device, corresponding to the target area around the carrier on which the shooting device is mounted; and acquire the three-dimensional point cloud of the target area around the carrier detected by the detection device mounted on the carrier.
  • the detection device includes at least one of the following: a binocular stereo camera, a TOF camera, and a lidar.
  • the specific area is a ground area
  • the specific point cloud is a ground point cloud
  • In this embodiment, a specific area in the two-dimensional image corresponding to the target area is identified, and the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, so that the specific point cloud can be removed from the three-dimensional point cloud. When labeling an object in the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can thus be avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified. Removing the specific point cloud from the three-dimensional point cloud improves the accuracy of labeling objects in the three-dimensional point cloud and thereby the accuracy of recognizing those objects.
  • An embodiment of the present invention provides a point cloud processing device.
  • the embodiment of the present invention does not limit the specific form of the point cloud processing device.
  • the point cloud processing device may be a vehicle-mounted terminal, or may be a server, a computer, and other devices.
  • FIG. 15 is a structural diagram of a point cloud processing device provided by an embodiment of the present invention. As shown in FIG. 15, the point cloud processing device 150 includes: a memory 151, a processor 152, and a display component 153. The display component 153 is used to display the two-dimensional image and/or the three-dimensional point cloud; the memory 151 is used to store program code; the processor 152 calls the program code and, when the program code is executed, performs the following operations: acquiring a two-dimensional image corresponding to the target area and a three-dimensional point cloud including the target area; determining a specific area in the two-dimensional image; removing the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image; and performing a labeling operation on the target object in the three-dimensional point cloud.
  • Optionally, the point cloud processing device 150 may further include a communication interface, through which the processor 152 receives the two-dimensional image and the three-dimensional point cloud.
  • Optionally, when the processor 152 performs the labeling operation on the target object in the three-dimensional point cloud, it is specifically used to: convert the three-dimensional point cloud into a two-dimensional point cloud; and determine, according to the two-dimensional point cloud, the labeling frame for labeling the target object.
  • Optionally, when the processor 152 determines the labeling frame for labeling the target object according to the two-dimensional point cloud, it is specifically used to: determine the labeling frame for labeling the target object according to the user's selection operation on the target object in the plane where the two-dimensional point cloud is located.
  • Optionally, after the processor 152 determines the labeling frame for labeling the target object according to the two-dimensional point cloud, it is further used to: determine the corresponding columnar frame of the labeling frame in the three-dimensional point cloud.
  • Optionally, when the processor 152 determines the corresponding columnar frame of the labeling frame in the three-dimensional point cloud, it is specifically used to: stretch the labeling frame in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the columnar frame.
  • the labeling frame may expand and contract in the X-axis and / or Y-axis directions.
  • the specific area is a ground area
  • the specific point cloud is a ground point cloud
  • In this embodiment, a specific area in the two-dimensional image corresponding to the target area is identified, and the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, so that the specific point cloud can be removed from the three-dimensional point cloud. When labeling an object in the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can thus be avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified. Removing the specific point cloud from the three-dimensional point cloud improves the accuracy of labeling objects in the three-dimensional point cloud and thereby the accuracy of recognizing those objects.
  • It can be understood that the point cloud processing devices provided by the embodiments of the present invention may be combined; for example, a single device may have a memory, a processor, a shooting device, a detection device, and a display component at the same time.
  • Its form is not limited: the point cloud processing device may be a vehicle-mounted terminal, or may be a server, a computer, or another device.
  • the shooting device is used to collect the two-dimensional image corresponding to the target area;
  • the detection device is used to collect the three-dimensional point cloud of the target area;
  • the display component is used to display the two-dimensional image and / or the three-dimensional point cloud.
  • the memory is used to store the program code; the processor calls the program code, and when the program code is executed, the operations performed by the processor are as described in the foregoing embodiment, and are not repeated here.
  • an embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the point cloud processing method described in the foregoing embodiment.
  • the disclosed device and method may be implemented in other ways.
  • The device embodiments described above are only schematic. For example, the division of the units is only a division of logical functions; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium.
  • The above software functional units are stored in a storage medium and include several instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute some of the steps of the methods described in the embodiments of the present invention.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A point cloud processing method, device, and storage medium. The method includes: acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area (S101); determining a specific area in the two-dimensional image (S102); determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image (S103); and removing the specific point cloud from the three-dimensional point cloud (S104). By identifying the specific area in the two-dimensional image corresponding to the target area and determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, the method removes the specific point cloud from the three-dimensional point cloud. When an object in the three-dimensional point cloud is labeled, the influence of the specific point cloud on the object to be labeled can thus be avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified, which improves the accuracy of labeling objects in the three-dimensional point cloud and thereby the accuracy of recognizing those objects.

Description

Point cloud processing method, device, and storage medium

Technical Field

Embodiments of the present invention relate to the field of point cloud processing, and in particular, to a point cloud processing method, device, and storage medium.

Background

Deep learning neural networks are currently widely used. Deep learning depends on the collection, labeling, and training of sample data, and the accuracy and amount of the sample data directly affect the accuracy of the neural network.

When processing sample data, for example a three-dimensional sample point cloud collected by a lidar, the target object in the sample point cloud needs to be labeled. However, the labeling result obtained after labeling the target object may include not only the target object but also objects other than the target object, resulting in inaccurate labeling of the target object and thereby affecting the accuracy with which the neural network identifies the target object.

Summary of the Invention

Embodiments of the present invention provide a point cloud processing method, device, and storage medium, so as to improve the accuracy of labeling objects in a three-dimensional point cloud and improve the accuracy of recognizing those objects.

A first aspect of the embodiments of the present invention provides a point cloud processing method, including:

acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area;

determining a specific area in the two-dimensional image;

determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image;

removing the specific point cloud from the three-dimensional point cloud.

A second aspect of the embodiments of the present invention provides a point cloud processing method, including:

acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area;

determining a specific area in the two-dimensional image;

removing the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image;

performing a labeling operation on a target object in the three-dimensional point cloud.

A third aspect of the embodiments of the present invention provides a point cloud processing device, including: a memory, a processor, a shooting device, and a detection device;

the shooting device is used to collect a two-dimensional image corresponding to a target area;

the detection device is used to collect a three-dimensional point cloud of the target area;

the memory is used to store program code;

the processor calls the program code and, when the program code is executed, performs the following operations:

acquiring the two-dimensional image corresponding to the target area and the three-dimensional point cloud including the target area;

determining a specific area in the two-dimensional image;

determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image;

removing the specific point cloud from the three-dimensional point cloud.

A fourth aspect of the embodiments of the present invention provides a point cloud processing device, including: a memory, a processor, and a display component;

the display component is used to display a two-dimensional image and/or a three-dimensional point cloud;

the memory is used to store program code;

the processor calls the program code and, when the program code is executed, performs the following operations:

acquiring a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area;

determining a specific area in the two-dimensional image;

removing the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image;

performing a labeling operation on a target object in the three-dimensional point cloud.

A fifth aspect of the embodiments of the present invention provides a computer-readable storage medium on which a computer program is stored; the computer program is executed by a processor to implement the method according to the first aspect or the second aspect.

In the point cloud processing method, device, and storage medium provided by these embodiments, a specific area in the two-dimensional image corresponding to the target area is identified, and the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, so that the specific point cloud can be removed from the three-dimensional point cloud. When labeling an object in the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can thus be avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified. Removing the specific point cloud from the three-dimensional point cloud improves the accuracy of labeling objects in the three-dimensional point cloud and thereby the accuracy of recognizing those objects.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a flowchart of a point cloud processing method provided by an embodiment of the present invention;

FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present invention;

FIG. 3 is a schematic diagram of a two-dimensional image provided by an embodiment of the present invention;

FIG. 4 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the present invention;

FIG. 5 is a flowchart of a point cloud processing method provided by another embodiment of the present invention;

FIG. 6 is a flowchart of a point cloud processing method provided by another embodiment of the present invention;

FIG. 7 is a schematic diagram of another two-dimensional image provided by an embodiment of the present invention;

FIG. 8 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the present invention;

FIG. 9 is a schematic diagram of a two-dimensional point cloud provided by an embodiment of the present invention;

FIG. 10 is a schematic diagram of the stretching of a labeling frame provided by an embodiment of the present invention;

FIG. 11 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the present invention;

FIG. 12 is a schematic diagram of a three-dimensional point cloud provided by an embodiment of the present invention;

FIG. 13 is a schematic diagram of the stretching of a labeling frame provided by an embodiment of the present invention;

FIG. 14 is a structural diagram of a point cloud processing device provided by an embodiment of the present invention;

FIG. 15 is a structural diagram of a point cloud processing device provided by an embodiment of the present invention.

Reference numerals:

21: vehicle;         22: vehicle;      120: point cloud processing device;

121: memory;         122: processor;   123: shooting device;

31: box;             32: box;          124: detection device;

150: point cloud processing device;    151: memory;    152: processor;

153: display component.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

It should be noted that when a component is said to be "fixed" to another component, it can be directly on the other component or an intervening component may also be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component or intervening components may be present at the same time.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terms used herein in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.

Some embodiments of the present invention are described in detail below with reference to the drawings. In the absence of conflict, the following embodiments and the features in the embodiments can be combined with each other.

An embodiment of the present invention provides a point cloud processing method. The point cloud processing method provided by the embodiments of the present invention can be applied to vehicles, such as unmanned vehicles or vehicles equipped with Advanced Driver Assistance Systems (ADAS). It can be understood that the point cloud processing method can also be applied to unmanned aerial vehicles, for example, unmanned aerial vehicles equipped with a detection device for acquiring point cloud data. The point cloud processing method provided by the embodiments of the present invention can be applied before a target object in a three-dimensional point cloud is labeled: the specific point cloud in the three-dimensional point cloud that may affect the accurate labeling of the target object is first determined, and that specific point cloud is removed from the three-dimensional point cloud. The point cloud processing method provided by the embodiments of the present invention is described below using a vehicle as an example.

FIG. 1 is a flowchart of a point cloud processing method provided by an embodiment of the present invention. As shown in FIG. 1, the method in this embodiment may include:

Step S101: Acquire a two-dimensional image corresponding to a target area and a three-dimensional point cloud including the target area.

As shown in FIG. 2, the vehicle 21 is provided with a shooting device and a detection device. The shooting device may be a digital camera, a video camera, or the like, and the detection device may specifically be a binocular stereo camera, a TOF camera, and/or a lidar.

Optionally, acquiring the two-dimensional image corresponding to the target area and the three-dimensional point cloud including the target area includes: acquiring the two-dimensional image, captured by the shooting device, corresponding to the target area around the carrier on which the shooting device is mounted; and acquiring the three-dimensional point cloud of the target area around the carrier detected by the detection device mounted on the carrier.

For example, the vehicle 21 is a carrier equipped with a shooting device and a detection device, and the relative positional relationship of the shooting device and the detection device on the vehicle 21 may be predetermined. While the vehicle 21 is traveling, the shooting device collects image information of the surroundings of the vehicle 21 in real time, for example, image information of the area in front of the vehicle 21; the image information may specifically be a two-dimensional image. FIG. 3 is a schematic diagram of a two-dimensional image of the area in front of the vehicle 21 collected by the shooting device of the vehicle 21. As shown in FIG. 3, the two-dimensional image includes the vehicle in front of the vehicle 21, which may be the vehicle 22 shown in FIG. 2.

In addition, while the vehicle 21 is traveling, the detection device detects the three-dimensional point cloud of objects around the vehicle 21 in real time. The detection device may be a binocular stereo camera, a TOF camera, and/or a lidar. Taking a lidar as an example, when a laser beam emitted by the lidar irradiates the surface of an object, the surface reflects the beam; based on the reflected laser light, the lidar can determine information such as the orientation and distance of the object relative to the lidar. If the laser beam emitted by the lidar is scanned along a certain trajectory, for example a 360-degree rotating scan, a large number of laser points are obtained, forming the laser point cloud data of the object, that is, a three-dimensional point cloud. FIG. 4 shows a radar scan of the vehicle 21, in which the concentric texture lines represent the ground around the vehicle 21.

Optionally, FIG. 3 shows a two-dimensional image of the area in front of the vehicle 21. As shown in FIG. 4, the radar beam can be scanned along a certain trajectory, for example a 360-degree rotating scan; therefore, the three-dimensional point cloud shown in FIG. 4 includes not only the area in front of the vehicle 21 but also the areas to its right, to its left, and behind it.

Step S102: Determine a specific area in the two-dimensional image.

In some embodiments, the specific area is a ground area.

As shown in FIG. 3, the area in front of the vehicle 21 includes not only other vehicles but also the ground, buildings, trees, fences, pedestrians, and the like. The bottoms of the wheels of the vehicle in front of the vehicle 21 are close to the ground; in other embodiments, there may also be objects such as traffic signs in the area in front of the vehicle 21, whose bottoms are likewise close to the ground. When labeling the vehicle in front, traffic signs, and similar objects, it is easy to mislabel the ground points at the bottom of the vehicle in front and/or at the bottom of the traffic sign. Therefore, before vehicles, traffic signs, buildings, trees, fences, pedestrians, and so on are labeled in the three-dimensional point cloud, the ground point cloud in the three-dimensional point cloud needs to be identified and removed; only then are the objects on the ground labeled.

Before identifying the ground point cloud in the three-dimensional point cloud, this embodiment first determines the ground area in the two-dimensional image. A feasible implementation is as follows. First, vehicles in the acquired two-dimensional image can be detected by a Convolutional Neural Network (CNN); as shown in FIG. 3, the vehicle in front is detected and marked with a box, for example, the vehicle in front of the vehicle 21 is marked with the box 31. Next, the area below the vehicle in front that contains no other objects, for example the area 32, is used as the reference road surface area; in some implementations, the reference road surface area may be a box of a preset size below the box 31 of the vehicle in front. Once the reference road surface area is determined, the image information corresponding to this partial area of the two-dimensional image is considered to be road surface. Then, the information of the reference road surface area is input into a Support Vector Machine (SVM) classifier and/or a neural network model for classification prediction, so as to determine the ground area in the two-dimensional image. The SVM classifier can be obtained after training with a large amount of sample data and can perform linear or non-linear classification. The sample data may be color information of the reference road surface, such as RGB information; in some implementations, the RGB information is that of the image inside the box 32 below the vehicle box 31 in FIG. 3.

Determining the ground area in the two-dimensional image may further include calculating the horizon and restricting the classification to the area below the horizon. Specifically, the horizon in the two-dimensional image can be calculated from the information of the inertial measurement unit (IMU) mounted on the vehicle, and the road surface information is considered to exist only below the horizon. As shown in FIG. 3, if the upper-left corner of FIG. 3 is set as the origin of the two-dimensional image and the horizon line equation is ax + by + c = 0, the parameters of the horizon in the two-dimensional image are:

r = tan(pitch_angle) * focus_length

a = tan(roll_angle)

b = 1

c = -tan(roll_angle) * image_width / 2 + r * sin(roll_angle) * tan(roll_angle) - image_height / 2 + r * cos(roll_angle)

where pitch_angle denotes the pitch angle output by the IMU, focus_length denotes the focal length of the shooting device, roll_angle denotes the roll angle output by the IMU, image_width denotes the width of the two-dimensional image, and image_height denotes the height of the two-dimensional image.

Step S103: Determine a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image.

For example, the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud shown in FIG. 4 and the ground area in the two-dimensional image shown in FIG. 3. In some embodiments, the specific point cloud is a ground point cloud.

In some embodiments, determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific area in the two-dimensional image includes: determining the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image; and determining the specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image.

Optionally, determining the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image includes: projecting the three-dimensional point cloud onto the two-dimensional image according to the positional relationship between the acquisition device of the three-dimensional point cloud and the acquisition device of the two-dimensional image.

In this embodiment, the acquisition device of the three-dimensional point cloud is specifically a detection device such as a lidar, and the acquisition device of the two-dimensional image is specifically a shooting device such as a digital camera. According to the positional relationship between the detection device and the shooting device, each three-dimensional point in the three-dimensional point cloud shown in FIG. 4 can be projected into the two-dimensional image shown in FIG. 3. For example, let point i denote a three-dimensional point in the three-dimensional point cloud, let P_i^radar denote its position in the radar coordinate system, and let P_i^cam denote its position converted into the camera coordinate system. The relationship between P_i^radar and P_i^cam is shown in the following formula (1):

P_i^cam = R * P_i^radar + T    (1)

where R denotes the rotation from the radar to the camera, and T denotes the three-dimensional position of the radar in the camera coordinate system, i.e., the translation vector.

The projection point of point i in the two-dimensional image can be calculated by the following formulas (2) and (3), with the position of the projection point in the two-dimensional image recorded as p_i(μ, ν); written in the standard pinhole form, with f_x and f_y the focal lengths of the shooting device in pixels and (c_x, c_y) its principal point:

μ = f_x * x_i^cam / z_i^cam + c_x    (2)

ν = f_y * y_i^cam / z_i^cam + c_y    (3)

where (x_i^cam, y_i^cam, z_i^cam) denotes the three-dimensional coordinates of point i in the camera coordinate system.

In the same way, the projection points in the two-dimensional image of the three-dimensional points other than point i in the three-dimensional point cloud shown in FIG. 4 can be determined; each projection point is the corresponding two-dimensional point of that three-dimensional point in the two-dimensional image. According to the projection point of each three-dimensional point in the two-dimensional image and the ground area in the two-dimensional image, the ground point cloud in the three-dimensional point cloud can be determined.

For example, according to the projection point of point i in the two-dimensional image, it is determined whether the projection point is within the ground area of the two-dimensional image; if it is, point i is recorded as a reference point. In the same way, the other reference points in the three-dimensional point cloud can be determined, and these reference points together constitute the reference point cloud. Further, plane fitting is performed according to the reference point cloud in the three-dimensional point cloud, and the three-dimensional points falling on the fitted plane are recorded as the ground point cloud.

Step S104: Remove the specific point cloud from the three-dimensional point cloud.

After removing the ground point cloud from the three-dimensional point cloud shown in FIG. 4, objects on the ground such as vehicles, traffic signs, buildings, trees, fences, and pedestrians are then labeled.

It should be noted that this embodiment takes the ground area in the two-dimensional image and the ground point cloud in the three-dimensional point cloud as an illustrative example; in other embodiments, the same applies to other specific areas, for example, the sky area or the sidewalk area.

In this embodiment, a specific area in the two-dimensional image corresponding to the target area is identified, and the specific point cloud in the three-dimensional point cloud is determined according to the three-dimensional point cloud including the target area and the specific area in the two-dimensional image, so that the specific point cloud can be removed from the three-dimensional point cloud. When labeling an object in the three-dimensional point cloud, the influence of the specific point cloud on the object to be labeled can thus be avoided, and the specific point cloud is prevented from being mistakenly labeled as the object to be identified. Removing the specific point cloud from the three-dimensional point cloud improves the accuracy of labeling objects in the three-dimensional point cloud and thereby the accuracy of recognizing those objects.
An embodiment of the present invention provides a point cloud processing method. FIG. 5 is a flowchart of a point cloud processing method provided by another embodiment of the present invention. As shown in FIG. 5, on the basis of the embodiment shown in FIG. 1, determining the specific point cloud in the three-dimensional point cloud according to the corresponding two-dimensional points of the three-dimensional point cloud in the two-dimensional image and the specific area in the two-dimensional image may include:

Step S501: Take the point cloud projected from the three-dimensional point cloud onto the specific area in the two-dimensional image as the reference point cloud in the three-dimensional point cloud.

For example, according to the projection point of point i in the two-dimensional image, it is determined whether the projection point is within the ground area of the two-dimensional image; if it is, point i is recorded as a reference point. In the same way, the other reference points in the three-dimensional point cloud can be determined, and these reference points together constitute the reference point cloud.

Step S502: Determine the specific point cloud in the three-dimensional point cloud according to the reference point cloud in the three-dimensional point cloud.

Optionally, determining the specific point cloud in the three-dimensional point cloud according to the reference point cloud in the three-dimensional point cloud includes: determining a target plane according to the reference point cloud in the three-dimensional point cloud; and taking the points in the three-dimensional point cloud whose distance to the target plane is less than a distance threshold as the specific point cloud in the three-dimensional point cloud.

In some embodiments, determining the target plane according to the reference point cloud in the three-dimensional point cloud includes: determining the target plane using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud.

For example, a plane fitting algorithm is used to perform plane fitting on the reference point cloud in the three-dimensional point cloud, and the fitted plane is recorded as the target plane. The distance between each three-dimensional point in the three-dimensional point cloud shown in FIG. 4 and the target plane is calculated; when the distance is less than the distance threshold, the three-dimensional point can be taken as part of the ground point cloud, and when the distance is greater than the distance threshold, the three-dimensional point is not a ground point.

Optionally, determining the target plane using a plane fitting algorithm according to the reference point cloud in the three-dimensional point cloud includes: removing abnormal points from the reference point cloud to obtain a corrected reference point cloud; and determining the target plane using a plane fitting algorithm according to the corrected reference point cloud.

For example, to improve the accuracy of the plane fitting, before performing plane fitting on the reference point cloud in the three-dimensional point cloud, it can also be checked whether the reference point cloud contains abnormal points; if so, the abnormal points are removed from the reference point cloud to obtain the corrected reference point cloud. For example, if the reference point cloud in the three-dimensional point cloud includes 10 three-dimensional points, 3 of which are abnormal points, the 3 abnormal points are removed and 7 three-dimensional points remain; according to these remaining 7 points, a plane fitting algorithm is used to determine the target plane.

Optionally, before removing the abnormal points in the reference point cloud, the method further includes: determining a reference plane including some of the points according to those points in the reference point cloud; and determining the abnormal points in the reference point cloud according to the distances of the points of the reference point cloud other than those points relative to the reference plane.

For example, one feasible way to detect the abnormal points in the reference point cloud is as follows: randomly sample several three-dimensional points, for example 3, from the 10 three-dimensional points included in the reference point cloud; these 3 points determine a plane, which is recorded as the reference plane. Then calculate the distances of the remaining 7 of the 10 three-dimensional points relative to the reference plane; if most of the 7 points are farther from the reference plane than a preset distance, there is an abnormal point among the 3 sampled points. By repeatedly and randomly sampling 3 points from the 10 three-dimensional points, the abnormal points in the reference point cloud can be determined.

This embodiment corrects the reference point cloud by removing the abnormal points of the reference point cloud in the three-dimensional point cloud. Based on the corrected reference point cloud, a plane fitting algorithm is used to determine the target plane, and the points in the three-dimensional point cloud whose distance to the target plane is less than the distance threshold are taken as the ground point cloud, which improves the detection accuracy of the ground point cloud.
本发明实施例提供一种点云处理方法。图6为本发明另一实施例提供的点云处理方法的流程图。如图6所示,本实施例中的方法,可以包括:
步骤S601、获取目标区域对应的二维图像、以及包括所述目标区域的三维点云。
如图7所示为车辆21在行驶的过程中,拍摄设备采集的一个路口的二维图像,图8为探测设备检测到的该路口的三维点云。
步骤S602、确定所述二维图像中的特定区域。
可选的,所述特定区域为地面区域。在如图7所示的二维图像中确定 地面区域的方法和原理均与上述实施例一致,此处不再赘述。
步骤S603、根据所述三维点云以及所述二维图像中的所述特定区域,去除所述三维点云中的所述特定点云。
可选的,所述特定点云为地面点云。
根据如图8所示的三维点云以及如图7所示的二维图像中的地面区域,可确定出该三维点云中的地面点云,具体过程和原理均与上述实施例一致,此处不再赘述。
步骤S604、对所述三维点云中的目标物体进行标注操作。
在去除三维点云的地面点云后,对该三维点云中的目标物体进行标注操作。
可选的,所述对所述三维点云中的目标物体进行标注操作,包括:将所述三维点云转换为二维点云;根据所述二维点云,确定用于标注所述目标物体的标注框。
如图8所示,三维点云中的每个三维点对应有一个三维坐标,将三维点云中的每个三维点在Z轴方向的坐标值设置为一个固定值,例如,将三维点云中的每个三维点在Z轴方向的坐标值设置为0,即可将三维点云转换为二维点云,该二维点云具体如图9所示。在该二维点云中对目标物体进行标注,一种标注的方式是,在该二维点云中对该目标物体进行框选,得到标注框,例如图9所示的矩形框,该标注框中的目标物体即为被标注的目标物体。
在一些实施例中,所述根据所述二维点云,确定用于标注所述目标物体的标注框,包括:根据用户在所述二维点云所在的平面中对所述目标物体的选择操作,确定用于标注所述目标物体的标注框。
例如,将如图9所示的二维点云显示在显示组件中,该显示组件具体可以是触摸屏,用户可以在该显示组件显示的二维点云中对待标注的目标物体进行选择操作,本实施例不限定具体的选择操作方式,可以是框选,可以是点选,还可以是滑动等操作,根据用户的选择操作,确定用于标注该目标物体的标注框,例如图9所示的矩形框。
在一些实施例中,所述标注框可在X轴和/或Y轴方向上伸缩。
如图10所示,黑色圆点表示二维点云,用户以框选的方式标注该二 维点云,例如,以虚线框作为标注该二维点云的标注框,该标注框可以是一个平面框,该标注框可以在X轴和/或Y轴方向上伸缩,例如,根据二位点云的分布,同时在X轴和Y轴进行缩放,得到如图10所示的实线框。
在一些实施例中,所述根据所述二维点云,确定用于标注所述目标物体的标注框之后,还包括:确定所述标注框在所述三维点云中对应的柱状体框。
例如,在二维点云中确定出用于标注该目标物体的标注框之后,进一步,还可以将该标注框投影到三维点云中,得到该标注框在该三维点云中对应的柱状体框。如图11所示的柱状体框是在去除三维点云中的地面点云之前,确定的该标注框在该三维点云中对应的柱状体框。如图12所示的柱状体框是在去除三维点云中的地面点云之后,确定的该标注框在该三维点云中对应的柱状体框。比较图11和图12可知,去除三维点云中的地面点云之后,柱状体框的底部、前、后、左、右可能都会变空。
可选的,所述确定所述标注框在所述三维点云中对应的柱状体框,包括:将所述标注框在所述三维点云中沿着垂直于所述二维点云方向伸缩,得到所述柱状体框。
如图13所示,黑色圆点表示三维点云,每一个三维点云可以看成是如图13所示的三维坐标系中的点,如图10所示的二维点云可以看成是如图13所示的三维点云在XY平面内的投影,垂直于该二维点云的方向即为如图13所示的Z轴方向。将如图10所示的选择框即标注框在如图13所示的三维坐标系中沿着Z轴方向进行伸缩即可得到如图13所示的柱状体框。
如图9所示的标注框是平面上的框,将平面上的框转换为三维空间中的柱状体框的一种方式是:将如图9所示的标注框在三维点云中沿着垂直于该二维点云的方向进行伸缩,垂直于该二维点云的方向具体可以是该三维点云的Z轴方向,也就是说,将如图9所示的标注框在该三维点云的Z轴方向上进行伸缩可得到三维空间中的柱状体框,例如图11或图12所示三维空间中的柱状体框。
In this embodiment, a specific region is identified in the two-dimensional image corresponding to a target region, and the specific point cloud in the three-dimensional point cloud is determined from the three-dimensional point cloud including the target region and the specific region in the two-dimensional image, so that the specific point cloud can be removed from the three-dimensional point cloud. When the objects in the three-dimensional point cloud are annotated, this prevents the specific point cloud from interfering with the objects to be annotated and from being mistakenly annotated as an object to be recognized. Removing the specific point cloud from the three-dimensional point cloud thus improves the accuracy of annotating the objects in the three-dimensional point cloud and, in turn, the accuracy of recognizing those objects.
An embodiment of the present invention provides a point cloud processing device. The embodiment does not limit the specific form of the device, which may be a vehicle-mounted terminal, a server, a computer, or similar equipment. FIG. 14 is a structural diagram of the point cloud processing device provided by an embodiment of the present invention. As shown in FIG. 14, the point cloud processing device 120 includes: a memory 121, a processor 122, a photographing device 123, and a detection device 124. The photographing device 123 is configured to capture a two-dimensional image corresponding to a target region; the detection device 124 is configured to collect a three-dimensional point cloud of the target region; the memory 121 is configured to store program code; and the processor 122 calls the program code and, when the code is executed, performs the following operations: obtaining the two-dimensional image corresponding to the target region and the three-dimensional point cloud including the target region; determining a specific region in the two-dimensional image; determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image; and removing the specific point cloud from the three-dimensional point cloud.
Optionally, when determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image, the processor 122 is specifically configured to: determine the two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image; and determine the specific point cloud in the three-dimensional point cloud according to those two-dimensional points and the specific region in the two-dimensional image.
Optionally, when determining the two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image, the processor 122 is specifically configured to: project the three-dimensional point cloud onto the two-dimensional image according to the positional relationship between the acquisition device of the three-dimensional point cloud and the acquisition device of the two-dimensional image.
Optionally, when determining the specific point cloud in the three-dimensional point cloud according to the two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image and the specific region in the two-dimensional image, the processor 122 is specifically configured to: take the points of the three-dimensional point cloud that project into the specific region of the two-dimensional image as the reference point cloud of the three-dimensional point cloud; and determine the specific point cloud in the three-dimensional point cloud according to the reference point cloud.
Optionally, when determining the specific point cloud in the three-dimensional point cloud according to the reference point cloud, the processor 122 is specifically configured to: determine a target plane according to the reference point cloud of the three-dimensional point cloud; and take the points of the three-dimensional point cloud whose distance to the target plane is smaller than a distance threshold as the specific point cloud of the three-dimensional point cloud.
Optionally, when determining the target plane according to the reference point cloud of the three-dimensional point cloud, the processor 122 is specifically configured to: determine the target plane from the reference point cloud using a plane fitting algorithm.
Optionally, when determining the target plane from the reference point cloud using the plane fitting algorithm, the processor 122 is specifically configured to: remove outlier points from the reference point cloud to obtain a corrected reference point cloud; and determine the target plane from the corrected reference point cloud using the plane fitting algorithm.
Optionally, before removing the outlier points from the reference point cloud, the processor 122 is further configured to: determine, according to some of the points of the reference point cloud, a reference plane including those points; and determine the outlier points in the reference point cloud according to the distances of the remaining points to the reference plane.
Optionally, when obtaining the two-dimensional image corresponding to the target region and the three-dimensional point cloud including the target region, the processor 122 is specifically configured to: obtain the two-dimensional image, captured by the photographing device, of the target region around the carrier on which the photographing device is mounted; and obtain the three-dimensional point cloud of the target region around the carrier detected by the detection device mounted on the carrier.
Optionally, the detection device includes at least one of the following: a binocular stereo camera, a TOF camera, and a lidar.
Optionally, the specific region is a ground region, and the specific point cloud is a ground point cloud.
The specific principles and implementations of the point cloud processing device provided by this embodiment are similar to those of the embodiments shown in FIG. 1 and FIG. 5 and are not repeated here.
In this embodiment, a specific region is identified in the two-dimensional image corresponding to a target region, and the specific point cloud in the three-dimensional point cloud is determined from the three-dimensional point cloud including the target region and the specific region in the two-dimensional image, so that the specific point cloud can be removed from the three-dimensional point cloud. When the objects in the three-dimensional point cloud are annotated, this prevents the specific point cloud from interfering with the objects to be annotated and from being mistakenly annotated as an object to be recognized. Removing the specific point cloud from the three-dimensional point cloud thus improves the accuracy of annotating the objects in the three-dimensional point cloud and, in turn, the accuracy of recognizing those objects.
An embodiment of the present invention provides a point cloud processing device. The embodiment does not limit the specific form of the device, which may be a vehicle-mounted terminal, a server, a computer, or similar equipment. FIG. 15 is a structural diagram of the point cloud processing device provided by an embodiment of the present invention. As shown in FIG. 15, the point cloud processing device 150 includes: a memory 151, a processor 152, and a display component 153. The display component 153 is configured to display the two-dimensional image and/or the three-dimensional point cloud; the memory 151 is configured to store program code; and the processor 152 calls the program code and, when the code is executed, performs the following operations: obtaining a two-dimensional image corresponding to a target region and a three-dimensional point cloud including the target region; determining a specific region in the two-dimensional image; removing a specific point cloud from the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image; and performing an annotation operation on a target object in the three-dimensional point cloud.
Optionally, the point cloud processing device 150 may further include a communication interface through which the processor 152 receives the two-dimensional image and the three-dimensional point cloud.
Optionally, when performing the annotation operation on the target object in the three-dimensional point cloud, the processor 152 is specifically configured to: convert the three-dimensional point cloud into a two-dimensional point cloud; and determine, from the two-dimensional point cloud, an annotation box for annotating the target object.
Optionally, when determining the annotation box for annotating the target object from the two-dimensional point cloud, the processor 152 is specifically configured to: determine the annotation box for annotating the target object according to a selection operation performed by the user on the target object in the plane of the two-dimensional point cloud.
Optionally, after determining the annotation box for annotating the target object from the two-dimensional point cloud, the processor 152 is further configured to: determine a pillar-shaped box corresponding to the annotation box in the three-dimensional point cloud.
Optionally, when determining the pillar-shaped box corresponding to the annotation box in the three-dimensional point cloud, the processor 152 is specifically configured to: stretch the annotation box in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the pillar-shaped box.
Optionally, the annotation box is stretchable in the X-axis and/or Y-axis direction.
Optionally, the specific region is a ground region, and the specific point cloud is a ground point cloud.
The specific principles and implementations of the point cloud processing device provided by this embodiment are similar to those of the embodiment shown in FIG. 6 and are not repeated here.
In this embodiment, a specific region is identified in the two-dimensional image corresponding to a target region, and the specific point cloud in the three-dimensional point cloud is determined from the three-dimensional point cloud including the target region and the specific region in the two-dimensional image, so that the specific point cloud can be removed from the three-dimensional point cloud. When the objects in the three-dimensional point cloud are annotated, this prevents the specific point cloud from interfering with the objects to be annotated and from being mistakenly annotated as an object to be recognized. Removing the specific point cloud from the three-dimensional point cloud thus improves the accuracy of annotating the objects in the three-dimensional point cloud and, in turn, the accuracy of recognizing those objects.
It can be understood that the point cloud processing devices provided by the embodiments of the present invention can be combined; for example, a single device may include a memory, a processor, a photographing device, a detection device, and a display component. Its form is not limited; it may be a vehicle-mounted terminal, a server, a computer, or similar equipment. The photographing device is configured to capture a two-dimensional image corresponding to a target region; the detection device is configured to collect a three-dimensional point cloud of the target region; the display component is configured to display the two-dimensional image and/or the three-dimensional point cloud; the memory is configured to store program code; the processor calls the program code and, when the code is executed, performs the operations described in the foregoing embodiments, which are not repeated here.
In addition, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the point cloud processing method described in the foregoing embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a logical functional division, and other divisions are possible in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, apparatuses, or units, and may be electrical, mechanical, or of other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the functional modules described above is used as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above. For the specific working process of the apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some or all of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (37)

  1. A point cloud processing method, comprising:
    obtaining a two-dimensional image corresponding to a target region and a three-dimensional point cloud including the target region;
    determining a specific region in the two-dimensional image;
    determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image; and
    removing the specific point cloud from the three-dimensional point cloud.
  2. The method according to claim 1, wherein determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image comprises:
    determining two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image; and
    determining the specific point cloud in the three-dimensional point cloud according to the two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image and the specific region in the two-dimensional image.
  3. The method according to claim 2, wherein determining the two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image comprises:
    projecting the three-dimensional point cloud onto the two-dimensional image according to the positional relationship between the acquisition device of the three-dimensional point cloud and the acquisition device of the two-dimensional image.
  4. The method according to claim 2 or 3, wherein determining the specific point cloud in the three-dimensional point cloud according to the two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image and the specific region in the two-dimensional image comprises:
    taking the points of the three-dimensional point cloud that project into the specific region of the two-dimensional image as a reference point cloud of the three-dimensional point cloud; and
    determining the specific point cloud in the three-dimensional point cloud according to the reference point cloud of the three-dimensional point cloud.
  5. The method according to claim 4, wherein determining the specific point cloud in the three-dimensional point cloud according to the reference point cloud of the three-dimensional point cloud comprises:
    determining a target plane according to the reference point cloud of the three-dimensional point cloud; and
    taking the points of the three-dimensional point cloud whose distance to the target plane is smaller than a distance threshold as the specific point cloud of the three-dimensional point cloud.
  6. The method according to claim 5, wherein determining the target plane according to the reference point cloud of the three-dimensional point cloud comprises:
    determining the target plane from the reference point cloud of the three-dimensional point cloud using a plane fitting algorithm.
  7. The method according to claim 6, wherein determining the target plane from the reference point cloud of the three-dimensional point cloud using the plane fitting algorithm comprises:
    removing outlier points from the reference point cloud to obtain a corrected reference point cloud; and
    determining the target plane from the corrected reference point cloud using the plane fitting algorithm.
  8. The method according to claim 7, wherein before removing the outlier points from the reference point cloud, the method further comprises:
    determining, according to some of the points of the reference point cloud, a reference plane including those points; and
    determining the outlier points in the reference point cloud according to the distances of the points of the reference point cloud other than those points to the reference plane.
  9. The method according to any one of claims 1-8, wherein obtaining the two-dimensional image corresponding to the target region and the three-dimensional point cloud including the target region comprises:
    obtaining the two-dimensional image, captured by a photographing device, of the target region around a carrier on which the photographing device is mounted; and
    obtaining the three-dimensional point cloud of the target region around the carrier detected by a detection device mounted on the carrier.
  10. The method according to claim 9, wherein the detection device comprises at least one of the following:
    a binocular stereo camera, a TOF camera, and a lidar.
  11. The method according to any one of claims 1-10, wherein the specific region is a ground region and the specific point cloud is a ground point cloud.
  12. A point cloud processing method, comprising:
    obtaining a two-dimensional image corresponding to a target region and a three-dimensional point cloud including the target region;
    determining a specific region in the two-dimensional image;
    removing a specific point cloud from the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image; and
    performing an annotation operation on a target object in the three-dimensional point cloud.
  13. The method according to claim 12, wherein performing the annotation operation on the target object in the three-dimensional point cloud comprises:
    converting the three-dimensional point cloud into a two-dimensional point cloud; and
    determining, from the two-dimensional point cloud, an annotation box for annotating the target object.
  14. The method according to claim 13, wherein determining, from the two-dimensional point cloud, the annotation box for annotating the target object comprises:
    determining the annotation box for annotating the target object according to a selection operation performed by a user on the target object in the plane of the two-dimensional point cloud.
  15. The method according to claim 13, wherein after determining, from the two-dimensional point cloud, the annotation box for annotating the target object, the method further comprises:
    determining a pillar-shaped box corresponding to the annotation box in the three-dimensional point cloud.
  16. The method according to claim 15, wherein determining the pillar-shaped box corresponding to the annotation box in the three-dimensional point cloud comprises:
    stretching the annotation box in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the pillar-shaped box.
  17. The method according to any one of claims 13-16, wherein the annotation box is stretchable in the X-axis and/or Y-axis direction.
  18. The method according to any one of claims 12-17, wherein the specific region is a ground region and the specific point cloud is a ground point cloud.
  19. A point cloud processing device, comprising: a memory, a processor, a photographing device, and a detection device;
    the photographing device being configured to capture a two-dimensional image corresponding to a target region;
    the detection device being configured to collect a three-dimensional point cloud of the target region;
    the memory being configured to store program code; and the processor calling the program code and, when the program code is executed, being configured to perform the following operations:
    obtaining the two-dimensional image corresponding to the target region and the three-dimensional point cloud including the target region;
    determining a specific region in the two-dimensional image;
    determining a specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image; and
    removing the specific point cloud from the three-dimensional point cloud.
  20. The point cloud processing device according to claim 19, wherein when determining the specific point cloud in the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image, the processor is specifically configured to:
    determine two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image; and
    determine the specific point cloud in the three-dimensional point cloud according to the two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image and the specific region in the two-dimensional image.
  21. The point cloud processing device according to claim 20, wherein when determining the two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image, the processor is specifically configured to:
    project the three-dimensional point cloud onto the two-dimensional image according to the positional relationship between the acquisition device of the three-dimensional point cloud and the acquisition device of the two-dimensional image.
  22. The point cloud processing device according to claim 20 or 21, wherein when determining the specific point cloud in the three-dimensional point cloud according to the two-dimensional points corresponding to the three-dimensional point cloud in the two-dimensional image and the specific region in the two-dimensional image, the processor is specifically configured to:
    take the points of the three-dimensional point cloud that project into the specific region of the two-dimensional image as a reference point cloud of the three-dimensional point cloud; and
    determine the specific point cloud in the three-dimensional point cloud according to the reference point cloud of the three-dimensional point cloud.
  23. The point cloud processing device according to claim 22, wherein when determining the specific point cloud in the three-dimensional point cloud according to the reference point cloud of the three-dimensional point cloud, the processor is specifically configured to:
    determine a target plane according to the reference point cloud of the three-dimensional point cloud; and
    take the points of the three-dimensional point cloud whose distance to the target plane is smaller than a distance threshold as the specific point cloud of the three-dimensional point cloud.
  24. The point cloud processing device according to claim 23, wherein when determining the target plane according to the reference point cloud of the three-dimensional point cloud, the processor is specifically configured to:
    determine the target plane from the reference point cloud of the three-dimensional point cloud using a plane fitting algorithm.
  25. The point cloud processing device according to claim 24, wherein when determining the target plane from the reference point cloud of the three-dimensional point cloud using the plane fitting algorithm, the processor is specifically configured to:
    remove outlier points from the reference point cloud to obtain a corrected reference point cloud; and
    determine the target plane from the corrected reference point cloud using the plane fitting algorithm.
  26. The point cloud processing device according to claim 25, wherein before removing the outlier points from the reference point cloud, the processor is further configured to:
    determine, according to some of the points of the reference point cloud, a reference plane including those points; and
    determine the outlier points in the reference point cloud according to the distances of the points of the reference point cloud other than those points to the reference plane.
  27. The point cloud processing device according to any one of claims 19-26, wherein when obtaining the two-dimensional image corresponding to the target region and the three-dimensional point cloud including the target region, the processor is specifically configured to:
    obtain the two-dimensional image, captured by the photographing device, of the target region around a carrier on which the photographing device is mounted; and
    obtain the three-dimensional point cloud of the target region around the carrier detected by the detection device mounted on the carrier.
  28. The point cloud processing device according to claim 27, wherein the detection device comprises at least one of the following:
    a binocular stereo camera, a TOF camera, and a lidar.
  29. The point cloud processing device according to any one of claims 19-28, wherein the specific region is a ground region and the specific point cloud is a ground point cloud.
  30. A point cloud processing device, comprising: a memory, a processor, and a display component;
    the display component being configured to display a two-dimensional image and/or a three-dimensional point cloud;
    the memory being configured to store program code; and
    the processor calling the program code and, when the program code is executed, being configured to perform the following operations:
    obtaining a two-dimensional image corresponding to a target region and a three-dimensional point cloud including the target region;
    determining a specific region in the two-dimensional image;
    removing a specific point cloud from the three-dimensional point cloud according to the three-dimensional point cloud and the specific region in the two-dimensional image; and
    performing an annotation operation on a target object in the three-dimensional point cloud.
  31. The point cloud processing device according to claim 30, wherein when performing the annotation operation on the target object in the three-dimensional point cloud, the processor is specifically configured to:
    convert the three-dimensional point cloud into a two-dimensional point cloud; and
    determine, from the two-dimensional point cloud, an annotation box for annotating the target object.
  32. The point cloud processing device according to claim 31, wherein when determining, from the two-dimensional point cloud, the annotation box for annotating the target object, the processor is specifically configured to:
    determine the annotation box for annotating the target object according to a selection operation performed by a user on the target object in the plane of the two-dimensional point cloud.
  33. The point cloud processing device according to claim 31, wherein after determining, from the two-dimensional point cloud, the annotation box for annotating the target object, the processor is further configured to:
    determine a pillar-shaped box corresponding to the annotation box in the three-dimensional point cloud.
  34. The point cloud processing device according to claim 33, wherein when determining the pillar-shaped box corresponding to the annotation box in the three-dimensional point cloud, the processor is specifically configured to:
    stretch the annotation box in the three-dimensional point cloud along the direction perpendicular to the two-dimensional point cloud to obtain the pillar-shaped box.
  35. The point cloud processing device according to any one of claims 31-34, wherein the annotation box is stretchable in the X-axis and/or Y-axis direction.
  36. The point cloud processing device according to any one of claims 30-35, wherein the specific region is a ground region and the specific point cloud is a ground point cloud.
  37. A computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the method according to any one of claims 1-18.
PCT/CN2018/116232 2018-11-19 2018-11-19 Point cloud processing method, device, and storage medium WO2020102944A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880041553.8A CN110869974B (zh) 2018-11-19 Point cloud processing method, device, and storage medium
PCT/CN2018/116232 WO2020102944A1 (zh) 2018-11-19 2018-11-19 Point cloud processing method, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/116232 WO2020102944A1 (zh) 2018-11-19 2018-11-19 Point cloud processing method, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020102944A1 true WO2020102944A1 (zh) 2020-05-28

Family

ID=69651835

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116232 WO2020102944A1 (zh) 2018-11-19 2018-11-19 Point cloud processing method, device, and storage medium

Country Status (1)

Country Link
WO (1) WO2020102944A1 (zh)

Also Published As

Publication number Publication date
CN110869974A (zh) 2020-03-06

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18940834; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 18940834; Country of ref document: EP; Kind code of ref document: A1)