CN117670986A - Point cloud labeling method

Info

Publication number: CN117670986A
Authority: CN (China)
Prior art keywords: point, reference surface, point cloud, height, horizontal plane
Legal status: Pending
Application number: CN202211052700.6A
Other languages: Chinese (zh)
Inventors: 王安琦, 李冲冲, 潘睿
Current Assignee: Beijing Sankuai Online Technology Co Ltd
Original Assignee: Beijing Sankuai Online Technology Co Ltd
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority application: CN202211052700.6A

Landscapes

  • Length Measuring Devices With Unspecified Measuring Means (AREA)

Abstract

The application discloses a point cloud labeling method, which is applied to computer equipment and belongs to the technical field of surveying and mapping science. The method comprises the following steps: acquiring a three-dimensional point cloud to be marked; determining a frame body marked for a point cloud in the three-dimensional point cloud; determining the height difference between the frame body and the reference surface where the point cloud is located; and automatically adjusting the frame body according to the height difference between the frame body and the reference surface, and obtaining the labeling result of the point cloud according to the adjusted frame body. Because the height of the reference surface where the point cloud is located is automatically determined according to the coordinates of the three-dimensional point cloud, and the frame body is automatically adjusted according to the height difference between the frame body and the reference surface to obtain the labeling result of the point cloud, the labeling efficiency is improved and the accuracy of the labeling result is improved as well.

Description

Point cloud labeling method
Technical Field
The embodiment of the application relates to the technical field of surveying and mapping science, in particular to a point cloud labeling method.
Background
With the rapid development of surveying and mapping science and technology, information about objects needs to be collected in many scenes for computer learning and recognition. For example, in an automatic driving scene, a vehicle needs to automatically identify and avoid objects encountered during driving, so the objects need to be labeled by a three-dimensional point cloud labeling method to obtain relevant information about them.
In the related art, a radar sensor scans an object to obtain related data of the scanned object, and the scanned data is processed to obtain a three-dimensional point cloud of the object. After a frame body is manually marked in the three-dimensional point cloud, the frame body is manually adjusted so that it is attached to the object. In addition, since the object is generally on the ground, it is also necessary to manually confirm whether the z-axis of the frame body is attached to the ground. If the z-axis of the frame body is not attached to the ground, the z-axis of the frame body needs to be manually adjusted so that it is attached to the ground.
Because whether the z-axis of the frame body is attached to the ground needs to be confirmed manually, the labeling efficiency is low; in addition, even if the z-axis of the frame body is manually adjusted, it cannot be made to fit the ground completely, so the labeling result is not accurate enough.
Disclosure of Invention
The embodiment of the application provides a point cloud labeling method, apparatus, device and storage medium, which can be used for solving the problems in the related art. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a point cloud labeling method, where the method includes:
acquiring a three-dimensional point cloud to be marked;
determining a frame of the marked point cloud in the three-dimensional point cloud;
Determining the height difference between the frame and a reference surface where the point cloud is located;
and automatically adjusting the frame body according to the height difference between the frame body and the reference surface, and obtaining the labeling result of the point cloud according to the adjusted frame body.
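For clarity, the overall flow of the above steps can be illustrated with a short sketch. The following is a minimal, hypothetical Python outline; the box dictionary layout, the function names and the crude surface estimate are illustrative assumptions and not part of the application, which estimates the reference surface with the angle-based checks and the height grid described in the detailed description.

```python
import numpy as np

def estimate_surface_height(points: np.ndarray, xy: np.ndarray, radius: float = 2.0) -> float:
    """Crude stand-in for the reference-surface estimation (placeholder only):
    average the lowest z values near the given x-y location."""
    near = points[np.linalg.norm(points[:, :2] - xy, axis=1) < radius]
    if len(near) == 0:
        return float(points[:, 2].min())
    lowest = np.sort(near[:, 2])[: max(1, len(near) // 10)]
    return float(lowest.mean())

def label_point_cloud(points: np.ndarray, box: dict) -> dict:
    """Sketch of the four steps: the 3D point cloud and the annotated frame body
    (box) are the inputs; the adjusted box is returned as the labeling result."""
    center = np.asarray(box["center"], dtype=float)

    # Height of the reference surface where the point cloud is located.
    surface_z = estimate_surface_height(points, center[:2])

    # Height difference between the frame body (its center point) and the surface.
    height_diff = center[2] - surface_z

    # One plausible adjustment: shift the box along z by the height difference so
    # that its bottom face rests on the reference surface.
    adjusted = dict(box)
    adjusted["center"] = [center[0], center[1],
                          center[2] - height_diff + box["size"][2] / 2.0]
    return adjusted
```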
In one possible implementation manner, the determining the height difference between the frame and the reference plane where the point cloud is located includes:
determining the height of a reference surface where the point cloud is located according to the coordinates of the three-dimensional point cloud;
and determining the height difference between the frame body and the reference surface based on the height of the reference surface.
In one possible implementation manner, the determining the height of the reference surface where the point cloud is located according to the coordinates of the three-dimensional point cloud includes:
determining points belonging to a reference plane in the three-dimensional point cloud;
the height of the reference surface is determined from the coordinates of the point on the reference surface.
In a possible implementation manner, the determining the point belonging to the reference plane in the three-dimensional point cloud includes:
traversing the three-dimensional point cloud, and calculating the included angles between connecting lines between the first point and adjacent points of the first point and the horizontal plane respectively for the traversed first point, wherein the first point is any traversed point;
And determining whether the first point belongs to a point on the reference surface or not based on the included angles between the connecting lines of the first point and the adjacent points of the first point and the horizontal plane.
In one possible implementation manner, the calculating the included angle between the connecting line between the first point and the adjacent point of the first point and the horizontal plane includes:
calculating a first included angle between a connecting line between the first point and the second point and a horizontal plane, and a second included angle between a connecting line between the first point and the third point and the horizontal plane, wherein the second point and the third point are adjacent points of the first point respectively;
the determining whether the first point belongs to a point on the reference plane based on the included angles between the connecting lines of the first point and the adjacent points of the first point and the horizontal plane respectively comprises the following steps:
if the first included angle and the second included angle are smaller than a first threshold value, and the difference value between the first included angle and the second included angle is smaller than a second threshold value, the first point belongs to a point on the reference surface.
In one possible implementation manner, after traversing the three-dimensional point cloud, the method further includes:
calculating the included angles between the connecting lines of a fourth point and the adjacent points of the fourth point and the horizontal plane respectively, wherein the fourth point is a point which belongs to the same scanning radius as the first point;
Calculating a first distance between the first point and an origin of coordinates and a second distance between the fourth point and the origin of coordinates;
the determining whether the first point belongs to a point on the reference plane based on the included angles between the connecting lines of the first point and the adjacent points of the first point and the horizontal plane respectively comprises the following steps:
and determining whether the first point belongs to a point on the reference surface based on the included angles between the connecting lines between the first point and the adjacent points of the first point and the horizontal plane and the included angles between the connecting lines between the fourth point and the adjacent points of the fourth point and the horizontal plane.
In one possible implementation manner, the calculating the included angle between the connecting line between the first point and the adjacent point of the first point and the horizontal plane includes: calculating a fourth included angle between a connecting line between the first point and a sixth point and a horizontal plane, wherein the sixth point is an adjacent point of the first point;
calculating the included angles between the connecting lines between the fourth point and the adjacent points of the fourth point and the horizontal plane respectively, wherein the included angles comprise: calculating a third included angle between a connecting line between the fourth point and a fifth point and a horizontal plane, wherein the fifth point is an adjacent point of the fourth point;
The determining whether the first point belongs to a point on the reference surface based on the included angles between the connecting lines between the first point and the adjacent points of the first point and the horizontal plane and the included angles between the connecting lines between the fourth point and the adjacent points of the fourth point and the horizontal plane respectively comprises:
if the difference between the first distance and the second distance is smaller than a third threshold value, and the difference between the third included angle and the fourth included angle is smaller than a fourth threshold value, the first point belongs to a point on the reference surface.
In one possible implementation, the determining the height of the reference surface according to the coordinates of the point on the reference surface includes:
dividing the three-dimensional point cloud into a plurality of grids according to the x-axis coordinate and the y-axis coordinate of the three-dimensional point cloud;
and determining the height of the reference surface according to the coordinates of the three-dimensional point cloud corresponding to the points belonging to the reference surface in each grid in the multiple grids on the z-axis.
In a possible implementation manner, the determining the height of the reference surface according to the coordinates of the three-dimensional point cloud corresponding to the points belonging to the reference surface in each grid in the multiple grids in the z-axis includes:
Averaging the values of the three-dimensional point clouds corresponding to the points belonging to the reference plane in each grid in the plurality of grids on the z-axis coordinate;
the average value is taken as the height of the reference surface.
In one possible implementation manner, the determining the height difference between the frame body and the reference surface based on the height of the reference surface includes:
determining the height of a center point of the frame body according to coordinates of the three-dimensional point cloud corresponding to the frame body;
and taking the height difference between the central point of the frame body and the reference surface as the height difference between the frame body and the reference surface.
In another aspect, a point cloud labeling apparatus is provided, the apparatus including:
the acquisition module is used for acquiring the three-dimensional point cloud to be marked;
the determining module is used for determining a frame body of the marked point cloud in the three-dimensional point cloud;
the determining module is further used for determining the height difference between the frame body and the reference surface where the point cloud is located;
and the adjusting module is used for automatically adjusting the frame body according to the height difference between the frame body and the reference surface, and obtaining the labeling result of the point cloud according to the adjusted frame body.
In a possible implementation manner, the determining module is configured to determine, according to coordinates of the three-dimensional point cloud, a height of a reference plane where the point cloud is located; and determining the height difference between the frame body and the reference surface based on the height of the reference surface.
In a possible implementation manner, the determining module is configured to determine a point belonging to the reference plane in the three-dimensional point cloud; the height of the reference surface is determined from the coordinates of the point on the reference surface.
In a possible implementation manner, the determining module is configured to traverse the three-dimensional point cloud, and for a first traversed point, calculate angles between connecting lines between the first point and adjacent points of the first point and a horizontal plane, where the first point is any traversed point; and determining whether the first point belongs to a point on the reference surface or not based on the included angles between the connecting lines of the first point and the adjacent points of the first point and the horizontal plane.
In a possible implementation manner, the determining module is configured to calculate a first included angle between a line between the first point and a second point and a horizontal plane, and a second included angle between a line between the first point and a third point and a horizontal plane, where the second point and the third point are adjacent points of the first point respectively;
the determining module is configured to determine that the first point belongs to a point on the reference surface if the first included angle and the second included angle are both smaller than a first threshold and the difference between the first included angle and the second included angle is smaller than a second threshold.
In a possible implementation manner, the determining module is further configured to calculate angles between a connecting line between a fourth point and a point adjacent to the fourth point and a horizontal plane, where the fourth point is a point that belongs to the same scanning radius as the first point; calculating a first distance between the first point and an origin of coordinates and a second distance between the fourth point and the origin of coordinates;
the determining module is configured to determine whether the first point belongs to a point on the reference plane based on an included angle between a connecting line between the first point and an adjacent point of the first point and a horizontal plane, and an included angle between a connecting line between the fourth point and an adjacent point of the fourth point and a horizontal plane.
In a possible implementation manner, the determining module is configured to calculate a fourth included angle between a connecting line between the first point and a sixth point and a horizontal plane, where the sixth point is a neighboring point of the first point; calculating a third included angle between a connecting line between the fourth point and a fifth point and a horizontal plane, wherein the fifth point is an adjacent point of the fourth point;
the determining module is configured to, if a difference between the first distance and the second distance is smaller than a third threshold, and a difference between the third included angle and the fourth included angle is smaller than a fourth threshold, determine that the first point belongs to a point on the reference plane.
In one possible implementation manner, the determining module is configured to divide the three-dimensional point cloud into a plurality of grids according to x-axis coordinates and y-axis coordinates of the three-dimensional point cloud; and determining the height of the reference surface according to the coordinates of the three-dimensional point cloud corresponding to the points belonging to the reference surface in each grid in the multiple grids on the z-axis.
In a possible implementation manner, the determining module is configured to average values of three-dimensional point clouds corresponding to points belonging to the reference plane in each grid in the multiple grids on a z-axis coordinate; the average value is taken as the height of the reference surface.
In a possible implementation manner, the determining module is configured to determine a height of a center point of the frame according to coordinates of a three-dimensional point cloud corresponding to the frame; and taking the height difference between the central point of the frame body and the reference surface as the height difference between the frame body and the reference surface.
In another aspect, a computer device is provided, where the computer device includes a processor and a memory, where at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor, so that the computer device implements any one of the point cloud labeling methods described above.
In another aspect, there is further provided a computer readable storage medium, where at least one computer program is stored, where the at least one computer program is loaded and executed by a processor, so that a computer implements any one of the point cloud labeling methods described above.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs any of the point cloud labeling methods described above.
The technical scheme provided by the embodiment of the application at least brings the following beneficial effects:
in the point cloud labeling method, the height difference of the frame body and the reference surface where the point cloud is located is automatically determined according to the coordinates of the three-dimensional point cloud, and the attachment of the frame body and the reference surface is automatically adjusted according to the height difference of the frame body and the reference surface, so that a point cloud labeling result is obtained. Compared with the method that the z axis of the frame body is manually adjusted to be attached to the reference surface, the method improves the labeling efficiency and improves the accuracy of the labeling result by automatically adjusting the frame body.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment provided by embodiments of the present application;
fig. 2 is a flowchart of a point cloud labeling method provided in an embodiment of the present application;
FIG. 3 is a schematic view of a longitudinal traversal of a three-dimensional point cloud provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a lateral traversal of a three-dimensional point cloud provided by an embodiment of the present application;
FIG. 5 is a schematic view of a ground level grid provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a point cloud labeling apparatus provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The embodiment of the application provides a point cloud labeling method which is executed by computer equipment. The computer device is a terminal or a server. Referring to fig. 1, a schematic diagram of a method implementation environment provided in an embodiment of the present application is shown. The implementation environment may include: a terminal 11 and a server 12.
The terminal 11 is provided with an application program or a web page capable of labeling the point cloud, and when the application program or the web page needs to label the point cloud, the method provided by the embodiment of the application can be used for labeling. The server 12 may store three-dimensional point cloud data obtained by scanning the radar sensor, and the terminal 11 may obtain the three-dimensional point cloud data obtained by scanning the radar sensor from the server 12. Of course, the terminal 11 may store the acquired three-dimensional point cloud data.
Alternatively, the terminal 11 may be a smart device such as a tablet computer or a personal computer, or any electronic product that can perform man-machine interaction with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction or a handwriting device, such as a PC (Personal Computer), a PPC (Pocket PC) or a tablet PC. The server 12 may be one server, a server cluster comprising a plurality of servers, or a cloud computing service center. The terminal 11 establishes a communication connection with the server 12 through a wired or wireless network.
Those skilled in the art will appreciate that the above-described terminal 11 and server 12 are by way of example only, and that other terminals or servers, either now present or later, may be suitable for use in the present application, and are intended to be within the scope of the present application and are incorporated herein by reference.
The embodiment of the application provides a point cloud labeling method, which can be applied to the implementation environment shown in the above-mentioned fig. 1. As shown in fig. 2, taking the example that the method is applied to a computer device, the method includes steps 201-204.
In step 201, a computer device obtains a three-dimensional point cloud to be annotated.
In the method provided by the embodiment of the application, the three-dimensional point cloud can be obtained through a radar sensor. For example, after the radar sensor scans an area by emitting laser beams, the three-dimensional point cloud data to be marked can be obtained; the radar sensor uploads the obtained three-dimensional point cloud data to the computer equipment, and the computer equipment receives the three-dimensional point cloud. The three-dimensional point cloud can be used for marking the frame body of the point cloud and for determining the height of the reference surface. It should be noted that, after the radar sensor scans an object, the three-dimensional point cloud to be marked corresponding to the object can be obtained.
The number of radar sensors used to scan the area is not limited; there may be one radar sensor or a plurality of radar sensors. The installation position of the radar sensor is also not fixed; for example, the radar sensor can be installed on a moving vehicle, as long as the area can be scanned as required and the related data of the point cloud to be marked can be acquired. The type of the point cloud to be marked is likewise not limited; it may correspond to, for example, an automobile or a pedestrian.
In addition, the coordinates of the data obtained by the scanning of the radar sensor may be based on the coordinates of the radar sensor, that is, the position where the radar sensor is located is the origin of coordinates.
In step 202, a computer device determines a box of a point cloud that is annotated in a three-dimensional point cloud.
After the computer equipment receives the three-dimensional point cloud data uploaded by the radar sensor, the point cloud can be marked with the frame body.
When the frame body is marked for the point cloud, the following two methods can be adopted, but the methods are not limited to these two:
First: manual labeling. In this method, frame bodies are manually marked on the point clouds of different consecutive frames.
When the radar sensor scans the area, three-dimensional point clouds of different frames are obtained at different moments, and the three-dimensional point clouds of consecutive frames are synthesized to form one three-dimensional point cloud. When the area is scanned, the number of radar sensors can be one or more; after one radar sensor scans the area, a three-dimensional point cloud can be obtained, which records the x, y and z coordinates of each point as well as the distance and angle of each point relative to the radar.
Second: differential labeling. In this method, after the point clouds in two non-adjacent frames are manually marked, the computer equipment can automatically predict the interval frames in a time difference mode, and then mark frame bodies for the point clouds in the interval frames respectively. Differential labeling is based on the principle of uniform motion, that is, it is assumed that the motion of the object corresponding to the marked point cloud is approximately uniform within a short time.
When the frame body is marked for the three-dimensional point cloud, the frame body needs to fit the point cloud, that is, compared with the corresponding three-dimensional point cloud, the frame body cannot be too large or too small, so that the frame body can accurately reflect information such as the size and position of the object corresponding to the point cloud. Therefore, after the frame body is marked in the three-dimensional point cloud, the frame body can be manually adjusted on the x-y plane so that it fits the point cloud. No matter which labeling mode is used for the point cloud, the frame body can be manually adjusted on the x-y plane, so that the frame body accurately reflects information such as the size and position of the corresponding object on the x-y plane.
In step 203, the computer device determines a height difference between the frame and a reference surface on which the point cloud is located.
In one possible implementation, determining the height difference between the frame and the reference plane where the point cloud is located includes: determining the height of a reference surface where the point cloud is located according to the coordinates of the three-dimensional point cloud; the height difference between the frame and the reference surface is determined based on the height of the reference surface.
In one possible implementation, determining the height of the reference surface on which the object is located according to the coordinates of the three-dimensional point cloud includes: determining points belonging to a reference plane in the three-dimensional point cloud; the height of the reference surface is determined from the coordinates of the point on the reference surface.
In one possible implementation, determining a point in the three-dimensional point cloud that belongs to the reference plane includes: traversing the three-dimensional point cloud, and calculating the included angles between the connecting lines of the first point and the adjacent points of the first point and the horizontal plane respectively for the traversed first point, wherein the first point is any traversed point; based on the angles between the connecting lines of the first point and the adjacent points of the first point and the horizontal plane, whether the first point belongs to a point on the reference plane is determined.
In one possible implementation manner, calculating angles between connecting lines between the first point and adjacent points of the first point and a horizontal plane respectively includes: calculating a first included angle between a connecting line between the first point and the second point and the horizontal plane, and a second included angle between a connecting line between the first point and the third point and the horizontal plane, wherein the second point and the third point are adjacent points of the first point respectively; determining whether the first point belongs to a point on the reference plane based on angles between connecting lines between the first point and adjacent points of the first point and the horizontal plane respectively, including: if the first included angle and the second included angle are smaller than the first threshold value and the difference between the first included angle and the second included angle is smaller than the second threshold value, the first point belongs to the point on the reference surface.
FIG. 3 is a schematic view of traversing the three-dimensional point cloud longitudinally. The figure shows three points A, B and C, a rear view of the vehicle and a schematic diagram of the roof radar. The radar sensor is located at the top of the vehicle and scans the area in the figure by emitting laser beams. As shown, points A, B and C are any three adjacent points in the three-dimensional point cloud, and it is not yet known whether they are points on the reference surface. To judge whether point B is a point on the reference surface, the included angle α_AB between the line connecting point A and point B and the horizontal plane is calculated, as well as the included angle α_BC between the line connecting point B and point C and the horizontal plane.
If the coordinates of point A are (x_A, y_A, z_A), the coordinates of point B are (x_B, y_B, z_B) and the coordinates of point C are (x_C, y_C, z_C), then α_AB can be calculated according to the following formula (1):

$\alpha_{AB} = \arctan\dfrac{z_B - z_A}{\sqrt{(x_B - x_A)^2 + (y_B - y_A)^2}}$ (1)

and α_BC can be calculated according to the following formula (2):

$\alpha_{BC} = \arctan\dfrac{z_C - z_B}{\sqrt{(x_C - x_B)^2 + (y_C - y_B)^2}}$ (2)

If α_AB and α_BC are both smaller than the first threshold value, and the difference between α_AB and α_BC is smaller than the second threshold value, point B can be judged to be a point on the reference surface. The first threshold value and the second threshold value may be empirical values, or may be values determined according to the application scenario. Whether point C is a point on the reference surface is judged in the same way. If a point that does not belong to the reference surface is found according to this method, the backward search is stopped, and the points before that point are all points on the reference surface.
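A minimal sketch of this longitudinal check is given below, assuming A, B and C are consecutive points from the same laser beam ordered along the scan. The threshold values and the use of absolute angle values are illustrative choices, not values given by the application.

```python
import math
import numpy as np

def angle_to_horizontal(p, q) -> float:
    """Included angle (in degrees) between the line p-q and the horizontal plane,
    as in formulas (1) and (2)."""
    dz = q[2] - p[2]
    horizontal = math.hypot(q[0] - p[0], q[1] - p[1])
    return math.degrees(math.atan2(dz, horizontal))

def is_surface_point(a, b, c, t1_deg: float = 10.0, t2_deg: float = 5.0) -> bool:
    """Longitudinal criterion for the middle point B: both angles are below the
    first threshold and their difference is below the second threshold."""
    angle_ab = angle_to_horizontal(a, b)
    angle_bc = angle_to_horizontal(b, c)
    return (abs(angle_ab) < t1_deg and abs(angle_bc) < t1_deg
            and abs(angle_bc - angle_ab) < t2_deg)

# Example: three nearly level points, so B is judged to lie on the reference surface.
A, B, C = np.array([5.0, 0.0, -1.60]), np.array([6.0, 0.0, -1.62]), np.array([7.0, 0.0, -1.61])
print(is_surface_point(A, B, C))  # True
```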
In one possible implementation manner, after traversing the three-dimensional point cloud, the method further includes: calculating the included angles between the connecting lines of the fourth point and the adjacent points of the fourth point and the horizontal plane respectively, wherein the fourth point is a point which belongs to the same scanning radius as the first point; calculating a first distance between the first point and the origin of coordinates and a second distance between the fourth point and the origin of coordinates; determining whether the first point belongs to a point on the reference plane based on angles between connecting lines between the first point and adjacent points of the first point and the horizontal plane respectively, including: and determining whether the first point belongs to a point on the reference surface based on the included angles between the connecting lines between the first point and the adjacent points of the first point and the horizontal plane and the included angles between the connecting lines between the fourth point and the adjacent points of the fourth point and the horizontal plane.
In one possible implementation manner, calculating the included angles between the connecting lines between the first point and the adjacent points of the first point and the horizontal plane respectively includes: calculating a fourth included angle between the connecting line between the first point and a sixth point and the horizontal plane, wherein the sixth point is an adjacent point of the first point. Calculating the included angles between the connecting lines between the fourth point and the adjacent points of the fourth point and the horizontal plane respectively includes: calculating a third included angle between the connecting line between the fourth point and a fifth point and the horizontal plane, wherein the fifth point is an adjacent point of the fourth point. Determining whether the first point belongs to a point on the reference surface based on the included angles between the connecting lines between the first point and the adjacent points of the first point and the horizontal plane and the included angles between the connecting lines between the fourth point and the adjacent points of the fourth point and the horizontal plane respectively includes: if the difference between the first distance and the second distance is smaller than the third threshold value and the difference between the third included angle and the fourth included angle is smaller than the fourth threshold value, the first point belongs to a point on the reference surface.
FIG. 4 is a schematic diagram of traversing the three-dimensional point cloud laterally. The figure shows four points A, B, D and E, a top view of the vehicle and a schematic diagram of the roof radar. The radar sensor is located at the top of the vehicle and scans the area in the figure by emitting laser beams, and points A, B and E are known to be points on the reference surface. To determine whether point D is a point on the reference surface, the distance L_A between point A and the origin of coordinates and the distance L_D between point D and the origin of coordinates are calculated. Then the included angle α_AB between the line connecting point A and point B and the horizontal plane is calculated, as well as the included angle α_DE between the line connecting point D and point E and the horizontal plane. Here, point A and point D are two points located on the same scanning radius. If the coordinates of point A are (x_A, y_A, z_A), the coordinates of point B are (x_B, y_B, z_B), the coordinates of point D are (x_D, y_D, z_D) and the coordinates of point E are (x_E, y_E, z_E), then L_A is calculated as shown in formula (3):

$L_A = \sqrt{x_A^2 + y_A^2 + z_A^2}$ (3)

L_D is calculated as shown in formula (4):

$L_D = \sqrt{x_D^2 + y_D^2 + z_D^2}$ (4)

α_AB is calculated as shown in formula (5):

$\alpha_{AB} = \arctan\dfrac{z_B - z_A}{\sqrt{(x_B - x_A)^2 + (y_B - y_A)^2}}$ (5)

and α_DE is calculated as shown in formula (6):

$\alpha_{DE} = \arctan\dfrac{z_E - z_D}{\sqrt{(x_E - x_D)^2 + (y_E - y_D)^2}}$ (6)

If the difference between L_D and L_A is smaller than the third threshold value, and the difference between α_AB and α_DE is smaller than the fourth threshold value, point D can be determined to be a point on the reference surface. The third threshold value and the fourth threshold value may be empirical values, or may be values determined according to the application scenario.
It should be noted that, compared with the method for determining a point on the reference surface shown in FIG. 3, the precondition for using this second determination method is that at least three points belonging to the reference surface are already known, and one of these three points must be located on the same scanning radius as the point to be determined.
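A matching sketch of this lateral check is given below, assuming A and B are known surface points on one beam, E is a known surface point adjacent to the candidate D, and A lies on the same scanning radius as D. The thresholds are illustrative placeholders, not values from the application.

```python
import math
import numpy as np

def angle_to_horizontal(p, q) -> float:
    """Included angle (in degrees) between the line p-q and the horizontal plane."""
    return math.degrees(math.atan2(q[2] - p[2], math.hypot(q[0] - p[0], q[1] - p[1])))

def is_surface_point_lateral(d, e, a, b, t3: float = 0.5, t4_deg: float = 5.0) -> bool:
    """Lateral criterion: the candidate D (with neighbour E) is compared against
    the known surface point A (with neighbour B) on the same scanning radius.
    D is accepted if |L_D - L_A| < t3 and the two angles differ by less than t4_deg."""
    l_a = float(np.linalg.norm(a))        # distance of A to the coordinate origin, formula (3)
    l_d = float(np.linalg.norm(d))        # distance of D to the coordinate origin, formula (4)
    angle_ab = angle_to_horizontal(a, b)  # formula (5)
    angle_de = angle_to_horizontal(d, e)  # formula (6)
    return abs(l_d - l_a) < t3 and abs(angle_de - angle_ab) < t4_deg

# Example with four nearly coplanar ground points.
A, B = np.array([4.0, 0.0, -1.60]), np.array([5.0, 0.0, -1.61])
D, E = np.array([0.0, 4.1, -1.60]), np.array([0.0, 5.1, -1.62])
print(is_surface_point_lateral(D, E, A, B))  # True
```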
Whichever of the above methods is used to determine the points on the reference surface, the height of the reference surface is then determined from the coordinates of the points on the reference surface in a manner including, but not limited to: dividing the three-dimensional point cloud into a plurality of grids according to the x-axis coordinates and y-axis coordinates of the three-dimensional point cloud; and determining the height of the reference surface according to the z-axis coordinates of the points belonging to the reference surface within each of the plurality of grids.
In one possible implementation, determining the height of the reference surface from the coordinates in the z-axis of points belonging to the reference surface within each of the plurality of grids includes: averaging the values of the three-dimensional point clouds corresponding to the points belonging to the reference plane in each grid in the plurality of grids on the z-axis coordinate; the average value was taken as the height of the reference plane.
After the computer equipment obtains the three-dimensional point cloud to be marked, the three-dimensional point cloud is divided into a plurality of grids on the x-y plane; the size of the grids is not limited in the embodiment of the application. According to the coordinates of the points determined above as belonging to the reference surface, the average of the z-axis values of the points that fall into each grid and belong to the reference surface is calculated, and this average is the height of the reference surface of the area represented by that grid. A reference surface height grid is then output according to the grids divided on the x-y plane and the reference surface height of each grid, and the terminal can automatically look up the height of the reference surface at the position where the point cloud is located through this reference surface height grid.
Take the schematic view of a ground height grid shown in FIG. 5 as an example. In this figure the reference surface is set to be the ground; it should be noted that the category of the reference surface includes, but is not limited to, the ground. Point A, i.e. the origin of coordinates, represents the position of the radar sensor, and the size of each divided grid is 1.0 x 1.0 meter. Each grid in FIG. 5 contains two parts of data: a coordinate value and a z-axis value. The coordinate value represents the coordinates of the upper-left corner point of the grid in the x-y plane; for example, (1, 0) represents the coordinates of point B in the x-y plane, i.e. point B has an x-axis coordinate of 1 and a y-axis coordinate of 0. The z-axis value represents the height of the ground within the grid, each grid corresponding to one ground height. For example, for grid C, z = -1 indicates that the ground height of the area corresponding to grid C is -1.
Since the area scanned by the radar sensor differs, the coordinates of the points on the ground also differ, so the ground height grid output after averaging the z-axis coordinate values of the ground points within each grid differs as well. The size of the cells included in the ground height grid is not limited; it may be 1.0 x 1.0 meter or 0.5 x 0.5 meter, as long as every cell has the same size.
After the ground height grid is obtained, the computer equipment looks up the ground height grid according to the coordinates of any point of the three-dimensional point cloud on the x-y plane, and the height of the ground where the point cloud is located can be obtained. For example, if the coordinates of point D of the three-dimensional point cloud on the x-y plane are (1.5, 1.5), the height of the ground where the point cloud is located can be found according to FIG. 5 to be z = -2.
It should be noted that, because the radar sensor is typically located on a moving vehicle, the ground point collected by the radar sensor is always located below the radar sensor in the z-axis direction, and therefore, the value of z in the ground height grid is always negative.
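The construction and lookup of such a surface height grid can be sketched as follows, assuming the surface points have already been identified by the checks above. The 1.0 x 1.0 meter cell size matches the FIG. 5 example; the function names and the dictionary-based grid layout are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def build_height_grid(surface_points: np.ndarray, cell: float = 1.0) -> dict:
    """Map each (i, j) cell on the x-y plane to the average z value of the
    surface points falling into it, i.e. the reference surface height there."""
    sums = defaultdict(lambda: [0.0, 0])
    for x, y, z in surface_points:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        sums[key][0] += z
        sums[key][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

def lookup_height(grid: dict, x: float, y: float, cell: float = 1.0):
    """Return the surface height of the cell containing (x, y), or None if no
    surface point fell into that cell."""
    return grid.get((int(np.floor(x / cell)), int(np.floor(y / cell))))

# Example mirroring FIG. 5: a query point at (1.5, 1.5) falls into cell (1, 1).
pts = np.array([[1.2, 1.7, -2.0], [1.8, 1.1, -2.0], [0.3, 0.4, -1.0]])
grid = build_height_grid(pts)
print(lookup_height(grid, 1.5, 1.5))  # -2.0
```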
After the computer equipment obtains the height of the reference surface where the object is located, the height difference between the frame body of the point cloud and the reference surface where the point cloud is located can be further obtained through calculation, and then the frame body is automatically adjusted to be attached to the reference surface according to the height difference.
In one possible implementation, determining the height difference between the frame and the reference surface based on the height of the reference surface includes: determining the height of a central point of the frame body according to coordinates of the three-dimensional point cloud corresponding to the frame body; the difference between the height of the center point of the frame and the height of the reference surface is used as the difference between the height of the frame and the height of the reference surface.
For example, after determining the height of the reference surface where the point cloud is located, the computer device further needs to determine the height difference between the frame of the point cloud and the reference surface, so as to adjust the fitting between the frame and the reference surface. After the frame body of the marked point cloud is determined, the coordinates of the three-dimensional point cloud corresponding to the frame body are also determined, so that the coordinates of the center point of the frame body on the z axis can be known, and the height difference between the center point of the frame body and the reference plane can be obtained. The height difference between the frame of the point cloud and the reference plane where the point cloud is located may be the height difference between the center point of the frame and the reference plane.
In step 204, the computer device automatically adjusts the frame according to the height difference between the frame and the reference surface, and obtains the labeling result of the point cloud according to the adjusted frame.
Take, for example, the case where the reference surface is the ground and the object corresponding to the point cloud is a vehicle running on the ground. Since the vehicle is attached to the ground, if the frame body of the vehicle is not attached to the ground, the frame body cannot accurately reflect information such as the size and position of the vehicle; therefore, the frame body needs to be adjusted so that it is attached to the ground, so as to obtain an accurate labeling result for the vehicle.
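A minimal sketch of this adjustment is given below. It assumes the frame body is stored as a center point plus a size, and that "attached to the reference surface" means the bottom face of the box is placed at the surface height; both the data layout and this interpretation are assumptions for illustration.

```python
def adjust_box_to_surface(box: dict, surface_z: float) -> dict:
    """Shift the box along z according to the height difference between the box
    and the reference surface so that the box bottom rests on the surface.

    box -- e.g. {"center": [cx, cy, cz], "size": [length, width, height]}
    """
    cx, cy, cz = box["center"]
    box_height = box["size"][2]

    # Height difference between the center point of the frame body and the surface.
    height_diff = cz - surface_z

    # Move the center so that the bottom face lies on the reference surface.
    new_cz = cz - height_diff + box_height / 2.0
    return {"center": [cx, cy, new_cz], "size": list(box["size"])}

# Example: a 1.5 m tall box whose center is at z = 0.1 over ground at z = -2.0.
box = {"center": [10.0, 3.0, 0.1], "size": [4.5, 1.9, 1.5]}
print(adjust_box_to_surface(box, -2.0))  # center z becomes -1.25
```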
In the embodiment of the application, the height difference of the frame body and the reference surface where the point cloud is located is automatically determined according to the coordinates of the three-dimensional point cloud, and the attachment of the frame body and the reference surface is automatically adjusted according to the height difference of the frame body and the reference surface, so that the labeling result of the point cloud is obtained. Compared with the method that the z axis of the frame body is manually adjusted to be attached to the reference surface, the method improves the labeling efficiency and improves the accuracy of the labeling result by automatically adjusting the frame body.
Referring to fig. 6, an embodiment of the present application provides a point cloud labeling apparatus, which is configured to execute the point cloud labeling method shown in fig. 2. As shown in fig. 6, the apparatus includes:
the acquisition module 601 is configured to acquire a three-dimensional point cloud to be labeled;
a determining module 602, configured to determine a frame of a point cloud marked in the three-dimensional point cloud;
the determining module 602 is further configured to determine a height difference between the frame and a reference plane where the point cloud is located;
the adjusting module 603 is configured to automatically adjust the frame according to the height difference between the frame and the reference plane, and obtain a labeling result of the point cloud according to the adjusted frame.
In a possible implementation manner, the determining module 602 is configured to determine, according to coordinates of the three-dimensional point cloud, a height of a reference plane where the point cloud is located; the height difference between the frame and the reference surface is determined based on the height of the reference surface.
In one possible implementation, the determining module 602 is configured to determine a point belonging to the reference plane in the three-dimensional point cloud; the height of the reference surface is determined from the coordinates of the point on the reference surface.
In a possible implementation manner, the determining module 602 is configured to traverse the three-dimensional point cloud, and for a first traversed point, calculate angles between connecting lines between the first point and adjacent points of the first point and a horizontal plane, where the first point is any traversed point; based on the angles between the connecting lines of the first point and the adjacent points of the first point and the horizontal plane, whether the first point belongs to a point on the reference plane is determined.
In a possible implementation manner, the determining module 602 is configured to calculate a first angle between a line between the first point and the second point and a horizontal plane, and a second angle between a line between the first point and the third point and a horizontal plane, where the second point and the third point are adjacent to the first point, respectively;
the determining module 602 is configured to determine that the first point belongs to a point on the reference plane if the first included angle and the second included angle are both smaller than a first threshold and a difference between the first included angle and the second included angle is smaller than a second threshold.
In a possible implementation manner, the determining module 602 is further configured to calculate angles between the connecting lines between the fourth point and the adjacent points of the fourth point and the horizontal plane, where the fourth point is a point that belongs to the same scanning radius as the first point; calculating a first distance between the first point and the origin of coordinates and a second distance between the fourth point and the origin of coordinates;
a determining module 602, configured to determine whether the first point belongs to a point on the reference surface based on the included angles between the connecting lines between the first point and the adjacent points of the first point and the horizontal plane, and the included angles between the connecting lines between the fourth point and the adjacent points of the fourth point and the horizontal plane, respectively.
In a possible implementation manner, the determining module 602 is configured to calculate a fourth angle between a line between the first point and a sixth point and the horizontal plane, where the sixth point is an adjacent point to the first point; calculating a third included angle between a connecting line between the fourth point and a fifth point and a horizontal plane, wherein the fifth point is an adjacent point of the fourth point;
The determining module 602 is configured to, if the difference between the first distance and the second distance is smaller than the third threshold value and the difference between the third included angle and the fourth included angle is smaller than the fourth threshold value, determine that the first point belongs to a point on the reference plane.
In one possible implementation, the determining module 602 is configured to divide the three-dimensional point cloud into a plurality of grids according to x-axis coordinates and y-axis coordinates of the three-dimensional point cloud; and determining the height of the reference surface according to the coordinates of the three-dimensional point cloud corresponding to the points belonging to the reference surface in each grid in the multiple grids on the z-axis.
In a possible implementation manner, the determining module 602 is configured to average values of a three-dimensional point cloud corresponding to points belonging to the reference plane in each of the multiple grids on a z-axis coordinate; the average value was taken as the height of the reference plane.
In a possible implementation manner, the determining module 602 is configured to determine a height of a center point of the frame according to coordinates of a three-dimensional point cloud corresponding to the frame; the difference between the height of the center point of the frame and the height of the reference surface is used as the difference between the height of the frame and the height of the reference surface.
In the embodiment of the application, the height difference of the frame body and the reference surface where the point cloud is located is automatically determined according to the coordinates of the three-dimensional point cloud, and the attachment of the frame body and the reference surface is automatically adjusted according to the height difference of the frame body and the reference surface, so that the labeling result of the point cloud is obtained. Compared with the method that the z axis of the frame body is manually adjusted to be attached to the reference surface, the method improves the labeling efficiency and improves the accuracy of the labeling result by automatically adjusting the frame body.
It should be noted that, when the apparatus provided in the foregoing embodiment performs the functions thereof, only the division of the foregoing functional modules is used as an example, in practical application, the foregoing functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to perform all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Fig. 7 is a schematic structural diagram of a server according to an embodiment of the present application. The server may vary considerably in configuration or performance, and may include one or more processors (Central Processing Units, CPU) 701 and one or more memories 702, where at least one computer program is stored in the one or more memories 702 and is loaded and executed by the one or more processors 701, so that the server implements the point cloud labeling method provided by each of the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard and an input/output interface for implementing the functions of the device, which are not described herein.
Fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application. For example, the terminal may be: tablet, notebook or desktop. Terminals may also be referred to by other names as user equipment, portable terminals, laptop terminals, desktop terminals, etc.
Generally, the terminal includes: a processor 1501 and a memory 1502.
The processor 1501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed by the display screen. In some embodiments, the processor 1501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1502 may include one or more computer-readable storage media, which may be non-transitory. Memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1502 is configured to store at least one instruction for execution by the processor 1501 to cause the terminal to implement the point cloud labeling method provided by the method embodiments in the present application.
In some embodiments, the terminal may further optionally include: a peripheral interface 1503 and at least one peripheral device. The processor 1501, memory 1502 and peripheral interface 1503 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1503 via a bus, signal lines, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1504, a display screen 1505, a camera assembly 1506, audio circuitry 1507, a positioning assembly 1508, and a power supply 1509.
A peripheral interface 1503 may be used to connect I/O (Input/Output) related at least one peripheral device to the processor 1501 and the memory 1502. In some embodiments, processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1504 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1504 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuit 1504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication, short range wireless communication) related circuits, which are not limited in this application.
Display 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When display screen 1505 is a touch display screen, display screen 1505 also has the ability to collect touch signals at or above the surface of display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. At this point, display 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1505 may be one, disposed on the front panel of the terminal; in other embodiments, the display 1505 may be at least two, respectively disposed on different surfaces of the terminal or in a folded design; in other embodiments, the display 1505 may be a flexible display disposed on a curved surface or a folded surface of the terminal. Even more, the display 1505 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display screen 1505 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1507 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1501 for processing, or input them to the radio frequency circuit 1504 for voice communication. For stereo acquisition or noise reduction, multiple microphones may be disposed at different parts of the terminal. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1507 may also include a headphone jack.
The positioning component 1508 is used to determine the current geographic location of the terminal to enable navigation or LBS (Location Based Service). The positioning component 1508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1509 is used to power the various components in the terminal. The power supply 1509 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal further includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyroscope sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 can detect the magnitudes of accelerations on the three coordinate axes of a coordinate system established with the terminal. For example, the acceleration sensor 1511 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1501 may control the display screen 1505 to display the user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 1511. The acceleration sensor 1511 may also be used to collect motion data of a game or a user.
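As a purely illustrative sketch of this landscape/portrait decision, a minimal rule could compare the gravity components on the x and y axes; the function name, signature, and comparison rule below are assumptions for illustration, not part of the embodiment.

def choose_orientation(gx: float, gy: float) -> str:
    """Illustrative only: pick a view from the gravity components (m/s^2)."""
    # Upright terminal: gravity projects mainly onto the y axis (portrait view).
    # Terminal held sideways: gravity projects mainly onto the x axis (landscape view).
    return "portrait" if abs(gy) >= abs(gx) else "landscape"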
The gyro sensor 1512 may detect a body direction and a rotation angle of the terminal, and the gyro sensor 1512 may collect a 3D motion of the user on the terminal in cooperation with the acceleration sensor 1511. The processor 1501, based on the data collected by the gyro sensor 1512, may implement the following functions: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1513 may be disposed on a side frame of the terminal and/or below the display screen 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal, the user's grip signal on the terminal can be detected, and the processor 1501 performs left- or right-hand recognition or quick operations according to the grip signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at the lower layer of the display screen 1505, the processor 1501 controls the operability controls on the UI according to the user's pressure operation on the display screen 1505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1514 is used to collect the user's fingerprint, and the processor 1501 recognizes the user's identity according to the fingerprint collected by the fingerprint sensor 1514, or the fingerprint sensor 1514 recognizes the user's identity according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1514 may be provided on the front, back, or side of the terminal. When a physical key or a vendor Logo is provided on the terminal, the fingerprint sensor 1514 may be integrated with the physical key or vendor Logo.
The optical sensor 1515 is used to collect the ambient light intensity. In one embodiment, the processor 1501 may control the display brightness of the display screen 1505 based on the ambient light intensity collected by the optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1505 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1505 is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
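A minimal sketch of this ambient-light-to-brightness mapping is given below; the lux range and the linear mapping are assumptions for illustration only.

def display_brightness(ambient_lux: float, min_lux: float = 10.0, max_lux: float = 1000.0) -> float:
    """Illustrative only: brighter ambient light gives higher display brightness in [0, 1]."""
    ratio = (ambient_lux - min_lux) / (max_lux - min_lux)
    return min(1.0, max(0.0, ratio))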
A proximity sensor 1516, also referred to as a distance sensor, is typically provided on the front panel of the terminal. The proximity sensor 1516 is used to collect the distance between the user and the front face of the terminal. In one embodiment, when the proximity sensor 1516 detects a gradual decrease in the distance between the user and the front face of the terminal, the processor 1501 controls the display 1505 to switch from the on-screen state to the off-screen state; when the proximity sensor 1516 detects that the distance between the user and the front face of the terminal gradually increases, the processor 1501 controls the display screen 1505 to switch from the off-screen state to the on-screen state.
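A minimal sketch of this proximity-based screen switching is given below; the distance threshold is an assumption for illustration only.

def screen_state(distance_cm: float, threshold_cm: float = 5.0) -> str:
    """Illustrative only: screen off when the user is close to the front face, on otherwise."""
    return "off" if distance_cm < threshold_cm else "on"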
Those skilled in the art will appreciate that the structure shown in fig. 8 does not constitute a limitation on the terminal, and that the terminal may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a computer device is also provided. The computer device includes a processor and a memory, and the memory has at least one computer program stored therein. The at least one computer program is loaded and executed by one or more processors to cause the computer device to implement any one of the point cloud labeling methods described above.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein at least one computer program, and the at least one computer program is loaded and executed by a processor of a computer device to cause the computer device to implement any one of the point cloud labeling methods described above.
In one possible implementation, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs any one of the point cloud labeling methods described above.
It should be noted that the information (including but not limited to user equipment information, user personal information, and the like), data (including but not limited to data for analysis, stored data, presented data, and the like), and signals referred to in this application are all authorized by the user or fully authorized by the parties, and the collection, use, and processing of the relevant data comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the three-dimensional point clouds of the region containing the object to be marked referred to in this application are all acquired with sufficient authorization.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
It should be noted that the terms "first", "second", and the like in the description and claims of this application (if any) are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application described herein can be implemented in sequences other than those illustrated or described herein. The implementations described in the exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
The foregoing descriptions are merely exemplary embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, or improvement made within the principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. A point cloud labeling method, wherein the method is applied to a computer device, and the method comprises:
acquiring a three-dimensional point cloud to be marked;
determining a frame of the marked point cloud in the three-dimensional point cloud;
determining the height difference between the frame and a reference surface where the point cloud is located;
and automatically adjusting the frame body according to the height difference between the frame body and the reference surface, and obtaining the labeling result of the point cloud according to the adjusted frame body.
2. The method of claim 1, wherein determining the difference in height of the frame from a reference surface on which the point cloud is located comprises:
determining the height of a reference surface where the point cloud is located according to the coordinates of the three-dimensional point cloud;
and determining the height difference between the frame body and the reference surface based on the height of the reference surface.
3. The method of claim 2, wherein determining the height of the reference surface on which the point cloud is located according to the coordinates of the three-dimensional point cloud comprises:
determining points in the three-dimensional point cloud that belong to the reference surface;
determining the height of the reference surface according to the coordinates of the points on the reference surface.
4. The method of claim 3, wherein the determining points in the three-dimensional point cloud that belong to the reference surface comprises:
traversing the three-dimensional point cloud, and for a traversed first point, calculating included angles between a horizontal plane and connecting lines between the first point and adjacent points of the first point respectively, wherein the first point is any traversed point;
and determining whether the first point belongs to a point on the reference surface based on the included angles between the horizontal plane and the connecting lines between the first point and the adjacent points of the first point.
5. The method of claim 4, wherein the calculating included angles between the horizontal plane and the connecting lines between the first point and the adjacent points of the first point respectively comprises:
calculating a first included angle between a connecting line between the first point and the second point and a horizontal plane, and a second included angle between a connecting line between the first point and the third point and the horizontal plane, wherein the second point and the third point are adjacent points of the first point respectively;
the determining whether the first point belongs to a point on the reference surface based on the included angles between the horizontal plane and the connecting lines between the first point and the adjacent points of the first point comprises:
if the first included angle and the second included angle are both smaller than a first threshold, and the difference between the first included angle and the second included angle is smaller than a second threshold, determining that the first point belongs to a point on the reference surface.
6. The method of claim 4, wherein after the traversing the three-dimensional point cloud, the method further comprises:
calculating included angles between the horizontal plane and connecting lines between a fourth point and adjacent points of the fourth point respectively, wherein the fourth point is a point belonging to the same scanning radius as the first point;
calculating a first distance between the first point and an origin of coordinates and a second distance between the fourth point and the origin of coordinates;
the determining whether the first point belongs to a point on the reference surface based on the included angles between the horizontal plane and the connecting lines between the first point and the adjacent points of the first point comprises:
and determining whether the first point belongs to a point on the reference surface based on the included angles between the horizontal plane and the connecting lines between the first point and the adjacent points of the first point, and the included angles between the horizontal plane and the connecting lines between the fourth point and the adjacent points of the fourth point.
7. The method of claim 6, wherein the calculating included angles between the horizontal plane and the connecting lines between the first point and the adjacent points of the first point respectively comprises: calculating a fourth included angle between the horizontal plane and a connecting line between the first point and a sixth point, wherein the sixth point is an adjacent point of the first point;
the calculating included angles between the horizontal plane and the connecting lines between the fourth point and the adjacent points of the fourth point respectively comprises: calculating a third included angle between the horizontal plane and a connecting line between the fourth point and a fifth point, wherein the fifth point is an adjacent point of the fourth point;
the determining whether the first point belongs to a point on the reference surface based on the included angles between the horizontal plane and the connecting lines between the first point and the adjacent points of the first point, and the included angles between the horizontal plane and the connecting lines between the fourth point and the adjacent points of the fourth point, comprises:
if the difference between the first distance and the second distance is smaller than a third threshold, and the difference between the third included angle and the fourth included angle is smaller than a fourth threshold, determining that the first point belongs to a point on the reference surface.
8. The method of claim 3, wherein the determining the height of the reference surface according to the coordinates of the points on the reference surface comprises:
dividing the three-dimensional point cloud into a plurality of grids according to the x-axis coordinate and the y-axis coordinate of the three-dimensional point cloud;
and determining the height of the reference surface according to the z-axis coordinates of the three-dimensional point cloud corresponding to the points belonging to the reference surface in each grid of the plurality of grids.
9. The method of claim 8, wherein determining the height of the reference surface from the z-axis coordinates of the three-dimensional point cloud corresponding to points belonging to the reference surface within each of the plurality of grids comprises:
averaging the values, on the z-axis coordinate, of the three-dimensional point cloud corresponding to the points belonging to the reference surface in each grid of the plurality of grids;
taking the average value as the height of the reference surface.
10. The method of claim 2, wherein the determining a height difference of the frame from the reference surface based on the height of the reference surface comprises:
determining the height of a center point of the frame body according to coordinates of the three-dimensional point cloud corresponding to the frame body;
and taking the height difference between the central point of the frame body and the reference surface as the height difference between the frame body and the reference surface.
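For readability, the following is a minimal, self-contained sketch of the pipeline described in claims 1-5 and 8-10: points on the reference surface are detected from the included angles that the connecting lines between neighboring points make with the horizontal plane, the reference surface height is estimated by averaging the z-axis coordinates of those points per x-y grid, and the annotated frame is then shifted by its height difference from the reference surface. All function names, thresholds, and data layouts below are assumptions for illustration and are not part of the claims; the same-scanning-radius comparison of claims 6-7 is omitted for brevity.

import math
from collections import defaultdict

def line_angle_to_horizontal(p, q):
    """Included angle (degrees) between the line p->q and the horizontal (x-y) plane."""
    dx, dy, dz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
    return math.degrees(math.atan2(abs(dz), math.hypot(dx, dy)))

def reference_surface_points(points, angle_thresh=10.0, diff_thresh=5.0):
    """Claims 4-5: a point is kept when the lines to both of its neighbors are
    nearly horizontal and the two included angles are nearly equal."""
    kept = []
    for i in range(1, len(points) - 1):
        a1 = line_angle_to_horizontal(points[i], points[i - 1])
        a2 = line_angle_to_horizontal(points[i], points[i + 1])
        if a1 < angle_thresh and a2 < angle_thresh and abs(a1 - a2) < diff_thresh:
            kept.append(points[i])
    return kept

def reference_surface_height(surface_points, cell=1.0):
    """Claims 8-9: divide the x-y plane into grids, average the z values of the
    surface points in each grid, and (as one reading of claim 9) average the
    per-grid means into a single reference surface height."""
    cells = defaultdict(list)
    for x, y, z in surface_points:
        cells[(int(x // cell), int(y // cell))].append(z)
    grid_means = [sum(zs) / len(zs) for zs in cells.values()]
    return sum(grid_means) / len(grid_means)

def adjust_frame(center_z, frame_height, surface_z):
    """Claims 1, 2 and 10: the height difference between the frame's center and the
    reference surface drives an automatic z-axis shift so that the frame bottom
    rests on the reference surface."""
    height_diff = center_z - surface_z           # claim 10: center height vs. surface height
    shift = frame_height / 2.0 - height_diff     # move the bottom face onto the surface
    return center_z + shift                      # adjusted center height

In practice, the per-grid heights of claims 8-9 could instead be kept separately and the frame compared only against the grid cells it overlaps; collapsing them into a single average, as above, is simply the most direct reading of claim 9.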
CN202211052700.6A 2022-08-31 2022-08-31 Point cloud labeling method Pending CN117670986A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211052700.6A CN117670986A (en) 2022-08-31 2022-08-31 Point cloud labeling method

Publications (1)

Publication Number Publication Date
CN117670986A true CN117670986A (en) 2024-03-08

Family

ID=90075735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211052700.6A Pending CN117670986A (en) 2022-08-31 2022-08-31 Point cloud labeling method

Country Status (1)

Country Link
CN (1) CN117670986A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination