CN107784038B - Sensor data labeling method - Google Patents

Sensor data labeling method

Info

Publication number
CN107784038B
CN107784038B (application number CN201610799115.0A)
Authority
CN
China
Prior art keywords
dimensional
data
labeling
target
annotation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610799115.0A
Other languages
Chinese (zh)
Other versions
CN107784038A (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fafa Automobile China Co ltd
Original Assignee
Fafa Automobile China Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fafa Automobile China Co ltd
Priority to CN201610799115.0A
Publication of CN107784038A
Application granted
Publication of CN107784038B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases

Abstract

The embodiment of the invention provides a sensor data labeling method, which comprises the following steps: calling a two-dimensional sensor and a three-dimensional sensor to respectively acquire two-dimensional data and three-dimensional data; labeling the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional labeling data and three-dimensional labeling data; and carrying out mutual verification according to the two-dimensional labeling data and the three-dimensional labeling data to obtain two-dimensional target labeling data and/or three-dimensional target labeling data. The embodiment of the invention performs mutual verification on the labeled data by utilizing the complementarity between data from different sources: the respective advantages of the two-dimensional data and the three-dimensional data compensate for each other's limitations and allow each side to verify and correct the other, which not only improves the accuracy of the two-dimensional labeling data and the three-dimensional labeling data but also reduces the time consumed by labeling.

Description

Sensor data labeling method
Technical Field
The invention relates to the technical field of unmanned vehicles, in particular to a sensor data labeling method.
Background
Unmanned vehicles are the direction in which automotive intelligence is developing, and environmental perception is one of their key technologies.
Environmental perception detects the static and dynamic environment around the unmanned vehicle. On the road, static obstacles include lane lines, traffic lights, traffic signs, road markings and the like, while dynamic obstacles include pedestrians, vehicles, bicycles and the like.
A route is planned based on the surrounding static and dynamic environment, so that the unmanned vehicle follows road traffic signs while driving and avoids obstacles such as pedestrians and vehicles.
For environmental perception, unmanned vehicles are typically equipped with a variety of sensor devices, typically including two-dimensional (2D) sensors and three-dimensional (3D) sensors, for acquiring 2D data and 3D data.
In the environment sensing process, a machine learning method is generally used: a model is trained on labeled data obtained by labeling massive amounts of data, so as to complete tasks such as detection, tracking and identification of specific objects.
The inventor finds in the course of implementing the invention that in the annotation process, the 2D data and the 3D data are generally annotated separately.
On one hand, 2D data is clear and intuitive to label, but its viewing angle is single and its detection range is limited.
On the other hand, 3D data has a large perception range and can model the 3D environment directly, but its labeling complexity is high.
Labeling 2D data and 3D data separately therefore has limitations: the accuracy of the labeled data is low and labeling takes a long time, and because the labeled data serves as training data, its low accuracy leads to poor performance of the models trained on it.
Disclosure of Invention
The embodiment of the invention provides a sensor data labeling method, which is used for solving the problems of low accuracy and time consumption of labeled data.
The embodiment of the invention provides a sensor data labeling method, which comprises the following steps:
calling a two-dimensional sensor and a three-dimensional sensor to respectively acquire two-dimensional data and three-dimensional data;
labeling the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional labeling data and three-dimensional labeling data;
and carrying out mutual verification according to the two-dimensional labeling data and the three-dimensional labeling data to obtain two-dimensional target labeling data and/or three-dimensional target labeling data.
Preferably, the step of calling the two-dimensional sensor and the three-dimensional sensor to respectively acquire the two-dimensional data and the three-dimensional data includes:
and respectively calling the two-dimensional sensor and the three-dimensional sensor to acquire two-dimensional data and three-dimensional data at the same time.
Preferably, the step of labeling the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional labeling data and three-dimensional labeling data includes:
respectively projecting the two-dimensional data and the three-dimensional data into the same coordinate system to obtain two-dimensional coordinate data and three-dimensional coordinate data;
labeling the two-dimensional coordinate data and the three-dimensional coordinate data respectively in the same coordinate system to obtain two-dimensional labeling data and three-dimensional labeling data;
alternatively,
labeling the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional original labeling data and three-dimensional original labeling data;
and respectively projecting the two-dimensional original labeling data and the three-dimensional original labeling data into the same coordinate system to obtain two-dimensional labeling data and three-dimensional labeling data.
Preferably, the step of performing mutual verification according to the two-dimensional labeling data and the three-dimensional labeling data to obtain two-dimensional target labeling data and/or three-dimensional target labeling data includes:
projecting the two-dimensional labeling data into a three-dimensional space to which the three-dimensional labeling data belongs to obtain three-dimensional reference labeling data;
in the three-dimensional space, checking the three-dimensional annotation data based on the three-dimensional reference annotation data to obtain three-dimensional target annotation data;
and/or,
projecting the three-dimensional labeling data into a two-dimensional space to which the two-dimensional labeling data belongs to obtain two-dimensional reference labeling data;
and in the two-dimensional space, verifying the two-dimensional annotation data based on the two-dimensional reference annotation data to obtain two-dimensional target annotation data.
Preferably, the step of projecting the two-dimensional annotation data into a three-dimensional space to which the three-dimensional annotation data belongs to obtain three-dimensional reference annotation data includes:
when a two-dimensional labeling object of the two-dimensional labeling data is a preset first target object, projecting the two-dimensional labeling data into a three-dimensional space to which the three-dimensional labeling data belongs to obtain three-dimensional reference labeling data;
the step of projecting the three-dimensional labeling data into a two-dimensional space to which the two-dimensional labeling data belongs to obtain two-dimensional reference labeling data comprises:
and when the three-dimensional labeling object of the three-dimensional labeling data is a preset second target object, projecting the three-dimensional labeling data into a two-dimensional space to which the two-dimensional labeling data belongs to obtain two-dimensional reference labeling data.
Preferably, the step of verifying the three-dimensional annotation data based on the three-dimensional reference annotation data to obtain three-dimensional target annotation data includes:
when a two-dimensional labeling object of the two-dimensional labeling data and a three-dimensional labeling target of the three-dimensional labeling data are the same object, calculating a first similarity between the three-dimensional reference labeling data and the three-dimensional labeling data;
when the first similarity is lower than a preset first similarity threshold, verifying the three-dimensional annotation data based on the three-dimensional reference annotation data to obtain three-dimensional target annotation data;
the step of verifying the two-dimensional annotation data based on the two-dimensional reference annotation data to obtain two-dimensional target annotation data comprises:
when the three-dimensional labeling object of the three-dimensional labeling data and the two-dimensional labeling object of the two-dimensional labeling data are the same object, calculating a second similarity between the two-dimensional reference labeling data and the two-dimensional labeling data;
and when the second similarity is lower than a preset second similarity threshold, verifying the two-dimensional annotation data based on the two-dimensional reference annotation data to obtain two-dimensional target annotation data.
Preferably, the step of verifying the three-dimensional annotation data based on the three-dimensional reference annotation data to obtain three-dimensional target annotation data includes:
when the two-dimensional labeling object of the two-dimensional labeling data and the three-dimensional labeling object of the three-dimensional labeling data are the same object, generating three-dimensional target labeling data by adopting the three-dimensional reference labeling data and the three-dimensional labeling data;
wherein the weight of the three-dimensional reference annotation data is greater than the weight of the three-dimensional annotation data;
the step of verifying the two-dimensional annotation data based on the two-dimensional reference annotation data to obtain two-dimensional target annotation data comprises:
when the three-dimensional labeling object of the three-dimensional labeling data and the two-dimensional labeling object of the two-dimensional labeling data are the same object, generating two-dimensional target labeling data by adopting the two-dimensional reference labeling data and the two-dimensional labeling data;
wherein the weight of the two-dimensional reference annotation data is greater than the weight of the two-dimensional annotation data.
Preferably, the verification comprises one or more of the following operations:
stretching, dragging, zooming and rotating.
Preferably, the method further comprises the following steps:
and training a model by adopting the two-dimensional target labeling data and/or the three-dimensional target labeling data.
According to the sensor data labeling method provided by the embodiment of the invention, the two-dimensional sensor and the three-dimensional sensor are called to collect two-dimensional data and three-dimensional data for labeling, and the two-dimensional labeling data and the three-dimensional labeling data obtained by labeling are mutually verified. Mutual verification of the labeling data exploits the complementarity between data from different sources, so that the respective advantages of the two-dimensional data and the three-dimensional data make up for each other's limitations and each side verifies and corrects the other. This not only improves the accuracy of the two-dimensional labeling data and the three-dimensional labeling data and reduces the time consumed by labeling, but also, because the target labeling data is highly accurate, improves the performance of the model obtained by subsequent training.
Drawings
FIG. 1 is a flow chart illustrating steps of an embodiment of a method for annotating sensor data according to the present invention;
FIG. 2A is an exemplary diagram of two-dimensional annotation data in accordance with an embodiment of the present invention;
FIG. 2B is an exemplary diagram of three-dimensional annotation data in accordance with an embodiment of the present invention;
FIGS. 3A-3B are exemplary diagrams illustrating a three-dimensional annotation process according to an embodiment of the present invention;
FIGS. 4A-4B are exemplary diagrams illustrating a two-dimensional annotation process according to an embodiment of the invention;
FIG. 5 is a block diagram of an embodiment of the sensor data labeling apparatus according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a method for labeling sensor data according to the present invention is shown, which may specifically include the following steps:
Step 101, calling a two-dimensional sensor and a three-dimensional sensor to respectively acquire two-dimensional data and three-dimensional data.
In practical application, the embodiment of the invention can be applied to unmanned machines, such as unmanned vehicles, unmanned robots, sweeping robots, unmanned aerial vehicles and the like.
In these unmanned machines, a two-dimensional (2D) sensor and a three-dimensional (3D) sensor are installed, wherein the 2D sensor may include a camera or the like, and the 3D sensor may be a laser radar, a millimeter wave radar, an ultrasonic radar, or the like.
The mounting positions and orientations of the 2D sensor and the 3D sensor generally depend on the range of objects they are required to sense; for example, an unmanned vehicle needs to detect obstacles in front of it, so the 2D sensor and the 3D sensor are generally mounted above or at the front of the vehicle with the scanning direction pointing directly ahead.
In the embodiment of the invention, the two-dimensional sensor and the three-dimensional sensor can be respectively called to acquire two-dimensional data and three-dimensional data at the same time, for example, a camera and a laser radar are called to simultaneously acquire image data and laser point cloud.
In one synchronization manner, instructions may be issued to the 2D sensor and the 3D sensor simultaneously, so that the 2D sensor and the 3D sensor begin acquiring 2D data and 3D data at the same time.
In another synchronization manner, the 2D sensor and the 3D sensor may add time stamps to the 2D data and the 3D data when acquiring the 2D data and the 3D data, and synchronize the 2D data and the 3D data by the time stamps.
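As a rough illustration of the timestamp-based synchronization, the following minimal sketch pairs each image with the nearest point cloud; the function name and the 50 ms tolerance are assumptions chosen for illustration and are not specified by this embodiment:

from bisect import bisect_left
from typing import List, Tuple

def pair_by_timestamp(
    image_stamps: List[float],   # sorted camera timestamps, in seconds
    cloud_stamps: List[float],   # sorted lidar timestamps, in seconds
    tolerance: float = 0.05,     # assumed maximum offset between paired frames
) -> List[Tuple[int, int]]:
    """Pair each image with the nearest point cloud within the tolerance."""
    pairs = []
    for i, t in enumerate(image_stamps):
        j = bisect_left(cloud_stamps, t)
        best = None
        # candidates: the cloud timestamps just before and just after t
        for k in (j - 1, j):
            if 0 <= k < len(cloud_stamps):
                dt = abs(cloud_stamps[k] - t)
                if dt <= tolerance and (best is None or dt < abs(cloud_stamps[best] - t)):
                    best = k
        if best is not None:
            pairs.append((i, best))
    return pairs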
Step 102, labeling the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional labeling data and three-dimensional labeling data.
Labeling in 2D space generally refers to the process of manually or automatically labeling objects of particular interest (e.g., obstacles, lane lines, etc.) in 2D data (e.g., image data).
For example, as shown in fig. 2A, an obstacle (e.g., a vehicle) may be labeled with a rectangular frame in the image data by a 2D labeling tool.
Labeling in 3D space generally refers to the process of manually or automatically labeling objects of particular interest (e.g., travelable areas, obstacles, etc.) in 3D data (e.g., laser point clouds).
For example, as shown in fig. 2B, an obstacle (e.g., a vehicle) may be marked with a cube in a laser point cloud by a 3D marking tool.
In the embodiment of the invention, the 2D data and the 3D data can be labeled to obtain the two-dimensional labeling data and the three-dimensional labeling data in the same coordinate system.
The same coordinate system is convenient for verifying the labeled data of the sensors of different types.
In specific implementation, the 2D sensor and the 3D sensor may be calibrated in advance to obtain calibration information, and the 2D data and the 3D data are projected into a same coordinate system by using the calibration information.
The calibration refers to a process of analyzing imaging characteristics of different sensors, determining a spatial geometrical relationship of an imaging model of the sensors, and characterizing the spatial geometrical relationship by solving a projection matrix. The solving process mainly comprises the steps of searching corresponding point sets on data collected by different sensors at the same moment, and solving the projection matrix through the constraint of the point sets.
For example, for the joint calibration of a laser radar and a camera, a right-angled triangular plate can be used as a target, with lines used as matching features to obtain a calibration result; the range image of the laser radar can be transformed so that the natural edges of the scene become clearer and more prominent, and the extracted edge lines are matched against the edges detected in the picture captured by the camera; or data can be acquired several times for a fixed target while the sensor platform moves along an arbitrary trajectory, and the calibration result is obtained by minimizing the Euclidean projection deviation of scene points among frames captured from different viewing angles.
The projection by using the calibration information refers to a process of projecting the labeling result of a certain type of sensor in the expression space of the sensor to another type of sensor expression space, and the projection process generally comprises a series of coordinate transformation processes.
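As a minimal sketch of such a projection, the following code projects lidar points into the image plane with an extrinsic pair (R, t) and a camera intrinsic matrix K; the concrete matrix values are placeholders, since real values come from the calibration step described above:

import numpy as np

def project_points(points_lidar: np.ndarray, R: np.ndarray, t: np.ndarray,
                   K: np.ndarray) -> np.ndarray:
    """Project Nx3 lidar points to Nx2 pixel coordinates."""
    points_cam = points_lidar @ R.T + t            # lidar frame -> camera frame
    points_cam = points_cam[points_cam[:, 2] > 0]  # keep points in front of the camera
    uvw = points_cam @ K.T                         # perspective projection
    return uvw[:, :2] / uvw[:, 2:3]

K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])        # assumed intrinsics
R, t = np.eye(3), np.zeros(3)          # assumed extrinsics
pixels = project_points(np.array([[1.0, 0.5, 10.0]]), R, t, K)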
Take the world coordinate system as the common coordinate system, with the 3D data initially expressed in its own object coordinate system.
If there are two coordinate systems C and C', where C' is obtained by rotating C about the Z axis by an angle θ, the coordinate axes transform as follows:
Cx(1,0,0) → C'x(cosθ, sinθ, 0)
Cy(0,1,0) → C'y(−sinθ, cosθ, 0)
Cz(0,0,1) → C'z(0,0,1)
A point P(x, y, z) of the C coordinate system is then denoted in C' as P'(x·cosθ + y·sinθ, −x·sinθ + y·cosθ, z).
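A quick numeric check of this axis-rotation formula, sketched with an arbitrary angle of 30 degrees (the angle is only an example):

import numpy as np

theta = np.deg2rad(30.0)
# rows of Rz are the axes of C' expressed in C, so Rz @ P gives P in C'
Rz = np.array([[np.cos(theta),  np.sin(theta), 0.0],
               [-np.sin(theta), np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
P = np.array([1.0, 2.0, 3.0])
P_prime = Rz @ P   # equals (x*cosθ + y*sinθ, -x*sinθ + y*cosθ, z)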
In one embodiment of the present invention, step 102 may include the following sub-steps:
a substep S11, projecting the two-dimensional data and the three-dimensional data into the same coordinate system respectively to obtain two-dimensional coordinate data and three-dimensional coordinate data;
and a substep S12, labeling the two-dimensional coordinate data and the three-dimensional coordinate data respectively in the same coordinate system, so as to obtain two-dimensional labeling data and three-dimensional labeling data.
In the embodiment of the present invention, the 2D data and the 3D data may be projected into the same coordinate system, and then the 2D data and the 3D data may be labeled.
In another embodiment of the present invention, step 102 may include the following sub-steps:
substep S21, labeling the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional original labeling data and three-dimensional original labeling data;
and a substep S22, projecting the two-dimensional original data and the three-dimensional original data into the same coordinate system respectively to obtain two-dimensional labeling data and three-dimensional labeling data.
In the embodiment of the present invention, the 2D data and the 3D data may be labeled first, and then the 2D labeling data and the 3D labeling data may be projected into the same coordinate system.
It should be noted that the 2D data and the 3D data may be labeled independently or may be labeled in a mixed manner, and the embodiment of the present invention is not limited thereto.
Step 103, performing mutual verification according to the two-dimensional labeling data and the three-dimensional labeling data to obtain two-dimensional target labeling data and/or three-dimensional target labeling data.
In practical application, the 2D data and the 3D data have different physical properties and are complementary to each other, and can be verified and corrected.
In one embodiment of the present invention, step 103 may comprise the following sub-steps:
substep S31, projecting the two-dimensional labeling data into a three-dimensional space to which the three-dimensional labeling data belongs to obtain three-dimensional reference labeling data;
and a substep S32, verifying the three-dimensional annotation data based on the three-dimensional reference annotation data in the three-dimensional space to obtain three-dimensional target annotation data.
In the embodiment of the invention, based on the extrinsic calibration information and the intrinsic calibration information of the 2D sensor, the 2D annotation data is projected into the 3D space through the corresponding coordinate transformation; for example, pixels of the image data are projected into the 3D space to color the 3D data, so that environmental information can be obtained more intuitively.
By performing visualization operation on the 2D labeling data and the 3D labeling data, the 2D labeling data and the 3D labeling data can be compared for verification, for example, whether a rectangular frame of a vehicle label in the 2D data corresponds to a real vehicle position in the 3D laser point cloud data or not can be verified.
In projection, since 3D data and 2D data differ in dimension and in measure (the coordinates of 2D data are generally in pixel units, while 3D data may be in real-world meters and centimeters) and differ in XY direction, projecting 2D annotation data into 3D space generally involves conversion among the world coordinate system, the camera coordinate system, the projection coordinate system, and the image (pixel) coordinate system.
It should be noted that, projecting 2D annotation data into 3D space generally requires a priori assumptions.
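One common prior assumption is a flat ground plane. The following minimal sketch back-projects a pixel (for example, on the bottom edge of a 2D box) onto the ground; the simple camera geometry assumed here (no rotation, camera at a known height, image y growing downward) is illustrative only:

import numpy as np

def backproject_to_ground(u: float, v: float, K: np.ndarray,
                          cam_height: float) -> np.ndarray:
    """Intersect the viewing ray through pixel (u, v) with a flat ground plane.

    Assumes a camera looking along +Z at cam_height above the ground,
    with no rotation between the camera frame and the world frame.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction in camera frame
    if ray[1] <= 0:
        raise ValueError("ray does not intersect the ground plane")
    scale = cam_height / ray[1]   # ground plane: y_cam = cam_height
    return ray * scale            # 3D point on the ground, in the camera frame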
In one example of an embodiment of the present invention, the checking may include one or more of the following operations:
stretching, dragging, zooming, rotating, and the like.
For example, as shown in fig. 3A, if 2D annotation data is projected into a 3D space, and the 3D annotation data (data in a cube) is found not to coincide with a detection target (irregular figure), the 3D annotation data can be stretched, dragged, scaled or rotated in the direction of the arrow, and the 3D annotation data shown in fig. 3B, which more closely coincides with the detection target, can be obtained.
In the embodiment of the present invention, in order to improve the automation degree and the efficiency of labeling, first target objects may be preset. These first target objects are objects, such as roads and pedestrians, whose labeling quality in the two-dimensional space (for example, lower complexity and higher accuracy) is higher than their labeling quality in the three-dimensional space.
And when the two-dimensional labeling object of the two-dimensional labeling data is a preset first target object, projecting the two-dimensional labeling data into a three-dimensional space to which the three-dimensional labeling data belongs to obtain three-dimensional reference labeling data.
Of course, in addition to automatically projecting the two-dimensional labeling data into the three-dimensional space to which the three-dimensional labeling data belongs, when implementing the embodiment of the present invention, a person skilled in the art can project the two-dimensional labeling data into the three-dimensional space to which the three-dimensional labeling data belongs according to the actual situation, and the embodiment of the present invention is not limited to this.
In the embodiment of the invention, in order to reduce the calculation amount and improve the labeling efficiency, when the two-dimensional labeling object of the two-dimensional labeling data and the three-dimensional labeling object of the three-dimensional labeling data are the same object, the first similarity between the three-dimensional reference labeling data and the three-dimensional labeling data can be calculated.
When the first similarity is lower than a preset first similarity threshold, the similarity between the three-dimensional reference labeling data and the three-dimensional labeling data is low, the three-dimensional labeling data may have a large labeling deviation, and the three-dimensional labeling data can be verified based on the three-dimensional reference labeling data to obtain three-dimensional target labeling data.
On the contrary, when the first similarity is equal to or higher than the preset first similarity threshold, it indicates that the similarity between the three-dimensional reference marking data and the three-dimensional marking data is high, the marking deviation of the three-dimensional marking data is small, and generally within the error range, the three-dimensional marking data may not be verified.
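As a minimal sketch of this first-similarity check, intersection-over-union of axis-aligned 3D boxes can serve as the similarity measure; IoU and the 0.7 threshold are illustrative assumptions, since this embodiment does not fix a specific metric:

import numpy as np

def iou_3d(box_a: np.ndarray, box_b: np.ndarray) -> float:
    """IoU of two axis-aligned boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo = np.maximum(box_a[:3], box_b[:3])
    hi = np.minimum(box_a[3:], box_b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))   # overlap volume, 0 if disjoint
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return float(inter / (vol_a + vol_b - inter))

FIRST_SIMILARITY_THRESHOLD = 0.7   # assumed value

def needs_verification(reference_box: np.ndarray, labeled_box: np.ndarray) -> bool:
    """True when the 3D labeling data should be verified against the reference."""
    return iou_3d(reference_box, labeled_box) < FIRST_SIMILARITY_THRESHOLD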
Of course, in addition to verifying the three-dimensional annotation data by the first similarity, when implementing the embodiment of the present invention, a person skilled in the art may verify the three-dimensional annotation data based on the three-dimensional reference annotation data according to an actual situation, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, in order to improve the automation degree of labeling and improve the efficiency of labeling, when a two-dimensional labeling object of two-dimensional labeling data and a three-dimensional labeling object of three-dimensional labeling data are the same object, three-dimensional reference labeling data and three-dimensional labeling data can be adopted to generate three-dimensional target labeling data.
And the weight of the three-dimensional reference marking data is greater than that of the three-dimensional marking data.
Namely, the importance degree of the three-dimensional reference marking data in generating the new marking data is higher than that of the three-dimensional marking data.
It should be noted that the rule for generating the three-dimensional target labeling data may be set by a person skilled in the art according to actual needs, and the embodiment of the present invention is not limited thereto.
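One possible generation rule, sketched as a weighted average of box parameters with the reference data weighted more heavily as required above; the 0.6/0.4 split is an assumption, not a value fixed by this embodiment:

import numpy as np

W_REFERENCE, W_LABELED = 0.6, 0.4   # reference weight must exceed the labeled weight

def fuse_annotations(reference: np.ndarray, labeled: np.ndarray) -> np.ndarray:
    """Blend two annotation parameter vectors (e.g., center and size of a 3D box)."""
    return W_REFERENCE * reference + W_LABELED * labeled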
In another embodiment of the present invention, step 103 may comprise the following sub-steps:
substep S41, projecting the three-dimensional labeling data into a two-dimensional space to which the two-dimensional labeling data belong, so as to obtain two-dimensional reference labeling data;
and a substep S42, verifying the two-dimensional annotation data based on the two-dimensional reference annotation data in the two-dimensional space to obtain two-dimensional target annotation data.
In the embodiment of the invention, based on the extrinsic calibration information and the intrinsic calibration information of the 3D sensor, the 3D labeling data is projected into the 2D space through the corresponding coordinate transformation, so that the environment around the whole unmanned machine can be observed more comprehensively and the 2D labeling data can be verified.
By performing visualization operations on the 2D labeling data and the 3D labeling data, the two can be compared for verification, for example, verifying whether the rectangular frame labeling a vehicle in the 2D data corresponds to the real vehicle position in the 3D laser point cloud data.
In projection, since the 3D data and the 2D data differ in dimension and in measure (the coordinates of 2D data are generally in pixel units, while 3D data may be in real-world meters and centimeters) and differ in XY direction, projecting 3D annotation data into 2D space likewise involves conversion among the world coordinate system, the camera coordinate system, the projection coordinate system, and the image (pixel) coordinate system.
In one example of an embodiment of the present invention, the checking may include one or more of the following operations:
stretch, drag, zoom, rotate, etc.
For example, as shown in fig. 4A, if 3D annotation data is projected into a 2D space, and the 2D annotation data (data in a rectangular box) is found not to coincide with a detection target (irregular figure), the 2D annotation data can be stretched, dragged, scaled, or rotated in the direction of the arrow, and the 2D annotation data shown in fig. 4B, which more coincides with the detection target, can be obtained.
In the embodiment of the present invention, in order to improve the automation degree and the efficiency of labeling, second target objects may be preset. These second target objects are objects, such as vehicles, whose labeling quality in the three-dimensional space (for example, lower complexity and higher accuracy) is higher than their labeling quality in the two-dimensional space.
And when the three-dimensional labeling object of the three-dimensional labeling data is a preset second target object, projecting the three-dimensional labeling data into a two-dimensional space to which the two-dimensional labeling data belongs to obtain two-dimensional reference labeling data.
Of course, in addition to automatically projecting the three-dimensional labeling data into the two-dimensional space to which the two-dimensional labeling data belongs, when implementing the embodiment of the present invention, a person skilled in the art may project the three-dimensional labeling data into the two-dimensional space to which the two-dimensional labeling data belongs according to the actual situation, and the embodiment of the present invention is not limited thereto.
In the embodiment of the invention, in order to reduce the calculation amount and improve the labeling efficiency, when the three-dimensional labeling object of the three-dimensional labeling data and the two-dimensional labeling object of the two-dimensional labeling data are the same object, the second similarity between the two-dimensional reference labeling data and the two-dimensional labeling data is calculated.
When the second similarity is lower than a preset second similarity threshold, the similarity between the two-dimensional reference labeling data and the two-dimensional labeling data is low, the two-dimensional labeling data may have a large labeling deviation, and the two-dimensional labeling data can be verified based on the two-dimensional reference labeling data to obtain two-dimensional target labeling data.
On the contrary, when the second similarity is equal to or higher than the preset second similarity threshold, it indicates that the similarity between the two-dimensional reference marking data and the two-dimensional marking data is high, the marking deviation of the two-dimensional marking data is small, and generally within the error range, the two-dimensional marking data may not be verified.
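The second-similarity check can be sketched the same way with 2D rectangle IoU; as above, IoU and the threshold value are illustrative assumptions:

def iou_2d(a, b):
    """IoU of two rectangles given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

SECOND_SIMILARITY_THRESHOLD = 0.7   # assumed value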
Of course, in addition to verifying the two-dimensional labeling data through the second similarity, when implementing the embodiment of the present invention, a person skilled in the art may verify the two-dimensional labeling data based on the two-dimensional reference labeling data according to the actual situation, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, in order to improve the automation degree of labeling and improve the efficiency of labeling, when the three-dimensional labeling object of the three-dimensional labeling data and the two-dimensional labeling object of the two-dimensional labeling data are the same object, the two-dimensional reference labeling data and the two-dimensional labeling data can be adopted to generate the two-dimensional target labeling data.
Wherein the weight of the two-dimensional reference marking data is greater than the weight of the two-dimensional marking data.
That is, the two-dimensional reference annotation data is more important than the two-dimensional annotation data in generating new annotation data.
It should be noted that the rule for generating the two-dimensional target annotation data may be set by a person skilled in the art according to actual needs, and the embodiment of the present invention is not limited thereto.
In one embodiment of the present invention, the two-dimensional target labeling data and/or the three-dimensional target labeling data may be converted into a data format suitable for 2D data and 3D data, such as representing the two-dimensional target labeling data by position data; the two-dimensional target labeling data and/or the three-dimensional target labeling data may then be used to train a model.
For example, the two-dimensional target labeling data may be used to train target detection, recognition, and tracking models; the three-dimensional target labeling data may be used to train target segmentation and detection models.
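A sketch of packaging the verified annotations into training samples; the field names and the flat list format are assumptions chosen for illustration:

from typing import Dict, Sequence

def to_training_sample(image_path: str, cloud_path: str,
                       boxes_2d: Sequence[Sequence[float]],
                       boxes_3d: Sequence[Sequence[float]]) -> Dict:
    """Bundle one synchronized frame with its verified 2D and 3D labels."""
    return {
        "image": image_path,          # input for 2D detection/recognition/tracking
        "point_cloud": cloud_path,    # input for 3D segmentation/detection
        "labels_2d": [list(map(float, b)) for b in boxes_2d],
        "labels_3d": [list(map(float, b)) for b in boxes_3d],
    }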
According to the sensor data labeling method and device provided by the embodiment of the invention, the two-dimensional sensor and the three-dimensional sensor are called to acquire two-dimensional data and three-dimensional data for labeling, and the two-dimensional labeling data and the three-dimensional labeling data obtained by labeling are mutually verified. By exploiting the complementarity between data from different sources, the respective advantages of the two-dimensional data and the three-dimensional data make up for each other's limitations and allow each side to verify and correct the other, which not only improves the accuracy of the two-dimensional labeling data and the three-dimensional labeling data but also reduces the time consumed by labeling.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 5, a block diagram of a structure of an embodiment of the apparatus for labeling sensor data of the present invention is shown, which may specifically include the following modules:
the data acquisition module 501 is used for calling a two-dimensional sensor and a three-dimensional sensor to respectively acquire two-dimensional data and three-dimensional data;
a data labeling module 502, configured to label the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional labeling data and three-dimensional labeling data;
and the data verification module 503 is configured to perform mutual verification according to the two-dimensional labeling data and the three-dimensional labeling data to obtain two-dimensional target labeling data and/or three-dimensional target labeling data.
In one embodiment of the present invention, the data acquisition module 501 may include the following sub-modules:
and the synchronous acquisition submodule is used for respectively calling the two-dimensional sensor and the three-dimensional sensor to acquire two-dimensional data and three-dimensional data at the same time.
In one embodiment of the present invention, the data annotation module 502 may include the following sub-modules:
the first projection submodule is used for projecting the two-dimensional data and the three-dimensional data into the same coordinate system respectively to obtain two-dimensional coordinate data and three-dimensional coordinate data;
the first labeling submodule is used for labeling the two-dimensional coordinate data and the three-dimensional coordinate data respectively in the same coordinate system to obtain two-dimensional labeling data and three-dimensional labeling data;
alternatively,
the second labeling submodule is used for labeling the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional original labeling data and three-dimensional original labeling data;
and the second projection submodule is used for respectively projecting the two-dimensional original labeling data and the three-dimensional original labeling data into the same coordinate system to obtain two-dimensional labeling data and three-dimensional labeling data.
In an embodiment of the present invention, the data checking module 503 may include the following sub-modules:
the two-dimensional data projection submodule is used for projecting the two-dimensional labeling data into a three-dimensional space to which the three-dimensional labeling data belongs to obtain three-dimensional reference labeling data;
the three-dimensional data verification submodule is used for verifying the three-dimensional labeling data based on the three-dimensional reference labeling data in the three-dimensional space to obtain three-dimensional target labeling data;
and/or,
the three-dimensional data projection submodule is used for projecting the three-dimensional labeling data into a two-dimensional space to which the two-dimensional labeling data belongs to obtain two-dimensional reference labeling data;
and the two-dimensional data verification submodule is used for verifying the two-dimensional labeling data based on the two-dimensional reference labeling data in the two-dimensional space to obtain two-dimensional target labeling data.
In one embodiment of the present invention, the two-dimensional data projection submodule may include the following units:
the first target projection unit is used for projecting the two-dimensional labeling data into a three-dimensional space to which the three-dimensional labeling data belongs to obtain three-dimensional reference labeling data when a two-dimensional labeling object of the two-dimensional labeling data is a preset first target object;
the three-dimensional data projection sub-module may include the following elements:
and the second target projection unit is used for projecting the three-dimensional labeling data into a two-dimensional space to which the two-dimensional labeling data belongs to obtain two-dimensional reference labeling data when a three-dimensional labeling object of the three-dimensional labeling data is a preset second target object.
In an embodiment of the present invention, the three-dimensional data verification sub-module may include the following units:
the first similarity calculation unit is used for calculating the first similarity of the three-dimensional reference labeling data and the three-dimensional labeling data when a two-dimensional labeling object of the two-dimensional labeling data and a three-dimensional labeling target of the three-dimensional labeling data are the same object;
the first similarity checking unit is used for checking the three-dimensional annotation data based on the three-dimensional reference annotation data to obtain three-dimensional target annotation data when the first similarity is lower than a preset first similarity threshold;
the two-dimensional data check submodule may include the following units:
a second similarity calculation unit, configured to calculate a second similarity between the two-dimensional reference labeling data and the two-dimensional labeling data when the three-dimensional labeling object of the three-dimensional labeling data and the two-dimensional labeling object of the two-dimensional labeling data are the same object;
and the second similarity checking unit is used for checking the two-dimensional annotation data based on the two-dimensional reference annotation data to obtain two-dimensional target annotation data when the second similarity is lower than a preset second similarity threshold.
In an embodiment of the present invention, the three-dimensional data verification sub-module may include the following units:
the first labeling data generation submodule is used for generating three-dimensional target labeling data by adopting the three-dimensional reference labeling data and the three-dimensional labeling data when a two-dimensional labeling object of the two-dimensional labeling data and a three-dimensional labeling target of the three-dimensional labeling data are the same object;
wherein the weight of the three-dimensional reference annotation data is greater than the weight of the three-dimensional annotation data;
the two-dimensional data check submodule may include the following units:
the second labeling data generation submodule is used for generating two-dimensional target labeling data by adopting the two-dimensional reference labeling data and the two-dimensional labeling data when a three-dimensional labeling object of the three-dimensional labeling data and a two-dimensional labeling object of the two-dimensional labeling data are the same object;
wherein the weight of the two-dimensional reference annotation data is greater than the weight of the two-dimensional annotation data.
In practical applications, the verification may include one or more of the following operations:
stretching, dragging, zooming and rotating.
In one embodiment of the present invention, the apparatus may further include the following modules:
and the model training module is used for training a model by adopting the two-dimensional target marking data and/or the three-dimensional target marking data.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (7)

1. A method for annotating sensor data, comprising:
calling a two-dimensional sensor and a three-dimensional sensor to respectively acquire two-dimensional data and three-dimensional data;
labeling the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional labeling data and three-dimensional labeling data;
performing mutual verification according to the two-dimensional labeling data and the three-dimensional labeling data to obtain two-dimensional target labeling data and/or three-dimensional target labeling data, including:
When a two-dimensional labeling object of the two-dimensional labeling data is a preset first target object, projecting the two-dimensional labeling data into a three-dimensional space to which the three-dimensional labeling data belongs to obtain three-dimensional reference labeling data, wherein the first target object comprises a road;
in the three-dimensional space, checking the three-dimensional annotation data based on the three-dimensional reference annotation data to obtain three-dimensional target annotation data;
when the three-dimensional labeling object of the three-dimensional labeling data is a preset second target object, projecting the three-dimensional labeling data into a two-dimensional space to which the two-dimensional labeling data belongs to obtain two-dimensional reference labeling data, wherein the second target object comprises a vehicle;
and in the two-dimensional space, verifying the two-dimensional annotation data based on the two-dimensional reference annotation data to obtain two-dimensional target annotation data.
2. The method of claim 1, wherein the step of invoking the two-dimensional sensor and the three-dimensional sensor to collect two-dimensional data and three-dimensional data, respectively, comprises:
and respectively calling the two-dimensional sensor and the three-dimensional sensor to acquire two-dimensional data and three-dimensional data at the same time.
3. The method of claim 1, wherein the labeling the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional labeling data and three-dimensional labeling data comprises:
respectively projecting the two-dimensional data and the three-dimensional data into the same coordinate system to obtain two-dimensional coordinate data and three-dimensional coordinate data;
labeling the two-dimensional coordinate data and the three-dimensional coordinate data respectively in the same coordinate system to obtain two-dimensional labeling data and three-dimensional labeling data;
alternatively, the first and second electrodes may be,
labeling the two-dimensional data and the three-dimensional data respectively to obtain two-dimensional original labeling data and three-dimensional original labeling data;
and respectively projecting the two-dimensional original labeling data and the three-dimensional original labeling data into the same coordinate system to obtain two-dimensional labeling data and three-dimensional labeling data.
4. The method of claim 1,
the step of verifying the three-dimensional labeling data based on the three-dimensional reference labeling data to obtain three-dimensional target labeling data comprises the following steps:
when a two-dimensional labeling object of the two-dimensional labeling data and a three-dimensional labeling target of the three-dimensional labeling data are the same object, calculating a first similarity between the three-dimensional reference labeling data and the three-dimensional labeling data;
when the first similarity is lower than a preset first similarity threshold, verifying the three-dimensional annotation data based on the three-dimensional reference annotation data to obtain three-dimensional target annotation data;
the step of verifying the two-dimensional annotation data based on the two-dimensional reference annotation data to obtain two-dimensional target annotation data comprises:
when the three-dimensional labeling object of the three-dimensional labeling data and the two-dimensional labeling object of the two-dimensional labeling data are the same object, calculating a second similarity between the two-dimensional reference labeling data and the two-dimensional labeling data;
and when the second similarity is lower than a preset second similarity threshold, verifying the two-dimensional annotation data based on the two-dimensional reference annotation data to obtain two-dimensional target annotation data.
5. The method of claim 1,
the step of verifying the three-dimensional labeling data based on the three-dimensional reference labeling data to obtain three-dimensional target labeling data comprises the following steps:
when the two-dimensional labeling object of the two-dimensional labeling data and the three-dimensional labeling object of the three-dimensional labeling data are the same object, generating three-dimensional target labeling data by adopting the three-dimensional reference labeling data and the three-dimensional labeling data;
wherein the weight of the three-dimensional reference annotation data is greater than the weight of the three-dimensional annotation data;
the step of verifying the two-dimensional annotation data based on the two-dimensional reference annotation data to obtain two-dimensional target annotation data comprises:
when the three-dimensional labeling object of the three-dimensional labeling data and the two-dimensional labeling object of the two-dimensional labeling data are the same object, generating two-dimensional target labeling data by adopting the two-dimensional reference labeling data and the two-dimensional labeling data;
wherein the weight of the two-dimensional reference annotation data is greater than the weight of the two-dimensional annotation data.
6. The method of any of claims 1-5, wherein the verifying comprises one or more of:
stretching, dragging, zooming and rotating.
7. The method according to any one of claims 1-5, further comprising:
and training a model by adopting the two-dimensional target labeling data and/or the three-dimensional target labeling data.
CN201610799115.0A 2016-08-31 2016-08-31 Sensor data labeling method Active CN107784038B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610799115.0A CN107784038B (en) 2016-08-31 2016-08-31 Sensor data labeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610799115.0A CN107784038B (en) 2016-08-31 2016-08-31 Sensor data labeling method

Publications (2)

Publication Number Publication Date
CN107784038A CN107784038A (en) 2018-03-09
CN107784038B 2021-03-19

Family

ID=61451749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610799115.0A Active CN107784038B (en) 2016-08-31 2016-08-31 Sensor data labeling method

Country Status (1)

Country Link
CN (1) CN107784038B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163904B (en) * 2018-09-11 2022-04-22 腾讯大地通途(北京)科技有限公司 Object labeling method, movement control method, device, equipment and storage medium
CN109766793B (en) * 2018-12-25 2021-05-28 百度在线网络技术(北京)有限公司 Data processing method and device
CN109740487B (en) * 2018-12-27 2021-06-15 广州文远知行科技有限公司 Point cloud labeling method and device, computer equipment and storage medium
CN110197148B (en) * 2019-05-23 2020-12-01 北京三快在线科技有限公司 Target object labeling method and device, electronic equipment and storage medium
CN110176078B (en) * 2019-05-26 2022-06-10 魔门塔(苏州)科技有限公司 Method and device for labeling training set data
CN110276793A (en) * 2019-06-05 2019-09-24 北京三快在线科技有限公司 A kind of method and device for demarcating three-dimension object
CN111460199B (en) * 2020-03-02 2024-02-23 广州文远知行科技有限公司 Data association method, device, computer equipment and storage medium
CN111508020A (en) * 2020-03-23 2020-08-07 北京国电富通科技发展有限责任公司 Cable three-dimensional position calculation method and device fusing image and laser radar
CN112017241A (en) * 2020-08-20 2020-12-01 广州小鹏汽车科技有限公司 Data processing method and device
CN114067091B (en) * 2022-01-17 2022-08-16 深圳慧拓无限科技有限公司 Multi-source data labeling method and system, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101006933A (en) * 2006-01-23 2007-08-01 西门子公司 Method and device for displaying 3d objects
CN102147919A (en) * 2010-02-10 2011-08-10 昆明医学院第一附属医院 Intraoperative registration method for correcting preoperative three-dimensional image and device
CN104517280A (en) * 2013-11-14 2015-04-15 广东朗呈医疗器械科技有限公司 Three-dimensional imaging method
CN105139030A (en) * 2015-08-18 2015-12-09 青岛海信医疗设备股份有限公司 Method for sorting hepatic vessels
CN105761304A (en) * 2016-02-02 2016-07-13 飞依诺科技(苏州)有限公司 Three-dimensional visceral organ model construction method and device

Also Published As

Publication number Publication date
CN107784038A (en) 2018-03-09

Similar Documents

Publication Publication Date Title
CN107784038B (en) Sensor data labeling method
CN111462135B (en) Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation
Shin et al. Vision-based navigation of an unmanned surface vehicle with object detection and tracking abilities
CN111191600B (en) Obstacle detection method, obstacle detection device, computer device, and storage medium
JP6031554B2 (en) Obstacle detection method and apparatus based on monocular camera
CN111046743B (en) Barrier information labeling method and device, electronic equipment and storage medium
TW202001786A (en) Systems and methods for updating highly automated driving maps
Bruls et al. The right (angled) perspective: Improving the understanding of road scenes using boosted inverse perspective mapping
Fang et al. A point cloud-vision hybrid approach for 3D location tracking of mobile construction assets
CN104021588A (en) System and method for recovering three-dimensional true vehicle model in real time
CN113096003B (en) Labeling method, device, equipment and storage medium for multiple video frames
Famouri et al. A novel motion plane-based approach to vehicle speed estimation
WO2021120574A1 (en) Obstacle positioning method and apparatus for autonomous driving system
Esteban et al. Closed form solution for the scale ambiguity problem in monocular visual odometry
Zhang et al. Bundle adjustment for monocular visual odometry based on detections of traffic signs
CN107798010A (en) A kind of annotation equipment of sensing data
CN115984766A (en) Rapid monocular vision three-dimensional target detection method for underground coal mine
CN113706633B (en) Three-dimensional information determination method and device for target object
CN112446915A (en) Picture-establishing method and device based on image group
Wang et al. Preliminary research on vehicle speed detection using traffic cameras
Müller et al. Multi-camera system for traffic light detection: About camera setup and mapping of detections
CN116978010A (en) Image labeling method and device, storage medium and electronic equipment
CN115004273A (en) Digital reconstruction method, device and system for traffic road
CN116245960A (en) BEV top view generation method, system, electronic equipment and storage medium
CN116259001A (en) Multi-view fusion three-dimensional pedestrian posture estimation and tracking method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100026 8 floor 909, 105 building 3, Yao Yuan Road, Chaoyang District, Beijing.

Applicant after: Lexus Automobile (Beijing) Co.,Ltd.

Address before: 100026 8 floor 909, 105 building 3, Yao Yuan Road, Chaoyang District, Beijing.

Applicant before: FARADAY (BEIJING) NETWORK TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20180830

Address after: 511458 9, Nansha District Beach Road, Guangzhou, Guangdong, 9

Applicant after: Evergrande Faraday Future Smart Car (Guangdong) Co.,Ltd.

Address before: 100026 8 floor 909, 105 building 3, Yao Yuan Road, Chaoyang District, Beijing.

Applicant before: Lexus Automobile (Beijing) Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20190318

Address after: 100015 Building No. 7, 74, Jiuxianqiao North Road, Chaoyang District, Beijing, 001

Applicant after: FAFA Automobile (China) Co.,Ltd.

Address before: 511458 9, Nansha District Beach Road, Guangzhou, Guangdong, 9

Applicant before: Evergrande Faraday Future Smart Car (Guangdong) Co.,Ltd.

GR01 Patent grant