CN112990293A - Point cloud marking method and device and electronic equipment - Google Patents

Point cloud marking method and device and electronic equipment

Info

Publication number
CN112990293A
CN112990293A (application No. CN202110260628.5A; granted as CN112990293B)
Authority
CN
China
Prior art keywords
point cloud
frame
labeling frame
obstacle
labeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110260628.5A
Other languages
Chinese (zh)
Other versions
CN112990293B (en)
Inventor
黎明慧 (Li Minghui)
李恒 (Li Heng)
刘明 (Liu Ming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yiqing Innovation Technology Co ltd
Original Assignee
Shenzhen Yiqing Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yiqing Innovation Technology Co ltd filed Critical Shenzhen Yiqing Innovation Technology Co ltd
Priority to CN202110260628.5A priority Critical patent/CN112990293B/en
Publication of CN112990293A publication Critical patent/CN112990293A/en
Application granted granted Critical
Publication of CN112990293B publication Critical patent/CN112990293B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/10Pre-processing; Data cleansing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/042Knowledge-based neural networks; Logical representations of neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of data processing and discloses a point cloud labeling method and device, electronic equipment, and a non-volatile computer-readable storage medium. The method comprises the following steps: acquiring point cloud data; processing the point cloud data with a preset algorithm model to obtain a pre-labeling frame; detecting the pre-labeling frame to obtain a detection result; and recording the point cloud data into a database based on the detection result. Semi-automatic labeling of the point cloud data through the preset algorithm model both increases the labeling speed and reduces the cost.

Description

Point cloud marking method and device and electronic equipment
Technical Field
The invention relates to the technical field of data processing, and in particular to a point cloud labeling method and device, electronic equipment, and a non-volatile computer-readable storage medium.
Background
3D point cloud labeling refers to the task of drawing 3D boxes around common obstacles in point cloud data collected in the same scene from a single lidar or multiple lidars mounted on a vehicle, a roadside unit, or another platform.
3D point cloud labeling is divided into pure point cloud labeling (where the raw sensor data is lidar data only) and multi-sensor fusion labeling (for example, labeling after fusing images with point clouds). At present, however, 3D point cloud labeling relies too heavily on manual work: every frame of point cloud data must be labeled by hand, which is slow and extremely expensive.
Disclosure of Invention
The embodiment of the invention provides a point cloud labeling method, a point cloud labeling device, electronic equipment and a nonvolatile computer readable storage medium, which can not only improve the labeling speed, but also reduce the cost.
In a first aspect, an embodiment of the present invention provides a point cloud annotation method, where the method includes:
acquiring point cloud data;
processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame;
detecting the pre-labeling frame to obtain a detection result;
and recording the point cloud data into a database based on the detection result.
In some embodiments, the method further comprises:
training an algorithm model in advance to obtain a preset algorithm model;
and importing the preset algorithm model into a deep learning inference optimizer to optimize the preset algorithm model.
In some embodiments, after the acquiring point cloud data, the method further comprises:
and preprocessing the point cloud data, wherein the preprocessing comprises cleaning, screening and/or segmentation.
In some embodiments, after the processing the point cloud data by using the preset algorithm model to obtain the pre-labeled box, the method further includes:
receiving an adjusting instruction input by a user;
and adjusting the parameters of the pre-labeling frame according to the adjusting instruction to obtain the adjusted pre-labeling frame.
In some embodiments, the detecting the pre-labeling box includes:
and carrying out automatic quality inspection on the adjusted pre-labeling frame by utilizing constraint conditions.
In some embodiments, the performing an automated quality inspection on the adjusted pre-labeling box by using the constraint condition includes:
obtaining the type of the obstacle;
determining the number of preset point clouds corresponding to the obstacle categories according to the obstacle categories;
determining whether the number of point clouds in the adjusted pre-labeling frame is smaller than the preset number of point clouds, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
obtaining the type of the obstacle;
determining a preset size corresponding to the obstacle type according to the obstacle type;
determining whether the difference between the size of the adjusted pre-labeling frame and the preset size corresponding to the obstacle category exceeds a preset difference range, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a preset position interval of a marking frame;
determining whether the position interval of the adjusted pre-labeling frame is outside the preset position interval of the labeling frame, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
clustering the point cloud data in the adjusted pre-marked frame to obtain a clustering frame;
acquiring a course angle of the clustering frame;
determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the clustering frame, and if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring position coordinates of a surrounding obstacle marking frame;
converting the position coordinates of the surrounding obstacle marking frame to obtain the movement direction of the obstacle;
acquiring a course angle of the obstacle according to the movement direction of the obstacle;
determining the error between the course angle of the pre-labeling frame and the course angle of the obstacle, and if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a central guide line of the moving lane;
sampling at least one adjusted pre-labeling frame to obtain a position sampling point;
and analyzing the position sampling point, determining the distance between the position sampling point and a central guide line of the moving lane, and if the distance is greater than a preset distance threshold value, determining that the adjusted pre-labeling frame is unqualified.
In some embodiments, the method further comprises:
and when the preset time is reached, updating the preset algorithm model.
In a second aspect, an embodiment of the present invention further provides a point cloud annotation device, where the device includes:
the acquisition module is used for acquiring point cloud data;
the processing module is used for processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame;
the detection module is used for detecting the pre-labeling frame to obtain a detection result;
and the recording module is used for recording the point cloud data into a database based on the detection result.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the point cloud annotation methods described above.
In a fourth aspect, the embodiment of the present invention further provides a non-volatile computer-readable storage medium, which stores computer-executable instructions, and when the computer-executable instructions are executed by a processor, the processor is caused to execute the point cloud annotation method.
Compared with the prior art, the invention has the following beneficial effects: the point cloud labeling method in the embodiment of the invention obtains point cloud data, processes it with a preset algorithm model to obtain a pre-labeling frame, detects the pre-labeling frame to obtain a detection result, and finally records the point cloud data into a database based on the detection result. Semi-automatic labeling of the point cloud data through the preset algorithm model not only improves the labeling speed but also reduces the cost.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings; like reference numerals refer to similar elements, and the figures are not drawn to scale unless otherwise specified.
FIG. 1 is a schematic flow chart of a point cloud annotation method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating the constraint on the number of point clouds in a pre-labeled box for automated quality inspection according to an embodiment of the invention;
FIG. 3 is a flow chart illustrating the constraint on the size of the pre-labeled box for automated quality inspection according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating preset dimensions corresponding to obstacle categories in accordance with an embodiment of the present invention;
FIG. 5 is a flow chart illustrating the position constraint of the pre-marked frame for the automated quality inspection according to one embodiment of the present invention;
FIG. 6 is a schematic flow chart of the single frame course angle constraint of the automated quality inspection according to one embodiment of the present invention;
FIG. 7 is a flow chart illustrating the constraint on the heading angle of successive frames for automated quality inspection in accordance with one embodiment of the present invention;
FIG. 8 is a flow chart illustrating the constraint on motion states of successive frames for automated quality inspection according to an embodiment of the present invention;
FIG. 9 is a schematic flow chart of a point cloud annotation method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a point cloud annotation device according to an embodiment of the present invention;
fig. 11 is a schematic diagram of a hardware structure of an electronic device in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, where no conflict arises, the features of the embodiments of the invention may be combined with each other within the protection scope of the invention. Additionally, although functional modules are divided in the device schematics and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the module division or flowchart sequence. The terms "first", "second", "third", and the like used in the present invention do not limit data or execution order; they merely distinguish items that are substantially identical in function and effect.
As shown in fig. 1, an embodiment of the present invention provides a point cloud annotation method, where the method is executed by an electronic device, and the method includes:
step 102, point cloud data is obtained.
In the embodiment of the present invention, the point cloud data is not limited to the points obtained by scanning a target object with a lidar; it also includes corresponding metadata, such as the acquisition platform, acquisition time, whether the system uses a single lidar or multiple lidars, the extrinsic calibration data of each lidar, the number of point cloud lines and channels, and an accurate timestamp for each frame of data. Specifically, the electronic device acquires point cloud data collected by a lidar, the point cloud data existing in the form of point cloud data frames.
In some other embodiments, after acquiring the point cloud data, the method further comprises: and preprocessing the point cloud data, wherein the preprocessing comprises cleaning, screening and/or segmentation.
In the embodiment of the invention, after the point cloud data is acquired, it needs to be cleaned, screened and/or segmented. Specifically, range and angle limits are first applied to the point cloud data, redundant invalid points and NaN points are removed, and invalid frames are discarded. Repeated scene data is then removed, and point cloud data from valuable, rich scenes is screened out for labeling.
Further, after the point cloud data is cleaned and screened, it is packetized according to the original timestamps of the point cloud data frames. Packetizing the point cloud data frames mainly serves to segment long point cloud sequences and facilitate subsequent parallelized labeling. Specifically, a time sequence is derived from the detection frequency of the lidar; for example, at a detection frequency of 10 Hz, a 10 s window yields a time sequence containing 100 frames of point cloud data. It should be noted that the length of the packet time sequence may be changed according to the requirements of point cloud data in different scenes and is not limited to this embodiment.
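As a rough illustration (not part of the patent text), the cleaning and packetization steps above could be sketched in Python as follows; the 100 m range limit and the function names are assumptions for the example:

```python
import numpy as np

def preprocess_frame(points, max_range=100.0):
    """Clean one point cloud frame: drop NaN points and points outside
    an assumed horizontal range limit. `points` is an (N, 3) array of
    x, y, z coordinates."""
    points = points[~np.isnan(points).any(axis=1)]   # remove NaN points
    dist = np.linalg.norm(points[:, :2], axis=1)     # horizontal distance
    return points[dist <= max_range]

def packetize(frames, lidar_hz=10, window_s=10):
    """Split a long frame sequence into fixed-length packets
    (e.g. 10 Hz lidar, 10 s window -> 100 frames per packet)
    for parallelized labeling."""
    size = lidar_hz * window_s
    return [frames[i:i + size] for i in range(0, len(frames), size)]
```

The packet length follows directly from the lidar frequency and the chosen window, matching the 10 Hz / 10 s / 100-frame example in the text.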
And 104, processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame.
The preset algorithm model is the most critical part of the whole point cloud labeling method; its quality directly determines the labeling quality of the pre-labeling frame. The preset algorithm model is obtained by learning and training on a large amount of sample point cloud data.
In the embodiment of the present invention, after the point cloud data is obtained, each frame needs to be labeled. Specifically, the basic information of each obstacle in each frame, such as its category, tracking number across consecutive frames, three-dimensional coordinates, three-dimensional size, visibility, and the like, is defined by a labeling frame. After all point cloud data frames are labeled, the model is trained with the labeled sample point cloud data, the aim being to improve the accuracy of model training and thus the labeling quality of the pre-labeling frame. The more sample point cloud data there is, the more situations are covered and the higher the recognition capability of the algorithm model.
Further, in order to optimize the preset algorithm model, that is, to accelerate its inference speed, the model may be imported into a deep learning inference optimizer after training; this avoids data flow congestion caused by excessive algorithm processing time. In addition, the preset algorithm model is updated when a preset time is reached, the preset time being determined by the amount of data accumulated in the database. Once trained, the algorithm model can be loaded directly onto the electronic device to run. After acquiring point cloud data, the electronic device processes it with the preset algorithm model to obtain a pre-labeling frame.
In some other embodiments, after the processing the point cloud data by using the preset algorithm model to obtain the pre-labeled box, the method further includes: receiving an adjusting instruction input by a user; and adjusting the parameters of the pre-labeling frame according to the adjusting instruction to obtain the adjusted pre-labeling frame.
In the embodiment of the present invention, a pre-labeling frame obtained by the preset algorithm model may fail to match the size, position, or orientation of the actual obstacle, so the user must manually adjust the relevant parameters of problematic pre-labeling frames. Specifically, the electronic device receives an adjustment instruction input by the user and adjusts the relevant parameters of the pre-labeling frame according to the instruction to obtain the adjusted pre-labeling frame.
And 106, detecting the pre-labeling frame to obtain a detection result.
Detection of the pre-labeling frame mainly uses specific technical indicators to constrain it; on one hand, this helps find problematic pre-labeling frames, and on the other hand, it ensures that only correct point cloud data is entered into the database. Specifically, the electronic device automatically detects the pre-labeling frame to obtain a detection result.
And step 108, recording the point cloud data into a database based on the detection result.
The point cloud data finally recorded into the database is error-free point cloud data, and it is stored in the database in time-series form.
In some embodiments, as an implementation of step 106, the method includes: and carrying out automatic quality inspection on the adjusted pre-labeling frame by utilizing constraint conditions.
In the embodiment of the present invention, any step involving manual work may introduce errors, so the adjusted pre-labeling frame needs to undergo automated quality inspection using constraint conditions. A constraint condition is a specific technical indicator by which the pre-labeling frame is constrained.
In some embodiments, as shown in fig. 2, the performing an automated quality inspection on the adjusted pre-labeled box by using the constraint condition includes:
step 202, obtaining the obstacle category.
Since the pre-labeling frame defines the most basic information of the obstacle, such as its category, tracking number across consecutive frames, three-dimensional coordinates, three-dimensional size, visibility, and the like, the electronic device obtains the obstacle category from the pre-labeling frame.
And 204, determining the number of preset point clouds corresponding to the obstacle categories according to the obstacle categories.
The preset point cloud count corresponding to each obstacle category may be stored in the electronic device in advance; different obstacle categories have different constraints on the number of points in the frame. By constraining the number of points in the pre-labeling frame, empty or misplaced labeled frames can be found. After the electronic device obtains the obstacle category, it determines the preset point cloud count corresponding to that category.
Step 206, determining whether the point cloud number in the adjusted pre-labeling frame is smaller than the preset point cloud number, and if so, determining that the adjusted pre-labeling frame is not qualified.
In the embodiment of the invention, the point-count constraint is determined by comparing the number of points in the adjusted pre-labeling frame with the preset point cloud count. Specifically, if the number of points in the adjusted pre-labeling frame is less than the preset point cloud count, the adjusted pre-labeling frame is determined to be unqualified, and the unqualified frame is marked for subsequent review. It should be noted that, in principle, motor vehicles, trucks, special vehicles, and the like with fewer than 20 points need not be labeled, and pedestrians and cyclists with fewer than 10 points need not be labeled.
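A minimal sketch of this point-count constraint, using the thresholds stated above (the category names themselves are assumptions, not terms fixed by the patent):

```python
# Assumed minimum point counts per obstacle category, following the
# thresholds in the text: 20 for vehicle classes, 10 for pedestrians
# and cyclists.
MIN_POINTS = {
    "car": 20, "truck": 20, "special_vehicle": 20,
    "pedestrian": 10, "cyclist": 10,
}

def check_point_count(category, num_points_in_box):
    """Return True if the adjusted pre-labeling frame passes the
    point-count constraint, False if it is flagged as unqualified."""
    return num_points_in_box >= MIN_POINTS.get(category, 10)
```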
In some embodiments, as shown in fig. 3, the performing an automated quality inspection on the adjusted pre-labeled box by using the constraint condition includes:
step 302, obtaining the obstacle category.
And 304, determining a preset size corresponding to the obstacle type according to the obstacle type.
Specifically, the preset size corresponding to the obstacle category may be pre-stored in the electronic device, and the preset size corresponding to the obstacle category is shown in fig. 4. The electronic equipment obtains the type of the obstacles in the pre-labeling frame, and determines the preset size corresponding to the type of the obstacles according to the type of the obstacles.
Step 306, determining whether the difference value between the size of the adjusted pre-labeling frame and the preset size corresponding to the obstacle category exceeds a preset difference value range, and if so, determining that the adjusted pre-labeling frame is unqualified.
In the embodiment of the invention, whether the size of the adjusted pre-labeling frame is reasonable is determined from prior information, namely the preset size corresponding to the obstacle category; of course, a difference tolerance must be added to the prior information to accommodate obstacle sizes that may occur. Here the difference is a variance.
Specifically, the adjusted pre-labeling frames are evaluated statistically for size, giving the mean length, width, and height for each category and the corresponding variance data. Illustratively, the size variance of a motor vehicle should not be greater than [1.0, 0.5, 0.5] (l, w, h), that of a truck not greater than [6.0, 1.0, 1.0] (l, w, h), that of a pedestrian not greater than [0.4, 0.4, 0.4] (l, w, h), and that of a cyclist not greater than [0.8, 0.5, 0.5] (l, w, h); the remaining categories are collectively accepted according to [0.8, 0.8, 0.8]. Whether the size of the pre-labeling frame is reasonable is determined by comparing the difference between the adjusted frame size and the preset size corresponding to the obstacle category; if the difference exceeds the preset difference range, the adjusted pre-labeling frame is determined to be unqualified. In addition, any special obstacle needs to be given a corresponding mark to facilitate subsequent processing.
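As an illustration, the size constraint could look like the sketch below. The per-dimension tolerances follow the values in the text; the per-category mean sizes are made-up assumptions standing in for the prior data the patent says is stored in advance:

```python
# Per-dimension tolerances (l, w, h) from the text.
SIZE_VAR = {
    "car":        (1.0, 0.5, 0.5),
    "truck":      (6.0, 1.0, 1.0),
    "pedestrian": (0.4, 0.4, 0.4),
    "cyclist":    (0.8, 0.5, 0.5),
}
DEFAULT_VAR = (0.8, 0.8, 0.8)
# Assumed category mean sizes in metres (illustrative only).
MEAN_SIZE = {"car": (4.5, 1.8, 1.6), "pedestrian": (0.6, 0.6, 1.7)}

def check_size(category, box_lwh):
    """Flag a frame whose size deviates from the category prior by
    more than the allowed difference in any dimension."""
    mean = MEAN_SIZE.get(category)
    if mean is None:
        return True  # no prior stored for this category; skip the check
    tol = SIZE_VAR.get(category, DEFAULT_VAR)
    return all(abs(s - m) <= t for s, m, t in zip(box_lwh, mean, tol))
```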
In some embodiments, as shown in fig. 5, the performing an automated quality inspection on the adjusted pre-labeled box by using the constraint condition includes:
step 502, obtaining a preset position interval of the marking frame.
In the embodiment of the present invention, the preset position interval of the labeling frame is [-100 m, -100 m, -5 m, 100 m, 100 m, 3 m].
Step 504, determining whether the position interval of the adjusted pre-labeling frame is outside the preset position interval of the labeling frame, and if so, determining that the adjusted pre-labeling frame is unqualified.
Judging whether the adjusted pre-labeling frame is qualified specifically means judging whether it lies outside the preset position interval of the labeling frame. If the position of the adjusted pre-labeling frame exceeds the preset position interval, the frame is determined to be unqualified, treated as invalid labeling data, and not counted as a valid labeling frame.
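A minimal sketch of the position constraint, reading the interval above as a min corner and a max corner around the sensor (the interpretation of the six values as two (x, y, z) corners is an assumption):

```python
# Preset position interval from the text, interpreted as
# (x, y, z) min and max corners in the lidar coordinate system.
POS_MIN = (-100.0, -100.0, -5.0)
POS_MAX = (100.0, 100.0, 3.0)

def check_position(box_center):
    """Return True if the frame center lies inside the preset position
    interval; frames outside it are treated as invalid labeling data."""
    return all(lo <= c <= hi
               for c, lo, hi in zip(box_center, POS_MIN, POS_MAX))
```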
In some embodiments, as shown in fig. 6, the performing an automated quality inspection on the adjusted pre-labeled box by using the constraint condition includes:
and 602, clustering the point cloud data in the adjusted pre-labeling frame to obtain a clustering frame.
Step 604, obtaining the course angle of the clustering frame.
In the embodiment of the invention, the single-frame heading angle constraint is mainly applied to motor vehicles and bicycles, which have a distinct mechanical structure. Each adjusted pre-labeling frame has a corresponding labeled heading angle, so the heading angle needs to be constrained to determine whether the orientation of the adjusted pre-labeling frame is reasonable.
Specifically, the point cloud data inside the adjusted pre-labeling frame is clustered to obtain a clustering frame. Within a single frame of point cloud, given an adjusted pre-labeling frame to be checked, the indices of the points inside it are obtained; principal component analysis (PCA) is then applied to those points to obtain the principal and secondary direction vectors, the OBB (oriented bounding box) clustering frame of the points is derived from this parameter information, and the heading angle of the clustering frame is obtained.
Step 606, determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the clustering frame, and if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified.
Whether the adjusted pre-labeling frame is reasonable is determined by comparing its heading angle with that of the clustering frame; if the error exceeds the error threshold, the adjusted pre-labeling frame is unqualified and is marked to facilitate subsequent review.
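The PCA-based heading estimate above can be sketched as follows. Note that PCA cannot distinguish front from back, so the comparison is taken modulo 180 degrees; the 15-degree error threshold is an assumption, since the patent does not give a numeric value:

```python
import math

import numpy as np

def cluster_heading(points_in_box):
    """Estimate the heading of the points inside a frame via PCA: the
    principal direction of the (x, y) spread is taken as the clustering
    frame orientation. `points_in_box` is an (N, >=2) array."""
    xy = points_in_box[:, :2] - points_in_box[:, :2].mean(axis=0)
    cov = np.cov(xy.T)
    _, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
    main = eigvecs[:, -1]             # principal direction vector
    return math.atan2(main[1], main[0])

def heading_check(box_yaw, points_in_box, threshold=math.radians(15)):
    """Compare the labeled heading with the PCA cluster heading,
    modulo 180 degrees. The threshold is an assumed value."""
    err = abs(box_yaw - cluster_heading(points_in_box)) % math.pi
    err = min(err, math.pi - err)
    return err <= threshold
```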
In some embodiments, as shown in fig. 7, the performing an automated quality inspection on the adjusted pre-labeled box by using the constraint condition includes:
step 702, obtaining the position coordinates of the surrounding obstacle marking frame.
In the embodiment of the invention, the continuous-frame heading angle is constrained, which requires the position information of the ego vehicle. The heading angle constraint in the continuous-frame state mainly reflects the fact that the motion state of motor vehicles, and of some non-motor vehicles, does not change abruptly within unit time, and that motor vehicles travel according to the road structure; the motion states of surrounding obstacles are therefore constrained to suppress random noise in manually labeled data. Specifically, since the ego vehicle in the continuous-frame state has global positioning information along with the labeling frame information of surrounding obstacles, the electronic device acquires the position coordinates of the surrounding obstacle labeling frames.
Step 704, converting the position coordinates of the surrounding obstacle marking frame to obtain the movement direction of the obstacle.
Because the point cloud data is acquired by a lidar, the position coordinates of the obstacle labeling frames are expressed in the lidar coordinate system. After acquiring the position coordinates of the surrounding obstacle labeling frames, the electronic device therefore converts them from the lidar coordinate system to the global map coordinate system, from which the direction of movement between obstacle frames can be obtained.
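The coordinate conversion amounts to applying the ego vehicle's global pose to each box center. A minimal 2-D sketch follows; the function names and the (position, yaw) pose representation are assumptions for illustration.

```python
import numpy as np

def box_center_to_global(center_lidar, ego_xy, ego_yaw):
    """Transform an obstacle box center from the lidar coordinate system
    to the global map coordinate system, given the ego vehicle's pose.

    center_lidar: (x, y) in the lidar frame
    ego_xy: ego position in the global map frame
    ego_yaw: ego heading in radians
    """
    c, s = np.cos(ego_yaw), np.sin(ego_yaw)
    R = np.array([[c, -s], [s, c]])  # 2-D rotation by the ego yaw
    return R @ np.asarray(center_lidar) + np.asarray(ego_xy)

def motion_direction(centers_global):
    """Frame-to-frame movement direction of an obstacle from successive
    box centers expressed in the global map frame."""
    diffs = np.diff(np.asarray(centers_global, dtype=float), axis=0)
    return np.arctan2(diffs[:, 1], diffs[:, 0])
```

A real pipeline would use the full 3-D extrinsic calibration and global pose; the 2-D form keeps the idea visible.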
Step 706, obtaining the heading angle of the obstacle according to the moving direction of the obstacle.
Specifically, a sliding window of 10 frames is selected for local motion sampling, and the course angle of each obstacle is obtained from its direction of motion.
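Assuming box centers already expressed in the global map frame, the 10-frame sliding-window sampling might be sketched as below; the window-edge handling and function name are assumptions.

```python
import numpy as np

def windowed_heading(centers_global, window=10):
    """Per-frame obstacle course angle from a local sliding window over
    global-frame box centers (10-frame window, as in the text).

    Using the displacement across the window rather than single-frame
    differences suppresses jitter from labeling noise.
    """
    centers = np.asarray(centers_global, dtype=float)
    headings = []
    for i in range(len(centers)):
        lo = max(0, i - window + 1)
        d = centers[i] - centers[lo]
        # For the very first frame d is zero and the heading defaults to 0.
        headings.append(float(np.arctan2(d[1], d[0])))
    return np.array(headings)
```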
Step 708, determining an error between the course angle of the adjusted pre-labeling frame and the course angle of the obstacle, and if the error exceeds an error threshold, determining that the adjusted pre-labeling frame is unqualified.
The course angle of the adjusted pre-labeling frame is compared with the course angle of the obstacle to determine whether the adjustment is reasonable; if the error between the two exceeds the error threshold, the adjusted pre-labeling frame is unqualified. The unqualified pre-labeling frame is marked with a corresponding mark to facilitate subsequent rechecking.
In some embodiments, as shown in fig. 8, the performing an automated quality inspection on the adjusted pre-labeled box by using the constraint condition includes:
step 802, a center guide line of the moving lane is obtained.
In the embodiment of the invention, the continuous-frame motion state is constrained by the center guide line. This constraint mainly draws on the high-precision map information around the traveling vehicle, using the lane information of the high-precision map to constrain the labeled data. Specifically, the electronic device acquires the center guide line of the moving lane.
Step 804, sampling at least one adjusted pre-labeling frame to obtain a position sampling point.
Specifically, the electronic device samples at least one adjusted pre-labeling frame to obtain position sampling points. A motor vehicle normally travels within a lane, its trajectory closely follows the center guide line of the current lane over time, and it does not oscillate from side to side. In a 30-frame point cloud sequence with a lidar frequency of 10 Hz, the sequence spans 3 s, so in the ideal case there are 30 position sampling points for the motor vehicle.
Step 806, analyzing the position sampling points, determining the distance between the position sampling points and a center guide line of the moving lane, and if the distance is greater than a preset distance threshold, determining that the adjusted pre-labeling frame is unqualified.
After obtaining the position sampling points, the electronic device analyzes them in batch to derive velocity and acceleration curves, and thereby determines the distance between each position sampling point and the center guide line of the moving lane. If this distance is greater than the preset distance threshold, the adjusted pre-labeling frame is considered unqualified, which indicates that the position sampling point is misplaced and needs correction. It should be noted that the acceleration curve is differentiable in the time domain.
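The distance check reduces to point-to-polyline distance against the center guide line. A minimal sketch, assuming the guide line is represented as a polyline of vertices (a real high-precision map may store lanes differently):

```python
import numpy as np

def point_to_polyline_distance(p, polyline):
    """Minimum distance from point p to a polyline (the lane center
    guide line, given as an (M, 2) array of vertices)."""
    p = np.asarray(p, dtype=float)
    verts = np.asarray(polyline, dtype=float)
    best = float("inf")
    for a, b in zip(verts[:-1], verts[1:]):
        ab = b - a
        # Project p onto segment [a, b], clamped to the segment.
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, float(np.linalg.norm(p - (a + t * ab))))
    return best

def check_samples(samples, centerline, dist_threshold):
    """True for each position sampling point farther from the center
    guide line than the preset distance threshold; a flagged point marks
    the adjusted pre-labeling frame as unqualified."""
    return [bool(point_to_polyline_distance(s, centerline) > dist_threshold)
            for s in samples]
```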
In some other embodiments, the method further comprises: and marking the unqualified pre-labeling frame, and rechecking.
Specifically, after the adjusted pre-labeling frame has undergone automatic quality inspection using the constraint conditions, any unqualified pre-labeling frame is marked with a corresponding mark for rechecking. During rechecking, the point cloud data of a pre-labeling frame that passes is recorded into the database; otherwise, the step of receiving an adjustment instruction input by the user is executed again.
In the embodiment of the invention, point cloud data is acquired and processed with a preset algorithm model to obtain a pre-labeling frame, the pre-labeling frame is detected to obtain a detection result, and the point cloud data is recorded into the database based on the detection result. Semi-automatic labeling of point cloud data by means of the preset algorithm model increases labeling speed and reduces cost.
It should be noted that the foregoing steps do not necessarily occur in a fixed order; as those skilled in the art will understand from the description of the embodiments of the present invention, in different embodiments the steps may be executed in different orders, in parallel, interleaved, and so on.
To facilitate understanding of the invention, a specific embodiment is described below, as shown in fig. 9:
S900, acquire point cloud data, and go to S901;
S901, preprocess the point cloud data, the preprocessing including cleaning, screening and/or segmentation, and go to S902;
S902, process the point cloud data with the preset algorithm model to obtain a pre-labeling frame, and go to S903;
S903, receive an adjustment instruction input by the user, and go to S904;
S904, adjust the parameters of the pre-labeling frame according to the adjustment instruction to obtain an adjusted pre-labeling frame, and go to S905;
S905, perform automatic quality inspection on the adjusted pre-labeling frame using the constraint conditions; if qualified, go to S906; if unqualified, go to S907;
S906, record the point cloud data into the database, and go to S908;
S907, mark the unqualified pre-labeling frame and recheck it; if the recheck passes, go to S906; if it fails, go to S903;
S908, update the preset algorithm model, and go to S902.
Correspondingly, an embodiment of the present invention further provides a point cloud annotation apparatus 100, as shown in fig. 10, including:
an obtaining module 102, configured to obtain point cloud data;
the processing module 104 is configured to process the point cloud data by using a preset algorithm model to obtain a pre-labeling frame;
the detection module 106 is configured to detect the pre-labeling frame to obtain a detection result;
and a recording module 108, configured to record the point cloud data into a database based on the detection result.
In the embodiment of the invention, the obtaining module obtains the point cloud data, the processing module processes the point cloud data with the preset algorithm model to obtain the pre-labeling frame, the detection module detects the pre-labeling frame to obtain a detection result, and the recording module records the point cloud data into the database based on the detection result, thereby increasing labeling speed and reducing cost.
Optionally, in another embodiment of the apparatus, as shown in fig. 10, the apparatus 100 further includes:
the training module 110 is used for training the algorithm model in advance to obtain a preset algorithm model;
and the optimizing module 112 is configured to import the preset algorithm model into a deep learning inference optimizer to optimize the preset algorithm model.
Optionally, in another embodiment of the apparatus, as shown in fig. 10, the apparatus 100 further includes:
and a receiving module 114, configured to receive an adjustment instruction input by a user.
And the adjusting module 116 is configured to adjust the parameter of the pre-labeling frame according to the adjusting instruction, so as to obtain the adjusted pre-labeling frame.
Optionally, in another embodiment of the apparatus, as shown in fig. 10, the apparatus 100 further includes:
a rechecking module 118, configured to mark unqualified pre-labeling frames for rechecking.
Optionally, in another embodiment of the apparatus, as shown in fig. 10, the apparatus 100 further includes:
an updating module 120, configured to update the preset algorithm model when a preset time is reached.
Optionally, in other embodiments of the apparatus, the detection module 106 is specifically configured to:
and carrying out automatic quality inspection on the adjusted pre-labeling frame by utilizing constraint conditions.
Obtaining the type of the obstacle;
determining the number of preset point clouds corresponding to the obstacle categories according to the obstacle categories;
determining whether the point cloud number in the adjusted pre-labeling frame is smaller than the preset point cloud number, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
obtaining the type of the obstacle;
determining a preset size corresponding to the obstacle type according to the obstacle type;
determining whether the difference value between the size of the adjusted pre-labeling frame and the preset size corresponding to the obstacle category exceeds a preset difference value range, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a preset position interval of a marking frame;
determining whether the position interval of the adjusted pre-labeling frame is outside the preset position interval of the labeling frame, if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
clustering the point cloud data in the adjusted pre-marked frame to obtain a clustering frame;
acquiring a course angle of the clustering frame;
determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the clustering frame, and if the error exceeds an error threshold value, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring position coordinates of a surrounding obstacle marking frame;
converting the position coordinates of the surrounding obstacle marking frame to obtain the movement direction of the obstacle;
acquiring a course angle of the obstacle according to the movement direction of the obstacle;
determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the obstacle, and if the error exceeds an error threshold value, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a central guide line of the moving lane;
sampling at least one adjusted pre-labeling frame to obtain a position sampling point;
and analyzing the position sampling point, determining the distance between the position sampling point and a central guide line of the moving lane, and if the distance is greater than a preset distance threshold value, determining that the adjusted pre-labeling frame is unqualified.
It should be noted that the point cloud annotation apparatus can execute the point cloud annotation method provided by the embodiments of the present invention and possesses the corresponding functional modules and beneficial effects. For technical details not described in this apparatus embodiment, reference may be made to the point cloud annotation method provided by the embodiments of the present invention.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention, and as shown in fig. 11, the electronic device 110 includes:
one or more processors 12 and a memory 14, with one processor 12 being an example in fig. 11.
The processor 12 and the memory 14 may be connected by a bus or other means, such as the bus connection in fig. 11.
The memory 14, which is a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the point cloud annotation method in the embodiment of the present invention. The processor 12 executes various functional applications and data processing of the electronic device by executing the nonvolatile software programs, instructions and modules stored in the memory 14, that is, the point cloud labeling method in the above embodiment is realized.
The memory 14 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the point cloud labeling apparatus, and the like. Further, the memory 14 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 14 optionally includes memory located remotely from the processor 12, which may be connected to the point cloud annotation device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Embodiments of the present invention further provide a non-transitory computer-readable storage medium, where computer-executable instructions are stored, and when executed by one or more processors, the computer-executable instructions may cause the one or more processors to perform the point cloud annotation method in any of the above method embodiments.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (11)

1. A point cloud annotation method, characterized in that the method comprises:
acquiring point cloud data;
processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame;
detecting the pre-labeling frame to obtain a detection result;
and recording the point cloud data into a database based on the detection result.
2. The method of claim 1, further comprising:
training an algorithm model in advance to obtain a preset algorithm model;
and importing the preset algorithm model into a deep learning inference optimizer to optimize the preset algorithm model.
3. The method of claim 1, wherein after the acquiring point cloud data, the method further comprises:
and preprocessing the point cloud data, wherein the preprocessing comprises cleaning, screening and/or segmentation.
4. The method of claim 3, wherein after the point cloud data is processed by using a preset algorithm model to obtain a pre-labeled box, the method further comprises:
receiving an adjusting instruction input by a user;
and adjusting the parameters of the pre-labeling frame according to the adjusting instruction to obtain the adjusted pre-labeling frame.
5. The method of claim 4, wherein the detecting the pre-labeled box comprises:
and carrying out automatic quality inspection on the adjusted pre-labeling frame by utilizing constraint conditions.
6. The method of claim 5, wherein the automated quality inspection of the adjusted pre-labeled box using the constraint condition comprises:
obtaining the type of the obstacle;
determining the number of preset point clouds corresponding to the obstacle categories according to the obstacle categories;
determining whether the point cloud number in the adjusted pre-labeling frame is smaller than the preset point cloud number, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
obtaining the type of the obstacle;
determining a preset size corresponding to the obstacle type according to the obstacle type;
determining whether the difference value between the size of the adjusted pre-labeling frame and the preset size corresponding to the obstacle category exceeds a preset difference value range, and if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a preset position interval of a marking frame;
determining whether the position interval of the adjusted pre-labeling frame is outside the preset position interval of the labeling frame, if so, determining that the adjusted pre-labeling frame is unqualified; and/or,
clustering the point cloud data in the adjusted pre-marked frame to obtain a clustering frame;
acquiring a course angle of the clustering frame;
determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the clustering frame, and if the error exceeds an error threshold value, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring position coordinates of a surrounding obstacle marking frame;
converting the position coordinates of the surrounding obstacle marking frame to obtain the movement direction of the obstacle;
acquiring a course angle of the obstacle according to the movement direction of the obstacle;
determining the error between the course angle of the adjusted pre-labeling frame and the course angle of the obstacle, and if the error exceeds an error threshold value, determining that the adjusted pre-labeling frame is unqualified; and/or,
acquiring a central guide line of the moving lane;
sampling at least one adjusted pre-labeling frame to obtain a position sampling point;
and analyzing the position sampling point, determining the distance between the position sampling point and a central guide line of the moving lane, and if the distance is greater than a preset distance threshold value, determining that the adjusted pre-labeling frame is unqualified.
7. The method of claim 6, further comprising:
and marking the unqualified pre-labeling frame, and rechecking.
8. The method according to any one of claims 1-7, further comprising:
and when the preset time is reached, updating the preset algorithm model.
9. A point cloud annotation apparatus, the apparatus comprising:
the acquisition module is used for acquiring point cloud data;
the processing module is used for processing the point cloud data by using a preset algorithm model to obtain a pre-labeling frame;
the detection module is used for detecting the pre-labeling frame to obtain a detection result;
and the recording module is used for recording the point cloud data into a database based on the detection result.
10. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
11. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a processor, cause the processor to perform the method of any one of claims 1-8.
CN202110260628.5A 2021-03-10 2021-03-10 Point cloud labeling method and device and electronic equipment Active CN112990293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110260628.5A CN112990293B (en) 2021-03-10 2021-03-10 Point cloud labeling method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112990293A true CN112990293A (en) 2021-06-18
CN112990293B CN112990293B (en) 2024-03-29

Family

ID=76334807


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449632A (en) * 2021-06-28 2021-09-28 重庆长安汽车股份有限公司 Vision and radar perception algorithm optimization method and system based on fusion perception and automobile
CN113901991A (en) * 2021-09-15 2022-01-07 天津大学 3D point cloud data semi-automatic labeling method and device based on pseudo label
CN114549644A (en) * 2022-02-24 2022-05-27 北京百度网讯科技有限公司 Data labeling method and device, electronic equipment and storage medium
CN114581739A (en) * 2022-04-15 2022-06-03 长沙公信诚丰信息技术服务有限公司 Point cloud marking method and device based on feature recognition and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104750499A (en) * 2015-04-21 2015-07-01 南京大学 Constraint solving and description logic based web service combination method
CN105513129A (en) * 2016-01-15 2016-04-20 浙江中产科技有限公司 Laser 3D modeling-based automatic rod counting system
WO2019137196A1 (en) * 2018-01-11 2019-07-18 阿里巴巴集团控股有限公司 Image annotation information processing method and device, server and system
CN110826432A (en) * 2019-10-23 2020-02-21 南京农业大学 Power transmission line identification method based on aerial picture
CN111563450A (en) * 2020-04-30 2020-08-21 北京百度网讯科技有限公司 Data processing method, device, equipment and storage medium
CN111833358A (en) * 2020-06-26 2020-10-27 中国人民解放军32802部队 Semantic segmentation method and system based on 3D-YOLO
CN111931727A (en) * 2020-09-23 2020-11-13 深圳市商汤科技有限公司 Point cloud data labeling method and device, electronic equipment and storage medium
CN112036462A (en) * 2020-08-25 2020-12-04 北京三快在线科技有限公司 Method and device for model training and target detection
CN112347986A (en) * 2020-11-30 2021-02-09 上海商汤临港智能科技有限公司 Sample generation method, neural network training method, intelligent driving control method and device
CN112395962A (en) * 2020-11-03 2021-02-23 北京京东乾石科技有限公司 Data augmentation method and device, and object identification method and system



Also Published As

Publication number Publication date
CN112990293B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
CN112990293B (en) Point cloud labeling method and device and electronic equipment
CN108345822B (en) Point cloud data processing method and device
CN112380317B (en) High-precision map updating method and device, electronic equipment and storage medium
CN111461209B (en) Model training device and method
CN111179300A (en) Method, apparatus, system, device and storage medium for obstacle detection
CN111814835A (en) Training method and device of computer vision model, electronic equipment and storage medium
CN111866728B (en) Multi-site roadbed network sensing method, device, terminal and system
CN110532961A (en) A kind of semantic traffic lights detection method based on multiple dimensioned attention mechanism network model
CN110135216B (en) Method and device for detecting lane number change area in electronic map and storage equipment
CN113771573B (en) Vehicle suspension control method and device based on identification road surface information
CN112633812B (en) Track segmentation method, device, equipment and storage medium for freight vehicle
CN112036385A (en) Library position correction method and device, electronic equipment and readable storage medium
CN114359233B (en) Image segmentation model training method and device, electronic equipment and readable storage medium
CN111126154A (en) Method and device for identifying road surface element, unmanned equipment and storage medium
CN114926724A (en) Data processing method, device, equipment and storage medium
CN114359859A (en) Method and device for processing target object with shielding and storage medium
CN114494986A (en) Road scene recognition method and device
CN111488771A (en) OCR (optical character recognition) hanging method, device and equipment
JP2019117501A (en) Determination device, determination method, and determination program
US11592565B2 (en) Flexible multi-channel fusion perception
US20220309799A1 (en) Method for Automatically Executing a Vehicle Function, Method for Evaluating a Computer Vision Method and Evaluation Circuit for a Vehicle
CN116189142A (en) Parking space line detection method and system based on deep learning edge detection
CN112115798A (en) Object labeling method and device in driving scene and storage medium
CN118097595A (en) Deceleration strip identification method and device, storage medium and vehicle
CN117765509A (en) Guardrail detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant