CN115761701A - Laser radar point cloud data enhancement method, device, equipment and storage medium

Laser radar point cloud data enhancement method, device, equipment and storage medium

Info

Publication number
CN115761701A
Authority
CN
China
Prior art keywords
point cloud
frame
labeling
data
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211517040.4A
Other languages
Chinese (zh)
Inventor
杨伟丽
刘金彦
邓皓匀
任凡
姜铭山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd
Priority to CN202211517040.4A
Publication of CN115761701A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application provides a laser radar point cloud data enhancement method, device, equipment and storage medium, relating to the technical field of laser radar point cloud data enhancement. The method comprises: acquiring the annotation box information data in a data set and the coordinate information of each point cloud within an annotation box, and storing them separately; determining the minimum number of point cloud points in an annotation box, and deleting annotation boxes that do not meet this minimum; counting the number of original annotation boxes in each point cloud frame, and setting the expected number of annotation boxes in a frame; acquiring the actual number of annotation boxes in each frame, extracting annotation boxes from the data set, and placing them into the current point cloud frame; and performing data enhancement processing on the current point cloud frame and its annotation boxes. By performing targeted point cloud data enhancement according to the annotated scene of the laser radar point cloud frame, the method and device improve the generalization performance of the deep learning neural network model, alleviate overfitting to the training set, reduce sample imbalance, and adapt better to different environments and working conditions.

Description

Laser radar point cloud data enhancement method, device, equipment and storage medium
Technical Field
The application relates to the technical field of laser radar point cloud data enhancement, and in particular to a laser radar point cloud data enhancement method, device, equipment and storage medium.
Background
An autonomous vehicle relies on the cooperation of artificial intelligence, computer vision, radar, monitoring devices and a global positioning system, so that a computer can operate the motor vehicle automatically and safely without any active human operation.
For deep learning on laser radar point cloud 3D target boxes in automatic driving, the quality, quantity and complexity of the annotated data are critical factors that directly affect the detection accuracy and coverage of a deep learning model during training. When the amount of data is insufficient, it is therefore necessary to generate more useful data from the existing training data; this is an important technique that can effectively reduce overfitting without changing the neural network structure. A targeted data enhancement method for the original input point cloud data is needed, so that the point cloud data can be processed in a targeted manner and the training effect of deep learning improved.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention provides a laser radar point cloud data enhancement method to solve the above technical problems.
The invention provides a laser radar point cloud data enhancement method, comprising the following steps:
acquiring the annotation box information data in a data set and the coordinate information of each point cloud within an annotation box, and storing them separately;
determining the minimum number of point cloud points in an annotation box, and deleting annotation boxes that do not meet this minimum;
counting the number of original annotation boxes in each point cloud frame, and setting the expected number of annotation boxes in a frame;
acquiring the actual number of annotation boxes in each point cloud frame, extracting annotation boxes from the data set, and placing them into the current point cloud frame so that the number of annotation boxes in the current frame matches the expected number;
and performing data enhancement processing on the current point cloud frame and its annotation boxes.
In an exemplary embodiment of the present application, acquiring and separately storing the annotation box information data in the data set and the coordinate information of each point cloud within an annotation box includes: acquiring the annotation box information data in the data set, wherein the annotation box information data includes the center coordinates and the length, width and height of the annotation box; acquiring the coordinate information of each point cloud within the annotation box; and storing the annotation box information data and the point cloud coordinate information separately, named in the form of category name plus serial number.
In an exemplary embodiment of the present application, acquiring the coordinate information of each point cloud within the annotation box includes: calculating the 8 vertices of the annotation box from its center point coordinates, length, width, height and heading angle; calculating the 6 faces of the annotation box and their corresponding normal vectors from the 8 vertices; and establishing plane equations ax + by + cz + n = 0 to judge whether a point in the point cloud lies within the annotation box.
In an exemplary embodiment of the present application, obtaining the data set includes: obtaining a data conversion program according to the format requirements of the deep learning training algorithm for input data; and using the data conversion program to convert the existing point cloud and label files into the format required for deep learning training.
In an exemplary embodiment of the present application, the data enhancement processing includes rotating and flipping the current point cloud frame and its corresponding annotation boxes as a whole, and rotating and translating each annotation box and its corresponding point cloud.
In an exemplary embodiment of the present application, the overall rotation includes: determining the rotation range from the configuration file; and transforming all point clouds and annotation boxes in the point cloud frame with a transformation matrix.
The flipping includes: determining the flip coordinate axis, wherein the positive X axis is the forward direction of the collecting vehicle; and negating the corresponding coordinate axis value in the center point coordinates of the point cloud frame.
In an exemplary embodiment of the present application, rotating each annotation box and its corresponding point cloud includes: reading a configuration file, and determining the rotation range from it; and rotating the annotation box and its corresponding point cloud in each point cloud frame with a rotation matrix.
The application further provides a laser radar point cloud data enhancement apparatus, comprising:
a data acquisition device that acquires a data set, acquires the annotation box information data in the data set and the coordinate information of each point cloud within an annotation box, and stores them separately;
and a data processing device that determines the minimum number of point cloud points in the annotation boxes of each category and deletes annotation boxes that do not meet the minimum; counts the number of original annotation boxes of each category in each point cloud frame and sets the expected number of annotation boxes of each category per frame; acquires the actual number of annotation boxes of each category in each point cloud frame, extracts annotation boxes from the data set and places them into the current point cloud frame so that the number of annotation boxes in the current frame matches the expected number; and performs data enhancement processing on the current point cloud frame and its corresponding annotation boxes.
The present application further provides an electronic device, which includes: one or more processors;
and a storage device for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement any of the laser radar point cloud data enhancement methods described above.
The present application further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute any of the laser radar point cloud data enhancement methods described above.
Compared with the prior art, the invention has the following beneficial effects:
The method and device process and store the existing data set; when the number of annotation boxes in the current point cloud frame is insufficient, annotation boxes extracted from the data set are added to the current frame, after which data enhancement processing is performed on the current point cloud frame. By performing targeted point cloud data enhancement according to the annotated scene of the laser radar point cloud frame, the application addresses the problems encountered in deep learning training, improves the generalization performance of the deep learning neural network model, alleviates overfitting to the training set, reduces sample imbalance, adapts better to different environments and working conditions, and improves the robustness of the trained model.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic diagram of a lidar point cloud data enhancement method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of step S110 in an exemplary embodiment of the embodiment shown in FIG. 1;
FIG. 3 is a schematic diagram of obtaining the point cloud coordinate information within an annotation box according to an exemplary embodiment of the present application;
FIG. 4 is a flow chart of the overall rotation of a point cloud frame according to an exemplary embodiment of the present application;
FIG. 5 is a flow chart of the overall flipping of a point cloud frame according to an exemplary embodiment of the present application;
FIG. 6 is a flow chart of rotating an annotation box and its corresponding point cloud according to an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of the overall rotation of a point cloud according to an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of the overall flipping of a point cloud according to an exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of rotating an annotation box and its corresponding point cloud according to an exemplary embodiment of the present application;
FIG. 10 is a schematic diagram of a lidar point cloud data enhancement apparatus according to an exemplary embodiment of the present application;
FIG. 11 is a schematic structural diagram of a computer system suitable for implementing the electronic device of an embodiment of the present application.
Detailed Description
Other advantages and effects of the present invention will become apparent to those skilled in the art from the disclosure herein, in which embodiments of the present invention are described in detail with reference to the accompanying drawings and preferred embodiments. The invention is capable of other and different embodiments, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be understood that the preferred embodiments are merely illustrative of the invention and do not limit its scope.
It should be noted that the drawings provided in the following embodiments only illustrate the basic idea of the present invention. The drawings show only the components related to the present invention rather than the number, shape and size of the components in an actual implementation; the type, quantity and proportion of the components in an actual implementation may vary freely, and the component layout may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of embodiments of the present invention; however, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. Elsewhere, well-known structures and devices are shown in block diagram form rather than in detail to avoid obscuring embodiments of the present invention.
An autonomous vehicle relies on the cooperation of artificial intelligence, computer vision, radar, monitoring devices and a global positioning system, so that a computer can operate the motor vehicle automatically and safely without any active human operation.
Autonomous vehicles rely heavily on input training data to make driving decisions; logically, the more detailed the data, the better, and most importantly the safer, the vehicle's decisions will be. Although modern cameras can capture very detailed real-world features, their output is still 2D, which is not ideal because it limits the information we can provide to the neural network of an autonomous vehicle: the vehicle must learn to make guesses about the 3D world. At the same time, a camera's ability to capture information is limited; for example, in rain the image captured by a camera is almost unusable, while lidar can still capture information. A 2D camera therefore cannot work in all environments, and since the autonomous car is a high-risk application scenario for neural networks, we must ensure that the constructed network is as reliable as possible, and all of this comes down to the data. Ideally, the network would take 3D data as input, since it needs to make predictions about the 3D world; this is where lidar is used.
A lidar consists of four parts: 1) a laser, which sends light pulses (typically ultraviolet or near-infrared) toward objects; 2) a scanner, which adjusts the speed at which the laser scans the target object and the maximum distance the laser reaches; 3) a sensor, which measures the time (and hence the distance) required for light from the laser to bounce off the target object and return to the system; and 4) GPS, which tracks the position of the lidar system to ensure the accuracy of the distance measurements. Lidar systems can typically emit up to 500k pulses per second. The measurements from these pulses are aggregated into a point cloud, essentially a set of coordinates representing the objects the system has sensed; the point cloud is used to create a 3D model of the space surrounding the lidar.
Given the type of output lidar generates, combining it with neural networks is reasonable, and neural networks running on point clouds have proven effective. The applications of laser radar point clouds to autonomous vehicles fall into two categories:
1) real-time environment perception and processing for object detection and scene understanding; and
2) generation of high-definition maps and city models for object positioning and reference.
Lidar data is used for semantic segmentation, object detection/localization and object classification, the only difference being that it is in 3D, which makes the models more nuanced. One challenge for neural networks running on lidar data is the large amount of variation caused by scan time, weather conditions, sensor type, distance, background and a host of other factors. Because of the way lidar operates, the density and intensity of points on an object vary greatly. In addition to frequent sensor noise and, especially, the general incompleteness of lidar data (due to the low surface reflectivity of certain materials, urban background clutter and the like), a neural network that processes lidar data needs to be able to handle many variations.
For deep learning on laser radar point cloud 3D target boxes in automatic driving, the quality, quantity and complexity of the annotated data are critical factors that directly affect the detection accuracy and coverage of a deep learning model during training. When the amount of data is insufficient, it is therefore necessary to generate more useful data from the existing training data; this is an important technique that can effectively reduce overfitting without changing the neural network structure.
Referring to FIG. 1, a schematic diagram of a lidar point cloud data enhancement method according to an exemplary embodiment, the application provides a laser radar point cloud data enhancement method to generate more useful data from existing data when the amount of data is insufficient, thereby improving the detection accuracy and coverage of a deep learning model. The method specifically comprises the following steps:
and S110, acquiring the information data of the marking frame in the data set and the coordinate information of each point cloud in the marking frame, and storing the information data and the coordinate information respectively.
The existing data sets are classified and stored respectively, so that the data can be matched and extracted quickly when used, and it needs to be noted that before data screening, a data conversion program needs to be compiled according to the format requirements of a deep learning training algorithm on input data, and existing point cloud and tag files are converted into the required format for deep learning training. Referring to fig. 2, fig. 2 is a schematic diagram of an exemplary embodiment of step S110, which includes the following steps:
step S210, obtaining information data of the labeling frame in the data set, wherein the information data of the labeling frame comprises the center coordinate and the length, width and height information of the labeling frame.
And S220, acquiring coordinate information of each point cloud in the labeling frame.
To acquire the coordinate information of each point cloud within an annotation box, the 8 vertices of the annotation box are first calculated from its center point coordinates, length, width, height and heading angle, which determines the extent of the box;
the 6 faces of the annotation box and their corresponding normal vectors are then calculated from the 8 vertices;
and plane equations ax + by + cz + n = 0 are established to judge whether each point in the point cloud lies within the annotation box. This determines the point cloud information inside the annotation box for subsequent use.
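As an illustration only (this sketch is not code from the patent), the point-in-box test can be implemented as follows in Python with NumPy, assuming each box is described by its center, its length/width/height and a heading angle about the Z axis:

```python
# A minimal sketch of the point-in-box test described above, assuming boxes
# are given as (center, length/width/height, heading angle about Z).
import numpy as np

def points_in_box(points, center, lwh, yaw):
    """points: (N, 3) array; center: (3,) array; lwh: (length, width, height)."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0],           # rotation about Z by the heading
                    [s,  c, 0.0],           # angle; columns are the box axes
                    [0.0, 0.0, 1.0]])
    half = np.asarray(lwh, dtype=float) / 2.0
    # The 8 vertices: center + R @ (±l/2, ±w/2, ±h/2).
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                      for sy in (-1, 1) for sz in (-1, 1)], dtype=float)
    vertices = np.asarray(center) + (signs * half) @ rot.T
    # The 6 faces: outward normals are the ±box axes; a point lies inside
    # iff a*x + b*y + c*z + n <= 0 holds for all six face planes.
    inside = np.ones(len(points), dtype=bool)
    for axis in range(3):
        for sign in (1.0, -1.0):
            normal = sign * rot[:, axis]                # (a, b, c)
            face_pt = np.asarray(center) + normal * half[axis]
            n_coef = -normal.dot(face_pt)               # the "n" in the patent
            inside &= points @ normal + n_coef <= 0.0
    return vertices, inside
```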
Step S230: store the annotation box information data and the point cloud coordinate information separately, named in the form of category name plus serial number.
After the annotation box information and the corresponding point cloud coordinate information have been screened, the corresponding file names and category information are stored in a pkl file.
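A minimal sketch of this storage step follows; the file layout and field names are assumptions chosen for illustration, not specified by the patent:

```python
# Store each annotation box and its interior points under a
# "category + serial number" name, plus an index pkl for fast lookup.
import pickle
from collections import defaultdict

def save_box_database(boxes, index_path="gt_database.pkl"):
    """boxes: iterable of dicts with 'category', 'box' (center, size, yaw)
    and 'points' (coordinates of the points inside the box)."""
    index, counters = [], defaultdict(int)
    for item in boxes:
        name = f"{item['category']}_{counters[item['category']]}"
        counters[item['category']] += 1
        with open(name + ".pkl", "wb") as f:       # e.g. "car_0.pkl"
            pickle.dump({"box": item["box"], "points": item["points"]}, f)
        index.append({"name": name, "category": item["category"]})
    with open(index_path, "wb") as f:              # file names + categories
        pickle.dump(index, f)
```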
Step S120: determine the minimum number of point cloud points in an annotation box, and delete annotation boxes that do not meet this minimum.
The minimum number of point cloud points in an annotation box is determined from the configuration information. When the number of points inside an annotation box is below this minimum, the box is judged unqualified and deleted. This ensures the accuracy of the annotation boxes and that enough points remain to support reliable recognition.
In one embodiment, the annotation boxes are divided into categories. Boxes of different categories differ in size and complexity and require different amounts of point cloud data, so the minimum point count is determined separately for each category.
The annotation box categories can be distinguished as follows: vehicles, pedestrians, animals, artificial obstacles, traffic signs, traffic lights, garbage cans and other broad categories, each of which can be subdivided. Vehicles include cars, buses, trucks, bicycles, tricycles, special-shaped vehicles and the like; pedestrians include ordinary pedestrians and special-shaped persons, such as people in wheelchairs, people stooping or squatting on the ground, and people sitting at the roadside; artificial obstacles include traffic cones, water-filled barriers, warning posts and the like. For each category, annotation boxes with fewer points than the per-category minimum given in the configuration file are deleted and the rest retained, ensuring the accuracy of the annotation boxes.
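A minimal sketch of this per-category filtering; the threshold values below are illustrative assumptions, not values from the patent:

```python
# Drop any annotation box whose interior point count falls below the
# minimum configured for its category.
MIN_POINTS = {"car": 20, "pedestrian": 10, "traffic_cone": 8}  # assumed values

def filter_boxes(boxes, min_points=MIN_POINTS):
    """boxes: list of dicts with 'category' and 'points' (N, 3) arrays."""
    return [b for b in boxes
            if len(b["points"]) >= min_points.get(b["category"], 1)]
```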
Step S130: count the number of original annotation boxes in each point cloud frame, and set the expected number of annotation boxes in a frame.
The number of annotation boxes of each category in each point cloud frame is counted, the expected number of boxes of each category per frame is determined from the configuration file, and the difference between the actual and expected numbers for each category is calculated.
The number of annotation boxes required for each category differs between scenes. For example, at an intersection, more boxes for categories such as vehicles, pedestrians and traffic lights should be added; at a school crossing, more pedestrian boxes are needed; on a highway, no traffic light boxes need to be added, but more artificial obstacles such as water-filled barriers and traffic cones should be. The number of annotation boxes of each category per point cloud frame is therefore determined by the scene and the training needs.
Step S140: acquire the actual number of annotation boxes in each point cloud frame, extract annotation boxes from the data set, and place them into the current point cloud frame so that the number of annotation boxes in the current frame matches the expected number.
When the actual number of annotation boxes in a frame is less than the expected number, annotation boxes and their corresponding point cloud information stored in step S110 are extracted and added to the current point cloud frame, ensuring that the number of boxes in the current frame is not less than the expected number. Because the boxes in step S110 are named by category plus serial number, boxes of the target category can be randomly extracted from the database and conveniently identified.
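A minimal sketch of this sampling step, reusing the index written in step S110; load_box is an assumed helper that reads one "category_serial" pkl file, and collision checks between inserted and existing boxes are omitted for brevity:

```python
# Top a frame up to the expected per-category box counts by drawing stored
# boxes (with their points) from the database saved in step S110.
import random

def fill_to_expected(frame_boxes, expected, database_index, load_box):
    """frame_boxes: boxes already in the frame; expected: {category: count};
    database_index: entries with 'name' and 'category' fields."""
    counts = {}
    for b in frame_boxes:
        counts[b["category"]] = counts.get(b["category"], 0) + 1
    for cat, want in expected.items():
        deficit = want - counts.get(cat, 0)
        if deficit <= 0:
            continue                      # frame already has enough boxes
        candidates = [e for e in database_index if e["category"] == cat]
        for entry in random.sample(candidates, min(deficit, len(candidates))):
            frame_boxes.append(load_box(entry["name"]))  # box + its points
    return frame_boxes
```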
Step S150: perform data enhancement processing on the current point cloud frame and its annotation boxes.
Further data enhancement processing is performed on the current point cloud frame and its annotation boxes to obtain enough data for deep learning training.
The data enhancement processing includes overall rotation, overall flipping, and rotation and translation of each annotation box with its corresponding point cloud.
Referring to FIG. 7, a schematic diagram of the overall rotation of a point cloud according to an exemplary embodiment of the present application: all point clouds and annotation boxes in the frame are transformed with a transformation matrix according to the rotation range determined by the configuration file. Taking annotation box A in FIG. 7 as an example, box A' is formed after rotation; once the rotation range has been determined from the configuration file, a random rotation within that range is applied via the rotation matrix.
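A minimal sketch of the overall rotation, assuming NumPy arrays and a default angle range chosen only for illustration:

```python
# Rotate every point and every box center about the Z axis by one random
# angle drawn from the configured range; box headings shift by the same angle.
import numpy as np

def rotate_frame(points, boxes, angle_range=(-np.pi / 4, np.pi / 4)):
    """points: (N, 3) array; boxes: dicts with 'center' (3,) and 'yaw'."""
    theta = np.random.uniform(*angle_range)    # range is an assumed default
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    points = points @ rot.T
    for b in boxes:
        b["center"] = rot @ b["center"]
        b["yaw"] += theta
    return points, boxes
```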
Referring to FIG. 8, a schematic diagram of the overall flipping of a point cloud according to an exemplary embodiment of the present application: first the flip coordinate axis is determined, with the positive X axis being the forward direction of the collecting vehicle, and then the corresponding coordinate value in the center point coordinates of the point cloud frame is negated. Taking annotation box B in FIG. 8 as an example, box B' is formed after flipping. Note that the flip axis may be the X axis or the Y axis; the positive X axis is the forward direction of the collecting vehicle, the positive Y axis is generally the X axis rotated 90° counterclockwise, and the exact choice of the Y and Z axes can be adjusted to actual needs, which this application does not limit.
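A minimal sketch of a flip across the X axis (negating the Y coordinates); flipping across the Y axis would negate X instead:

```python
# Mirror the frame across the X axis (the vehicle's forward direction):
# negate every Y coordinate; headings measured from the X axis flip sign.
def flip_frame(points, boxes):
    """points: (N, 3) NumPy array; boxes: dicts with 'center' and 'yaw'."""
    points[:, 1] = -points[:, 1]
    for b in boxes:
        b["center"][1] = -b["center"][1]
        b["yaw"] = -b["yaw"]
    return points, boxes
```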
Referring to FIG. 9, a schematic diagram of rotating an annotation box and its corresponding point cloud according to an exemplary embodiment of the present application: first the configuration file is read and the rotation range determined from it, then the annotation box and its corresponding point cloud in each point cloud frame are rotated via a rotation matrix. Taking annotation box C in FIG. 9 as an example, box C' is formed after rotation. Note that the rotation of an annotation box and the points inside it usually takes the Z axis, perpendicular to the plane of the figure, as the rotation axis, but this too can be adjusted to actual needs, which this application does not limit. Once the specific rotation range has been determined from the configuration file, a random rotation within that range is applied via the rotation matrix to increase the diversity of the data.
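A minimal sketch of the per-box transform; the angle range is an assumed default, and a random translation could be applied in the same way by adding an offset to both the center and the points:

```python
# Rotate one annotation box and its interior points about the box center
# around the Z axis by a random angle from the configured range.
import numpy as np

def rotate_box(box, box_points, angle_range=(-np.pi / 18, np.pi / 18)):
    """box: dict with 'center' (3,) and 'yaw'; box_points: (N, 3) array."""
    theta = np.random.uniform(*angle_range)    # range is an assumed default
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    center = box["center"]
    box_points[:] = (box_points - center) @ rot.T + center
    box["yaw"] += theta
    return box, box_points
```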
The enhanced data is then used for deep learning training. It meets the quantity requirements of training, and performing targeted point cloud data enhancement according to the annotated lidar point cloud scene addresses the problems encountered in deep learning training, improves the generalization performance of the deep learning neural network model, alleviates overfitting to the training set, reduces sample imbalance, adapts better to different environments and working conditions, and improves the robustness of the trained model.
Referring to FIG. 10, a schematic diagram of a lidar point cloud data enhancement apparatus according to an exemplary embodiment of the present disclosure, the present disclosure further provides a lidar point cloud data enhancement apparatus comprising:
a data acquisition module 1001, which acquires an initial data set, and acquires the annotation box information data in the data set and the coordinate information of each point cloud within an annotation box;
a data storage module 1002, which stores the annotation box information data acquired and processed by the data acquisition module and the coordinate information of each point cloud within the annotation boxes, naming them in the form of category name plus serial number when storing;
and a data processing module 1003, which determines the minimum number of point cloud points in the annotation boxes of each category and deletes annotation boxes that do not meet the minimum; counts the number of original annotation boxes of each category in each point cloud frame and sets the expected number of boxes of each category per frame; acquires the actual number of annotation boxes of each category in each frame, extracts annotation boxes from the data set and places them into the current point cloud frame so that the number of boxes in the current frame matches the expected number; and performs data enhancement processing on the current point cloud frame and its corresponding annotation boxes.
It should be noted that the lidar point cloud data enhancement apparatus provided by the above embodiment and the lidar point cloud data enhancement method provided by the above embodiments belong to the same concept; the specific ways in which the modules and units perform their operations have been described in detail in the method embodiments and are not repeated here. In practical applications, the functions of the apparatus may be distributed among different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above, which is not limited here.
An embodiment of the present application further provides an electronic device, including: one or more processors; a storage device, configured to store one or more programs, which when executed by the one or more processors, cause the electronic device to implement the laser radar point cloud data enhancement method provided in the above-described embodiments.
FIG. 11 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application. It should be noted that the computer system 1100 of the electronic device shown in fig. 11 is only an example, and should not bring any limitation to the functions and the application scope of the embodiments of the present application.
As shown in FIG. 11, a computer system 1100 includes a Central Processing Unit (CPU) 1101, which can perform various appropriate actions and processes, such as executing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1102 or a program loaded from a storage section 1108 into a Random Access Memory (RAM) 1103. The RAM 1103 also stores the various programs and data necessary for system operation. The CPU 1101, ROM 1102 and RAM 1103 are connected to each other by a bus 1104, to which an Input/Output (I/O) interface 1105 is also connected.
The following components are connected to the I/O interface 1105: an input section 1106 including a keyboard, a mouse and the like; an output section 1107 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication section 1109 performs communication processing via a network such as the Internet. A drive 1110 is also connected to the I/O interface 1105 as necessary. A removable medium 1111, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 1110 as needed, so that a computer program read from it can be installed into the storage section 1108 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flow charts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 1109 and/or installed from the removable medium 1111. When the computer program is executed by the Central Processing Unit (CPU) 1101, the various functions defined in the system of the present application are executed.
It should be noted that the computer-readable medium shown in the embodiments of the present application may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. The computer-readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may comprise a propagated data signal with computer-readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate or transport a program for use by or in connection with an instruction execution system, apparatus or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, and the like, or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the lidar point cloud data enhancement method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist alone without being assembled into the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the laser radar point cloud data enhancement method provided in the above embodiments.
The foregoing embodiments are merely illustrative of the principles and effects of the present invention and are not to be construed as limiting it. Those skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical concepts of the present invention shall be covered by the claims of the present invention.

Claims (10)

1. A laser radar point cloud data enhancement method, characterized by comprising the following steps:
acquiring the annotation box information data in a data set and the coordinate information of each point cloud within an annotation box, and storing them separately;
determining the minimum number of point cloud points in an annotation box, and deleting annotation boxes that do not meet this minimum;
counting the number of original annotation boxes in each point cloud frame, and setting the expected number of annotation boxes in a frame;
acquiring the actual number of annotation boxes in each point cloud frame, extracting annotation boxes from the data set, and placing them into the current point cloud frame so that the number of annotation boxes in the current frame matches the expected number;
and performing data enhancement processing on the current point cloud frame and its annotation boxes.
2. The laser radar point cloud data enhancement method of claim 1, wherein acquiring and separately storing the annotation box information data in the data set and the coordinate information of each point cloud within an annotation box comprises:
acquiring the annotation box information data in the data set, wherein the annotation box information data comprises the center coordinates and the length, width and height of the annotation box;
acquiring the coordinate information of each point cloud within the annotation box;
and storing the annotation box information data and the point cloud coordinate information separately, named in the form of category name plus serial number.
3. The laser radar point cloud data enhancement method of claim 2, wherein acquiring the coordinate information of each point cloud within the annotation box comprises:
calculating the 8 vertices of the annotation box from its center point coordinates, length, width, height and heading angle;
calculating the 6 faces of the annotation box and their corresponding normal vectors from the 8 vertices;
and establishing plane equations ax + by + cz + n = 0 to judge whether a point in the point cloud lies within the annotation box.
4. The laser radar point cloud data enhancement method of claim 1, wherein obtaining the data set comprises:
obtaining a data conversion program according to the format requirements of a deep learning training algorithm for input data;
and using the data conversion program to convert existing point cloud and label files into the format required for deep learning training.
5. The laser radar point cloud data enhancement method of claim 1, wherein the data enhancement processing comprises rotating and flipping the current point cloud frame and its corresponding annotation boxes as a whole, and rotating and translating each annotation box and its corresponding point cloud.
6. The laser radar point cloud data enhancement method of claim 5, wherein the overall rotation comprises:
determining the rotation range from the configuration file;
and transforming all point clouds and annotation boxes in the point cloud frame with a transformation matrix;
and the flipping comprises:
determining the flip coordinate axis, wherein the positive X axis is the forward direction of the collecting vehicle;
and negating the corresponding coordinate axis value in the center point coordinates of the point cloud frame.
7. The laser radar point cloud data enhancement method of claim 5, wherein rotating each annotation box and its corresponding point cloud comprises:
reading a configuration file, and determining the rotation range from the configuration file;
and rotating the annotation box and its corresponding point cloud in each point cloud frame with a rotation matrix.
8. A laser radar point cloud data enhancement apparatus, characterized in that the apparatus comprises:
a data acquisition module that acquires a data set, acquires the annotation box information data in the data set and the coordinate information of each point cloud within an annotation box, and stores them separately;
and a data processing module that determines the minimum number of point cloud points in the annotation boxes of each category and deletes annotation boxes that do not meet the minimum; counts the number of original annotation boxes of each category in each point cloud frame and sets the expected number of annotation boxes of each category per frame; acquires the actual number of annotation boxes of each category in each point cloud frame, extracts annotation boxes from the data set and places them into the current point cloud frame so that the number of annotation boxes in the current frame matches the expected number; and performs data enhancement processing on the current point cloud frame and its corresponding annotation boxes.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
and storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the laser radar point cloud data enhancement method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the laser radar point cloud data enhancement method of any one of claims 1 to 7.
Application CN202211517040.4A, filed 2022-11-29 (priority date 2022-11-29): Laser radar point cloud data enhancement method, device, equipment and storage medium. Status: Pending. Publication: CN115761701A.

Priority Applications (1)

Application Number: CN202211517040.4A | Priority Date: 2022-11-29 | Filing Date: 2022-11-29 | Title: Laser radar point cloud data enhancement method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number: CN202211517040.4A | Priority Date: 2022-11-29 | Filing Date: 2022-11-29 | Title: Laser radar point cloud data enhancement method, device, equipment and storage medium

Publications (1)

Publication Number: CN115761701A | Publication Date: 2023-03-07

Family

ID=85341286

Family Applications (1)

Application Number: CN202211517040.4A | Title: Laser radar point cloud data enhancement method, device, equipment and storage medium | Status: Pending | Priority Date: 2022-11-29 | Filing Date: 2022-11-29

Country Status (1)

Country Link
CN (1) CN115761701A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115984805A * | 2023-03-15 | 2023-04-18 | 安徽蔚来智驾科技有限公司 | Data enhancement method, target detection method and vehicle
CN117689908A * | 2023-12-11 | 2024-03-12 | 深圳技术大学 | Stair point cloud data enhancement method and device, intelligent terminal and storage medium


Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination