CN114333418B - Data processing method for automatic driving and related device - Google Patents
- Publication number
- CN114333418B (application CN202111647088.2A)
- Authority
- CN
- China
- Prior art keywords
- data frame
- image data
- point cloud
- lidar point cloud data
- Prior art date
- Legal status
- Active
Landscapes
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The application provides a data processing method and apparatus, an electronic device, and a computer-readable storage medium for automatic driving. The data processing method comprises the following steps: acquiring Lidar point cloud data of a region to be detected to generate a Lidar point cloud data frame; collecting an image data frame of the region to be detected; matching the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame, so as to obtain the Lidar point cloud data frame corresponding to the image data frame; and performing sensing processing on the region to be detected according to the image data frame and the corresponding Lidar point cloud data frame. In this way, the Lidar point cloud data frame and the image data frame can be strictly and accurately matched, and accurate perception processing can be carried out based on the data of both.
Description
Technical Field
The present application relates to the field of automatic driving, and in particular, to a data processing method and apparatus for automatic driving, an electronic device, and a computer-readable storage medium.
Background
Currently, in automatic driving, a vehicle needs to travel autonomously while avoiding surrounding obstacles, which requires sensing and perception processing of those obstacles. In a typical pipeline, Lidar point cloud data are acquired by a Lidar and image data by a sensing camera; taking the timestamp of a certain packet within a Lidar point cloud data frame as the reference, the sensing camera image frame whose timestamp is closest to it is searched for, the Lidar point cloud data frame and the image frame are matched and their data fused, and information such as the position, size, category, orientation, track, and speed of each obstacle is extracted from the fused data.
However, this conventional data matching method has the following drawbacks. First, the timestamp of an intermediate data packet of the Lidar point cloud data frame is not necessarily the moment of image-frame exposure, so the timestamps of the Lidar point cloud data frame and the image frame cannot be strictly matched. Second, when two or more Lidars are used, taking the timestamp of a Lidar data frame as the reference makes it impossible to determine which Lidar's data frame should be matched.
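The conventional approach criticized above can be sketched as a nearest-timestamp search. The following is a hypothetical illustration only; the function and variable names are assumptions, not taken from the patent:

```python
from bisect import bisect_left

def nearest_lidar_packet(image_ts, packet_timestamps):
    """Conventional matching: find the Lidar packet whose timestamp is
    closest to the image frame timestamp, via binary search over a
    sorted timestamp list. Illustrative sketch, not the patent's method."""
    i = bisect_left(packet_timestamps, image_ts)
    # the nearest packet is either just before or just after the insertion point
    candidates = packet_timestamps[max(0, i - 1):i + 1]
    return min(candidates, key=lambda t: abs(t - image_ts))
```

As the patent notes, the packet timestamp returned this way need not coincide with the image exposure time, which is the mismatch the application sets out to remove.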
Disclosure of Invention
An object of the present application is to provide a data processing method, apparatus, electronic device, and computer-readable storage medium for automatic driving, which enable a strict and accurate matching of a Lidar point cloud data frame with an image data frame, thereby enabling accurate sensing processing based on data of both.
The purpose of the application is realized by adopting the following technical scheme:
in a first aspect, the present application provides a data processing method for autonomous driving, comprising: acquiring Lidar point cloud data of a region to be detected to generate a Lidar point cloud data frame; collecting an image data frame of the area to be detected; matching the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame to obtain the Lidar point cloud data frame corresponding to the image data frame; and sensing the area to be detected according to the image data frame and the Lidar point cloud data frame corresponding to the image data frame. The technical scheme has the beneficial effects that the problem that the timestamp of the Lidar point cloud data frame cannot be aligned with the timestamp of the image is solved; by matching with the image data frame as a reference, the Lidar point cloud data frame and the image data frame can be strictly and accurately matched, so that accurate sensing processing can be performed on the basis of data of the Lidar point cloud data frame and the image data frame.
In some optional embodiments, the matching the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame to obtain the Lidar point cloud data frame corresponding to the image data frame comprises: matching the acquisition time of the image data frame with the timestamp of the Lidar point cloud data; and acquiring the Lidar point cloud data frame to which the timestamp belongs based on the timestamp of the Lidar point cloud data matched with the acquisition time of the image data frame. The technical scheme has the beneficial effects that by utilizing simple and efficient time matching logic, the matching of the Lidar point cloud data frame and the image data frame can be more accurate through the matching of the acquisition time of the image data frame and the timestamp of the Lidar point cloud data, so that the sensing processing can be more accurately carried out.
In some optional embodiments, the matching of the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame comprises: each time an image data frame is acquired, matching that image data frame with the Lidar point cloud data frame based on its acquisition time. The advantage of this scheme is that whenever an image frame is collected, the Lidar point cloud data frame matched with the time information of the image data frame can be obtained immediately and synchronously, so that sensing processing can be carried out at once, giving the obstacle identification high real-time performance.
In some optional embodiments, the generating of the Lidar point cloud data frame comprises generating the Lidar point cloud data frame with a first time period, and the acquiring of the image data frame of the area to be detected comprises acquiring the image data frame with a second time period. In this case, the matching comprises: when the length of the first time period is less than or equal to that of the second time period, matching the image data frame with the Lidar point cloud data frame based on the acquisition time of each image data frame acquired in the second time period; and when the length of the first time period is greater than that of the second time period, matching based on the acquisition time of the most recently acquired image data frame among the image data frames acquired in the first time period. The beneficial effects are that the image data frame and the Lidar point cloud data frame can be synchronized and matched even when the first time period and the second time period are unequal. In addition, when the length of the first time period is greater than that of the second time period, matching based on the time information of the most recently acquired image data frame avoids repeatedly matching multiple image data frames with the same Lidar point cloud data frame, which reduces the data processing load and speeds up the perception processing.
In some optional embodiments, the acquiring of Lidar point cloud data of the region to be measured comprises acquiring the Lidar point cloud data of the area to be detected using an independent thread, and the collecting of an image data frame of the area to be detected comprises collecting the image data frame of the area to be detected using an independent thread. The beneficial effect of this scheme is that, since the image data frame and the Lidar point cloud data are acquired by mutually independent threads, they can be acquired simultaneously and matched rapidly, so that the sensing processing can be carried out in real time.
In some optional embodiments, the data processing method for autonomous driving further comprises: after the Lidar point cloud data frame is generated, the generated Lidar point cloud data frame is cached, and meanwhile, a time stamp of the Lidar point cloud data is recorded. The technical scheme has the advantages that the timestamp of the Lidar point cloud data can be recorded in real time, so that the matching between the Lidar point cloud data frame and the image data frame can be more accurate.
In some optional embodiments, the data processing method for autonomous driving further comprises: after the image data frame of the area to be detected is collected, caching the collected image data frame, and simultaneously recording a time stamp of the image data frame as the collection time of the image data frame. The technical scheme has the advantages that the time information of the image data frame can be recorded in real time, so that the matching between the Lidar point cloud data frame and the image data frame can be more accurate.
In a second aspect, the present application provides a data processing apparatus for automatic driving, comprising: a Lidar point cloud data acquisition module, which acquires Lidar point cloud data of a region to be detected to generate a Lidar point cloud data frame; an image data frame acquisition module, which acquires an image data frame of the area to be detected; a data matching module, which matches the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame so as to obtain the Lidar point cloud data frame corresponding to the image data frame; and a sensing module, which performs sensing processing on the area to be detected according to the image data frame and the corresponding Lidar point cloud data frame.
In a third aspect, the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of any one of the above methods when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the methods described above.
Drawings
The present application is further described below with reference to the drawings and examples.
FIG. 1 is a schematic flow chart diagram of a data processing method for automatic driving according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of another data processing method for automatic driving according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of another data processing method for automatic driving provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of another data processing method for automatic driving provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a data processing device for automatic driving according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a specific example of a data processing device for automatic driving according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a data frame matching operation in an embodiment of the present application;
FIG. 8 is another schematic diagram of a data frame matching operation according to an embodiment of the present application;
FIG. 9 is a further illustration of a data frame matching operation of an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application; and
FIG. 11 is a schematic structural diagram of a program product of a data processing method for automatic driving according to an embodiment of the present application.
Detailed Description
The present application is further described with reference to the accompanying drawings and the detailed description, and it should be noted that, in the present application, the embodiments or technical features described below may be arbitrarily combined to form a new embodiment without conflict.
Referring to fig. 1, an embodiment of the present application provides a data processing method for automatic driving, including: acquiring Lidar point cloud data of a region to be detected to generate a Lidar point cloud data frame (step S1); collecting an image data frame of a region to be detected (step S2); matching the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame to obtain a Lidar point cloud data frame corresponding to the image data frame (step S3); and sensing the area to be detected according to the image data frame and the Lidar point cloud data frame corresponding to the image data frame (step S4).
Therefore, the problem that the frame timestamp of the Lidar point cloud data cannot be aligned with the image timestamp is solved. By matching with the image data frame as a reference, the Lidar point cloud data frame and the image data frame can be strictly and accurately matched, so that accurate sensing processing can be performed on the basis of data of the Lidar point cloud data frame and the image data frame. And the logic of time matching is simple and efficient.
As shown in fig. 2, step S3 may include, for example: matching the acquisition time of the image data frame with the time stamp of the Lidar point cloud data (step S31); and obtaining the Lidar point cloud data frame to which the timestamp belongs based on the timestamp of the Lidar point cloud data matched with the acquisition time of the image data frame (step S32).
Therefore, by utilizing simple and efficient time matching logic, the matching between the Lidar point cloud data frame and the image data frame can be more accurate through the matching between the acquisition time of the image data frame and the timestamp of the Lidar point cloud data, and the sensing processing can be more accurately carried out.
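The matching logic of steps S31 and S32 can be sketched as follows, under the assumption that each cached Lidar point cloud frame carries the timestamp range of the packets that were packed into it; all names here are illustrative, not from the patent:

```python
def match_lidar_frame(image_time, lidar_frames):
    """Steps S31-S32 sketch: lidar_frames is a list of
    (start_ts, end_ts, frame) tuples, one per packed Lidar point
    cloud frame. Return the frame whose packet-timestamp span
    contains the image acquisition time, or None if no frame matches."""
    for start_ts, end_ts, frame in lidar_frames:
        if start_ts <= image_time <= end_ts:  # S31: match acquisition time
            return frame                      # S32: frame the timestamp belongs to
    return None
```

Because the image acquisition time is the reference, the returned frame is the one whose recorded packet timestamps actually cover the exposure moment, rather than merely the nearest one.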
Step S3 may further include, for example: each time an image data frame is acquired, matching the image data frame with the Lidar point cloud data frame based on its acquisition time.
Therefore, each time an image frame is acquired, the Lidar point cloud data frame matched with the time information of that image data frame can be obtained immediately and synchronously, so that sensing processing can be carried out at once, giving the obstacle identification high real-time performance.
Step S1 includes, for example, generating a Lidar point cloud data frame in a first time period, and step S2 includes, for example, acquiring image data frames in a second time period. Step S3 then includes, for example: when the length of the first time period is less than or equal to that of the second time period, matching each image data frame with the Lidar point cloud data frame based on that frame's acquisition time; and when the length of the first time period is greater than that of the second time period, matching one image data frame among those acquired in the first time period with the Lidar point cloud data frame based on that frame's acquisition time. In one specific application, when the length of the first time period is greater than that of the second time period, the image data frame used for matching is the most recently acquired one among the image data frames acquired in the first time period, that is, the last one in time order.
Thereby, synchronization and matching of the image data frame and the Lidar point cloud data frame are enabled even when the first time period and the second time period are not equal. In addition, under the condition that the length of the first time period is greater than that of the second time period, matching is performed based on time information of one image data frame in the image data frames collected in the first time period, so that repeated matching of a plurality of image data frames and the same Lidar point cloud data frame can be avoided, the load of data processing can be reduced, and the speed of perception processing is increased. In addition, the time information of the latest acquired image data frame in the image data frames acquired in the first time period is used for matching, the image data frame corresponding to the Lidar point cloud data is the latest acquired data, and the current state of the area to be detected can be accurately reflected for perception processing.
Step S1 includes, for example: and acquiring Lidar point cloud data of the area to be detected by utilizing an independent thread. Step S2 includes, for example: and acquiring an image data frame of the region to be detected by utilizing an independent thread.
Therefore, the image data frame and the Lidar point cloud data are acquired through mutually independent threads, the image data frame and the Lidar point cloud data can be acquired at the same time, the matching between the image data frame and the Lidar point cloud data can be rapidly performed, and the sensing processing can be performed in real time.
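The independent-thread acquisition described above can be sketched with two producer threads feeding separate buffers. This is a minimal hypothetical illustration; the queue names, the 100 ms stand-in period, and the payloads are all assumptions, not from the patent:

```python
import queue
import threading
import time

# separate buffers for the two independent acquisition threads
lidar_q, image_q = queue.Queue(), queue.Queue()

def lidar_thread(stop):
    """Collect a Lidar packet each cycle and enqueue it with a timestamp."""
    while not stop.is_set():
        lidar_q.put(("packet", time.monotonic()))
        stop.wait(0.1)  # stand-in for the 100 ms packet period

def camera_thread(stop):
    """Collect an image frame each cycle and enqueue it with a timestamp."""
    while not stop.is_set():
        image_q.put(("image", time.monotonic()))
        stop.wait(0.1)  # stand-in for the image acquisition period

stop = threading.Event()
threads = [threading.Thread(target=f, args=(stop,), daemon=True)
           for f in (lidar_thread, camera_thread)]
for t in threads:
    t.start()
```

A consumer (the matching step) would then drain both queues and pair frames by timestamp, while the two producers keep acquiring concurrently.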
As shown in fig. 3, the data processing method for autonomous driving of the present embodiment may further include: after the Lidar point cloud data frame is generated, the generated Lidar point cloud data frame is cached, and meanwhile, the time stamp of the Lidar point cloud data is recorded (step S5).
Therefore, the timestamp of the Lidar point cloud data can be recorded in real time, and the matching between the Lidar point cloud data frame and the image data frame can be more accurate.
As shown in fig. 4, the data processing method for autonomous driving of the present embodiment may further include: after the image data frame of the region to be measured is acquired, the acquired image data frame is buffered, and the time stamp of the image data frame is recorded as the acquisition time of the image data frame (step S6).
Therefore, the time information of the image data frame can be recorded in real time, and the matching between the Lidar point cloud data frame and the image data frame can be more accurate.
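The caching-with-timestamp behavior of steps S5 and S6 can be sketched as a small bounded buffer that records a timestamp alongside each frame at caching time. The class and method names are hypothetical:

```python
from collections import deque

class TimestampedCache:
    """Hypothetical buffer mirroring steps S5/S6: each cached frame is
    stored together with the timestamp recorded when it was buffered."""

    def __init__(self, maxlen=32):
        self._buf = deque(maxlen=maxlen)  # oldest entries are evicted

    def put(self, frame, timestamp):
        self._buf.append((timestamp, frame))

    def latest_before(self, t):
        """Newest cached frame whose timestamp does not exceed t."""
        eligible = [(ts, f) for ts, f in self._buf if ts <= t]
        return max(eligible, default=(None, None))[1]
```

Recording the timestamp at caching time is what later lets the matching step compare image acquisition times against Lidar packet timestamps directly.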
Referring to fig. 5, an embodiment of the present application further provides a data processing apparatus for automatic driving, and a specific implementation manner of the data processing apparatus is consistent with the implementation manner and the achieved technical effect described in the embodiment of the data processing method, and a part of the contents are not repeated.
As shown in fig. 5, the data processing apparatus 100 for autonomous driving includes: the system comprises a Lidar point cloud data acquisition module 101, wherein the Lidar point cloud data acquisition module 101 acquires Lidar point cloud data of a region to be detected to generate a Lidar point cloud data frame; an image data frame acquisition module 102, wherein the image data frame acquisition module 102 acquires an image data frame of a region to be detected; the data matching module 103 is used for matching the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame so as to obtain the Lidar point cloud data frame corresponding to the image data frame; and the sensing module 104, wherein the sensing module 104 performs sensing processing on the area to be detected according to the image data frame and the Lidar point cloud data frame corresponding to the image data frame.
The data matching module 103 may match the acquisition time of the image data frame with the timestamp of the Lidar point cloud data, and obtain the Lidar point cloud data frame to which the timestamp belongs based on the timestamp of the Lidar point cloud data matched with the acquisition time of the image data frame.
The data matching module 103 may also be configured to perform a matching of an image data frame to a Lidar point cloud data frame each time the image data frame is acquired based on the acquisition time of the image data frame.
The Lidar point cloud data acquisition module 101 may generate a Lidar point cloud data frame at a first time period. The image data frame acquisition module 102 may acquire the image data frame at a second time period. The data matching module 103 may also: under the condition that the length of the first time period is less than or equal to the length of the second time period, matching the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame acquired in the second time period; and matching the image data frame with the Lidar point cloud data frame based on the acquisition time of the latest acquired image data frame in the image data frames acquired in the first time period under the condition that the length of the first time period is greater than that of the second time period.
The Lidar point cloud data acquisition module 101 may acquire Lidar point cloud data of the area to be measured, for example, using an independent thread. The image data frame acquisition module 102 may acquire an image data frame of the region under test, for example, using a separate thread.
The data processing apparatus 100 for autonomous driving may further include: and the caching and recording module is used for caching the generated Lidar point cloud data frame and recording the timestamp of Lidar point cloud data, caching the acquired image data frame and recording the timestamp of the image data frame as the acquisition time of the image data frame.
The following describes a specific example of a data processing method and apparatus for automatic driving provided in the embodiments of the application.
As shown in fig. 6, the data processing apparatus for automatic driving of this example includes a LidarManager module, a CamManager module, a DataProcess module, and a perception algorithm module.
The LidarManager module implements the functions related to Lidar point cloud data collection. Specifically, each Lidar device corresponds to one LidarManager object, and each LidarManager object collects and caches data in an independent thread. As shown in fig. 7, each time, for example, 100 ms worth of data packets has been collected, the LidarManager object packs those packets into one frame of data, i.e., the generation period of the Lidar point cloud data frame is 100 ms. While packing the packets into a Lidar point cloud data frame, it synchronously records the timestamps of the packets in the frame.
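The packing step can be sketched as grouping timestamped packets into fixed-period frames while keeping every packet timestamp. This is a simplified sketch of the behavior just described; the function name and packet representation are assumptions:

```python
def pack_frames(packets, period_ms=100):
    """Group timestamped Lidar packets (timestamp_ms, payload) into
    frames each spanning up to `period_ms`. Each frame keeps its
    packets' timestamps, as the LidarManager records them on packing."""
    frames = []
    current, frame_start = [], None
    for ts, payload in sorted(packets):
        if frame_start is None:
            frame_start = ts
        if ts - frame_start >= period_ms:  # current frame's period is full
            frames.append(current)
            current, frame_start = [], ts
        current.append((ts, payload))
    if current:
        frames.append(current)
    return frames
```

Each returned frame's first and last packet timestamps then define the span used when matching against an image acquisition time.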
And the CamManager module is used for realizing related functions of camera image frame acquisition. Specifically, the CamManager corresponds to the forward-looking sensing camera module, and acquires and caches image frames by adopting an independent thread. As shown in fig. 7, the acquisition period of the image frames is, for example, 100ms. Each time an acquired image frame is buffered, the time stamp of the image frame is synchronously recorded.
The DataProcess module searches for the mutually matched Lidar point cloud data frame and sensing camera image frame and sends the matched data to the perception algorithm module. Specifically, each time the CamManager module acquires an image data frame, the DataProcess module fetches the image frame from the CamManager module, searches, with the timestamp of the image frame as the reference, for the matching timestamp among the Lidar point cloud data, and sends the Lidar point cloud data frame to which that timestamp belongs together with the corresponding image frame (that is, the mutually matched Lidar point cloud data frame and image frame) to the perception algorithm module.
The perception algorithm module performs perception algorithm processing based on the mutually matched Lidar point cloud data frame and sensing camera image frame received from the DataProcess module, extracting information such as the position, size, category, orientation, track, and speed of surrounding obstacles, thereby identifying the obstacles.
The specific operation of data frame matching is further described below with reference to fig. 7-9.
As shown in fig. 7, Lidar point cloud data are collected for the area to be measured, and every 100 ms of collected Lidar data packets is packed to generate one frame of data, for example Lidar data frame 1, Lidar data frame 2, Lidar data frame 3, …, Lidar data frame N; in addition, image frames of the area to be measured, such as image 1, image 2, image 3, …, image N, are acquired by the sensing camera every 100 ms. Each time an image frame is acquired, the timestamp of the Lidar point cloud data frame matched with the timestamp of the image frame is searched for, with the timestamp of the image frame as the reference, so as to obtain mutually matched Lidar data frames and image frames, for example Lidar data frame 1 & image 1, Lidar data frame 2 & image 2, Lidar data frame 3 & image 3, …, Lidar data frame N & image N shown in fig. 7.
The case where the generation period (first time period) of the Lidar data frame and the acquisition period (second time period) of the image frame are both 100ms, that is, the generation period of the Lidar data frame and the acquisition period of the image frame are equal has been described above. However, the generation period of the Lidar data frame and the acquisition period of the image frame are not limited to this, and a generation period of the Lidar data frame and an acquisition period of the image frame that are different from each other may be employed. Hereinafter, a case where the generation period of the Lidar data frame is less than the acquisition period of the image frame and a case where the generation period of the Lidar data frame is greater than the acquisition period of the image frame will be described as an example with reference to fig. 8 and 9, respectively.
For example, as shown in fig. 8, the generation period of the Lidar data frame is 100 ms, and the acquisition period of the image frame is 120 ms. Every 100 ms of collected Lidar data packets is packed to generate one frame of data, such as Lidar data frame 1, Lidar data frame 2, Lidar data frame 3, and Lidar data frame 4 shown in fig. 8; in addition, image frames of the sensing camera, such as image 1, image 2, image 3, …, image N shown in fig. 8, are acquired every 120 ms. When an image frame is acquired with the 120 ms period, the timestamp of the Lidar point cloud data frame matched with the timestamp of the image frame is searched for, with the timestamp of the image frame as the reference, so as to obtain mutually matched Lidar data frames and image frames, for example Lidar data frame 2 & image 1, Lidar data frame 3 & image 2, Lidar data frame 4 & image 3, …, Lidar data frame M & image N (M > N) shown in fig. 8.
That is, when the generation cycle of the Lidar data frame is less than or equal to the capture cycle of the image frame, the image data frame is matched with the Lidar point cloud data frame with reference to the time stamp of the image frame captured each time in the capture cycle of the image frame.
Also for example, as shown in fig. 9, the generation period of the Lidar data frame is 200 ms, and the acquisition period of the image frame is 100 ms. Every 200 ms of collected data packets is packed to generate one frame of data, for example Lidar data frame 1, Lidar data frame 2, …, Lidar data frame M shown in fig. 9; in addition, image frames of the sensing camera, such as image 1, image 2, image 3, image 4, …, image N-1, image N shown in fig. 9, are acquired every 100 ms. As shown in fig. 9, two image frames are acquired within each 200 ms Lidar data frame generation period, and the matching may be performed with the timestamp of only one of them (image 2, 4, …, N, where N is a positive even number) as the reference, without operating on the other image frame (image 1, 3, …, N-1), so as to obtain mutually matched Lidar data frames and image frames, such as Lidar data frame 1 & image 2, Lidar data frame 2 & image 4, …, Lidar data frame M & image N (N = 2M) shown in fig. 9.
That is, when the generation period of the Lidar data frame is longer than the acquisition period of the image frame, matching is performed based on the timestamp of the latest acquired one of the plurality of image data frames acquired within the Lidar data frame generation period. Therefore, repeated matching of a plurality of image data frames and the same Lidar point cloud data frame can be avoided, the load of data processing can be reduced, and the speed of perception processing is increased.
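Selecting the most recently acquired image frame within a Lidar frame generation period can be sketched as follows; the function name and argument conventions are illustrative assumptions:

```python
def select_image_for_lidar_frame(lidar_frame_end, image_times):
    """When the Lidar frame period exceeds the image period, several
    image frames fall within one Lidar frame. Pick only the most
    recently acquired one (largest timestamp not after the frame end),
    so each Lidar frame is matched exactly once."""
    eligible = [t for t in image_times if t <= lidar_frame_end]
    return max(eligible) if eligible else None
```

In the fig. 9 example (200 ms Lidar frames, 100 ms images), this selects image 2 for Lidar data frame 1, image 4 for Lidar data frame 2, and so on, skipping the odd-numbered images.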
Further, the above example takes, when the generation period of the Lidar data frame is greater than the acquisition period of the image frame, the timestamp of only one of the image data frames acquired within a Lidar data frame generation period as the reference. However, matching of the image data frame and the Lidar point cloud data frame may instead be performed with the timestamp of every acquired image frame as the reference, just as in the case where the generation period of the Lidar data frame is less than or equal to the acquisition period of the image frame.
According to the specific example of the data processing method and device for automatic driving, the problem that the time stamp of the Lidar point cloud data frame cannot be aligned with the time stamp of the image is solved, and the matching between the Lidar point cloud data frame and the image data frame can be more accurate by utilizing the time information in the image data frame and the time stamp of the Lidar point cloud data for matching, so that the sensing processing can be more accurately carried out. Further, even in the case where the first time period and the second time period are not equal, synchronization and matching of the image data frame and the Lidar point cloud data frame can be performed. Moreover, the time matching logic is simple and efficient.
Referring to fig. 10, an embodiment of the present application further provides an electronic device 200, where the electronic device 200 includes at least one memory 210, at least one processor 220, and a bus 230 connecting different platform systems.
The memory 210 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 211 and/or cache memory 212, and may further include Read Only Memory (ROM) 213.
The memory 210 further stores a computer program executable by the processor 220, causing the processor 220 to perform the steps of any of the methods in the embodiments of the present application. The specific implementation is consistent with the implementations and technical effects described in the method embodiments, and some of the content is not repeated here.
Accordingly, the processor 220 may execute the computer program described above, and may also execute the programs/utilities 214.
The electronic device 200 may also communicate with one or more external devices 240, such as a keyboard, pointing device, bluetooth device, etc., and may also communicate with one or more devices capable of interacting with the electronic device 200, and/or with any devices (e.g., routers, modems, etc.) that enable the electronic device 200 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 250. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 260. The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed, implements the steps of any of the methods in the embodiments of the present application. The specific implementation is consistent with the implementations and technical effects described in the method embodiments, and some of the content is not repeated here.
Fig. 11 shows a program product 300 provided by this embodiment for implementing the method. It may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product 300 of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The program product 300 may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The foregoing description and drawings are only illustrative of the preferred embodiments of the present application and are not intended to limit it; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present application.
Claims (9)
1. A data processing method for autonomous driving, comprising:
acquiring Lidar point cloud data of a region to be detected to generate a Lidar point cloud data frame;
collecting an image data frame of the area to be detected;
matching the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame to obtain the Lidar point cloud data frame corresponding to the image data frame; and
sensing the area to be detected according to the image data frame and the Lidar point cloud data frame corresponding to the image data frame;
wherein generating the Lidar point cloud data frame comprises: generating the Lidar point cloud data frame with a first time period;
wherein acquiring the image data frame of the region to be detected comprises: acquiring the image data frame with a second time period; and
the matching the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame to obtain the Lidar point cloud data frame corresponding to the image data frame comprises:
matching, if the length of the first time period is less than or equal to the length of the second time period, the image data frame with the Lidar point cloud data frame based on the acquisition time of each image data frame acquired in the second time period; and
matching, if the length of the first time period is greater than the length of the second time period, a single image data frame with the Lidar point cloud data frame based on the acquisition time of the most recently acquired image data frame among the image data frames acquired within the first time period.
2. The data processing method of claim 1, wherein the matching the image data frame with the Lidar point cloud data frame based on an acquisition time of the image data frame to obtain the Lidar point cloud data frame corresponding to the image data frame comprises:
matching the acquisition time of the image data frame with the timestamp of the Lidar point cloud data; and
obtaining the Lidar point cloud data frame to which the matched timestamp belongs, based on the timestamp of the Lidar point cloud data that matches the acquisition time of the image data frame.
3. The data processing method of claim 1, wherein the matching the image data frame to the Lidar point cloud data frame based on an acquisition time of the image data frame comprises:
matching the image data frame with the Lidar point cloud data frame, based on the acquisition time of the image data frame, at the moment the image data frame is acquired.
4. The data processing method of claim 1, wherein acquiring the Lidar point cloud data of the area to be detected comprises: acquiring the Lidar point cloud data of the area to be detected using an independent thread; and
wherein acquiring the image data frame of the area to be detected comprises: acquiring the image data frame of the area to be detected using an independent thread.
5. The data processing method of claim 2, further comprising: after the Lidar point cloud data frame is generated, caching the generated Lidar point cloud data frame while recording the timestamp of the Lidar point cloud data.
6. The data processing method of claim 1, further comprising: after the image data frame of the area to be detected is collected, caching the collected image data frame and recording its timestamp as the acquisition time of the image data frame.
7. A data processing apparatus for autonomous driving, comprising:
a Lidar point cloud data acquisition module that acquires Lidar point cloud data of a region to be detected to generate a Lidar point cloud data frame;
an image data frame acquisition module that acquires an image data frame of the area to be detected;
a data matching module that matches the image data frame with the Lidar point cloud data frame based on the acquisition time of the image data frame, so as to obtain the Lidar point cloud data frame corresponding to the image data frame; and
a sensing module that senses the area to be detected according to the image data frame and the Lidar point cloud data frame corresponding to the image data frame;
wherein the Lidar point cloud data acquisition module generates the Lidar point cloud data frame with a first time period;
wherein the image data frame acquisition module acquires the image data frame with a second time period;
wherein the data matching module matches, if the length of the first time period is less than or equal to the length of the second time period, the image data frame with the Lidar point cloud data frame based on the acquisition time of each image data frame acquired in the second time period; and
wherein the data matching module matches, if the length of the first time period is greater than the length of the second time period, a single image data frame with the Lidar point cloud data frame based on the acquisition time of the most recently acquired image data frame among the image data frames acquired within the first time period.
8. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the data processing method according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, implements the steps of the data processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111647088.2A CN114333418B (en) | 2021-12-30 | 2021-12-30 | Data processing method for automatic driving and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111647088.2A CN114333418B (en) | 2021-12-30 | 2021-12-30 | Data processing method for automatic driving and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114333418A CN114333418A (en) | 2022-04-12 |
CN114333418B true CN114333418B (en) | 2022-11-01 |
Family
ID=81016343
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111647088.2A Active CN114333418B (en) | 2021-12-30 | 2021-12-30 | Data processing method for automatic driving and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114333418B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109271880A (en) * | 2018-08-27 | 2019-01-25 | 深圳清创新科技有限公司 | Vehicle checking method, device, computer equipment and storage medium |
CN111179358A (en) * | 2019-12-30 | 2020-05-19 | 浙江商汤科技开发有限公司 | Calibration method, device, equipment and storage medium |
WO2020104423A1 (en) * | 2018-11-20 | 2020-05-28 | Volkswagen Aktiengesellschaft | Method and apparatus for data fusion of lidar data and image data |
CN113256740A (en) * | 2021-06-29 | 2021-08-13 | 湖北亿咖通科技有限公司 | Calibration method of radar and camera, electronic device and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108198145B (en) * | 2017-12-29 | 2020-08-28 | 百度在线网络技术(北京)有限公司 | Method and device for point cloud data restoration |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109271880A (en) * | 2018-08-27 | 2019-01-25 | 深圳清创新科技有限公司 | Vehicle checking method, device, computer equipment and storage medium |
WO2020104423A1 (en) * | 2018-11-20 | 2020-05-28 | Volkswagen Aktiengesellschaft | Method and apparatus for data fusion of lidar data and image data |
CN111179358A (en) * | 2019-12-30 | 2020-05-19 | 浙江商汤科技开发有限公司 | Calibration method, device, equipment and storage medium |
CN113256740A (en) * | 2021-06-29 | 2021-08-13 | 湖北亿咖通科技有限公司 | Calibration method of radar and camera, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114333418A (en) | 2022-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107677279B (en) | Method and system for positioning and establishing image | |
EP3830714B1 (en) | Systems and methods for generating metadata describing unstructured data objects at the storage edge | |
CN109145680B (en) | Method, device and equipment for acquiring obstacle information and computer storage medium | |
CN108683937B (en) | Voice interaction feedback method and system for smart television and computer readable medium | |
WO2005081127A2 (en) | System and method for generating a viewable video index for low bandwidth applications | |
JP7267363B2 (en) | Test method, device and equipment for traffic flow monitoring measurement system | |
JP7273129B2 (en) | Lane detection method, device, electronic device, storage medium and vehicle | |
US20220147542A1 (en) | Automation solutions for event logging and debugging on kubernetes | |
CN113159091B (en) | Data processing method, device, electronic equipment and storage medium | |
US8639559B2 (en) | Brand analysis using interactions with search result items | |
CN109376664B (en) | Machine learning training method, device, server and medium | |
CN114186007A (en) | High-precision map generation method and device, electronic equipment and storage medium | |
CN114333418B (en) | Data processing method for automatic driving and related device | |
CN114415542A (en) | Automatic driving simulation system, method, server and medium | |
CN115061386B (en) | Intelligent driving automatic simulation test system and related equipment | |
CN112017202B (en) | Point cloud labeling method, device and system | |
CN114285114A (en) | Charging control method and device, electronic equipment and storage medium | |
CN113674006A (en) | Agricultural product logistics traceability system based on visual calculation | |
CN111721355A (en) | Railway contact net monitoring data acquisition system | |
WO2024131459A1 (en) | Target detection method, apparatus and device, and readable storage medium | |
US11967041B2 (en) | Geospatial image processing for targeted data acquisition | |
KR102045913B1 (en) | Method for providing blackbox video and apparatus therefor | |
CN116524330B (en) | Embedded image recognition model operation management method, system and image recognition device | |
CN117609337A (en) | Organization method, device and equipment for cleaning automatic driving data and storage medium | |
Ghorbanpanah et al. | A real-time asset management system based on deep-learning via TensorFlow software library |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||