CN118279729A - Method and system for verifying intelligent driving domain controller sensing algorithm based on analog camera - Google Patents

Method and system for verifying intelligent driving domain controller sensing algorithm based on analog camera

Info

Publication number
CN118279729A
Authority
CN
China
Prior art keywords
data
camera
domain controller
real
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410675104.6A
Other languages
Chinese (zh)
Inventor
杨永翌
王强
张鲁
孟佳旭
王寅东
陈超
沈永旺
武晓梦
曹曼曼
王剑飞
陈旭亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongqi Zhilian Technology Co ltd
China Automotive Technology and Research Center Co Ltd
Original Assignee
Zhongqi Zhilian Technology Co ltd
China Automotive Technology and Research Center Co Ltd
Filing date
Publication date
Application filed by Zhongqi Zhilian Technology Co ltd, China Automotive Technology and Research Center Co Ltd
Publication of CN118279729A


Abstract

The invention relates to the technical field of intelligent driving camera simulation and discloses a method and a system for verifying a perception algorithm of an intelligent driving domain controller based on an analog camera. The method comprises the following steps: acquiring view data and laser radar point cloud data of a real driving scene; preprocessing the view data; performing camera simulation in a module simulation module, outputting the intermediate data and the data group packet to the intelligent driving domain controller, and simultaneously outputting the laser radar point cloud data to the intelligent driving domain controller through an Ethernet board card to obtain true value information of a target object in the real camera coordinate system; the perception algorithm unit outputs a perception result according to the original format data, compares the perception result with the true value information of the target object in the real camera coordinate system, and verifies the performance indexes of the perception algorithm. The method improves test efficiency, improves the authenticity of the simulation scene and the credibility of the test result, accurately identifies problems in the perception algorithm, and reduces testing risk.

Description

Method and system for verifying intelligent driving domain controller sensing algorithm based on analog camera
Technical Field
The invention relates to the technical field of intelligent driving camera simulation, in particular to a method and a system for verifying a perception algorithm of an intelligent driving domain controller based on an analog camera.
Background
At present, test verification of autonomous driving perception algorithms remains an industry challenge. Traditional vehicle development usually relies on real-vehicle verification based on accumulated test mileage. For an autonomous vehicle, however, real-vehicle testing is inefficient and poorly reproducible. Moreover, the perception algorithm of an autonomous vehicle is complex, and its capability is difficult to evaluate comprehensively from the mileage dimension alone, so the industry generally holds that test verification of autonomous vehicles should be performed as scenario-based simulation verification. Autonomous vehicles are typically equipped with a variety of sensors, including cameras, millimeter-wave radars, lidars, and ultrasonic radars; as the level of autonomy rises, the number of sensors keeps increasing, generating a large amount of real driving scenario data, and using this data to verify the perception algorithm is challenging.
In the in-the-loop simulation verification method, a simulation scene is generally output by simulation software, and the intelligent driving domain controller performs perception recognition after receiving it; however, the industry currently questions the authenticity of scenes output by simulation software, holding that simulated scene images differ to some extent from real-world driving scenes. Real-vehicle testing offers high fidelity and highly credible results, but its efficiency is low, and it struggles to keep pace with the iteration speed of autonomous driving perception algorithms. A real-vehicle test run is also difficult to reproduce 100%, i.e., the test process is hard to trace; in principle a real-vehicle test can never be perfectly reproduced, so it cannot satisfy the need to localize perception-algorithm problems. Finally, real-vehicle tests of dangerous scenarios carry high testing risk.
Therefore, a method and a system for verifying the perception algorithm of an intelligent driving domain controller based on an analog camera are needed, which can improve test efficiency, improve the authenticity of the simulation scene and the credibility of the test result, accurately identify problems in the perception algorithm, and reduce testing risk.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method and a system for verifying the perception algorithm of an intelligent driving domain controller based on an analog camera, which improve test efficiency, the authenticity of the simulation scene, and the credibility of the test result, thereby accurately identifying problems in the perception algorithm and reducing testing risk.
The invention provides a method and a system for verifying a perception algorithm of an intelligent driving domain controller based on an analog camera, wherein the method comprises the following steps:
Respectively acquiring view data and laser radar point cloud data of a real driving scene through a real camera and a laser radar, and carrying out data synchronization processing on the view data and the laser radar point cloud data; wherein the view data comprises video data and/or picture data;
preprocessing view data of a real driving scene subjected to data synchronization processing to obtain original format data, and transmitting the original format data to a video injection board card; the original format data comprises original view data which is acquired by the real camera and is not processed by the real camera, and the original format data is parallel data;
The method comprises the steps that camera simulation is carried out in a module simulation module of a video injection board card, the module simulation module comprises a camera simulation module and a communication simulation module, the camera simulation module receives original format data and carries out data reduction processing to obtain original format intermediate data, the intermediate data and a data group packet obtained according to camera simulation are sent to the communication simulation module, and the communication simulation module sends the intermediate data and the data group packet to a string adding unit of the video injection board card; the intermediate data comprises original view data and communication information data which are acquired by the real camera and are not processed by the real camera;
Converting the intermediate data and the data group packet into serial data through a string adding unit of the video injection board card, outputting the converted intermediate data and data group packet to the intelligent driving domain controller, and simultaneously outputting laser radar point cloud data subjected to data synchronization processing to the intelligent driving domain controller through the Ethernet board card to obtain true value information of a target object under a real camera coordinate system;
The converted intermediate data and the data group packet are recovered into parallel data through a deserializing unit in the intelligent driving domain controller, and are input into a perception algorithm unit in the intelligent driving domain controller;
the sensing algorithm unit outputs a sensing result according to the original format data, compares the sensing result with true value information of a target object under a real camera coordinate system, and verifies performance indexes of the sensing algorithm.
Further, the performing the camera simulation in the module simulation module of the video injection board card specifically includes:
Configuring module parameters of a camera simulation module according to parameter information of a real camera, and packaging the module parameters into a data group packet; the module parameters comprise resolution, pixel space, frame rate, communication protocol, data field and synchronous information, and the data group package is parallel data;
configuring communication parameters of a communication simulation module according to a transmission protocol of a real camera; wherein the communication parameters include binary addresses and numerical values of registers.
Further, configuring the communication parameters of the communication simulation module according to the transmission protocol of the real camera includes:
Analyzing and processing according to the transmission protocol of the real camera to obtain binary addresses and numerical values of a plurality of registers to be configured;
the binary address and the numerical value of a register corresponding to the intelligent driving domain controller executing the reading operation need to be configured completely according to the binary address and the numerical value of the register of the real camera;
For the binary address of a register corresponding to a write operation executed by the intelligent driving domain controller, the address must be configured exactly according to the binary address of the real camera's register, while the register value may be configured to 0.
Further, performing data synchronization processing on the view data and the laser radar point cloud data includes:
respectively acquiring the time of the real camera based on the data acquired under the main clock and the time of the laser radar based on the data acquired under the main clock; wherein the master clock is the time sequence of the Ethernet board;
and mapping the time of the data acquired by the laser radar onto a time axis by taking the time of the data acquired by the real camera as the time axis, so that the time synchronization and alignment of the view data and the laser radar point cloud data are realized.
Further, when the intermediate data and the data group packet are converted into serial data through the string adding unit of the video injection board card, and the converted intermediate data and data group packet are output to the intelligent driving domain controller, and meanwhile, when the laser radar point cloud data are output to the intelligent driving domain controller through the Ethernet board card, the intelligent driving domain controller further comprises:
The time sequence of the Ethernet board is used as the master clock to provide timing for the video injection board card, thereby ensuring that the intermediate data and the laser radar point cloud data are output to the intelligent driving domain controller synchronously.
Further, preprocessing view data of a real driving scene to obtain original format data includes:
when the view data includes only video data, converting the video data into original format data;
When the view data includes picture data, the picture data is converted into video stream data, and then the video stream data is converted into original format data.
Further, outputting the laser radar point cloud data subjected to the data synchronization processing to the intelligent driving domain controller through the Ethernet board, and obtaining the true value information of the target object under the real camera coordinate system comprises the following steps:
The intelligent driving domain controller outputs true value information of a target object in a real scene according to the laser radar point cloud data, and converts the true value information of the target object into true value information of the target object under a real camera coordinate system through coordinate mapping; the target comprises vehicles, road lines and traffic signs, and the truth value information comprises positions, types and motion states.
The invention also provides a system for verifying the intelligent driving domain controller sensing algorithm based on the analog camera, which is used for executing the method for verifying the intelligent driving domain controller sensing algorithm based on the analog camera, and comprises the following modules:
The data acquisition module comprises a real camera and a laser radar, and is respectively used for acquiring view data of a real driving scene and laser radar point cloud data, and carrying out data synchronization processing on the view data and the laser radar point cloud data; wherein the view data comprises video data and/or picture data;
The preprocessing module is connected with the real camera and is used for preprocessing view data of the real driving scene subjected to data synchronization processing to obtain original format data, and transmitting the original format data to the video injection board card; the original format data comprises original view data which is acquired by the real camera and is not processed by the real camera, and the original format data is parallel data;
The video injection board card is connected with the preprocessing module and comprises a module simulation module and a string adding unit, the module simulation module is used for performing camera simulation, the module simulation module comprises a camera simulation module and a communication simulation module, the camera simulation module receives data in an original format and performs data reduction processing to obtain intermediate data in the original format, the intermediate data and a data group packet obtained according to camera simulation are sent to the communication simulation module, and the communication simulation module sends the intermediate data and the data group packet to the string adding unit of the video injection board card; the intermediate data comprise original view data and communication information data which are acquired by the real camera and are not processed by the real camera;
The string adding unit is connected with the module simulation module and is used for converting the intermediate data and the data group packet into serial data and outputting the converted intermediate data and data group packet to the intelligent driving domain controller;
The Ethernet board is connected with the laser radar and used for outputting laser radar point cloud data to the intelligent driving domain controller to obtain true value information of a target object under a real camera coordinate system;
the intelligent driving domain controller is connected with the video injection board card and the Ethernet board card, and comprises a deserializing unit and a perception algorithm unit, wherein the deserializing unit is used for recovering the converted intermediate data and data group packets into parallel data and inputting the parallel data into the perception algorithm unit in the intelligent driving domain controller;
The sensing algorithm unit is connected with the deserializing unit and is used for outputting a sensing result according to the original format data, comparing the sensing result with true value information of a target object under a real camera coordinate system and verifying the performance index of the sensing algorithm.
Further, the preprocessing module further comprises a video decoding module, and the video decoding module is used for converting video data or video stream data into original format data.
The embodiment of the invention has the following technical effects:
By taking truly collected video data or picture data as the input data, authenticity is higher than that of a simulation scene output by simulation software, and the test result is more credible; by simulating the camera, test efficiency is greatly improved compared with testing directly with a real camera; since the simulated camera is configured according to the real camera and is fully consistent with it, and the truly collected video or picture data is sent to the intelligent driving domain controller through the simulated camera, the whole test flow is completely traceable and reproducible, so problems in the perception algorithm can be accurately identified; and because testing is performed through a simulated camera, even dangerous scenarios carry no testing risk, and the risk factor is zero.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for verifying a sensing algorithm of an intelligent driving domain controller based on an analog camera provided by an embodiment of the invention;
fig. 2 is a schematic structural diagram of a system for verifying a sensing algorithm of an intelligent driving domain controller based on an analog camera according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the invention, are within the scope of the invention.
Fig. 1 is a flowchart of a method for verifying a sensing algorithm of an intelligent driving domain controller based on an analog camera according to an embodiment of the present invention, referring to fig. 1, specifically includes:
S1, respectively acquiring view data and laser radar point cloud data of a real driving scene through a real camera and a laser radar, and carrying out data synchronization processing on the view data and the laser radar point cloud data.
Specifically, the data acquisition of the real driving scene can be completed through a real vehicle, a real camera, a laser radar, acquisition software, a data recording system and an upper computer. The view data is collected by the real camera, the laser radar point cloud data is collected by the laser radar, the collected view data is stored in an H264 format, the collected radar point cloud data is stored in a PCD format, and the view data and the radar point cloud data are stored in the data recording system and can be directly displayed in collecting software in an upper computer.
Further, the view data includes video data and/or picture data. And (3) taking the view data of the real driving scene as input data of a perception algorithm of the intelligent driving domain controller, taking the laser radar point cloud data as true value data, comparing the true value data with the result output by the perception algorithm of the intelligent driving domain controller, and evaluating the performance of the output result of the perception algorithm.
Further, since sampling frequencies of the two collected data may be different, it is necessary to perform data synchronization processing on the view data and the laser radar point cloud data, i.e., time synchronization and alignment processing. The data synchronization processing of the view data and the laser radar point cloud data specifically comprises the following steps:
S11, respectively acquiring the time of the data collected by the real camera based on the master clock and the time of the data collected by the laser radar based on the master clock; wherein the master clock is the time sequence of the Ethernet board card.
Specifically, the time sequence of the Ethernet board is used as a master clock, the time of the master clock is set to be the highest priority time of the acquired data, and the accuracy is ensured by the vehicle-mounted GPS system. Meanwhile, a time node of data acquired by the sensor is acquired through the counter, the process is called data acquisition time, and thus the time of the data acquired by the real camera and the laser radar based on the main clock is obtained.
And S12, mapping the time of the data acquired by the laser radar onto a time axis by taking the time of the data acquired by the real camera as the time axis, so that the time synchronization and alignment of the view data and the laser radar point cloud data are realized.
Specifically, a time deviation arises between the data collected by the real camera and by the laser radar when measured against the master clock, i.e., a first deviation, a second deviation, ..., an n-th deviation. Because the view data collected by the real camera subsequently serves mainly as test data and the laser radar point cloud data serves as verification data, the time of the data collected by the real camera is taken as the time axis, and the acquisition times of the laser radar point cloud data are mapped onto that axis. The average of the n deviations is selected as the data acquisition offset between the two sensors; taking the average accounts for hardware-stability factors of the real camera and the laser radar and minimizes the error within a data acquisition period.
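As an illustration, the deviation averaging and time-axis mapping described above can be sketched as follows; the timestamps and function names are hypothetical and not taken from the patent:

```python
def mean_offset(cam_times, lidar_times):
    """Mean deviation between paired camera and lidar timestamps (seconds)."""
    assert len(cam_times) == len(lidar_times)
    deviations = [lt - ct for ct, lt in zip(cam_times, lidar_times)]
    return sum(deviations) / len(deviations)

def align_to_camera_axis(lidar_times, offset):
    """Map lidar acquisition times onto the camera time axis."""
    return [t - offset for t in lidar_times]

# Hypothetical master-clock timestamps (camera at ~30 fps, lidar samples nearby).
cam = [0.000, 0.033, 0.066, 0.100]
lidar = [0.002, 0.035, 0.068, 0.102]

offset = mean_offset(cam, lidar)              # ~0.002 s acquisition offset
aligned = align_to_camera_axis(lidar, offset) # now on the camera time axis
```

Averaging over n paired samples, as the text describes, smooths out jitter from the two sensors' hardware instead of trusting any single deviation measurement.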
S2, preprocessing view data of the real driving scene subjected to data synchronization processing to obtain original format data, and sending the original format data to a video injection board card.
Specifically, the original format data comprises the original view data collected by the real camera before any in-camera processing; at this point the original format data is parallel data and is not suitable for long-distance transmission. The original view data captured by a real camera is normally processed by an ISP (image signal processor) to render an image recognizable by humans; this step implements the inverse process, i.e., the collected human-recognizable picture or video frame is inverse-transformed back into the original view data.
Preprocessing view data of a real driving scene to obtain original format data, wherein the original format data comprises:
s2.1, when the view data only comprises video data, converting the video data into original format data; the conversion of the video DATA into the original format DATA may be implemented by the video decoding module, and the specific form of the original format should be a DATA format supported by the domain controller, for example, a RAW DATA format.
The video decoding module is built from a reversible neural-network mapping model, which optimizes learnable function parameters using a one-dimensional convolution layer to obtain an optimal transform-function weight matrix. Meanwhile, to prevent distortion from data compression during conversion, a differentiable compression-algorithm model is designed: the compression function is constructed from a Fourier series and data reconstruction is achieved, so that the conversion from really acquired data to original data is realized with minimal loss.
S2.2, when the view data comprises picture data, converting the picture data into video stream data, and then converting the video stream data into original format data.
Further, converting picture data into video stream data generally involves reading, encoding, decoding, and writing the pictures. First, the pictures are arranged in a given order and compressed into a video bitstream by encoding; the resolution, frame rate, and encoding format of the video are set according to the real acquisition data format, which reduces the video file size and facilitates storage and transmission. Interpolation and compensation are then applied to the pictures so that the encoded video stream is smoother, yielding the video stream data required for simulation.
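The interpolation step can be illustrated with a minimal sketch; the patent does not specify the interpolation formula, so simple linear interpolation between consecutive grayscale frames (nested lists) is assumed here:

```python
def interpolate_frames(frames, factor=2):
    """Insert linearly interpolated frames between consecutive pictures to
    smooth the stream before encoding (grayscale frames as nested lists)."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for k in range(1, factor):
            t = k / factor
            out.append([[(1 - t) * pa + t * pb for pa, pb in zip(ra, rb)]
                        for ra, rb in zip(a, b)])
    out.append(frames[-1])
    return out

# Two ordered 2x2 pictures; factor=2 inserts one in-between frame per pair.
pics = [[[0, 0], [0, 0]], [[4, 4], [4, 4]]]
stream = interpolate_frames(pics, factor=2)
# stream holds 3 frames; the middle one is the average of the two pictures
```

A real implementation would hand the smoothed frame sequence to an encoder configured with the resolution, frame rate, and format of the real acquisition data.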
S3, performing camera simulation in a module simulation module of the video injection board.
Specifically, the CMOS module of the real camera is simulated by a programmable FPGA development board by performing camera simulation in the module simulation module of the video injection board. The module simulation module comprises a camera simulation module and a communication simulation module, wherein the camera simulation module receives data in an original format and performs data reduction processing to obtain intermediate data in the original format, the intermediate data and a data group packet obtained according to camera simulation are sent to the communication simulation module, and the communication simulation module sends the intermediate data and the data group packet to a string adding unit of the video injection board card. The intermediate data comprises original view data and communication information data which are acquired by the real camera and are not processed by the real camera.
Specifically, the performing camera simulation in the module simulation module of the video injection board card specifically includes:
S3.1, configuring module parameters of a camera simulation module according to parameter information of a real camera, and packaging the module parameters into a data packet; the module parameters comprise resolution, pixel space, frame rate, communication protocol, data field, synchronous information and the like, and the module parameters of the camera simulation module are completely consistent with the parameter information of the real camera; the data group packet at this time is parallel data, and is not suitable for long-distance transmission.
The camera simulation module specifically comprises a photosensitive pixel array, row and column drivers, timing control logic, an AD converter, a DATA bus output interface, a control interface, a clock, and the like; a CMOS driver program is also configured. Each pixel of the RAW DATA on the FPGA development board can sense only one of R, G, or B, so the data stored at each pixel is monochromatic light data.
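The single-colour-per-pixel property of the RAW data can be sketched as follows, assuming an RGGB Bayer layout; the layout and function name are illustrative assumptions, as the patent does not state the mosaic pattern:

```python
def to_bayer_rggb(rgb):
    """Sample an RGGB Bayer mosaic from a full RGB frame: each output pixel
    keeps only the one colour channel its site is sensitive to."""
    h, w = len(rgb), len(rgb[0])
    raw = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y][x]
            if y % 2 == 0 and x % 2 == 0:
                raw[y][x] = r      # R site (even row, even column)
            elif y % 2 == 1 and x % 2 == 1:
                raw[y][x] = b      # B site (odd row, odd column)
            else:
                raw[y][x] = g      # G sites (the remaining half of the grid)
    return raw

frame = [[(10, 20, 30)] * 2 for _ in range(2)]  # uniform 2x2 RGB frame
raw = to_bayer_rggb(frame)
```

Because every site stores a single channel value, the simulated RAW stream matches what a CMOS sensor without demosaicing would output.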
S3.2, configuring communication parameters of the communication simulation module according to a transmission protocol of the real camera; wherein the communication parameters include binary addresses and numerical values of registers.
Specifically, a programmable FPGA development board is embedded on a video injection board card, and a reference clock is set to complete the adaptation of a system data source. The data transmission between the real camera and the intelligent driving domain controller involves some communication data besides the view data, so that the communication data also needs to be reproduced.
Further, configuring the communication parameters of the communication simulation module according to the transmission protocol of the real camera includes:
And analyzing and processing according to the transmission protocol of the real camera to obtain binary addresses and numerical values of a plurality of registers to be configured.
The binary address and the numerical value of the register corresponding to the intelligent driving domain controller executing the reading operation need to be configured completely according to the binary address and the numerical value of the register of the real camera.
For the binary address of a register corresponding to a write operation executed by the intelligent driving domain controller, the address must be configured exactly according to the binary address of the real camera's register, while the register value may be configured to 0.
The transmission mode of the communication data is exemplified by the I2C protocol. The method specifically comprises the following steps:
A video injection board card is connected in series between the real camera and the intelligent driving domain controller; the data flow is real camera, then video injection board card, then intelligent driving domain controller, and while the video injection board card is being configured, the view data and the I2C data are recorded. After power-on, the intelligent driving domain controller sends a trigger signal to the real camera and modifies the camera register configuration; these behaviors are stored as I2C data in the I2C protocol format and are data generated under real conditions.
In order to realize the communication behavior between the real camera and the intelligent driving domain controller in simulation, in CMOS module simulation, the obtained I2C data under the real condition is analyzed and processed to obtain binary addresses and numerical values of a plurality of registers to be configured, and a block of storage is opened up in a storage area of a video injection board card for simulating the I2C data.
The intelligent driving domain controller performs two kinds of operations on the camera: reads and writes. For a write, only the existence of the binary address of the corresponding register must be ensured, and its value may be set to 0, because the intelligent driving domain controller overwrites the original register value when writing. For a read operation, the data recorded under real conditions must be reproduced 100%, with no error in accuracy or format and no loss, because once the intelligent driving domain controller fails to read the correct data, normal communication with the camera cannot be guaranteed and the entire data path cannot proceed.
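The read/write register rule above can be sketched as a small register-map builder; the addresses, values, and function name are hypothetical, not taken from the patent:

```python
def build_sim_register_map(real_registers, writable_addrs):
    """Build the simulated I2C register map: registers the controller READS
    must reproduce the real camera's address and value exactly; registers
    the controller WRITES keep the real address but are initialised to 0,
    since the controller overwrites them anyway."""
    sim = {}
    for addr, value in real_registers.items():
        sim[addr] = 0 if addr in writable_addrs else value
    return sim

# Hypothetical register dump parsed from recorded I2C traffic.
real = {0x3000: 0x56, 0x3001: 0x10, 0x3100: 0xAB}
writable = {0x3100}  # the controller writes this one at power-up
sim_map = build_sim_register_map(real, writable)
```

This mirrors the asymmetry in the text: read paths demand bit-exact reproduction, while write paths only need the address to exist.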
S4, converting the intermediate data and the data group packet into serial data through a string adding unit of the video injection board card, outputting the converted intermediate data and data group packet to the intelligent driving domain controller, and simultaneously outputting laser radar point cloud data subjected to data synchronous processing to the intelligent driving domain controller through the Ethernet board card to obtain true value information of a target object under a real camera coordinate system.
Specifically, converting the intermediate data and the data group packet into serial data through the string adding unit enables long-distance transmission, and the string adding unit outputs the data bit by bit. The specific implementation is as follows: the FPGA chip in the video injection board card serves as the CMOS module simulating the real camera and outputs the original-format intermediate data to the string adding unit; the content transmitted to the string adding unit includes line synchronization information, frame synchronization information, image data information, the pixel clock, and the like.
Illustratively, the string adding unit may be configured as follows:
The main clock that synchronizes the data stream is configured; in this scheme the clock frequency is set to 1.2 GHz, the resolution of the transmitted data is 3840 × 2160, and the frame rate is 30 fps. Because the intelligent driving domain controller under test receives data in RAW DATA format, the data format of the string adding unit is set to RAW DATA mode. The string adding unit generally supports multiple data channels, up to 4 at most; since the 4K video data volume is large, 4 data channels are configured for transmission. The string adding unit generally offers a low-power mode and a high-performance mode; the high-performance mode is selected to ensure lossless data transmission.
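As a sanity check on the configuration above, the required link throughput can be computed from the stated resolution and frame rate. The bit depth (RAW12) and the per-lane rate (double-data-rate signaling at the 1.2 GHz clock, i.e. 2.4 Gb/s per lane) are assumptions made for illustration; the patent does not specify either:

```python
import math

def lanes_needed(width, height, fps, bits_per_pixel, lane_rate_bps):
    """Minimum data lanes for a raw video link, ignoring blanking and protocol overhead."""
    required_bps = width * height * fps * bits_per_pixel
    return math.ceil(required_bps / lane_rate_bps), required_bps

# 3840 x 2160 @ 30 fps, assumed RAW12, assumed 2.4 Gb/s per lane
lanes, required = lanes_needed(3840, 2160, 30, 12, 2_400_000_000)
```

Under these assumptions roughly 3 Gb/s of payload is needed, so two lanes would suffice in principle; configuring all four lanes, as the scheme does, leaves headroom for blanking and protocol overhead.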
Further, the intermediate data and the laser radar point cloud data are input to the intelligent driving domain controller through the video injection board card and the Ethernet board card respectively, and the intelligent driving domain controller needs to acquire the intermediate data in a triggered manner. Therefore, when the data are re-injected to the intelligent driving domain controller, the timing of the Ethernet board card is used as the master clock to provide time service to the video injection board card, ensuring that the intermediate data and the laser radar point cloud data are output to the intelligent driving domain controller synchronously.
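One minimal way to realize the time service described above is an offset-based clock discipline, where the video injection board corrects its local timestamps onto the Ethernet board's master timeline. This is a simplified sketch; real time-service protocols (e.g. gPTP) also estimate clock drift and path delay:

```python
class DisciplinedClock:
    """Video-injection board clock slaved to the Ethernet board master clock."""

    def __init__(self):
        self.offset = 0.0  # master_time - local_time measured at last sync

    def sync(self, master_now, local_now):
        # Called whenever the Ethernet board delivers a time-service message.
        self.offset = master_now - local_now

    def to_master(self, local_t):
        # Translate a local timestamp onto the master timeline, so video
        # frames and lidar packets can be compared on one time axis.
        return local_t + self.offset
```

After each sync, frame timestamps issued by the video injection board line up with the lidar timestamps stamped by the Ethernet board.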
Further, outputting the laser radar point cloud data subjected to the data synchronization processing to the intelligent driving domain controller through the Ethernet board, and obtaining the true value information of the target object under the real camera coordinate system comprises the following steps:
The intelligent driving domain controller outputs true value information of a target object in a real scene according to the laser radar point cloud data, and converts the true value information of the target object into true value information of the target object under a real camera coordinate system through coordinate mapping; the target object may include a vehicle, a road line, a traffic sign, etc., and the truth information may include a position, a type, a movement state, etc.
Furthermore, the truth information of the target object in the real scene can be output through a truth system computing platform in the intelligent driving domain controller according to the laser radar point cloud data.
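The coordinate mapping from the lidar frame into the real camera frame is a rigid-body transform using the camera–lidar extrinsics. A minimal sketch follows; the rotation and translation values used in the example are illustrative stand-ins, not calibrated extrinsics:

```python
def lidar_to_camera(point, rotation, translation):
    """Map an (x, y, z) point from the lidar frame into the camera frame.

    rotation: 3x3 extrinsic rotation matrix given as a list of rows
    translation: length-3 translation vector
    """
    x, y, z = point
    # p_cam = R @ p_lidar + t, written out component by component
    return tuple(
        rotation[i][0] * x + rotation[i][1] * y + rotation[i][2] * z + translation[i]
        for i in range(3)
    )
```

Applying this transform to every truth-object position expresses the lidar-derived ground truth in the real camera coordinate system, where it can be compared with the perception output.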
S5, recovering the converted intermediate data and data group packets into parallel data through a deserializing unit in the intelligent driving domain controller, and inputting the parallel data into a perception algorithm unit in the intelligent driving domain controller.
Specifically, the deserializing unit receives the serial data and the clock signal output by the string adding unit, samples the input serial data on the rising or falling edge of the clock signal, and stores the sampled data in an internal register of the deserializer. The sampling and storing operations are controlled by the clock signal, whose frequency must be kept consistent with that of the serial data.
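The edge-triggered sampling described above amounts to shifting serial bits into a register and regrouping them into parallel words. A behavioral sketch follows; the word width and MSB-first bit order are assumptions for illustration, since a real SerDes device fixes these in hardware:

```python
def deserialize(bits, word_width):
    """Regroup a serial bit stream into parallel words, MSB first.

    Each element of `bits` models one bit sampled on a clock edge; an
    incomplete trailing word is discarded, as a real shift register would
    hold those bits until the next clock edges arrive.
    """
    words = []
    usable = len(bits) - len(bits) % word_width
    for i in range(0, usable, word_width):
        word = 0
        for b in bits[i:i + word_width]:
            word = (word << 1) | b  # shift in one sampled bit
        words.append(word)
    return words
```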
And S6, the perception algorithm unit outputs a perception result according to the original format data, compares the perception result with true value information of a target object under a real camera coordinate system, and verifies the performance index of the perception algorithm.
Illustratively, the perception algorithm performance indexes may include accuracy, precision, and the like. A preset accuracy threshold can be set; when the accuracy of the perception result output by the perception algorithm unit from the original format data, measured against the true value information of the target object in the real camera coordinate system, is greater than or equal to the preset threshold, the perception algorithm performance is considered excellent.
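The comparison against the preset accuracy threshold can be sketched as follows. The matching rule (same type, position within a tolerance) and the field names are assumptions for illustration; the patent does not fix the exact matching criterion:

```python
def verify_perception(detections, ground_truth, max_pos_err, min_accuracy):
    """Score perception output against truth in the camera frame.

    A detection counts as correct if its type matches a ground-truth object
    and its position is within max_pos_err on both axes (simplified rule).
    Returns (accuracy, pass/fail against the preset threshold).
    """
    correct = 0
    for det in detections:
        for gt in ground_truth:
            if (det["type"] == gt["type"]
                    and abs(det["x"] - gt["x"]) <= max_pos_err
                    and abs(det["y"] - gt["y"]) <= max_pos_err):
                correct += 1
                break
    accuracy = correct / len(ground_truth) if ground_truth else 1.0
    return accuracy, accuracy >= min_accuracy
```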
In the embodiment of the invention, taking truly acquired video or picture data as the input yields higher realism and more credible test results than a simulation scene output by simulation software; testing with the simulated camera greatly improves test efficiency compared with testing directly with a real camera; because the simulated camera is configured according to the real camera and is fully consistent with it, and the truly acquired video or picture data is sent to the intelligent driving domain controller through the simulated camera, the whole test flow is completely traceable and reproducible, and problems in the perception algorithm can be accurately identified; testing with a simulated camera incurs no test risk even in dangerous scenarios, so the risk coefficient is zero.
Fig. 2 is a schematic structural diagram of a system for verifying a perception algorithm of an intelligent driving domain controller based on an analog camera according to an embodiment of the present invention. The system is used for executing the method for verifying the intelligent driving domain controller perception algorithm based on the analog camera of the above embodiment and, as shown in Fig. 2, includes the following modules:
The data acquisition module comprises a real camera and a laser radar, and is respectively used for acquiring view data of a real driving scene and laser radar point cloud data, and carrying out data synchronization processing on the view data and the laser radar point cloud data; wherein the view data comprises video data and/or picture data;
The preprocessing module is connected with the real camera and is used for preprocessing view data of the real driving scene subjected to data synchronization processing to obtain original format data, and transmitting the original format data to the video injection board card; the original format data comprises original view data which is acquired by the real camera and is not processed by the real camera, and the original format data is parallel data;
The video injection board card is connected with the preprocessing module and comprises a module simulation module and a string adding unit, the module simulation module is used for performing camera simulation, the module simulation module comprises a camera simulation module and a communication simulation module, the camera simulation module receives data in an original format and performs data reduction processing to obtain intermediate data in the original format, the intermediate data and a data group packet obtained according to camera simulation are sent to the communication simulation module, and the communication simulation module sends the intermediate data and the data group packet to the string adding unit of the video injection board card; the intermediate data comprise original view data and communication information data which are acquired by the real camera and are not processed by the real camera;
The string adding unit is connected with the module simulation module and is used for converting the intermediate data and the data group packet into serial data and outputting the converted intermediate data and data group packet to the intelligent driving domain controller;
The Ethernet board is connected with the laser radar and used for outputting laser radar point cloud data to the intelligent driving domain controller to obtain true value information of a target object under a real camera coordinate system;
the intelligent driving domain controller is connected with the video injection board card and the Ethernet board card, and comprises a deserializing unit and a perception algorithm unit, wherein the deserializing unit is used for recovering the converted intermediate data and data group packets into parallel data and inputting the parallel data into the perception algorithm unit in the intelligent driving domain controller;
The sensing algorithm unit is connected with the deserializing unit and is used for outputting a sensing result according to the original format data, comparing the sensing result with true value information of a target object under a real camera coordinate system and verifying the performance index of the sensing algorithm.
Further, the preprocessing module further comprises a video decoding module, and the video decoding module is used for converting video data or video stream data into original format data.
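One simple way the video decoding module could realize the picture-to-video-stream conversion is to repeat each still picture for its display duration at a fixed frame rate. This is an assumption for illustration; the patent does not specify the conversion method:

```python
def pictures_to_stream(pictures, fps, seconds_per_picture):
    """Expand still pictures into a fixed-rate frame stream.

    Each picture is repeated for fps * seconds_per_picture frames, so the
    downstream pipeline sees a continuous video stream at a constant rate,
    just as it would for natively recorded video data.
    """
    frames = []
    repeat = int(fps * seconds_per_picture)
    for pic in pictures:
        frames.extend([pic] * repeat)
    return frames
```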
In the embodiment of the invention, the view data of the real scene is acquired by the real camera, which yields higher realism and more credible test results than a simulation scene output by simulation software; simulating the real camera with the video decoding module, the module simulation module, and the string adding unit greatly improves test efficiency compared with testing directly with a real camera; based on the module simulation module, the truly acquired view data is sent to the intelligent driving domain controller as if from the real camera, so the whole test flow is completely traceable and reproducible, and problems in the perception algorithm can be accurately identified; testing with the simulated camera incurs no test risk even in dangerous scenarios, so the risk coefficient is zero.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in this specification, the terms "a," "an," and/or "the" are not intended to be limiting, but rather are to be construed as covering the singular and the plural, unless the context clearly dictates otherwise. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, or apparatus. Without further limitation, an element defined by the phrase "comprising one …" does not exclude the presence of other like elements in a process, method, or apparatus that includes the element.
It should also be noted that the positional or positional relationship indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the positional or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or element in question must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Unless specifically stated or limited otherwise, the terms "mounted," "connected," and the like are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the essence of the corresponding technical solutions from the technical solutions of the embodiments of the present invention.

Claims (9)

1. The method for verifying the intelligent driving domain controller perception algorithm based on the analog camera is characterized by comprising the following steps of:
Respectively acquiring view data and laser radar point cloud data of a real driving scene through a real camera and a laser radar, and carrying out data synchronization processing on the view data and the laser radar point cloud data; wherein the view data comprises video data and/or picture data;
Preprocessing the view data of the real driving scene subjected to data synchronization processing to obtain original format data, and transmitting the original format data to a video injection board card; the original format data comprises original view data which is acquired by the real camera and is not processed by the real camera, and the original format data is parallel data;
The method comprises the steps that camera simulation is carried out in a module simulation module of the video injection board card, the module simulation module comprises a camera simulation module and a communication simulation module, the camera simulation module receives data in an original format and carries out data reduction processing to obtain intermediate data in the original format, the intermediate data and a data group packet obtained according to the camera simulation are sent to the communication simulation module, and the communication simulation module sends the intermediate data and the data group packet to a string adding unit of the video injection board card; the intermediate data comprise original view data and communication information data which are acquired by the real camera and are not processed by the real camera;
Converting the intermediate data and the data group packet into serial data through a string adding unit of the video injection board card, outputting the converted intermediate data and the converted data group packet to an intelligent driving domain controller, and simultaneously outputting the laser radar point cloud data subjected to data synchronization processing to the intelligent driving domain controller through an Ethernet board card to obtain true value information of a target object under a real camera coordinate system;
the intermediate data and the data group packet after conversion are recovered into parallel data through a deserializing unit in the intelligent driving domain controller, and are input into a perception algorithm unit in the intelligent driving domain controller;
And the perception algorithm unit outputs a perception result according to the original format data, compares the perception result with the true value information of the target object under the real camera coordinate system, and verifies the performance index of the perception algorithm.
2. The method for verifying the intelligent driving domain controller perception algorithm based on an analog camera according to claim 1, wherein the performing of the camera simulation in the module simulation module of the video injection board card specifically comprises:
configuring module parameters of a camera simulation module according to the parameter information of the real camera, and packaging the module parameters into a data group packet; the module parameters comprise resolution, pixel space, frame rate, communication protocol, data field and synchronous information, and the data group package is parallel data;
Configuring communication parameters of a communication simulation module according to the transmission protocol of the real camera; wherein the communication parameters comprise binary addresses and numerical values of registers.
3. The method for verifying intelligent driving domain controller sensing algorithm based on the analog camera according to claim 2, wherein the configuring the communication parameters of the communication simulation module according to the transmission protocol of the real camera comprises:
Analyzing and processing according to the transmission protocol of the real camera to obtain binary addresses and numerical values of a plurality of registers to be configured;
The binary address and the numerical value of the register corresponding to the intelligent driving domain controller executing the reading operation need to be configured completely according to the binary address and the numerical value of the register of the real camera;
and the binary address of the register corresponding to the write operation executed by the intelligent driving domain controller is required to be configured completely according to the binary address of the register of the real camera, and the numerical value of the binary address is configured to be 0.
4. The method for verifying intelligent driving domain controller perception algorithm based on an analog camera according to claim 1, wherein the performing data synchronization processing on the view data and the lidar point cloud data comprises:
Respectively acquiring the time of the real camera based on the data acquired under the main clock and the time of the laser radar based on the data acquired under the main clock; wherein the master clock is the time sequence of the Ethernet board;
And mapping the time of the data acquired by the laser radar onto a time axis by taking the time of the data acquired by the real camera as the time axis, so that the time synchronization and alignment of the view data and the laser radar point cloud data are realized.
5. The method for verifying the intelligent driving domain controller perception algorithm based on an analog camera according to claim 4, wherein when the intermediate data and the data group packet are converted into serial data by the string adding unit of the video injection board card and the converted intermediate data and the converted data group packet are output to the intelligent driving domain controller, and simultaneously the laser radar point cloud data is output to the intelligent driving domain controller through the Ethernet board card, the method further comprises:
and carrying out time service on the video injection board by taking the time sequence of the Ethernet board as a master clock, so as to ensure the synchronism of the output of the intermediate data and the laser radar point cloud data to the intelligent driving domain controller.
6. The method for verifying a sensor perception algorithm of an intelligent driving domain controller based on an analog camera according to claim 1, wherein preprocessing the view data of the real driving scene to obtain the original format data comprises:
Converting the video data into the original format data when the view data includes only video data;
when the view data includes picture data, converting the picture data into video stream data, and converting the video stream data into the original format data.
7. The method for verifying the intelligent driving domain controller perception algorithm based on an analog camera according to claim 1, wherein outputting the laser radar point cloud data subjected to data synchronization processing to the intelligent driving domain controller through an Ethernet board card, and obtaining true value information of a target object under the real camera coordinate system comprises:
The intelligent driving domain controller outputs true value information of a target object in a real scene according to the laser radar point cloud data, and converts the true value information of the target object into true value information of the target object under a real camera coordinate system through coordinate mapping; the target comprises vehicles, road lines and traffic signs, and the truth value information comprises positions, types and motion states.
8. A system for verifying the intelligent driving domain controller perception algorithm based on an analog camera, for performing the method for verifying the intelligent driving domain controller perception algorithm based on an analog camera according to any one of claims 1-7, comprising the following modules:
The data acquisition module comprises a real camera and a laser radar, and is used for acquiring view data of a real driving scene and laser radar point cloud data respectively and carrying out data synchronization processing on the view data and the laser radar point cloud data; wherein the view data comprises video data and/or picture data;
The preprocessing module is connected with the real camera and is used for preprocessing the view data of the real driving scene subjected to data synchronization processing to obtain original format data, and transmitting the original format data to a video injection board card; the original format data comprises original view data which is acquired by the real camera and is not processed by the real camera, and the original format data is parallel data;
The video injection board is connected with the preprocessing module and comprises a module simulation module and a string adding unit, the module simulation module is used for performing camera simulation, the module simulation module comprises a camera simulation module and a communication simulation module, the camera simulation module receives the original format data and performs data reduction processing to obtain the original format intermediate data, the intermediate data and a data group packet obtained according to camera simulation are sent to the communication simulation module, and the communication simulation module sends the intermediate data and the data group packet to the string adding unit of the video injection board; the intermediate data comprise original view data and communication information data which are acquired by the real camera and are not processed by the real camera;
the string adding unit is connected with the module simulation module and is used for converting the intermediate data and the data group packet into serial data and outputting the converted intermediate data and the converted data group packet to an intelligent driving domain controller;
The Ethernet board is connected with the laser radar and is used for outputting the laser radar point cloud data to the intelligent driving domain controller to obtain true value information of a target object under the real camera coordinate system;
The intelligent driving domain controller is connected with the video injection board card and the Ethernet board card, and comprises a deserializing unit and a perception algorithm unit, wherein the deserializing unit is used for recovering the converted intermediate data and the converted data group packet into parallel data and inputting the parallel data into the perception algorithm unit in the intelligent driving domain controller;
the sensing algorithm unit is connected with the deserializing unit and is used for outputting a sensing result according to the original format data, comparing the sensing result with the true value information of the target object under the real camera coordinate system and verifying the performance index of the sensing algorithm.
9. The system for verifying the intelligent driving domain controller perception algorithm based on an analog camera according to claim 8, wherein the preprocessing module further comprises a video decoding module for converting video data or video stream data into original format data.
CN202410675104.6A 2024-05-29 Method and system for verifying intelligent driving domain controller sensing algorithm based on analog camera Pending CN118279729A (en)

Publications (1)

Publication Number Publication Date
CN118279729A true CN118279729A (en) 2024-07-02

