CN117667706A - Simulation data processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN117667706A
Application number: CN202311671826.6A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 谷苗
Current and original assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Prior art keywords: data, cameras, paths, original, camera
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202311671826.6A

Classifications

    • G06F 11/3684: Test management for test design, e.g. generating new test cases
      (G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06F ELECTRIC DIGITAL DATA PROCESSING → G06F 11/00 Error detection; Error correction; Monitoring → G06F 11/36 Preventing errors by testing or debugging software → G06F 11/3668 Software testing → G06F 11/3672 Test management)
    • G06F 11/3696: Methods or tools to render software testable
      (same hierarchy down to G06F 11/3668 Software testing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure provides a simulation data processing method, a simulation data processing device, electronic equipment and a storage medium, relating to the technical field of automatic driving, in particular to the technical fields of simulation test, data processing and the like. The specific scheme is as follows: receiving an original camera data file, wherein the original camera data file is obtained by splicing the original data files of N paths of cameras; splitting the original camera data file to obtain original image data corresponding to each of the N paths of cameras; generating simulation image data of the N paths of cameras under a preset time sequence based on the original image data corresponding to the N paths of cameras; and outputting the simulation image data of the N paths of cameras under the preset time sequence. According to this scheme, the image data of the N paths of cameras can be transmitted to the FPGA within a specified time window, and the simulation image data of the N paths of cameras under the preset time sequence is generated and output, which improves the efficiency and stability of data transmission and thereby improves the accuracy of automatic driving simulation testing.

Description

Simulation data processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of automatic driving, in particular to the technical fields of simulation test, data processing and the like.
Background
In the field of autopilot, when camera sensor simulation is performed based on a field programmable gate array (Field Programmable Gate Array, FPGA), an upper layer application needs to transmit the image data of multiple camera sensors to the FPGA within a specified time window. Therefore, how to transmit image data to the FPGA accurately and in a timely manner is key to realizing automatic driving simulation.
Disclosure of Invention
The disclosure provides a simulation data processing method, a simulation data processing device, electronic equipment and a storage medium.
According to a first aspect of the present disclosure, there is provided a simulation data processing method, including:
receiving an original camera data file, wherein the original camera data file is obtained by splicing the original data files of N paths of cameras, and N is a positive integer greater than 1;
splitting the original camera data file to obtain original image data corresponding to each of the N paths of cameras;
generating simulation image data of the N paths of cameras under a preset time sequence based on the original image data corresponding to the N paths of cameras;
and outputting simulation image data of the N paths of cameras under a preset time sequence.
According to a second aspect of the present disclosure, there is provided an emulation data processing apparatus comprising:
the receiving module is used for receiving an original camera data file, the original camera data file is obtained by splicing original data files based on N paths of cameras, and N is a positive integer greater than 1;
The splitting module is used for splitting the original camera data file to obtain the original image data corresponding to each of the N paths of cameras;
the generation module is used for generating simulation image data of the N paths of cameras under a preset time sequence based on the original image data corresponding to the N paths of cameras;
and the output module is used for outputting the simulation image data of the N paths of cameras under the preset time sequence.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor;
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform a method according to any one of the embodiments of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program stored on a storage medium, which when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
According to the scheme, the image data of the N-path cameras can be transmitted to the FPGA within the specified time window, the simulation image data of the N-path cameras under the preset time sequence is generated and output, the data transmission efficiency is improved, the data transmission stability is improved, and therefore the accuracy of the automatic driving simulation test is improved.
The foregoing summary is for the purpose of the specification only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will become apparent by reference to the drawings and the following detailed description.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the disclosure and are not therefore to be considered limiting of its scope.
FIG. 1 is a flow diagram of a simulated data processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic architecture diagram of simulated data processing according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a camera data stitching process according to an embodiment of the present disclosure;
FIG. 4 is a schematic illustration of the storage of multi-path camera data according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a simulated data processing apparatus according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a scenario of a simulated data processing method according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device for implementing the simulation data processing method of the embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The terms "first", "second", "third" and the like in the description, in the claims and in the above-described figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a series of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the autopilot field, when camera sensor simulation is performed based on an FPGA, an upper application program needs to transmit data of multiple camera sensors to the FPGA within a predetermined time. Because camera data has a high degree of real-time requirements, any delay in failing to respond to a trigger signal in time can result in loss of valuable sensor data. Therefore, ensuring that the data of the multi-path camera sensor can be accurately and timely transmitted to the FPGA is a key for realizing automatic driving simulation. In order to avoid data loss, an upper application program needs to adopt an efficient data transmission mechanism to simultaneously transmit the data of multiple paths of camera sensors to the FPGA so as to ensure that the data of each path of camera sensors can reach the FPGA within a preset time.
In the prior art, an upper layer application program sends multiple paths of camera data to an FPGA in a time-division multiplexing mode through a high-speed serial computer expansion bus (Peripheral Component Interconnect Express, PCIE) interface; the FPGA parses the frame headers of the received data, caches each path of camera data into a first-in first-out (First Input First Output, FIFO) data buffer, waits for a camera trigger signal, and sends out each path of camera data when the trigger signal is detected. However, this method has the following problems:
1. When the upper layer application program sends the multiple paths of camera data, arbitration needs to be performed, that is, the multiple paths of camera data are sent in a certain order, for example, cyclically in the order of cameras 1, 2, 3, 4, 5, 6, 7 and 8. Each arbitration made by the upper layer application program consumes a great amount of time, so the transmission efficiency is low.
2. After receiving the camera data through PCIE, the FPGA needs to parse it according to the different camera identification numbers (IDs), and then distribute each path of camera data to a different camera FIFO data buffer.
3. In an automatic driving system, in order to ensure that the data of the multiple camera sensors are transmitted synchronously when a trigger signal occurs, a sufficient amount of camera data must already be buffered in each path's camera FIFO data buffer before the trigger signal is detected. To ensure that no camera's FIFO data buffer runs empty during data transmission, each single transfer of camera data over PCIE cannot be too large; that is, the upper layer application program can only send a small amount of data for each path of camera at a time, so the transmission efficiency is low.
In order to at least partially solve one or more of the above problems and other potential problems, the present disclosure proposes a simulation data processing method capable of transmitting a spliced original camera data file to an FPGA, so as to ensure that the original camera data file is transmitted to the FPGA accurately and in a timely manner. In addition, the original camera data file is split by utilizing the parallel computing capability of the FPGA, so that the original image data of the N paths of cameras can be obtained simultaneously. A real time sequence is simulated to generate simulation image data of the N paths of cameras under a preset time sequence, and the simulation image data of the N paths of cameras is output from the FPGA for use in simulating an automatic driving function. This avoids image data loss caused by failing to respond to a trigger signal in time, improves data transmission efficiency, and improves the accuracy and real-time performance of automatic driving simulation data.
An embodiment of the present disclosure provides a simulation data processing method, and fig. 1 is a schematic flow diagram of a simulation data processing method according to an embodiment of the present disclosure, where the simulation data processing method may be applied to a simulation data processing apparatus. The simulation data processing apparatus is located in an electronic device. The electronic device includes, but is not limited to, a stationary device and/or a mobile device. For example, the fixed device includes, but is not limited to, a server, which may be a cloud server or a general server. For example, mobile devices include, but are not limited to, vehicle terminals, notebook computers, tablet computers, and the like. In some possible implementations, the simulated data processing method may also be implemented by way of a processor invoking computer readable instructions stored in a memory. As shown in fig. 1, the simulation data processing method includes:
s101: receiving an original camera data file, wherein the original camera data file is obtained by splicing original data files based on N paths of cameras;
s102: splitting the original camera data file to obtain original image data corresponding to each of the N paths of cameras;
s103: generating simulation image data of the N paths of cameras under a preset time sequence based on the original image data corresponding to the N paths of cameras;
S104: and outputting simulation image data of the N paths of cameras under a preset time sequence.
The steps S101 to S104 may be performed by an FPGA. After receiving the original camera data file, the FPGA splits the received original camera data file to obtain the original image data corresponding to each of the N paths of cameras, then generates the simulation image data corresponding to each of the N paths of cameras, and simulates a real time sequence to output the simulation image data corresponding to each of the N paths of cameras.
Here, the raw image data from the N cameras is stitched by the upper layer application. Specifically, the upper layer application program splices the N paths of original camera data files to obtain a single larger original camera data file, and then sends the spliced single original camera data file to the FPGA through PCIE.
In some embodiments, the original camera data file is spliced based on the original data files of N paths of cameras, where N is a positive integer greater than 1. It should be noted that the stitching is not ordinary sequential stitching; instead, the original data files corresponding to the N paths of cameras are parsed to obtain a plurality of original image data corresponding to the N paths of cameras, and then the original image data in the original data files from the N paths of cameras are stitched according to a preset ordering rule. Here, the preset ordering rule may include: according to a preset sequence of the N paths of cameras, ordering the 1st to m-th bits of the plurality of original image data corresponding to the N paths of cameras respectively; ordering the (m+1)-th to 2m-th bits of the plurality of original image data corresponding to the N paths of cameras according to the preset sequence of the N paths of cameras; ordering the (2m+1)-th to 3m-th bits of the plurality of original image data corresponding to the N paths of cameras according to the preset sequence of the N paths of cameras; and so on, until the remaining image data in the plurality of original image data corresponding to each of the N paths of cameras has been ordered, and the original camera data file is obtained.
In some embodiments, the acquisition system acquires the original image data of the N-path camera in advance, and stores the original image data of the N-path camera into a database (such as a storage unit) to form an original camera data file of the N-path camera. Here, the raw image data of the N-way camera may be image data collected by an automated driving vehicle at the same place and at the same time; the image data collected by common vehicles at the same place and at the same time can also be obtained; the method can also be used for capturing the image data of the vehicles at the same place and at the same time from the network; or image data from the test vehicle at the same location and at the same time.
In some embodiments, the raw image data corresponding to each of the N cameras in step S102 is obtained by splitting the raw camera data file received in step S101. Illustratively, if n=4, the original camera data file is {01aa,02aa,03aa,04aa,01bb,02bb,03bb,04bb,01cc,02cc,03cc,04cc,01dd,02dd,03dd,04dd }, splitting the original camera data file results in original image data corresponding to each of the 4 cameras, that is, original image data {01aa,01bb,01cc,01dd }, original image data {02aa,02bb,02cc,02dd }, original image data {03aa,03bb,03cc,03dd }, and original image data {04aa,04bb,04cc,04dd }, respectively, of the first camera.
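The round-robin split illustrated above can be sketched in a few lines of Python. This is an illustrative model only (the function name and the string-chunk representation are not from the disclosure; the actual splitting runs in FPGA logic):

```python
def split_camera_file(merged, n):
    """De-interleave a merged original camera data file into per-camera
    streams. The merged file is a flat list of fixed-size chunks laid
    out round-robin, so chunk k belongs to camera (k mod n)."""
    return [merged[i::n] for i in range(n)]


# The N = 4 example from the text:
merged = ["01aa", "02aa", "03aa", "04aa",
          "01bb", "02bb", "03bb", "04bb",
          "01cc", "02cc", "03cc", "04cc",
          "01dd", "02dd", "03dd", "04dd"]
cameras = split_camera_file(merged, 4)
# cameras[0] → ["01aa", "01bb", "01cc", "01dd"]
```

Because every camera's chunks sit at a fixed stride, the split is a pure re-indexing with no per-chunk header parsing, which is what lets the FPGA extract all N streams in parallel.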
In some embodiments, the preset timing is the same as the timing with which a real autopilot camera sensor transmits image data. For example, if the real camera sensor sends one line of image data every interval t1, the preset timing also sends one line of image data every t1. As another example, if the real camera sensor sends one line of image data every interval t2, the preset timing also sends one line of image data every t2, where t1 ≠ t2. As yet another example, if the real camera sensor sends two lines of image data at a time, the preset timing also sends two lines of image data at a time. The above is merely exemplary and is not intended to limit all possible contents of the preset timing, nor is it intended to be exhaustive.
Here, the preset timing may be set according to the timing of transmitting image data of the real camera sensor. And simulating the real camera sensor data time sequence, and sending the simulation image data corresponding to each of the N paths of cameras from the FPGA.
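The fixed-cadence replay described above can be sketched as follows. This is a hypothetical software model (names and structure are not from the disclosure; on the FPGA the cadence comes from hardware counters, not `sleep`):

```python
import time


def replay_rows(rows, interval_s, send):
    """Replay image rows on a fixed cadence, mimicking a real camera
    sensor that emits one row of image data every `interval_s` seconds.
    Deadlines are absolute so timing drift does not accumulate."""
    deadline = time.monotonic()
    for row in rows:
        send(row)
        deadline += interval_s
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)


sent = []
replay_rows(["row0", "row1", "row2"], 0.001, sent.append)
# sent → ["row0", "row1", "row2"], one row per millisecond
```

Tracking an absolute deadline rather than sleeping a fixed amount after each send is the standard way to keep a long replay aligned with the real sensor's line period.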
The steps S101 to S104 described above can be applied to an FPGA. Fig. 2 shows a schematic architecture diagram of simulated data processing, taking N=8 as an example. As shown in Fig. 2, the camera 1 raw data file, camera 2 raw data file, camera 3 raw data file, ..., camera 7 raw data file and camera 8 raw data file are acquired from an online acquisition system; the original data files of the 8 paths of cameras are spliced to obtain an original camera data file; the upper layer application program sends the original camera data file to the FPGA by means of PCIE direct memory access (Direct Memory Access, DMA), and specifically the original camera data file is cached in a FIFO of the FPGA; the original camera data file is split by a data splitting module to obtain camera 1 data, camera 2 data, camera 3 data, ..., camera 7 data and camera 8 data; these are respectively cached into the data FIFOs corresponding to the 8 paths of cameras; when a trigger1 signal is detected by the data sending module corresponding to camera 1, the camera 1 data is sent out from the FPGA; when a trigger2 signal is detected by the data sending module corresponding to camera 2, the camera 2 data is sent out from the FPGA; when a trigger3 signal is detected by the data sending module corresponding to camera 3, the camera 3 data is sent out from the FPGA; ...; when a trigger7 signal is detected by the data sending module corresponding to camera 7, the camera 7 data is sent out from the FPGA; and when a trigger8 signal is detected by the data sending module corresponding to camera 8, the camera 8 data is sent out from the FPGA.
In practical application, although the data sending module of each path of camera responds to its own trigger signal, in an automatic driving system the trigger signals of the different paths of cameras arrive simultaneously in order to ensure that all cameras image at the same moment; when they do, the data of all paths of cameras are sent out from the FPGA at the same time.
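The per-channel buffer-then-trigger behavior can be modeled with a minimal single-channel sketch. This is illustrative Python only (class and method names are assumptions; the real mechanism is FPGA FIFO logic, not software):

```python
from collections import deque


class CameraChannel:
    """One per-camera send module: split camera data is buffered in a
    FIFO and released only when this channel's trigger signal arrives."""

    def __init__(self):
        self._fifo = deque()

    def buffer(self, chunk):
        """Called by the data splitting stage to enqueue camera data."""
        self._fifo.append(chunk)

    def on_trigger(self):
        """Called when the trigger signal is detected: drain and return
        everything buffered so far, in FIFO order."""
        out = list(self._fifo)
        self._fifo.clear()
        return out


ch = CameraChannel()
ch.buffer("frame-0")
ch.buffer("frame-1")
# ch.on_trigger() → ["frame-0", "frame-1"]; a second call returns []
```

Synchronous imaging corresponds to calling `on_trigger()` on all N channel objects at the same instant, since each channel has already been filled by the split stage.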
The upper layer application program may be a driver program in the simulation system or an application program in the simulation system. The FPGA is a semi-custom circuit in the field of application-specific integrated circuits; it is a programmable logic array and can effectively solve the problem of the small number of gate circuits in earlier devices. The basic structure of the FPGA includes programmable input-output units, configurable logic blocks, a digital clock management module, embedded block RAM (Random Access Memory, RAM), wiring resources, embedded special-purpose hard cores, and bottom-layer embedded functional units. Illustratively, one FPGA corresponds to 8 cameras. The FPGA supports DMA. DMA is a function provided by some computer bus architectures that enables data to be sent directly from an attached device (e.g., a disk drive) to the memory of a computer motherboard.
In some embodiments, the test for the "cruise control" function of an autonomous vehicle is performed during snowy weather: acquiring 4 paths of camera data from an online acquisition system; splicing the original data file of the camera 1, the original data file of the camera 2, the original data file of the camera 3 and the original data file of the camera 4 to obtain a single big data file (namely an original camera data file); the driver program sends the single big data file to the FPGA; the data splitting module of the FPGA splits the single big data file to obtain original image data of the camera 1, original image data of the camera 2, original image data of the camera 3 and original image data of the camera 4; and when the camera data sending module of the FPGA detects the trigger signal, the simulation image data of the 4 paths of cameras are sent to an automatic driving hardware system.
According to the technical scheme, an original camera data file is received, and the original camera data file is obtained by splicing original data files based on N paths of cameras; splitting the original camera data file to obtain original image data corresponding to each of the N paths of cameras; generating simulation image data of the N paths of cameras under a preset time sequence based on the original image data corresponding to the N paths of cameras; and outputting the simulation image data of the N paths of cameras under the preset time sequence. Therefore, the image data of the N-path cameras can be transmitted to the FPGA within a specified time window, and the simulation image data of the N-path cameras under a preset time sequence is generated and output, so that the data transmission efficiency is improved, the stability and the instantaneity of the data transmission are improved, the accuracy of the automatic driving simulation data is improved, and the accuracy of the automatic driving simulation test is improved.
In an embodiment of the present disclosure, the simulation data processing method may further include: and splicing the pixel point data of the images corresponding to the N paths of cameras according to a preset arrangement sequence by taking the pixel points of the images corresponding to the N paths of cameras as basic units, so as to obtain an original camera data file.
In some embodiments, the predetermined arrangement is an arrangement of camera data. The preset arrangement sequence may be to splice pixel point data of images corresponding to the N cameras based on the camera IDs. Illustratively, the ordering is performed according to the order of camera IDs of a preset ordering order, such as a camera 1 data file, a camera 2 data file, a camera 3 data file, a camera 4 data file; the camera data are arranged according to a preset arrangement sequence, such as a camera 1 data file, a camera 3 data file, a camera 2 data file and a camera 4 data file. The foregoing is illustrative only and is not intended to be limiting of the full range of possible content that may be included in the preset sequence, but is not intended to be exhaustive.
FIG. 3 shows a schematic diagram of the camera data stitching process. As shown in FIG. 3, camera 1 raw data {01AA,01BB,01CC,01DD}, camera 2 raw data {02AA,02BB,02CC,02DD}, camera 3 raw data {03AA,03BB,03CC,03DD} and camera 4 raw data {04AA,04BB,04CC,04DD} are obtained from an online acquisition system. First, the first 2 bytes of data 0x01AA of the 1st path camera are acquired to form the first 2 bytes of the spliced data; the first 2 bytes of data 0x02AA of the 2nd path camera form the 3rd and 4th bytes of the spliced data; the first 2 bytes of data 0x03AA of the 3rd path camera form the 5th and 6th bytes of the spliced data; finally, the first 2 bytes of data 0x04AA of the 4th path camera form the 7th and 8th bytes of the spliced data, completing the splicing of the first two bytes of the four paths of camera data to obtain new data {0x01AA,0x02AA,0x03AA,0x04AA}. According to this rule, the 3rd and 4th bytes, the 5th and 6th bytes, and so on, of each path of camera data are spliced in turn. Splicing the camera 1 original data, camera 2 original data, camera 3 original data and camera 4 original data according to the preset arrangement sequence yields the spliced original camera data {01AA,02AA,03AA,04AA,01BB,02BB,03BB,04BB,01CC,02CC,03CC,04CC,01DD,02DD,03DD,04DD}. Here, the size of {01AA,02AA,03AA,04AA} is 64 bits.
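The interleaving walked through for FIG. 3 can be sketched compactly. This is an illustrative model (the function name and string-chunk representation are assumptions, not from the disclosure):

```python
def stitch_camera_files(streams):
    """Interleave N per-camera chunk streams into one merged original
    camera data file: chunk j of every camera, in camera order, for
    j = 0, 1, 2, ..."""
    merged = []
    for group in zip(*streams):
        merged.extend(group)
    return merged


# The 4-camera example from FIG. 3, one 2-byte chunk per list entry:
cam1 = ["01AA", "01BB", "01CC", "01DD"]
cam2 = ["02AA", "02BB", "02CC", "02DD"]
cam3 = ["03AA", "03BB", "03CC", "03DD"]
cam4 = ["04AA", "04BB", "04CC", "04DD"]
merged = stitch_camera_files([cam1, cam2, cam3, cam4])
# merged[:4] → ["01AA", "02AA", "03AA", "04AA"]
```

Each `zip` group is one 64-bit word of the merged file (four 16-bit chunks), matching the {01AA,02AA,03AA,04AA} word noted above.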
According to the technical scheme, the pixel points of the images corresponding to the N paths of cameras are taken as basic units, and the pixel point data of the images corresponding to the N paths of cameras are spliced according to a preset arrangement sequence to obtain an original camera data file. Therefore, multiple paths of camera data can be mixed and spliced to obtain spliced original camera data files, the spliced original camera data files are directly issued through PCIE, the problem that different camera channels are respectively transmitted is not needed to be considered, and the data processing efficiency can be improved.
In the embodiment of the present disclosure, stitching the pixel point data of the images corresponding to each of the N paths of cameras according to a preset arrangement sequence to obtain an original camera data file includes: reading pixel point data starting from the first pixel point of the image corresponding to each of the N paths of cameras; splicing the first data sets corresponding to the N paths of cameras respectively according to the preset arrangement sequence of the N paths of cameras to obtain the first set of data; splicing the M-th data sets corresponding to the N paths of cameras respectively to obtain the M-th set of data; and so on, until the last data sets corresponding to each of the N paths of cameras are spliced to obtain the last set of data; and splicing the sets of data to obtain the original camera data file; wherein M is an integer greater than 1, and each data set includes a first target number of pixel points.
Here, the value of the first target number may be set or adjusted according to the need, for example, the first target number may be 1, 2, 3, 4, or the like. When the first target number is 1, stitching is performed on a pixel-by-pixel basis. When the first target number is 2, the two pixel points are spliced. The foregoing is merely exemplary, and is not intended to limit the number of pixels of each camera included in each set of data during stitching, but is not intended to be exhaustive.
In some embodiments, stitching pixel point data of images corresponding to each of the N cameras according to a preset arrangement sequence to obtain an original camera data file, including: and positive ordering is carried out on the pixel point data of the image.
Illustratively, taking 4 paths of camera data as an example, the first path of camera data is {01EE,01FF,01GG,01HH}, the second path is {02EE,02FF,02GG,02HH}, the third path is {03EE,03FF,03GG,03HH}, and the fourth path is {04EE,04FF,04GG,04HH}; splicing according to the positive ordering of the pixel point data of the images yields the original camera data file. If the ordering of the N paths of cameras is 1, 2, 3, 4, the original camera data file is {01EE,02EE,03EE,04EE,01FF,02FF,03FF,04FF,01GG,02GG,03GG,04GG,01HH,02HH,03HH,04HH}. If the ordering of the N paths of cameras is 1, 3, 2, 4, the original camera data file is {01EE,03EE,02EE,04EE,01FF,03FF,02FF,04FF,01GG,03GG,02GG,04GG,01HH,03HH,02HH,04HH}. The foregoing is merely exemplary and is not intended to limit all possible stitching manners, nor is it intended to be exhaustive.
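Positive (first-pixel-first) ordering with an arbitrary preset camera order can be sketched as follows (illustrative Python; the function name and 1-based `order` convention are assumptions):

```python
def stitch_forward(streams, order):
    """Forward (first-pixel-first) stitching: chunks are read from the
    start of each stream, with cameras taken in the given preset order
    (1-based camera numbers)."""
    reordered = [streams[i - 1] for i in order]
    merged = []
    for group in zip(*reordered):
        merged.extend(group)
    return merged


streams = [["01EE", "01FF"], ["02EE", "02FF"],
           ["03EE", "03FF"], ["04EE", "04FF"]]
# stitch_forward(streams, (1, 3, 2, 4))
#   → ["01EE", "03EE", "02EE", "04EE", "01FF", "03FF", "02FF", "04FF"]
```

Changing `order` reproduces both arrangements in the example: (1, 2, 3, 4) gives the natural interleave, (1, 3, 2, 4) swaps the second and third cameras within every group.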
According to the above technical scheme, pixel point data is read starting from the first pixel point of the image corresponding to each of the N paths of cameras; the first data sets corresponding to the N paths of cameras are spliced according to the preset arrangement sequence of the N paths of cameras to obtain the first set of data; the M-th data sets corresponding to the N paths of cameras are spliced to obtain the M-th set of data; and so on, until the last data sets corresponding to each of the N paths of cameras are spliced to obtain the last set of data. Therefore, splicing the multiple paths of data in the positive ordering of the image pixel points can improve the flexibility of data splicing and the legibility of the data file.
In the embodiment of the present disclosure, stitching the pixel data of the images corresponding to each of the N cameras in the preset arrangement order to obtain the original camera data file includes: reading pixel data starting from the last pixel of the image corresponding to each of the N cameras; stitching the last data groups of the N cameras in the preset arrangement order to obtain the first group data; stitching the second-to-last data groups of the N cameras to obtain the second group data; and so on, until the first data groups of the N cameras are stitched to obtain the last group data; and stitching the groups of data to obtain the original camera data file, where each data group includes a second target number of pixels.
Here, the value of the second target number may be set or adjusted as needed; for example, it may be 1, 2, 3, 4, and so on, and it may be the same as or different from the first target number. When the second target number is 1, stitching is performed one pixel at a time; when it is 2, stitching is performed two pixels at a time.
In some embodiments, stitching the pixel data of the images corresponding to each of the N cameras in the preset arrangement order to obtain the original camera data file includes: reading the pixel data of each image in reverse order.
Illustratively, taking four channels of camera data as an example, the first camera's data is {01EE,01FF,01GG,01HH}, the second camera's data is {02EE,02FF,02GG,02HH}, the third camera's data is {03EE,03FF,03GG,03HH}, and the fourth camera's data is {04EE,04FF,04GG,04HH}. Stitching in reverse pixel order yields the original camera data file. If the camera order is 1, 2, 3, 4, the original camera data file is {01HH,02HH,03HH,04HH,01GG,02GG,03GG,04GG,01FF,02FF,03FF,04FF,01EE,02EE,03EE,04EE}. If the camera order is 4, 3, 2, 1, the original camera data file is {04HH,03HH,02HH,01HH,04GG,03GG,02GG,01GG,04FF,03FF,02FF,01FF,04EE,03EE,02EE,01EE}.
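Reverse-order stitching differs from the forward case only in the direction in which pixel groups are read. A minimal Python sketch (illustrative only; the function name and default group size are assumptions, and for groups larger than one pixel the ordering of pixels inside a group is not specified by the text, so one pixel per group is assumed here):

```python
def stitch_reverse(cameras, group_size=1):
    """Interleave per-camera pixel lists, reading groups from the last pixel backwards."""
    n_pixels = len(cameras[0])
    stitched = []
    # group start indices, visited from the last group to the first
    for start in reversed(range(0, n_pixels, group_size)):
        for cam in cameras:                        # preset camera arrangement order
            stitched.extend(cam[start:start + group_size])
    return stitched

cam1 = ["01EE", "01FF", "01GG", "01HH"]
cam2 = ["02EE", "02FF", "02GG", "02HH"]
cam3 = ["03EE", "03FF", "03GG", "03HH"]
cam4 = ["04EE", "04FF", "04GG", "04HH"]

# Camera order 1, 2, 3, 4:
print(stitch_reverse([cam1, cam2, cam3, cam4]))   # begins 01HH, 02HH, 03HH, 04HH, ...
# Camera order 4, 3, 2, 1:
print(stitch_reverse([cam4, cam3, cam2, cam1]))   # begins 04HH, 03HH, 02HH, 01HH, ...
```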
According to the above technical solution, pixel data is read starting from the last pixel of the image corresponding to each of the N cameras; the last data groups of the N cameras are stitched in the preset arrangement order to obtain the first group data; the second-to-last data groups of the N cameras are stitched to obtain the second group data; and so on, until the first data groups of the N cameras are stitched to obtain the last group data. Stitching the multi-channel data in the reverse pixel order of the images improves the flexibility of data stitching.
In the embodiment of the disclosure, the simulation data processing method further includes: after the original camera data file is received, sending the original camera data file to a first buffer FIFO corresponding to the interface that received it.
In the disclosed embodiments, a FIFO (first-in, first-out buffer) is typically used to store a series of data items while guaranteeing that the earliest-added item is always retrieved first. Here, the first buffer FIFO is used to store the original camera data file.
In the embodiment of the disclosure, storing the original camera data file in the first buffer FIFO has the following advantages: 1. first-in, first-out — data is stored in the order it is added and the earliest-added data is processed first, which helps guarantee the processing order of the data; 2. easy management — the data structure is simple, making the FIFO convenient to operate and manage; 3. efficient use of storage — by properly scheduling the writing and reading of data, the storage space is used efficiently and waste is avoided; 4. suitability for large volumes of data — a FIFO can accommodate large amounts of data and is therefore particularly suitable for data that requires substantial storage space.
According to the above technical solution, the original camera data file is sent to the first buffer FIFO corresponding to the interface that received it. Storing the original camera data file in a FIFO helps reduce storage overhead; moreover, because the FIFO data structure is simple, the data-processing flow is simplified and processing efficiency is improved.
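The first-in, first-out discipline described above can be illustrated with Python's `collections.deque` (a hypothetical software sketch only; the actual first buffer FIFO is a hardware structure inside the FPGA, and the data words here are illustrative):

```python
from collections import deque

fifo = deque()                          # stands in for the first buffer FIFO
for word in ["01AA", "02AA", "03AA", "04AA"]:
    fifo.append(word)                   # data is stored in arrival order

first_out = fifo.popleft()              # the earliest-added word is read first
print(first_out)                        # -> 01AA
print(list(fifo))                       # -> ['02AA', '03AA', '04AA']
```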
In the embodiment of the present disclosure, splitting the original camera data file to obtain the original image data corresponding to each of the N cameras includes: splitting the original camera data file to obtain the pixel data corresponding to each of the N cameras; and sending the pixel data of each camera to the second buffer FIFO corresponding to that camera.
FIG. 4 shows a schematic diagram of the storage of multi-channel camera data. As shown in FIG. 4, the original camera data file is retrieved from the first buffer FIFO in the FPGA; the original camera data file is split to obtain the original image data corresponding to each of the N cameras; and the pixel data of each camera is sent to the second buffer FIFO corresponding to that camera. Illustratively, the original camera data file is {01AA,02AA,03AA,04AA,01BB,02BB,03BB,04BB,01CC,02CC,03CC,04CC,01DD,02DD,03DD,04DD}. Splitting it yields the pixel data of each of the N cameras, namely the first camera's data {01AA,01BB,01CC,01DD}, the second camera's data {02AA,02BB,02CC,02DD}, the third camera's data {03AA,03BB,03CC,03DD}, and the fourth camera's data {04AA,04BB,04CC,04DD}; each camera's data is then sent to the second buffer FIFO corresponding to that camera. Specifically, 01AA in the 64-bit word of the first cycle is sent to the second buffer FIFO corresponding to camera 1 (the camera 1 data FIFO), 02AA in the 64-bit word of the first cycle is sent to the second buffer FIFO corresponding to camera 2 (the camera 2 data FIFO), 03AA in the 64-bit word of the first cycle is sent to the second buffer FIFO corresponding to camera 3 (the camera 3 data FIFO), and 04AA in the 64-bit word of the first cycle is sent to the second buffer FIFO corresponding to camera 4 (the camera 4 data FIFO).
01BB in the 64-bit word of the second cycle is sent to the camera 1 data FIFO, 02BB to the camera 2 data FIFO, 03BB to the camera 3 data FIFO, and 04BB to the camera 4 data FIFO. 01CC in the 64-bit word of the third cycle is sent to the camera 1 data FIFO, 02CC to the camera 2 data FIFO, 03CC to the camera 3 data FIFO, and 04CC to the camera 4 data FIFO. 01DD in the 64-bit word of the fourth cycle is sent to the camera 1 data FIFO, 02DD to the camera 2 data FIFO, 03DD to the camera 3 data FIFO, and 04DD to the camera 4 data FIFO.
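The cycle-by-cycle splitting into per-camera FIFOs amounts to a round-robin demultiplex. The following Python sketch is illustrative only (the hardware works on per-cycle 64-bit words rather than Python lists, and it assumes the file was stitched one pixel at a time in camera order 1, 2, 3, 4):

```python
from collections import deque

def split_to_camera_fifos(stitched, n_cameras):
    """Demultiplex a stitched word stream round-robin into per-camera FIFOs."""
    fifos = [deque() for _ in range(n_cameras)]
    for i, word in enumerate(stitched):
        fifos[i % n_cameras].append(word)   # word i of each cycle goes to camera (i mod N)
    return fifos

data_file = ["01AA", "02AA", "03AA", "04AA", "01BB", "02BB", "03BB", "04BB",
             "01CC", "02CC", "03CC", "04CC", "01DD", "02DD", "03DD", "04DD"]
fifos = split_to_camera_fifos(data_file, 4)
print(list(fifos[0]))   # camera 1 data FIFO -> ['01AA', '01BB', '01CC', '01DD']
print(list(fifos[3]))   # camera 4 data FIFO -> ['04AA', '04BB', '04CC', '04DD']
```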
According to the above technical solution, the original camera data file is split to obtain the pixel data corresponding to each of the N cameras, and each camera's pixel data is sent to the second buffer FIFO corresponding to that camera. Because each camera's data is stored independently in its own second buffer FIFO, the data can be sent out as soon as the corresponding trigger signal is detected, which improves data-processing efficiency.
In the embodiment of the present disclosure, splitting the original camera data file to obtain the original image data corresponding to each of the N cameras further includes: for the pixel data in the second buffer FIFO of each camera, sorting that camera's pixel data according to the pixel read order used when that camera's data was stitched, thereby obtaining the original image data of each of the N cameras.
In the embodiment of the disclosure, the preset arrangement order of the N cameras and the pixel read order used when each camera's data was stitched are acquired; the data file is then split based on this arrangement order and these read orders, thereby obtaining the original image data corresponding to each of the N cameras.
Illustratively, taking four channels of camera data as an example, the first camera's data is {01AA,01BB,01CC,01DD}, the second camera's data is {02AA,02BB,02CC,02DD}, the third camera's data is {03AA,03BB,03CC,03DD}, and the fourth camera's data is {04AA,04BB,04CC,04DD}. Stitching the four channels in the preset arrangement order "odd cameras first, then even cameras" and in forward pixel order yields the original camera data file {01AA,03AA,02AA,04AA,01BB,03BB,02BB,04BB,01CC,03CC,02CC,04CC,01DD,03DD,02DD,04DD}. Before splitting the original camera data file, the preset arrangement order "odd cameras first, then even cameras" and the "forward" pixel read order are acquired; the data file is then split based on them, yielding the original image data of each camera, namely the first camera's data {01AA,01BB,01CC,01DD}, the second camera's data {02AA,02BB,02CC,02DD}, the third camera's data {03AA,03BB,03CC,03DD}, and the fourth camera's data {04AA,04BB,04CC,04DD}.
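The splitting step that honors the arrangement order can be sketched as follows (illustrative Python under stated assumptions: forward read order at stitch time, one pixel per group; for a reverse read order, each recovered list would additionally be reversed):

```python
def split_with_order(stitched, camera_order, group_size=1):
    """Recover per-camera pixel data from a stitched file,
    given the camera arrangement order used at stitch time."""
    n = len(camera_order)
    recovered = {cam: [] for cam in camera_order}
    for i in range(0, len(stitched), group_size):
        # group (i // group_size) belongs to the camera at that position in the order
        cam = camera_order[(i // group_size) % n]
        recovered[cam].extend(stitched[i:i + group_size])
    return recovered

# File stitched with the "odd cameras first, then even" order 1, 3, 2, 4:
stitched = ["01AA", "03AA", "02AA", "04AA", "01BB", "03BB", "02BB", "04BB",
            "01CC", "03CC", "02CC", "04CC", "01DD", "03DD", "02DD", "04DD"]
recovered = split_with_order(stitched, [1, 3, 2, 4])
print(recovered[2])   # -> ['02AA', '02BB', '02CC', '02DD']
```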
According to the technical solution of the embodiment of the disclosure, for the pixel data in the second buffer FIFO of each of the N cameras, that camera's pixel data is sorted according to the pixel read order used at stitch time, yielding the original image data of each camera. Splitting the data file based on the pixel read order used at stitch time improves the accuracy and stability of data processing, and thus the accuracy and real-time performance of the autonomous-driving simulation data.
In an embodiment of the present disclosure, generating simulated image data of N cameras at a preset time sequence based on original image data corresponding to each of the N cameras includes: determining the data transmission quantity of the N paths of cameras when transmitting data each time under a preset time sequence; and generating simulation image data corresponding to each of the N paths of cameras, which are matched with the data transmission quantity, according to the original image data corresponding to each of the N paths of cameras.
In some embodiments, the data transmission amount each time the N cameras transmit data under the preset timing is one whole line of data; that is, the preset timing requires the image data to be transmitted line by line. Specifically, for an image of a given camera, the first line of image data is sent first; after the first line has been completely sent, the second line is sent; and so on, until the last line of the image has been sent from that camera's second buffer FIFO and all image data of the image has been transmitted.
Illustratively, if an image includes 1860 lines of image data, each line containing 2880 pixels, the FPGA outputs the image line by line when outputting the image.
According to the technical solution of the embodiment of the disclosure, the simulated image data corresponding to each of the N cameras is transmitted line by line from that camera's original image data. Progressive (line-by-line) transmission improves the real-time performance of data processing, and thus the accuracy and real-time performance of the autonomous-driving simulation data.
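The line-by-line discipline above can be sketched minimally in Python (illustrative only; `send_line` stands in for whatever transport the hardware uses, and the toy image size replaces the 1860 × 2880 example):

```python
def emit_line_by_line(image_rows, send_line):
    """Send an image row by row; a later row starts only after
    the previous row has been completely sent."""
    for row in image_rows:
        send_line(row)

# Toy 3-line "image" with 4 pixels per line:
rows = [[f"px{r}_{c}" for c in range(4)] for r in range(3)]
sent = []
emit_line_by_line(rows, sent.append)
print(len(sent))   # -> 3  (three whole lines, sent in order)
```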
In an embodiment of the present disclosure, outputting the simulated image data of the N cameras under the preset timing includes: in response to detecting the trigger signal corresponding to any one of the N cameras, if at least one full line of data is buffered in that camera's second buffer FIFO, transmitting one full line of that camera's image data from its second buffer FIFO.
In some embodiments, in response to detecting the trigger signal corresponding to the i-th camera among the N cameras, a full line of the i-th camera's data is sent out from the second buffer FIFO corresponding to the i-th camera, where i is a positive integer.
In some implementations, the FPGA sends data out with the timing of a real camera sensor. It is first checked whether at least one full line of data is buffered in the second buffer FIFO; once a full line has accumulated, the system waits for the trigger signal, and transmission of a frame of image data starts when the trigger signal arrives. For example, if the second buffer FIFO of the third camera holds one and a half lines of data, a full line of data is sent out from the FPGA when the trigger signal arrives; if the second buffer FIFO of the fourth camera holds only half a line of data, the FPGA waits until a full line has accumulated in that FIFO and then sends out the full line when the trigger signal arrives.
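The trigger-gated, full-line-only output rule can be sketched as follows (a software illustration with assumed names and an assumed line width, not the FPGA logic itself):

```python
from collections import deque

def on_trigger(fifo, line_width):
    """On a trigger, send one full line only if at least a full line
    is buffered; otherwise keep waiting (signalled by returning None)."""
    if len(fifo) >= line_width:
        return [fifo.popleft() for _ in range(line_width)]
    return None

# One and a half lines buffered, with 4 pixels per line:
fifo = deque(["p0", "p1", "p2", "p3", "p4", "p5"])
print(on_trigger(fifo, 4))   # -> ['p0', 'p1', 'p2', 'p3']  (full line sent)
print(on_trigger(fifo, 4))   # -> None  (only half a line remains; wait for more data)
```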
According to the above technical solution, in response to detecting the trigger signal corresponding to any one of the N cameras, if at least one full line of data is buffered in that camera's second buffer FIFO, the corresponding camera data is transmitted from that FIFO. This satisfies the data-output requirements of the FPGA and improves the accuracy and real-time performance of the autonomous-driving simulation test.
In an embodiment of the present disclosure, the simulation data processing method may further include: and acquiring original camera data files of N paths of cameras of the target real vehicles in the target test scene from a database according to the target test scene, wherein the database stores the original camera data files corresponding to the N paths of cameras of the real vehicles in each test scene.
In some implementations, the target test scenario may include: test scenes with special weather, such as blizzard, thunderstorm, rainstorm, or hail; test scenes at special locations, such as highway ramps and intersections; and tests of autonomous-driving functions such as adaptive cruise, automatic parking, lane keeping, and lane-departure warning.
In some embodiments, the database may be a database corresponding to an automated acquisition system. The automatic acquisition system stores the acquired original camera data files of the N paths of cameras into a database. The database may be located at the vehicle end or cloud end.
Here, the target real vehicle refers to a vehicle equipped with multiple cameras; preferably, the target real vehicle is an autonomous vehicle.
Taking a target test scene as an example in thunderstorm weather, the original camera data files corresponding to N paths of cameras of a certain automatic driving vehicle are screened from the database.
Also, taking the target test scene as a traffic light test scene as an example, the original camera data files respectively corresponding to the N paths of cameras of a certain automatic driving vehicle at a certain intersection are screened out from the database.
Taking the target test scene as a parking test scene as an example, the original camera data files respectively corresponding to the N paths of cameras of a certain automatic driving vehicle when the certain parking lot is parked are screened out from the database.
According to the technical solution of the embodiment of the disclosure, the original camera data files of the N cameras of the target real vehicle under the target test scene are obtained from the database according to the target test scene. Retrieving the multi-channel camera data of the target test scene from the database for simulation testing speeds up the acquisition of autonomous-driving simulation data, improves the efficiency of testing autonomous-driving functions, and thereby improves the safety of autonomous vehicles. Compared with a real-road test, this approach reduces the cost pressure of real-road testing and avoids the slow collection of N-channel camera image data in the target test scene that a real-road test would entail, further improving the efficiency of testing autonomous-driving functions.
An embodiment of the present disclosure provides a simulation data processing apparatus, as shown in fig. 5, which may include: the receiving module 501 is configured to receive an original camera data file, where the original camera data file is obtained by splicing original data files based on N paths of cameras, and N is a positive integer greater than 1; the splitting module 502 is configured to split the original camera data file to obtain original image data corresponding to each of the N paths of cameras; a generating module 503, configured to generate simulated image data of the N cameras under a preset time sequence based on the original image data corresponding to each of the N cameras; and an output module 504, configured to output the simulated image data of the N cameras at the preset time sequence.
In some embodiments, the simulated data processing apparatus further comprises: and the splicing module (not shown in fig. 5) is used for splicing the pixel point data of the images corresponding to the N paths of cameras according to a preset arrangement sequence by taking the pixel points of the images corresponding to the N paths of cameras as basic units, so as to obtain an original camera data file.
In some embodiments, the splice module (not shown in fig. 5) includes: the first reading submodule is used for reading pixel point data from the first pixel point of the image corresponding to each of the N paths of cameras; the first splicing sub-module is used for splicing the first data groups corresponding to the N paths of cameras respectively according to the preset arrangement sequence of the N paths of cameras to obtain first group data; splicing M data sets corresponding to the N paths of cameras respectively to obtain M data sets; and the same is carried out until the last data group corresponding to each of the N paths of cameras is spliced to obtain the last group of data; splicing the groups of data to obtain the original camera data file; wherein M is an integer greater than 1, and each data set includes a first target number of pixels.
In some embodiments, the splice module (not shown in fig. 5) includes: the second reading submodule is used for reading pixel point data from the last pixel point of the image corresponding to each of the N paths of cameras; the second splicing sub-module is used for splicing the last data groups corresponding to the N paths of cameras respectively according to the preset arrangement sequence of the N paths of cameras to obtain first group data; splicing the last second data sets corresponding to the N paths of cameras respectively to obtain second set data; and the same is carried out until the first data sets corresponding to the N paths of cameras are spliced to obtain the last data set; splicing the groups of data to obtain the original camera data file; wherein each data set includes a second target number of pixels.
In some embodiments, the simulation data processing apparatus further comprises: a transmitting module (not shown in fig. 5) for transmitting the original camera data file to a first buffer FIFO corresponding to the interface that received the original camera data file.
In some embodiments, the splitting module 502 includes: a splitting sub-module for splitting the data file to obtain the pixel data corresponding to each of the N cameras; and a transmitting sub-module for transmitting the pixel data of each camera to the second buffer FIFO corresponding to that camera.
In some embodiments, the splitting module 502 further comprises: the sorting sub-module is used for sorting the pixel point data corresponding to each of the N cameras according to the pixel point reading sequence when the N cameras are spliced according to the pixel point data in the second buffer FIFO corresponding to each of the N cameras, so as to obtain the original image data corresponding to each of the N cameras.
In some embodiments, the generating module 503 includes: the determining submodule is used for determining the data transmission quantity of the N paths of cameras when transmitting data each time under a preset time sequence; the generation sub-module is used for generating simulation image data corresponding to each of the N paths of cameras, which are matched with the data transmission quantity, aiming at the original image data corresponding to each of the N paths of cameras.
In some embodiments, the output module 504 includes: an output sub-module for, in response to detecting the trigger signal corresponding to any one of the N cameras, transmitting one full line of that camera's image data from its second buffer FIFO if at least one full line of data is buffered there.
In some embodiments, the simulated data processing apparatus further comprises: the acquisition module (not shown in fig. 5) is configured to acquire, from a database according to a target test scene, an original camera data file of N cameras of a target real vehicle in the target test scene, where the database stores original camera data files corresponding to the N cameras of each real vehicle in each test scene.
It should be understood by those skilled in the art that the function of each processing module in the simulation data processing apparatus of the embodiments of the present disclosure may be understood with reference to the foregoing description of the simulation data processing method, and that each processing module may be implemented either by a circuit that realizes the functions described in the embodiments of the present disclosure or by software performing those functions running on an electronic device.
The simulation data processing device disclosed by the embodiment of the disclosure can improve the efficiency of data transmission, stability and instantaneity of data transmission, and is beneficial to improving the accuracy of automatic driving simulation data, so that the accuracy of automatic driving simulation test is improved.
Embodiments of the present disclosure provide a scenario diagram of simulated data processing, as shown in fig. 6.
As described above, the simulation data processing method provided by the embodiment of the present disclosure is applied to an electronic device. Electronic devices are intended to represent various forms of digital computers, such as servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as in-vehicle digital assistants, in-vehicle phones, and other similar computing devices.
In particular, the electronic device may specifically perform the following operations:
receiving an original camera data file, wherein the original camera data file is obtained by stitching the original data files of N cameras, and N is a positive integer greater than 1;
splitting the original camera data file to obtain original image data corresponding to each of the N paths of cameras;
generating simulation image data of the N paths of cameras under a preset time sequence based on the original image data corresponding to the N paths of cameras;
and outputting the simulation image data of the N paths of cameras under the preset time sequence.
The image data of the N cameras used for stitching may be obtained from a data source of the autonomous vehicle. The data source may be any of various forms of data storage device, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The data source may also be any of various forms of mobile device, such as in-vehicle digital assistants, in-vehicle phones, and other similar computing devices. In addition, the data source may be located at the vehicle end or in the cloud.
It should be understood that the scene diagram shown in fig. 6 is merely illustrative and not limiting, and that various obvious changes and/or substitutions may be made by one skilled in the art based on the example of fig. 6, and the resulting technical solution still falls within the scope of the disclosure of the embodiments of the present disclosure.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the related user personal information all conform to the regulations of related laws and regulations, and the public sequence is not violated.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An Input/Output (I/O) interface 705 is also connected to the bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, such as the simulation data processing method. For example, in some embodiments, the simulation data processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the above-described simulation data processing method may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the simulation data processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above can be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Products (ASSP), Systems on Chip (SOC), Complex Programmable Logic Devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a cathode ray tube (Cathode Ray Tube, CRT) or a liquid crystal display (Liquid Crystal Display, LCD) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with the user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions, improvements, etc. that are within the principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (23)

1. A simulation data processing method, comprising:
receiving an original camera data file, wherein the original camera data file is obtained by splicing the original data files of N paths of cameras, and N is a positive integer greater than 1;
splitting the original camera data file to obtain original image data corresponding to each of the N paths of cameras;
generating simulation image data of the N paths of cameras under a preset time sequence based on the original image data corresponding to the N paths of cameras;
and outputting the simulation image data of the N paths of cameras under the preset time sequence.
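Taken together, the four steps of claim 1 form a de-interleave-and-replay pipeline. The sketch below illustrates the idea in Python; the function names, the pixel-group interleaving layout, and the one-image-per-tick timing are hypothetical illustrations, not the claimed implementation:

```python
# Hypothetical sketch of claim 1: receive a spliced file, split it into
# per-camera images, then emit the images on a preset timing grid.
# Pixels are modeled as plain integers.

def split_file(spliced, n, group=1):
    """Undo pixel-group interleaving of n camera streams."""
    cams = [[] for _ in range(n)]
    for i in range(0, len(spliced), group):
        cams[(i // group) % n].extend(spliced[i:i + group])
    return cams

def simulate(spliced, n):
    cams = split_file(spliced, n)              # split the received file
    # under a preset time sequence: emit one camera image per tick
    return [(tick, image) for tick, image in enumerate(cams)]

print(simulate([1, 10, 2, 20, 3, 30], n=2))
# [(0, [1, 2, 3]), (1, [10, 20, 30])]
```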
2. The method of claim 1, further comprising:
and for the original data files from the N paths of cameras, splicing the pixel point data of the images corresponding to the N paths of cameras according to a preset arrangement sequence, with the pixel points of the images corresponding to the N paths of cameras as basic units, to obtain the original camera data file.
3. The method of claim 2, wherein the splicing the pixel point data of the images corresponding to the N paths of cameras according to the preset arrangement sequence to obtain the original camera data file comprises:
reading pixel point data from a first pixel point of an image corresponding to each of the N paths of cameras;
according to the preset arrangement sequence of the N paths of cameras, splicing the first data sets corresponding to the N paths of cameras respectively to obtain a first set of data; splicing the M-th data sets corresponding to the N paths of cameras respectively to obtain an M-th set of data; and so on, until the last data sets corresponding to the N paths of cameras are spliced to obtain the last set of data; and splicing the sets of data to obtain the original camera data file; wherein M is an integer greater than 1, and each data set includes a first target number of pixel points.
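The group-wise interleaving of claim 3 can be sketched as follows; `group` stands for the first target number of pixel points per data set, and the flat integer lists standing in for camera images are illustrative assumptions:

```python
# Hypothetical sketch of claim 3: take the 1st, 2nd, ..., last group of
# pixels from every camera in turn, in the preset camera order.
def splice_forward(images, group):
    """images: list of N equal-length pixel lists, one per camera."""
    spliced = []
    n_groups = len(images[0]) // group
    for g in range(n_groups):                   # first group first
        for cam in images:                      # preset arrangement order
            spliced.extend(cam[g * group:(g + 1) * group])
    return spliced

# Two 4-pixel images spliced in groups of 2:
print(splice_forward([[1, 2, 3, 4], [5, 6, 7, 8]], group=2))
# [1, 2, 5, 6, 3, 4, 7, 8]
```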
4. The method of claim 2, wherein the splicing the pixel point data of the images corresponding to the N paths of cameras according to the preset arrangement sequence to obtain the original camera data file comprises:
reading pixel point data from the last pixel point of the image corresponding to each of the N paths of cameras;
splicing the last data sets corresponding to the N paths of cameras respectively, according to the preset arrangement sequence of the N paths of cameras, to obtain a first set of data; splicing the second-to-last data sets corresponding to the N paths of cameras respectively to obtain a second set of data; and so on, until the first data sets corresponding to the N paths of cameras are spliced to obtain the last set of data; and splicing the sets of data to obtain the original camera data file; wherein each data set includes a second target number of pixel points.
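Claim 4 mirrors claim 3 but starts from the last pixel point. In the sketch below the within-group pixel order is also reversed, which is one possible reading of "reading pixel point data from the last pixel point"; that detail, like the names, is an assumption:

```python
# Hypothetical sketch of claim 4: read each camera's pixels from the end,
# then interleave group by group in the preset camera order.
def splice_reverse(images, group):
    spliced = []
    rev = [cam[::-1] for cam in images]         # start at the last pixel
    n_groups = len(images[0]) // group
    for g in range(n_groups):                   # so the last group comes first
        for cam in rev:
            spliced.extend(cam[g * group:(g + 1) * group])
    return spliced

print(splice_reverse([[1, 2, 3, 4], [5, 6, 7, 8]], group=2))
# [4, 3, 8, 7, 2, 1, 6, 5]
```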
5. The method of claim 1, further comprising, after receiving the original camera data file:
and sending the original camera data file to a first cache FIFO corresponding to an interface for receiving the original camera data file.
6. The method of claim 1, wherein the splitting the original camera data file to obtain the original image data corresponding to each of the N paths of cameras comprises:
splitting the original camera data file to obtain pixel point data corresponding to each of the N paths of cameras;
and sending the pixel point data corresponding to each of the N paths of cameras to a second cache FIFO corresponding to each of the N paths of cameras.
7. The method of claim 6, wherein the splitting the original camera data file to obtain the original image data corresponding to each of the N paths of cameras further comprises:
and sorting the pixel point data in the second cache FIFO corresponding to each of the N paths of cameras according to the pixel point reading order used during splicing for each of the N paths of cameras, to obtain the original image data corresponding to each of the N paths of cameras.
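Claims 6 and 7 describe the inverse of the splicing: de-multiplex the spliced stream into one cache (FIFO) per camera, then restore each camera's pixel order using the same group size and reading order used when splicing. A minimal sketch, with all names hypothetical:

```python
# Hypothetical sketch of claims 6-7: split the spliced stream into one
# buffer per camera; with forward splicing (claim 3 ordering) the buffers
# come out already in original pixel order, so no extra sort is shown.
def split_to_fifos(spliced, n, group):
    fifos = [[] for _ in range(n)]
    for i in range(0, len(spliced), group):
        fifos[(i // group) % n].extend(spliced[i:i + group])
    return fifos

# Two cameras interleaved in groups of 2:
spliced = [1, 2, 5, 6, 3, 4, 7, 8]
print(split_to_fifos(spliced, n=2, group=2))
# [[1, 2, 3, 4], [5, 6, 7, 8]]
```

If the file was instead spliced in reverse order (as in claim 4), the sorting step would additionally reverse each FIFO's contents.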
8. The method of claim 1, wherein the generating simulation image data of the N paths of cameras under a preset time sequence based on the original image data corresponding to the N paths of cameras comprises:
determining a data transmission quantity of the N paths of cameras for each data transmission under the preset time sequence;
and generating, for the original image data corresponding to each of the N paths of cameras, the simulation image data corresponding to each of the N paths of cameras that matches the data transmission quantity.
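The per-transfer matching of claim 8 can be pictured as cutting each camera image into chunks sized to the interface's data transmission quantity; the sketch below is a hypothetical illustration, not the claimed implementation:

```python
# Hypothetical sketch of claim 8: cut an image into chunks whose size
# matches the amount of data carried per transfer under the preset timing.
def to_transfers(image, per_transfer):
    return [image[i:i + per_transfer]
            for i in range(0, len(image), per_transfer)]

print(to_transfers([1, 2, 3, 4, 5], per_transfer=2))
# [[1, 2], [3, 4], [5]]
```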
9. The method of claim 1, wherein the outputting the simulation image data of the N paths of cameras under the preset time sequence comprises:
in response to detecting a trigger signal corresponding to any one of the N paths of cameras, if at least one full line of data is cached in the second cache FIFO corresponding to the camera, outputting the simulation image data of the full line of data of the camera.
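Claim 9 gates output on both a trigger signal and buffer occupancy: a line is emitted only when the camera's FIFO already holds at least a full line. A minimal sketch with a `deque` standing in for the second cache FIFO (the names and the `None` return value are assumptions):

```python
from collections import deque

# Hypothetical sketch of claim 9: on a camera's trigger signal, emit one
# full line only if at least one line's worth of data is already cached.
def on_trigger(fifo, line_len):
    if len(fifo) >= line_len:             # a whole line is buffered
        return [fifo.popleft() for _ in range(line_len)]
    return None                           # not enough data yet

f = deque([1, 2, 3, 4, 5])
print(on_trigger(f, line_len=4))  # [1, 2, 3, 4]
print(on_trigger(f, line_len=4))  # None (only one pixel remains)
```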
10. The method of claim 1, further comprising:
and acquiring, from a database according to a target test scene, the original camera data file of the N paths of cameras of a target real vehicle in the target test scene, wherein the database stores the original camera data files corresponding to the N paths of cameras of real vehicles in each test scene.
11. A simulation data processing apparatus, comprising:
the receiving module is used for receiving an original camera data file, wherein the original camera data file is obtained by splicing the original data files of N paths of cameras, and N is a positive integer greater than 1;
the splitting module is used for splitting the original camera data file to obtain the original image data corresponding to each of the N paths of cameras;
the generation module is used for generating simulation image data of the N paths of cameras under a preset time sequence based on the original image data corresponding to the N paths of cameras;
and the output module is used for outputting the simulation image data of the N paths of cameras under the preset time sequence.
12. The apparatus of claim 11, wherein the apparatus further comprises:
the splicing module is used for, for the original data files from the N paths of cameras, splicing the pixel point data of the images corresponding to the N paths of cameras according to a preset arrangement sequence, with the pixel points of the images corresponding to the N paths of cameras as basic units, to obtain the original camera data file.
13. The apparatus of claim 12, wherein the stitching module comprises:
the first reading sub-module is used for reading pixel point data from a first pixel point of the image corresponding to each of the N paths of cameras;
the first splicing sub-module is used for splicing, according to the preset arrangement sequence of the N paths of cameras, the first data sets corresponding to the N paths of cameras respectively to obtain a first set of data; splicing the M-th data sets corresponding to the N paths of cameras respectively to obtain an M-th set of data; and so on, until the last data sets corresponding to the N paths of cameras are spliced to obtain the last set of data; and splicing the sets of data to obtain the original camera data file; wherein M is an integer greater than 1, and each data set includes a first target number of pixel points.
14. The apparatus of claim 12, wherein the stitching module comprises:
the second reading sub-module is used for reading pixel point data from the last pixel point of the image corresponding to each of the N paths of cameras;
the second splicing sub-module is used for splicing the last data sets corresponding to the N paths of cameras respectively, according to the preset arrangement sequence of the N paths of cameras, to obtain a first set of data; splicing the second-to-last data sets corresponding to the N paths of cameras respectively to obtain a second set of data; and so on, until the first data sets corresponding to the N paths of cameras are spliced to obtain the last set of data; and splicing the sets of data to obtain the original camera data file; wherein each data set includes a second target number of pixel points.
15. The apparatus of claim 11, wherein the apparatus further comprises:
and the sending module is used for sending the original camera data file to a first cache FIFO corresponding to an interface for receiving the original camera data file.
16. The apparatus of claim 11, wherein the splitting module comprises:
the splitting sub-module is used for splitting the original camera data file to obtain the pixel point data corresponding to each of the N paths of cameras;
and the transmitting sub-module is used for transmitting the pixel point data corresponding to each of the N paths of cameras to the second cache FIFO corresponding to each of the N paths of cameras.
17. The apparatus of claim 16, wherein the splitting module further comprises:
and the sorting sub-module is used for sorting the pixel point data in the second cache FIFO corresponding to each of the N paths of cameras according to the pixel point reading order used during splicing for each of the N paths of cameras, to obtain the original image data corresponding to each of the N paths of cameras.
18. The apparatus of claim 11, wherein the generating module comprises:
the determining sub-module is used for determining a data transmission quantity of the N paths of cameras for each data transmission under the preset time sequence;
and the generation sub-module is used for generating, for the original image data corresponding to each of the N paths of cameras, the simulation image data corresponding to each of the N paths of cameras that matches the data transmission quantity.
19. The apparatus of claim 11, wherein the output module comprises:
and the output sub-module is used for, in response to detecting a trigger signal corresponding to any one of the N paths of cameras, outputting the simulation image data of a full line of data of the camera if at least one full line of data is cached in the second cache FIFO corresponding to the camera.
20. The apparatus of claim 11, further comprising:
the acquisition module is used for acquiring, from a database according to a target test scene, the original camera data file of the N paths of cameras of a target real vehicle in the target test scene, wherein the database stores the original camera data files corresponding to the N paths of cameras of real vehicles in each test scene.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method according to any one of claims 1-10.
23. A computer program product comprising a computer program stored on a storage medium, which, when executed by a processor, implements the method according to any of claims 1-10.
CN202311671826.6A 2023-12-07 2023-12-07 Simulation data processing method and device, electronic equipment and storage medium Pending CN117667706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311671826.6A CN117667706A (en) 2023-12-07 2023-12-07 Simulation data processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117667706A 2024-03-08

Family

ID=90074747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311671826.6A Pending CN117667706A (en) 2023-12-07 2023-12-07 Simulation data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117667706A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination