CN115623339B - Image data exchange method, device and system for multiple cameras and automobile

Info

Publication number: CN115623339B
Application number: CN202211587425.8A
Authority: CN (China)
Prior art keywords: image, data, frame data, image frame, processing chip
Inventor: 华益晨
Current Assignee: Nanjing Semidrive Technology Co Ltd
Original Assignee: Nanjing Semidrive Technology Co Ltd
Other languages: Chinese (zh)
Other versions: CN115623339A
Legal status: Active
Application filed by Nanjing Semidrive Technology Co Ltd
Priority to CN202211587425.8A
Publication of CN115623339A
Application granted
Publication of CN115623339B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/04: Synchronising

Abstract

The application provides an image data exchange method, device and system for multiple cameras and an automobile, wherein the method comprises the following steps: receiving image frame data to be processed from a plurality of camera sensing chips through receiving interfaces of a plurality of protocols, wherein the receiving interfaces correspond to the interface protocols of the camera sensing chips; gating the image frame data to be processed based on the number of image interfaces corresponding to the image processing chip to obtain the gated image frame data with the specified number; widening the image synchronization signal of the gated image frame data according to the analysis standard of the image processing chip, and remapping the image data of the gated image frame data to obtain target image frame data matched with the image processing chip; and inputting the target image frame data into the image processing chip for the image processing chip to perform data processing on the target image frame data.

Description

Image data exchange method, device and system for multiple cameras and automobile
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a system, and an automobile for exchanging image data of multiple cameras.
Background
On the vehicle side, the cameras include a rear-view camera, surround-view cameras, front, side and rear cameras provided to cover driving-assistance functions at level L2 and below, and the like. Different types of camera sensor chips (CIS) differ in their core algorithms, and different solutions are generally required to perform centralized processing of their data.
In recent years, in order to reduce the computational load on the central processing unit (CPU), some repetitive, batch image processing algorithms have been integrated into hardware, so that resources can be scheduled in a unified way and image processing accelerated. However, if a single image is processed by a single image signal processor (ISP), the ISP's computing power exceeds what the task needs and its full performance cannot be realized. Moreover, in actual use scenarios there are usually differences between the ISPs adapted to different CIS, so if the image generated by a CIS is handed directly to an ISP for processing, data-matching problems between the upstream and downstream modules easily arise and the actual image processing requirements cannot be met.
Disclosure of Invention
The application provides an image data exchange method, device and system for multiple cameras and an automobile, and aims to at least solve the technical problems in the prior art.
According to a first aspect of embodiments of the present application, there is provided an image data exchange method for multiple cameras, the method including: receiving image frame data to be processed from a plurality of camera sensing chips through receiving interfaces of a plurality of protocols, wherein the receiving interfaces correspond to the interface protocols of the camera sensing chips; gating the image frame data to be processed based on the number of image interfaces corresponding to the image processing chip to obtain the gated image frame data with the specified number; widening the image synchronization signal of the gated image frame data according to the analysis standard of the image processing chip, and remapping the image data of the gated image frame data to obtain target image frame data matched with the image processing chip; and inputting the target image frame data into the image processing chip for the image processing chip to perform data processing on the target image frame data.
In an implementation manner, the receiving interface through multiple protocols receives image frame data to be processed from multiple camera sensor chips, including: if the camera sensing chip interface is a serial input interface, converting and mapping image frame data from the serial input interface through a preset number of serial receiving interfaces to obtain image frame data to be processed in a specified format; if the camera sensing chip interface is a parallel input interface, determining the number of the parallel input interfaces, and presetting the parallel receiving interfaces with the same number based on the number of the parallel input interfaces; and converting and mapping the image frame data from the parallel input interface through a parallel receiving interface to obtain the image frame data to be processed in a specified format.
In an embodiment, the receiving interface via multiple protocols receives image frame data to be processed from multiple camera sensor chips, and further includes: if the pixel bit width corresponding to the image frame data generated by the camera sensing chip exceeds the transmission pixel bit width of a serial receiving interface, splitting the pixels of the generated image frame data, and sequentially sending the split pixels to the image processing chip; and if the pixel bit width corresponding to the image frame data generated by the camera sensing chip exceeds the transmission pixel bit width of the parallel receiving interface, splitting the pixels of the generated image frame data, wherein the split pixels are used for transmitting to the image processing chip.
In an implementation manner, the widening an image synchronization signal of the gated image frame data according to an analysis standard of the image processing chip includes: determining a frame synchronization signal and a line synchronization signal of the image synchronization signal; and based on the analysis standard, prolonging the data width of the frame synchronization signal and the line synchronization signal to obtain the widened image synchronization signal.
In an implementation manner, before the remapping the image data of the gated image frame data to obtain the target image frame data adapted to the image processing chip, the method further includes: if the data width of a single clock cycle corresponding to the image data comprises a plurality of effective pixels, uniformly distributing each effective pixel to each clock cycle to obtain image data with uniformly distributed effective pixels.
In one embodiment, the remapping image data of the gated image frame data includes: remapping the image data according to a high-low bit alignment mode to obtain image data adaptive to the image processing chip; if the input bit width of the image data is lower than the designated bit width of the image processing chip, performing zero padding alignment operation on the image data at a designated position to obtain image data adapted to the image processing chip; and if the input bit width of the image data is higher than the designated bit width of the image processing chip, performing data interception operation on the image data at a designated position to obtain the image data adapted to the image processing chip.
In one embodiment, before inputting the target image frame data to the image processing chip, the method further comprises: and if the image pixel width generated by the camera sensing chip exceeds the interface transmission pixel width, performing pixel splicing on the image data of the target image frame data to obtain spliced image data, wherein the spliced image data is used for being input into the image processing chip.
In an embodiment, the pixel stitching the image data of the target image frame data to obtain stitched image data includes: if the target image frame data comes from a serial input interface, vertically splicing pixels corresponding to the image data of the target image frame data to obtain spliced image data; if the target image frame data come from a parallel input interface, determining at least two parallel receiving interfaces corresponding to the target image frame data; and horizontally splicing the image data of the target image frame data from at least two parallel receiving interfaces to obtain spliced image data.
According to a second aspect of embodiments of the present application, there is provided an image exchange apparatus for multiple cameras, the apparatus including: the receiving module is used for receiving image frame data to be processed from a plurality of camera sensing chips through receiving interfaces of a plurality of protocols, and the receiving interfaces correspond to the interface protocols of the camera sensing chips; the gating module is used for gating the image frame data to be processed based on the number of the image interfaces corresponding to the image processing chip to obtain the gated image frame data with the specified number; the processing module is used for widening the image synchronous signal of the gated image frame data according to the analysis standard of the image processing chip and remapping the image data of the gated image frame data to obtain target image frame data adaptive to the image processing chip; and the input module is used for inputting the target image frame data into the image processing chip for the image processing chip to perform data processing on the target image frame data.
In an embodiment, the receiving module includes: the acquisition submodule is used for converting and mapping image frame data from the serial input interfaces through a preset number of serial receiving interfaces if the camera sensing chip interface is a serial input interface, and acquiring the image frame data to be processed in a specified format; the setting submodule is used for determining the number of the interfaces of the parallel input interface if the camera sensing chip interface is a parallel input interface, and presetting the parallel receiving interfaces with the same number based on the number of the interfaces of the parallel input interface; the acquisition submodule is used for converting and mapping the image frame data from the parallel input interface through the parallel receiving interface to acquire the image frame data to be processed in the specified format.
In an implementation manner, the receiving module further includes: the splitting sub-module is used for splitting pixels of the generated image frame data and sequentially sending the split pixels to the image processing chip if the pixel bit width corresponding to the image frame data generated by the camera sensing chip exceeds the transmission pixel bit width of a serial receiving interface; the splitting submodule is further configured to split pixels of the generated image frame data if a pixel bit width corresponding to the image frame data generated by the camera sensing chip exceeds a transmission pixel bit width of the parallel receiving interface, where the split pixels are used to send to the image processing chip.
In one embodiment, the processing module includes: a determination submodule for determining a frame synchronization signal and a line synchronization signal of the image synchronization signal; and the widening submodule is used for prolonging the data width of the frame synchronization signal and the line synchronization signal based on the analysis standard to obtain a widened image synchronization signal.
In one embodiment, the apparatus further comprises: and the averaging module is used for uniformly distributing each effective pixel to each clock cycle if the data width of a single clock cycle corresponding to the image data comprises a plurality of effective pixels, so as to obtain the image data with uniformly distributed effective pixels.
In one embodiment, the processing module includes: the remapping submodule is used for remapping the image data according to a high-low bit alignment mode to obtain image data adapted to the image processing chip; if the input bit width of the image data is lower than the designated bit width of the image processing chip, performing zero padding alignment operation on the image data at a designated position to obtain image data adapted to the image processing chip; and if the input bit width of the image data is higher than the designated bit width of the image processing chip, performing data interception operation on the image data at a designated position to obtain the image data adapted to the image processing chip.
In one embodiment, the apparatus further comprises: and the splicing module is used for performing pixel splicing on the image data of the target image frame data to obtain spliced image data if the image pixel width generated by the camera sensing chip exceeds the interface transmission pixel width, and the spliced image data is used for being input into the image processing chip.
In one embodiment, the splicing module includes: the vertical splicing submodule is used for vertically splicing pixels corresponding to the image data of the target image frame data to obtain spliced image data if the target image frame data comes from a serial input interface; the horizontal splicing submodule is used for determining at least two parallel receiving interfaces corresponding to the target image frame data if the target image frame data comes from a parallel input interface, and horizontally splicing the image data of the target image frame data from the at least two parallel receiving interfaces to obtain spliced image data.
According to a third aspect of the present application, there is provided an image data exchange system for multiple cameras, the system comprising: the device comprises a camera sensing chip, a data exchange device and an image processing chip; the camera sensing chip is used for acquiring image frame data collected by the camera; the data exchange device comprises: the receiving module is used for receiving image frame data to be processed from a plurality of camera sensing chips through receiving interfaces of a plurality of protocols, and the receiving interfaces correspond to the interface protocols of the camera sensing chips; the gating module is used for gating the image frame data to be processed based on the number of the image interfaces corresponding to the image processing chip to obtain the gated image frame data with the specified number; the processing module is used for widening the image synchronization signal of the gated image frame data according to the analysis standard of the image processing chip and remapping the image data of the gated image frame data to obtain target image frame data adaptive to the image processing chip; the input module is used for inputting the target image frame data into the image processing chip; and the image processing chip is used for carrying out data processing on the target image frame data.
According to a fourth aspect of the present application, an automobile is provided, where the automobile includes a plurality of cameras and corresponding camera sensing chips, and image frame data generated by the plurality of camera sensing chips is processed by the image data exchange method according to any one of the above-mentioned implementable embodiments.
According to a fifth aspect of the present application, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described herein.
According to a sixth aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method described herein.
According to the image data exchange method, device and system for multiple cameras and the automobile provided by the application, receiving interfaces of different protocols are used to receive image frame data to be processed from multiple types of camera sensing chips of multiple cameras, and the image frame data to be processed are subjected to gating, widening, remapping and other operations. Unified support can therefore be provided for the different working modes and different data formats of the different types of camera sensing chips, the effective information in the image frame data to be processed from the multiple camera sensing chips is extracted and preprocessed, and unified scheduling and preprocessing of the image frame data to be processed are achieved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 is a schematic flow chart illustrating an implementation of an image data exchange method for multiple cameras according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating an implementation scenario of an image data exchange method for multiple cameras according to an embodiment of the present application;
FIG. 3 is a schematic view showing a flow chart of an implementation scenario of an image data exchange method for multiple cameras according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating an implementation module of an image data exchange apparatus for multiple cameras according to an embodiment of the present application;
FIG. 5 is a schematic diagram showing an implementation apparatus of an image data exchange system for multiple cameras according to an embodiment of the present application;
fig. 6 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more obvious and understandable, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 shows a schematic flow chart of an implementation of an image data exchange method for multiple cameras according to an embodiment of the present application. Fig. 2 shows a schematic view of an implementation scenario of an image data exchange method for multiple cameras according to an embodiment of the present application.
Referring to fig. 1 and 2, according to a first aspect of embodiments of the present application, there is provided an image data exchange method for multiple cameras, the method including: operation 101, receiving image frame data to be processed from a plurality of camera sensing chips through receiving interfaces of a plurality of protocols, where the receiving interfaces correspond to interface protocols of the camera sensing chips; operation 102, gating image frame data to be processed based on the number of image interfaces corresponding to the image processing chip, and acquiring a specified number of gated image frame data; operation 103, widening the image synchronization signal of the gated image frame data according to the analysis standard of the image processing chip, and remapping the image data of the gated image frame data to obtain target image frame data adapted to the image processing chip; in operation 104, the target image frame data is input into the image processing chip for the image processing chip to perform data processing on the target image frame data.
The image data exchange method for the multiple cameras can be applied to an image exchange device connected between a camera sensing chip and an image processing chip. The image exchange device may be any one of an image exchange chip, an image exchanger, an image exchange system, an image exchange apparatus, and an image exchange module. The image exchange device can also be directly integrated on an image processing chip and used as an image processing module of the image processing chip.
According to the embodiment of the application, receiving interfaces of different protocols are used to receive, from multiple cameras, the image frame data to be processed of camera sensing chips of multiple protocol types, and the image frame data to be processed are gated, widened, remapped and so on. Unified support can thus be provided for the different working modes and different data formats of different types of camera sensing chips, the effective information in the image frame data to be processed from the multiple camera sensing chips is extracted and preprocessed, unified scheduling and preprocessing of the image frame data to be processed are achieved, and the image processing chip can perform fine processing more efficiently.
In operation 101 of the method, the image frame data to be processed transmitted by the different camera sensor chips include an image synchronization signal and image data; the specific data contents include, but are not limited to: a pixel clock signal (pixclk), a frame synchronization signal (vsync), a line synchronization signal (hsync), valid input data (data_en), and pixel data (pixdata). Because the camera sensing chips have different interfaces, while a camera sensing chip physically transmits its image frame data to be processed towards the image processing chip through the corresponding interface, the image switching device provided by the embodiment of the application converts the image frame data to be processed through a physical layer, a protocol layer and an application layer, and restores the image frame data to be processed to a specified data packet format using the mapping relation corresponding to the data type. The specified packet format can be set between raw6 bit and raw24 bit width as required.
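To make the signal bundle above concrete, a minimal C sketch is given below. The struct names and field widths are assumptions for illustration and are not part of the patent, but the fields mirror the signals listed above (vsync, hsync, data_en, pixdata) and the raw6 bit to raw24 bit packet range.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical model of one clock cycle of an image frame stream as
 * described above: sync signals plus pixel data. Field widths (e.g. a
 * 32-bit container for raw6..raw24 pixels) are illustrative assumptions. */
typedef struct {
    bool     vsync;    /* frame synchronization signal */
    bool     hsync;    /* line synchronization signal */
    bool     data_en;  /* valid input data flag */
    uint32_t pixdata;  /* pixel data, raw6..raw24 bits right-aligned */
} cam_sample_t;

/* A restored data packet in the "specified format": pixel payload plus
 * its declared bit width, so downstream stages know how to remap it. */
typedef struct {
    uint32_t pixel;      /* pixel value */
    uint8_t  bit_width;  /* 6..24, per the raw6..raw24 range mentioned above */
} cam_packet_t;
```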
The input interface of the camera sensing chip may include a serial input interface or a parallel input interface for different data types, and the input interface of the camera sensing chip is related to the design of a corresponding manufacturer, which is not limited in the embodiments of the present application. After the input interface and the corresponding data type of the camera sensing chip are determined, the corresponding image receiving interface is set on the image exchange device based on the input interface and the corresponding data type of the camera sensing chip, so that the purpose of supporting different camera sensing chip interface protocols and data formats is achieved.
The data formats of the camera sensing chips include, but are not limited to, the raw data formats corresponding to the interfaces of mainstream suppliers. Interface protocol types include, but are not limited to, Mipi D_phy, csi para, and the like. Because the image exchange device has a plurality of receiving interfaces with different protocols, it can generally receive a large amount of image frame data to be processed from a plurality of camera sensor chips at the same time.
In operation 102 of the method, there is a limit on the number of image interfaces of the image processing chip, and generally, the number of image interfaces of the image processing chip will be smaller than the number of image frame data to be processed transmitted by the image exchange device. In the embodiment of the application, the image frame data to be processed may be gated by a data selector (mux), and the gated image frame data to be input to the image processing chip for data processing may be determined from the image frame data to be processed.
For example, in one implementation scenario, the number of image interfaces of the image processing chip is N, and X pieces of image frame data to be processed, where N < X, are received by the image exchange device from different camera sensor chips; they are gated by the data selector to determine N pieces of gated image frame data.
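As a rough software-level sketch of this gating step (the patent describes a hardware data selector; the function and array names here are hypothetical), selecting N of the X incoming streams might look as follows.

```c
#include <stddef.h>

/* Select n of the x incoming streams for the image processing chip.
 * 'select' holds the chosen source indices (a mux configuration);
 * returns the number of gated streams, at most the chip's port count. */
size_t gate_streams(const int *select, size_t n_ports,
                    size_t x_sources, int *gated_out)
{
    size_t count = 0;
    for (size_t i = 0; i < n_ports && i < x_sources; i++) {
        if (select[i] >= 0 && (size_t)select[i] < x_sources) {
            gated_out[count++] = select[i];   /* route source -> chip port */
        }
    }
    return count;   /* count <= n_ports, mirroring N < X in the example */
}
```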
In operation 103 and operation 104 of the method, because the image processing chip has a preset analysis method for the gated image frame data, the corresponding analysis standard can be determined according to the analysis method of the image processing chip. The analysis standard includes, but is not limited to, limitations on the various data contents in the gated image frame data, such as a limitation on the effective input data width. Based on the analysis standard, the gated image frame data are widened and remapped, so that the processed target image frame data can be adapted to the analysis method of the image processing chip; the target image frame data are input into the image processing unit of the image processing chip, and the image processing unit can perform more efficient fine processing on the target image frame data through the corresponding analysis method. By applying the method, different camera sensing chips and image processing chips can, through the image exchange device, be applied to application scenarios with various image formats and various working modes.
The method can also fixedly connect the image exchange device with the image processing chip to form a fixed interface data mapping relation, thereby effectively realizing the multiplexing of the image exchange device and the image processing chip under the use scene of various different types of camera sensing chips and solving the problem of matching conflict between increasingly abundant camera sensing chips and image processing chip interface signals at present.
In an implementation, the operation 101, receiving image frame data to be processed from a plurality of camera sensor chips through a receiving interface of a plurality of protocols, includes: if the camera sensing chip interface is a serial input interface, converting and mapping image frame data from the serial input interface through a preset number of serial receiving interfaces to obtain the image frame data to be processed in a specified format.
Specifically, the input interface of the conventional camera sensor chip is divided into a serial input interface and a parallel input interface.
When the camera sensing chip adopts a serial input interface, such as a Mipi D_phy Tx interface, the image switching apparatus sets a corresponding receiving interface, for example a D_phy Datalane analog port, and receives the image frame data to be processed. The image exchange device converts the differential signal corresponding to the image frame data to be processed through a physical layer, a protocol layer and an application layer, and restores the converted data into a multi-bit-width data packet format according to the data type and a mapping relation preset by the image exchange device.
Because one D_phy receiving interface supports at most four channels transmitting image information at the same time, when the camera sensing chip transmits images with all channels of the D_phy receiving interface open, the D_phy receiving interface performs the physical transmission, signal conversion and mapping restoration of the images, and for each D_phy receiving interface the image exchange device can restore four images from the camera sensing chip, together with the corresponding image synchronization signals, in real time. The image exchange device provided by the embodiment of the application can be provided with n D_phy receiving interfaces, so that in the fully-open-channel mode the image exchange device can restore 4n images in real time, where n is a positive integer greater than or equal to 1.
In one embodiment, operation 101 includes: firstly, if the camera sensing chip interface is a parallel input interface, determining the number of the parallel input interfaces, and presetting the parallel receiving interfaces with the same number based on the number of the parallel input interfaces; then, the image frame data from the parallel input interface is converted and mapped through the parallel receiving interface, and the image frame data to be processed in the specified format is obtained.
When the camera sensing chip adopts a parallel input interface, for example in the csi para format, the csi para interface can be directly connected to the input/output pins (PAD) outside the image processing chip. Because a parallel input interface usually has no uniform protocol specification, the image synchronization signal of the image frame data to be processed from the parallel input interface derives a plurality of working modes according to the actual designs of different camera manufacturers. The image exchange device in the embodiment of the application sets a corresponding receiving interface format according to the interface input format of the camera sensing chip, so as to provide corresponding transmission channels for the parallel input interface. The number of transmission channels of the parallel receiving interface can be determined according to the number of camera sensing chips using this interface; for example, if it is determined that the camera sensing chips require m parallel input interfaces, the image exchange device sets m parallel receiving interfaces to obtain the image frame data to be processed from the camera sensing chips, where m can be a positive integer greater than or equal to 1.
In an implementation manner, the operation 101, receiving image frame data to be processed from a plurality of camera sensor chips through a plurality of protocol receiving interfaces, further includes: and if the pixel bit width corresponding to the image frame data generated by the camera sensing chip exceeds the transmission pixel bit width of the serial receiving interface, splitting the pixels of the generated image frame data, and sequentially sending the split pixels to the image processing chip.
For a serial input interface, in practical applications different camera sensing chips support different versions of the Mipi D_phy protocol, so in some application scenarios the bit width of a single pixel of the image frame data to be processed generated by the camera sensing chip may exceed the bit-width limit of the transmission channel. In this case, the input interface of the camera sensing chip splits the effective information of the single pixel into two pixels that are transmitted one after the other, thereby achieving the transmission of the image frame data to be processed. Correspondingly, before transmitting the image frame data to the image processing chip, the image exchange device needs to vertically splice the transmitted pixels so as to restore the original image frame data to be processed.
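A minimal sketch of the split described above, assuming the single pixel is at most twice the transport width and the high part is sent first; the function name and layout are illustrative assumptions, not the patent's implementation.

```c
#include <stdint.h>

/* Split one wide pixel into two narrower transport pixels, high part
 * first, as described for the serial (Mipi D_phy) case. 'lane_bits' is
 * the transport pixel bit width supported by the receiving interface;
 * assumes lane_bits < pixel_bits <= 2*lane_bits. */
void split_pixel(uint32_t pixel, unsigned pixel_bits, unsigned lane_bits,
                 uint32_t *first, uint32_t *second)
{
    unsigned low_bits = pixel_bits - lane_bits;
    *first  = pixel >> low_bits;                   /* high part, sent first */
    *second = pixel & ((1u << low_bits) - 1u);     /* low part, sent second */
}
```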
In one embodiment, operation 101 includes: and if the pixel bit width corresponding to the image frame data generated by the camera sensing chip exceeds the transmission pixel bit width of the parallel receiving interface, splitting the pixels of the generated image frame data, wherein the split pixels are used for analyzing and sending to the image processing chip.
For a parallel input interface, limited by the hardware design, the interface resources are usually very limited, e.g. the parallel input interface is connected directly to the PAD of the image processing chip, and the single-pixel bit width of the image frame data to be processed generated by the camera sensing chip may exceed the maximum pixel bit width of the parallel input interface. In this scenario, resources of other parallel receiving interfaces can be borrowed, and the amount borrowed can be determined according to the multiple by which the single-pixel bit width exceeds the maximum pixel bit width of the parallel receiving interface. For example, if the bit width of a single pixel is 1.5 times the maximum pixel bit width of the parallel receiving interface, one parallel receiving interface can be borrowed. After the single pixel is split, the image frame data to be processed are transmitted to the image exchange device through two parallel receiving interfaces. Correspondingly, the image exchange device needs to horizontally splice the transmitted pixels to restore the original image frame data to be processed.
In one embodiment, the widening an image synchronization signal of the gated image frame data according to an analysis standard of the image processing chip in operation 102 includes: firstly, determining a frame synchronizing signal and a line synchronizing signal of an image synchronizing signal; then, the data widths of the frame synchronization signal and the line synchronization signal are extended based on the analysis standard, and the widened image synchronization signal is obtained.
In the data widening operation of the present application, the gated image frame data may come from the serial input interface or from the parallel input interface. According to the requirements of the analysis standard, the data widths of the frame synchronization signal and the line synchronization signal in the gated image frame data are extended, so that the image synchronization signal can be adapted to the analysis method of the image processing chip for the target image frame data.
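One simple way to picture the widening is as stretching each synchronization pulse over additional clock cycles until it meets a minimum width required by the analysis standard; the sketch below is a hypothetical software model of that behavior, not the patent's circuit.

```c
#include <stdbool.h>
#include <stddef.h>

/* Widen a sync pulse train: every cycle in which the input is high keeps
 * the output high for at least 'min_width' cycles, so the stretched
 * vsync/hsync meets the minimum width expected by the image processing
 * chip. Illustrative sketch only. */
void widen_sync(const bool *in, bool *out, size_t n_cycles, size_t min_width)
{
    size_t hold = 0;
    for (size_t t = 0; t < n_cycles; t++) {
        if (in[t]) {
            hold = min_width;        /* (re)start the stretch window */
        }
        out[t] = (hold > 0);
        if (hold > 0) {
            hold--;
        }
    }
}
```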
In an implementation manner, before the remapping of the image data of the gated image frame data to obtain the target image frame data adapted to the image processing chip in operation 103, the method further includes: if the data width of a single clock cycle corresponding to the image data comprises a plurality of effective pixels, uniformly distributing each effective pixel to each clock cycle to obtain image data with uniformly distributed effective pixels.
While the image synchronization signal is widened, if the data width of a single clock cycle of the image data of the gated image frame data from the input interface contains a plurality of effective pixels (pixels in a valid raw format), each pixel can be extracted and evenly distributed over the clock cycles, so that the data width of the image data matches the effective data width of the image processing chip.
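A hypothetical sketch of this redistribution: a bus word carrying several packed effective pixels is spread over as many clock slots, one pixel per slot. The packing order (first pixel in the lowest bits) is an assumption for illustration.

```c
#include <stdint.h>
#include <stddef.h>

/* Spread a bus word that carries 'ppc' packed effective pixels over 'ppc'
 * separate clock slots, one pixel per slot, so the data width matches the
 * effective data width expected by the image processing chip. */
void unpack_pixels(uint64_t bus_word, unsigned ppc, unsigned pixel_bits,
                   uint32_t *slots /* at least ppc entries */)
{
    uint64_t mask = (pixel_bits < 64) ? ((1ull << pixel_bits) - 1ull) : ~0ull;
    for (unsigned i = 0; i < ppc; i++) {
        slots[i] = (uint32_t)((bus_word >> (i * pixel_bits)) & mask);
    }
}
```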
In an implementation manner, the operation 103 of remapping the image data of the gated image frame data to obtain the target image frame data adapted to the image processing chip includes: and remapping the image data according to a high-low bit alignment mode to obtain the image data adaptive to the image processing chip.
After the image data is uniformly distributed to each clock cycle, the image data needs to be remapped in a high-low bit alignment manner, so that the image data can be conveniently reprocessed and processed by the image processing chip.
If the input bit width of the image data is lower than the designated bit width of the image processing chip, a zero padding alignment operation is performed on the image data at a designated position to obtain image data adapted to the image processing chip. That is, if the pixel bit width in the image data with uniformly distributed effective pixels is lower than the pixel bit width required by the input of the image processing chip, a zero padding alignment operation can be performed on the image data at a specified position, and the specified position can be either the high bits or the low bits.
If the input bit width of the image data is higher than the designated bit width of the image processing chip, performing data interception operation on the image data at the designated position to obtain target image frame data adapted to the image processing chip.
If the pixel bit width in the image data with uniformly distributed effective pixels is higher than the pixel bit width required by the input of the image processing chip, the image data need to be truncated at the specified position, and the specified position at which the overflow data are intercepted can be either the high bits or the low bits.
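Taking the two cases together, the remapping amounts to aligning each pixel to the chip's designated bit width by zero padding or by intercepting the overflow bits. The sketch below assumes one particular choice of the designated position (zeros padded into the low bits, low bits dropped on truncation); as stated above, the high bits could be chosen instead.

```c
#include <stdint.h>

/* Remap a pixel from 'in_bits' wide to the chip's 'chip_bits' wide word.
 * Narrower input: zero padding alignment (zeros fill the low bits here).
 * Wider input: data interception (the overflow low bits are dropped here).
 * The chosen positions are illustrative assumptions. */
uint32_t remap_pixel(uint32_t pixel, unsigned in_bits, unsigned chip_bits)
{
    if (in_bits < chip_bits) {
        return pixel << (chip_bits - in_bits);   /* zero padding alignment */
    }
    if (in_bits > chip_bits) {
        return pixel >> (in_bits - chip_bits);   /* intercept overflow data */
    }
    return pixel;                                /* already aligned */
}
```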
In one embodiment, before inputting the target image frame data to the image processing chip, the method further includes: and if the pixel width of the image generated by the camera sensing chip exceeds the pixel width of the interface transmission, performing pixel splicing on the image data of the target image frame data to obtain spliced image data, wherein the spliced image data is used for being input into the image processing chip.
If the pixel bit width of the image data from the input interface is greater than the transmission pixel bit width of the input interface in the above embodiment, the image data may be subjected to pixel stitching according to the interface type of the receiving interface after the image data is remapped and before the target image frame data is input to the image processing chip.
In an embodiment, pixel stitching image data of target image frame data to obtain stitched image data includes: if the target image frame data comes from the serial input interface, vertically splicing pixels corresponding to the image data to obtain spliced image data;
the method is limited by different versions of the Mipi D _ phy protocol supported by different camera sensing chips, the situation that the width of a single pixel of data input by the camera sensing chips is too large exists, effective information of the single pixel is split into two pixels to be sequentially sent back and forth when a receiving interface analyzes, and in this scene, an image exchange device restores and splices original pixels in advance before target image frame data is input to an image processing chip, namely, vertically splices the original pixels.
For example, if the width of a single pixel corresponding to the Mipi D _ phy interface is too long, the single pixel may be split into two or more pixels for analysis, and then the image exchange device splices two or more pixels in front and behind to obtain spliced image data, that is, the original pixels are restored, and then the spliced image data is input to the image processing chip together.
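A minimal sketch of this vertical splice, assuming the two transport pixels arrive high part first; the field layout is an illustrative assumption consistent with the split sketch shown earlier.

```c
#include <stdint.h>

/* Reverse of the serial-side split: splice two consecutively received
 * transport pixels (high part first) back into one original pixel.
 * 'low_bits' is the bit width carried by the second transport pixel. */
uint32_t splice_serial_pixel(uint32_t first, uint32_t second, unsigned low_bits)
{
    return (first << low_bits) | (second & ((1u << low_bits) - 1u));
}
```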
In an embodiment, pixel stitching image data of target image frame data to obtain stitched image data includes: if the target image frame data come from the parallel input interface, determining at least two parallel receiving interfaces corresponding to the target image frame data; and horizontally splicing the target image frame data of the two parallel receiving interfaces to obtain spliced image data.
If the single-pixel width corresponding to an input interface of the csi para protocol is too wide, the resources of other csi para receiving interfaces can be borrowed; suppose, for example, that there are receiving interfaces csi para1 and csi para2. If the csi para1 receiving interface is the main data-receiving interface, the csi para2 receiving interface, when idle, can be used to fill out the pixel bit width, that is, all interface resources of csi para2 are temporarily turned into pixdata interfaces of csi para1. For example, the interface resources of csi para2 for vsync, hsync, data_en and pixdata are temporarily changed into pixdata interfaces of csi para1, so that the raw data format support of the csi para receiving interface can be extended to the maximum. Without additional design area, the csi para interfaces can work independently and can also be combined into one channel. Both splicing modes help to better support the original data format, so that the input data of the camera sensing chip input interface can range from raw6 bit to raw24 bit at the widest. Idle interface resources can thus be flexibly borrowed to expand the data bit width.
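For the parallel case, the borrowed interface effectively contributes extra pixdata bits in each clock cycle. A hypothetical per-cycle merge is sketched below; the names follow csi para1 and csi para2 from the text, while the bit layout (csi para1 carrying the high part) is an assumption.

```c
#include <stdint.h>

/* Per-clock horizontal splice: the main interface (csi para1) carries the
 * high part of the wide pixel and the borrowed interface (csi para2) the
 * remaining low bits. 'borrowed_bits' is how many bits csi para2 carries. */
uint32_t splice_parallel_pixel(uint32_t para1_pixdata, uint32_t para2_pixdata,
                               unsigned borrowed_bits)
{
    return (para1_pixdata << borrowed_bits)
         | (para2_pixdata & ((1u << borrowed_bits) - 1u));
}
```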
Fig. 3 shows a flowchart of an implementation scenario of an image data exchange method for multiple cameras according to an embodiment of the present application.
To facilitate a general understanding of the above embodiments, a specific implementation scenario for applying the method is provided below.
In this embodiment, with reference to fig. 2 and 3, the method is applied to an image exchange device, which may be integrated on the image processing chip; the device can also be arranged as an independent image exchange chip connecting the camera sensing chips corresponding to the multiple cameras with the image processing chip.
When the image exchange device is an independent image exchange chip, it comprises receiving interfaces connected to the camera sensing chips, an input interface connected to the image processing chip, and a processing module for preprocessing the image frame data.
When the image exchange device is integrated on the image processing chip, the receiving interfaces of the image exchange device can directly use the receiving interfaces of the image processing chip, and no separate input interface towards the image processing chip needs to be provided.
First, when the multiple camera sensing chips generate image frame data to be processed, each camera sensing chip selects the corresponding data channel type, the data channel types being Mipi csi D_phy data and csi para data.
If the data channel type corresponding to the camera sensing chip is Mipi csi data, the image frame data to be processed are transmitted to a receiving interface of the image switching device through a Mipi D_phy Tx interface. When the image switching device is integrated on the image processing chip, a Mipi D_phy Rx analog port for the image switching device may be provided on the image processing chip.
In this step, the image frame data to be processed coming through the Mipi D_phy Tx interface are physically transmitted to the Mipi D_phy Rx analog port of the image processing chip or to the Mipi D_phy Rx receiving interface of the image switching device; the image switching device then uses the Mipi csi controller to convert the differential signal of the image frame data to be processed through the physical layer, the protocol layer and the application layer, and restores the image frame data to be processed to a multi-bit-width data packet format according to the data type and a certain mapping relationship. Because a Mipi D_phy Rx port supports at most 4 channels transmitting image information at the same time, if the camera sensing chip outputs images with all channels open, then through the physical transmission of the Mipi D_phy Rx interface and the conversion of the Mipi csi controller, at most 4 images and their image synchronization signals can be restored in real time at the output of the Mipi csi controller. n such analog ports can be integrated on the image processing chip, and correspondingly 4n images and image synchronization signals can be restored.
If the data channel type corresponding to the camera sensing chip is csi para data, the image frame data to be processed are transmitted to a receiving interface of the image switching device through a csi para format interface.
In this step, the csi para format interface can be directly connected to the external PAD of the image processing chip. Because the csi para format has no uniform protocol specification, the image synchronization signal derives a plurality of working modes according to the designs of different camera manufacturers. Therefore, to enhance the compatibility of the formats supported by the image processing chip, the image exchange device needs to provide transmission channels for image frames in the csi para format; the number of csi para interfaces can be set to m according to actual requirements, and the m csi para interfaces are used for image transmission.
In addition, for the image frame data to be processed of a csi para format interface, the various data included in the image frame data are transmitted through different channels, so at the csi para format receiving interface the data need to be merged into frames to obtain the image frame data to be processed.
The image switching device can thus obtain 4n + m images through the Mipi D_phy Tx interfaces and the csi para format interfaces, while the number of image ports supported by the image processing chip is limited to k, so at most k (k <= 4n + m) images can effectively enter the image processing unit of the image processing chip. Based on this, the 4n + m images are gated by a mux switch in the image exchange device, k effective images are determined, and the image synchronization signals of the k effective images are widened, specifically hsync_pulse and vsync_pulse, so that the image synchronization signals can be adapted to the image frame analysis method of the image processing chip.
When the image synchronization signal is widened, if the data width of a single clock cycle of the input end contains a plurality of effective pixels in raw format, each pixel can be intercepted and uniformly distributed on each clock cycle, and the purpose of this is also to match the effective data width of the receiving port of the image processing chip.
Then, after the image data is uniformly distributed to each clock cycle, the image data needs to be remapped in a high-low bit alignment manner, so as to facilitate reprocessing and processing of a subsequent image processing chip. The specific method comprises the following steps: if the pixel bit width is lower than the pixel bit width required by the input of the image processing chip, zero padding alignment operation is required to be carried out on high bits or low bits. If the pixel bit width is higher than the pixel bit width required by the input of the image processing chip, the interception operation of overflow data needs to be carried out on the high bit or the low bit.
After data remapping, data merging may be performed if required for data merging.
In a specific implementation scenario, because different camera sensing chips support different versions of the Mipi D_phy protocol, if the width of a single pixel received by the receiving interface is too large, the effective information of the single pixel is split into two pixels that are sent one after the other during Mipi csi D_phy parsing. In this scenario, the original pixels need to be restored and spliced before the image frame is input to the image processing unit of the image processing chip. Based on this, if the data of a single pixel from the Mipi csi D_phy is too wide, the two consecutive pieces of valid data can be spliced vertically and the original pixel restored.
In another specific implementation scenario, when the single-pixel bit width exceeds the maximum pixel bit width supported by the csi para interface, the camera image cannot be matched to the data port of the ISP and the image processing requirements cannot be fully met. Assuming the number of parallel receiving interfaces is 2, if the csi para1 port is the main data-receiving port, the csi para2 port, when idle, can be used to fill out the pixel bit width; that is, the port resources of csi para2 for receiving data such as vsync, hsync, data_en and pixdata are all temporarily turned into pixdata ports of csi para1, so that the raw data format support of the csi para port reaches its maximum extension. Without additional design area, the csi para ports can work independently and can also be combined into one channel. Both splicing modes help to better support the original data format, so that the data at the input end of the image processing chip can range from raw6 bit to raw24 bit at the widest.
Fig. 4 shows a schematic block diagram of an implementation module of an image data exchange device for multiple cameras according to an embodiment of the present application.
Referring to fig. 4, according to a second aspect of embodiments of the present application, there is provided an image exchange apparatus for multiple cameras, the apparatus including: the receiving module 301 is configured to receive image frame data to be processed from multiple camera sensing chips through receiving interfaces of multiple protocols, where the receiving interfaces correspond to interface protocols of the camera sensing chips; the gating module 302 is configured to gate image frame data to be processed based on the number of image interfaces corresponding to the image processing chip, and acquire a specified number of gated image frame data; the processing module 303 is configured to widen an image synchronization signal of the gated image frame data according to an analysis standard of the image processing chip, and remap the image data of the gated image frame data to obtain target image frame data adapted to the image processing chip; and the input module 304 is configured to input the target image frame data into the image processing chip, so that the image processing chip performs data processing on the target image frame data.
In one embodiment, the receiving module 301 includes: the obtaining sub-module 3011, configured to, if the camera sensor chip interface is a serial input interface, convert and map image frame data from the serial input interface through a preset number of serial receiving interfaces, and obtain to-be-processed image frame data in a specified format; the setting submodule 3012, configured to determine the number of interfaces of the parallel input interface if the camera sensor chip interface is a parallel input interface, and preset parallel receiving interfaces of the same number based on the number of interfaces of the parallel input interface; the obtaining sub-module 3011 is further configured to convert and map the image frame data from the parallel input interface through the parallel receiving interface, and obtain to-be-processed image frame data in the specified format.
In an implementation manner, the receiving module 301 further includes: the splitting submodule 3013 is configured to split pixels of generated image frame data if a pixel bit width corresponding to the image frame data generated by the camera sensor chip exceeds a transmission pixel bit width of the serial receiving interface, and sequentially send the split pixels to the image processing chip; the splitting submodule 3013 is further configured to split pixels of the generated image frame data if a pixel bit width corresponding to the image frame data generated by the camera sensing chip exceeds a transmission pixel bit width of the parallel receiving interface, where the split pixels are used to send to the image processing chip.
In one embodiment, the processing module 303 includes: a determination submodule 3031 for determining a frame synchronization signal and a line synchronization signal of the image synchronization signal; the stretching sub-module 3032 is configured to extend the data widths of the frame synchronization signal and the line synchronization signal based on the parsing standard, and obtain a stretched image synchronization signal.
In one embodiment, the apparatus further comprises: an averaging module, configured to uniformly distribute each effective pixel to each clock cycle if the data width of a single clock cycle corresponding to the image data comprises a plurality of effective pixels, to obtain image data with uniformly distributed effective pixels.
In one embodiment, the processing module 303 includes: the remapping submodule 3033, configured to remap the image data according to a high-low bit alignment mode to obtain image data adapted to the image processing chip; if the input bit width of the image data is lower than the designated bit width of the image processing chip, perform a zero padding alignment operation on the image data at a designated position to obtain image data adapted to the image processing chip; and if the input bit width of the image data is higher than the designated bit width of the image processing chip, perform a data interception operation on the image data at the designated position to obtain target image frame data adapted to the image processing chip.
In one embodiment, the apparatus further comprises: and a stitching module 305, configured to perform pixel stitching on image data of target image frame data to obtain stitched image data if an image pixel width generated by the camera sensing chip exceeds an interface transmission pixel width, where the stitched image data is used for being input to the image processing chip.
In one embodiment, the splicing module 305 includes: the vertical splicing submodule 3051, configured to, if the target image frame data comes from the serial input interface, vertically splice pixels corresponding to the image data to obtain spliced image data; the horizontal splicing submodule 3052, configured to, if the target image frame data comes from the parallel input interface, determine at least two parallel receiving interfaces corresponding to the target image frame data, and horizontally splice the target image frame data of the two parallel receiving interfaces to obtain spliced image data.
Fig. 5 shows a schematic diagram of an implementation apparatus of an image data exchange system for multiple cameras according to an embodiment of the present application.
In conjunction with fig. 2 and 5, according to a third aspect of the present application, there is provided an image data exchange system for multiple cameras, the system comprising: a camera sensing chip 401, a data exchange device 402 and an image processing chip 403; the camera sensing chip 401 is used for acquiring image frame data acquired by a camera; the data exchange device 402 includes: the receiving module is used for receiving image frame data to be processed from the multiple camera sensing chips through receiving interfaces of multiple protocols, and the receiving interfaces correspond to the interface protocols of the camera sensing chips; the gating module is used for gating the image frame data to be processed based on the number of the image interfaces corresponding to the image processing chip to acquire the gated image frame data with the specified number; the processing module is used for widening the image synchronous signal of the gated image frame data according to the analysis standard of the image processing chip and remapping the image data of the gated image frame data to obtain target image frame data adaptive to the image processing chip; the input module is used for inputting the target image frame data into the image processing chip; and an image processing chip 403, configured to perform data processing on the target image frame data.
According to a fourth aspect of the present application, an automobile is provided, where the automobile includes multiple cameras and corresponding camera sensing chips, and the image frame data generated by the multiple camera sensing chips is processed by the image data exchange method according to any one of the above implementations. The method can preprocess, through a data exchange device formed by hardware, the data generated by cameras from different manufacturers mounted on the automobile, and input the data into the image processing unit of the image processing chip for image processing.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
FIG. 6 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the device 500 comprises a computing unit 501, which may perform various suitable actions and processes according to a computer program stored in a Read-Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 501 executes the respective methods and processes described above, such as the image data exchange method for multiple cameras. For example, in some embodiments, the image data exchange method for multiple cameras can be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the image data exchange method for multiple cameras described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the image data exchange method for multiple cameras by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope disclosed in the present application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method for image data exchange for multiple cameras, the method comprising:
receiving image frame data to be processed from a plurality of camera sensing chips through receiving interfaces of a plurality of protocols, wherein the receiving interfaces correspond to the interface protocols of the camera sensing chips;
gating the image frame data to be processed based on the number of image interfaces corresponding to the image processing chip to obtain the gated image frame data with the specified number;
widening the image synchronization signal of the gated image frame data according to the analysis standard of the image processing chip, and remapping the image data of the gated image frame data to obtain target image frame data matched with the image processing chip;
and inputting the target image frame data into the image processing chip for the image processing chip to perform data processing on the target image frame data.
2. The method according to claim 1, wherein receiving the image frame data to be processed from the plurality of camera sensing chips through the receiving interfaces of the plurality of protocols comprises:
if the camera sensing chip interface is a serial input interface, converting and mapping image frame data from the serial input interface through a preset number of serial receiving interfaces to obtain image frame data to be processed in a specified format;
if the camera sensing chip interface is a parallel input interface, determining the number of the parallel input interfaces, and presetting the parallel receiving interfaces with the same number based on the number of the parallel input interfaces;
and converting and mapping the image frame data from the parallel input interface through a parallel receiving interface to acquire the image frame data to be processed in a specified format.
3. The method of claim 2, wherein receiving the image frame data to be processed from the plurality of camera sensing chips through the receiving interfaces of the plurality of protocols further comprises:
if the pixel bit width corresponding to the image frame data generated by the camera sensing chip exceeds the transmission pixel bit width of a serial receiving interface, splitting the pixels of the generated image frame data, and sequentially sending the split pixels to the image processing chip;
and if the pixel bit width corresponding to the image frame data generated by the camera sensing chip exceeds the transmission pixel bit width of the parallel receiving interface, splitting the pixels of the generated image frame data, wherein the split pixels are used for transmitting to the image processing chip.
4. The method of claim 1, wherein widening the image synchronization signal of the gated image frame data according to the analysis standard of the image processing chip comprises:
determining a frame synchronization signal and a line synchronization signal of the image synchronization signal;
and based on the analysis standard, prolonging the data width of the frame synchronization signal and the line synchronization signal to obtain the widened image synchronization signal.
5. The method of claim 1, wherein before remapping the image data of the gated image frame data to obtain the target image frame data adapted to the image processing chip, the method further comprises:
if the data width of a single clock period corresponding to the image data comprises a plurality of effective pixels,
uniformly distributing each effective pixel to each clock period to obtain image data with uniformly distributed effective pixels.
6. The method of claim 1, wherein remapping the image data of the gated image frame data comprises:
remapping the image data according to a high-low bit alignment mode to obtain image data adapted to the image processing chip;
wherein,
if the input bit width of the image data is lower than the designated bit width of the image processing chip, performing zero padding alignment operation on the image data at a designated position to obtain image data adapted to the image processing chip;
and if the input bit width of the image data is higher than the designated bit width of the image processing chip, performing data interception operation on the image data at a designated position to obtain the image data adapted to the image processing chip.
7. The method of claim 1, wherein prior to inputting the target image frame data into the image processing chip, the method further comprises:
and if the image pixel width generated by the camera sensing chip exceeds the interface transmission pixel width, performing pixel splicing on the image data of the target image frame data to obtain spliced image data, wherein the spliced image data is used for being input into the image processing chip.
8. The method of claim 7, wherein performing pixel splicing on the image data of the target image frame data to obtain spliced image data comprises:
if the target image frame data comes from a serial input interface, vertically splicing pixels corresponding to the image data of the target image frame data to obtain spliced image data;
if the target image frame data comes from a parallel input interface, determining at least two parallel receiving interfaces corresponding to the target image frame data;
and horizontally splicing the image data of the target image frame data from at least two parallel receiving interfaces to obtain spliced image data.
9. An image data exchange apparatus for multiple cameras, the apparatus comprising:
the receiving module is used for receiving image frame data to be processed from a plurality of camera sensing chips through receiving interfaces of a plurality of protocols, and the receiving interfaces correspond to the interface protocols of the camera sensing chips;
the gating module is used for gating the image frame data to be processed based on the number of the image interfaces corresponding to the image processing chip to obtain the gated image frame data with the specified number;
the processing module is used for widening the image synchronization signal of the gated image frame data according to the analysis standard of the image processing chip and remapping the image data of the gated image frame data to obtain target image frame data adapted to the image processing chip;
and the input module is used for inputting the target image frame data into the image processing chip so that the image processing chip can perform data processing on the target image frame data.
10. An image data exchange system for multiple cameras, the system comprising:
the device comprises a camera sensing chip, a data exchange device and an image processing chip;
the camera sensing chip is used for acquiring image frame data collected by the camera;
the data exchange device comprises:
the receiving module is used for receiving image frame data to be processed from a plurality of camera sensing chips through receiving interfaces of a plurality of protocols, and the receiving interfaces correspond to the interface protocols of the camera sensing chips;
the gating module is used for gating the image frame data to be processed based on the number of the image interfaces corresponding to the image processing chip to obtain the gated image frame data with the specified number;
the processing module is used for widening the image synchronization signal of the gated image frame data according to the analysis standard of the image processing chip and remapping the image data of the gated image frame data to obtain target image frame data adapted to the image processing chip;
the input module is used for inputting the target image frame data into the image processing chip;
and the image processing chip is used for carrying out data processing on the target image frame data.
11. An automobile, which comprises a plurality of cameras and corresponding camera sensing chips, wherein image frame data generated by the camera sensing chips is processed by the image data exchange method of any one of claims 1 to 8.
CN202211587425.8A 2022-12-12 2022-12-12 Image data exchange method, device and system for multiple cameras and automobile Active CN115623339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211587425.8A CN115623339B (en) 2022-12-12 2022-12-12 Image data exchange method, device and system for multiple cameras and automobile

Publications (2)

Publication Number Publication Date
CN115623339A CN115623339A (en) 2023-01-17
CN115623339B (en) 2023-02-28

Family

ID=84881045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211587425.8A Active CN115623339B (en) 2022-12-12 2022-12-12 Image data exchange method, device and system for multiple cameras and automobile

Country Status (1)

Country Link
CN (1) CN115623339B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100842335B1 (en) * 2007-01-26 2008-07-01 삼성전자주식회사 Cmos image sensor and method for using the cmos image sensor

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5971923A (en) * 1997-12-31 1999-10-26 Acuson Corporation Ultrasound system and method for interfacing with peripherals
CN106027860A (en) * 2016-05-20 2016-10-12 北京理工大学 Remapping circuit applied to micro-camera interface and application method
CN206472223U (en) * 2016-11-11 2017-09-05 深圳市道通智能航空技术有限公司 A kind of MIPI interfaces multi-wad join structure and unmanned plane
CN110636240A (en) * 2019-08-19 2019-12-31 南京芯驰半导体科技有限公司 Signal regulation system and method for video interface
CN114513259A (en) * 2022-04-19 2022-05-17 电子科技大学 Method and device for sampling and quantizing intensity-modulated optical signals

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Banknote image acquisition system implemented with DSP and CPLD; He Kexue et al.; 《电子技术》 (Electronic Technology), Issue 02; pp. 64-67 *

Also Published As

Publication number Publication date
CN115623339A (en) 2023-01-17

Similar Documents

Publication Publication Date Title
JP6401716B2 (en) Synchronous signal processing method and apparatus for stereoscopic display of splice screen, splice screen
CN103440117B (en) The method and system of Computer Vision
US11314457B2 (en) Data processing method for data format conversion, apparatus, device, and system, storage medium, and program product
WO2019226296A1 (en) Correlation of video stream frame timestamps based on a system clock
US6446155B1 (en) Resource bus interface
US11902706B2 (en) Method for transmitting high bandwidth camera data through SerDes links
US20200267363A1 (en) Data processing method, data sending end, data receiving end, and communication system
CN101303638A (en) Multi-USB interface high performance image adaptation apparatus as well as signal transmission method
CN112492247B (en) Video display design method based on LVDS input
CN115623339B (en) Image data exchange method, device and system for multiple cameras and automobile
CN101944006A (en) Information display technology of spliced large screen
CN111263095B (en) Split-screen display system and method based on domestic platform and storage medium
WO2023184754A1 (en) Configurable real-time disparity point cloud computing apparatus and method
CN113554721B (en) Image data format conversion method and device
CN114697512B (en) Configuration method and device
CN113487524B (en) Image format conversion method, apparatus, device, storage medium, and program product
CN205385561U (en) Tiled display systems of shielding more
WO2021101037A1 (en) System and method for dynamic selection of reference image frame
CN115967784A (en) Image transmission processing system and method based on MIPI CSI-PHY protocol
CN113434551B (en) Data processing method, device, equipment and computer storage medium
CN114727051B (en) Media resource transmission device, system and method
Lysakov et al. Implementation of FPGA algorithms for identification of image distortion due to compression
CN111586259B (en) Image simulation method, image computer and target simulator
WO2021147754A1 (en) 3d image processing method and device, and 3d display terminal
CN110751600B (en) Miniaturized camera alink digital acquisition device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant