CN117499601B - Method for calling multi-camera data for SoC

Method for calling multi-camera data for SoC

Info

Publication number
CN117499601B
Authority
CN
China
Prior art keywords
camera data
camera
channel
display
data
Legal status
Active
Application number
CN202410001817.4A
Other languages
Chinese (zh)
Other versions
CN117499601A (en)
Inventor
梁新坚
Current Assignee
Shanghai Lichi Semiconductor Co ltd
Original Assignee
Shanghai Lichi Semiconductor Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Lichi Semiconductor Co ltd filed Critical Shanghai Lichi Semiconductor Co ltd
Priority to CN202410001817.4A
Publication of CN117499601A
Application granted
Publication of CN117499601B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing


Abstract

The present application relates to a method for invoking multi-camera data for an SoC, comprising: opening a single camera device node by an application; determining, according to the layout modes of the multi-camera data on the display, the storage positions in a buffer area of the camera data corresponding to each layout mode; when the video stream is started and camera data are acquired through each camera channel, storing the camera data of each camera channel at the corresponding position of the buffer area according to the current layout mode; and after a group of camera data for every channel has been filled in, sequentially reading the camera data in the buffer area by the application program and outputting them to the display for direct display. The method occupies fewer system resources, places lower demands on storage and computing resources, shortens data processing time, and improves user experience.

Description

Method for calling multi-camera data for SoC
Technical Field
The present application relates to the field of chip technology, and more particularly, to a method for invoking multi-camera data for an SoC.
Background
In application scenarios such as a vehicle equipped with multiple cameras, there is often a need to use the data from these cameras to generate, for example, a 4-grid image, and at the same time to generate a surround-view (panoramic) image.
In the prior art, 4-grid display based on multi-camera data is typically implemented as follows: each camera corresponds to one camera node; the 4 camera channels are opened, and after the application reads the 4 channels of camera data, it sends them to a post-processing module for data fusion, where they are merged into a 4-grid view that is then sent to the 4-grid display. Likewise, 360-degree panoramic display is typically implemented by opening the 4 camera channels, sending the 4 channels of camera data to a post-processing module for data fusion after the application has obtained them, merging them into a large 360-degree panoramic image, and sending that image to the panoramic display. In both of the above display modes, whether multi-view display such as 4-grid display or panoramic display, the multiple channels of camera data must be stitched together, which requires additional storage units and computing resources.
Therefore, a technical solution that realizes multi-view display such as 4-grid display with low system resource occupation and without additional storage resources or image processing resources has yet to be found.
Disclosure of Invention
The present application is provided to solve the above-mentioned problems in the prior art.
There is a need for a method for invoking multi-camera data for an SoC that can display the multi-camera data in different layouts on a display with low system resource occupation and without additional storage resources or computing resources such as those for image stitching.
According to a first aspect of the present application, there is provided a method for invoking multi-camera data for an SoC, the method comprising: opening, by an application program that calls the multi-camera data, a single camera device node corresponding to the application program; defining a frame size enumeration structure according to display configuration information containing the various layout modes of the multi-camera data on a display, the frame size enumeration structure containing the frame sizes corresponding to the various layout modes; calculating, based on the frame size enumeration structure, the camera data storage modes corresponding to the various layout modes, the camera data storage modes comprising the base addresses and offset addresses at which each channel's camera data are stored; applying, based on the camera data storage modes corresponding to the various layout modes, for a buffer area of corresponding size for storing the camera data; when the video stream is started and camera data are acquired through each camera channel, storing the camera data of each camera channel at the corresponding position of the buffer area according to the camera data storage mode corresponding to the current layout mode; and after a group of camera data for every channel has been filled in, sequentially reading, by the application program, the camera data in the buffer area and outputting them to the display for display.
With the method for calling multi-camera data for an SoC according to the embodiments of the present application, by calculating the multi-camera data storage modes corresponding to the various layout modes of the multi-camera data on the display, each camera can store its data directly at the corresponding storage address within the buffer area allocated to the single camera device node, so that the application program can present a multi-view display in the corresponding layout mode on the display simply by reading the data in the buffer area in sequence.
The foregoing is merely an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present application may be more readily apparent, the detailed description of the present application follows.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. The accompanying drawings illustrate various embodiments by way of example in general and not by way of limitation, and together with the description and claims serve to explain the disclosed embodiments. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Such embodiments are illustrative and not intended to be exhaustive or exclusive of the present apparatus or method.
Fig. 1 shows a flow diagram of a method for invoking multi-camera data for a SoC according to an embodiment of the present application.
Fig. 2 (a) shows a schematic diagram of the camera data storage mode in a 4-camera vertical-ordering layout according to an embodiment of the present application.
Fig. 2 (b) shows a schematic diagram of the camera data storage mode in a 4-camera horizontal-ordering layout according to an embodiment of the present application.
Fig. 2 (c) shows a schematic diagram of the camera data storage mode in a 4-camera field-ordering (2 × 2 grid) layout according to an embodiment of the present application.
Fig. 3 shows a hardware configuration diagram of an SoC and its peripheral components according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the technical solutions of the present application, the present application is described in detail below with reference to the accompanying drawings and specific embodiments; these embodiments are not intended to limit the present application.
The terms "first," "second," and the like, as used herein, do not denote any order, quantity or importance, but are used to distinguish one element from another. The word "comprising," "comprises" or the like means that the element preceding the word encompasses the elements recited after the word, without excluding the possibility of also encompassing other elements. The order in which the steps of the methods described herein with reference to the accompanying drawings are performed is not intended to be limiting. As long as the logical relationship between the steps is not affected, several steps may be combined into a single step, a single step may be decomposed into multiple steps, or the execution order of the steps may be exchanged according to specific requirements.
It should also be understood that the term "and/or" in this application merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate that A exists alone, A and B exist together, or B exists alone. In this application, the character "/" generally indicates that the associated objects are in an "or" relationship.
The SoC (System on Chip) according to the embodiments of the present application may be a semiconductor device packaged as a whole, which may include a single die or multiple identical or different dies, and may also include a chip designed and manufactured using chiplet technology.
Fig. 1 shows a flow diagram of a method for invoking multi-camera data for a SoC according to an embodiment of the present application.
As shown in fig. 1, in step 101, a single camera device node created in advance may be opened by an application calling for multi-camera data.
Taking an Android operating system running on an SoC as an example: in its layered architecture, a typical Android system comprises, from bottom to top, a Kernel layer (also called the Linux kernel layer), a HAL layer (Hardware Abstraction Layer), a Framework layer (also called the application framework layer), and a User layer (also called the application layer). The HAL layer of the Android operating system is a software layer with specific coding conventions. Downwards, through hardware module interfaces, it interfaces with the Kernel layer, which contains the various hardware device drivers and interfaces such as the camera driver, display driver and network module driver, and it packages the business-logic part of the hardware driver code into HAL modules. Upwards, it interfaces with the Framework layer containing the application API framework. In this way, the upper operating system and application programs are isolated from the underlying hardware drivers and interfaces.
Take the case where an application program APP1 calls 4 camera devices camera1 to camera4. To enable APP1 to call camera1, the prior art generally creates, in the camera-specific module of the HAL layer (e.g., in the Android CameraHal), a node Rcamera1 corresponding to camera device camera1 and applies for a corresponding data buffer for it; likewise, to enable APP1 to call camera2, camera3 and camera4, nodes Rcamera2, Rcamera3 and Rcamera4 corresponding to these camera devices are also created in the Android CameraHal, each with its own data buffer. Then, when APP1 needs to call camera1's data, under the V4L2 (Video for Linux two) framework the data are obtained from top to bottom along the call chain: buffer of APP1 (User layer) -> CameraService (Framework layer) -> Rcamera1 (HAL layer) -> V4L2 interface + camera driver (Kernel layer) -> camera1. Calls by APP1 to camera2 to camera4 follow a similar flow and are not repeated here. Consequently, multiple cameras require multiple independent device nodes, each device node has an independent data buffer, and multiple processes must be started when multi-camera data are needed, so system resource occupation is high.
Unlike the above prior art, the method according to the embodiments of the present application only needs to create a single camera device node in advance in the HAL layer, so that the application program can call the camera data of multiple cameras according to steps 102-104 below.
Specifically, in step 102, the storage positions of each channel's camera data in the buffer area under each layout mode are determined according to display configuration information containing the various layout modes of the multi-camera data on the display, where the buffer area is a unified buffer area shared by the camera data of all channels. That is, unlike the prior art in which each camera corresponds to one camera device node with an independent data buffer, in the embodiments of the present application an application program can read the camera data of every channel from the same large data buffer simply by calling the single pre-created camera device node.
Next, in step 103, when the video stream is turned on and camera data are acquired through each camera channel, the camera data of each camera channel are stored at the corresponding position of the buffer area according to the current layout mode.
Then, in step 104, after the filling of the camera data of a group of each channel is completed, the application program sequentially reads the camera data in the buffer area and outputs the camera data to the display for displaying.
With the method for calling multi-camera data for an SoC according to the embodiments of the present application, the storage positions of the multi-channel camera data in a unified buffer area are first determined according to the different layout modes of the multi-camera data on the display, so that each camera can write directly into the shared unified buffer area at the storage position allocated to it. In other words, the storage positions of the multi-camera data in the unified buffer area correspond one-to-one with the way the multi-camera data are presented on the display. The application program therefore does not need to apply for multiple camera device nodes; by calling only a single camera device node and reading the camera data sequentially from the buffer area allocated to that single node, it can display the multi-camera data in the corresponding layout mode. The method according to the embodiments of the present application does not limit the number of cameras, needs only a single device node, and does not need to start multiple device call flows, so system resource occupation can be greatly reduced. In addition, after being read in sequence, the camera data can be displayed directly on the display without stitching or fusion, which reduces the demand on SoC storage and computing resources, shortens data processing time, improves multi-camera data display performance, and improves user experience.
Determining the storage positions of each channel's camera data in the buffer area under the various layout modes according to display configuration information containing the various layout modes of the multi-camera data on the display specifically comprises: defining a frame size enumeration structure according to the display configuration information, the frame size enumeration structure containing the frame sizes corresponding to the various layout modes; and calculating, based on the frame size enumeration structure, the base addresses and offset addresses in the buffer area of the multi-camera data corresponding to the various layout modes.
In some embodiments, the display configuration information further includes the single-camera data size and the number of cameras, so that a buffer area of corresponding size can be configured for storing the camera data based on the single-camera data size and the number of cameras. For example, if the single-camera data size is 1280 × 720, the buffer size required for 4 cameras is 4 × 1280 × 720 × f bytes (where f is the number of bytes per pixel of the video format).
In V4L2, the frame sizes available for a given pixel format are enumerated via a frame size enumeration structure. The existing V4L2 frame size enumeration structure includes a frame size index, a pixel format, and so on. Embodiments of the present application add to this structure a field defining the layout mode of the multi-camera data on the display. The newly defined frame size enumeration structure is as follows:
struct v4l2_frmsizeenum {
	__u32 index;          /* frame size index */
	__u32 pixel_format;   /* pixel format */
	__u32 type;           /* frame size type supported by the device */
	union {               /* frame size */
		struct v4l2_frmsize_discrete discrete;
		struct v4l2_frmsize_stepwise stepwise;
	};
	__u32 sort;           /* layout of the multi-camera data on the display */
	__u32 reserved[2];    /* reserved for future use */
};
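By way of illustration only, and not as code from the patent, the following minimal user-space sketch shows how an application might open the single camera device node and enumerate the per-layout frame sizes through the standard VIDIOC_ENUM_FRAMESIZES ioctl; the node path /dev/video0 and the GREY pixel format are assumptions for the example.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	/* Assumed path of the single camera device node registered by the driver. */
	int fd = open("/dev/video0", O_RDWR);
	if (fd < 0) { perror("open"); return 1; }

	struct v4l2_frmsizeenum fsz;
	memset(&fsz, 0, sizeof(fsz));
	fsz.pixel_format = V4L2_PIX_FMT_GREY;

	/* Under the scheme described above, each enumerated discrete frame size
	 * corresponds to one layout mode (vertical, horizontal, field). */
	for (fsz.index = 0; ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &fsz) == 0; fsz.index++) {
		if (fsz.type == V4L2_FRMSIZE_TYPE_DISCRETE)
			printf("layout %u: %u x %u\n", fsz.index,
			       fsz.discrete.width, fsz.discrete.height);
	}

	close(fd);
	return 0;
}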
In some embodiments, the layout modes include at least one of vertical ordering, horizontal ordering, and multi-row multi-column ordering. Taking 4 cameras as an example, the layout modes include at least three types: vertical ordering in which the 4 camera images are arranged in one column, horizontal ordering in which they are arranged in one row, and field ("田") ordering in which they are arranged as a 2-row by 2-column four-grid. In other embodiments, the number of cameras may be smaller or larger, and the specific layout modes supported depend on the display configuration of the display, which is not described further here.
The specific calculation of the frame sizes corresponding to the different layout modes on the display is described in detail below, taking 4 cameras as an example.
Fig. 2 (a), 2 (b) and 2 (c) respectively show schematic diagrams of the camera data storage modes of the 4 cameras in the three layout modes of vertical ordering, horizontal ordering and field ordering, according to an embodiment of the present application.
In some embodiments, the display configuration information may include, for example, the single-camera data size and the number of cameras, and the frame sizes corresponding to the various layout modes are determined jointly from the single-camera data size, the number of cameras, and the layout mode.
Taking the video format V4L2_PIX_FMT_GREY as an example, when the 4 channels of camera data are laid out in the vertical ordering of fig. 2 (a), the base address (base, i.e., the start address) and the offset address (stride, i.e., the spacing between successive rows) of each camera channel can be calculated by the following equations (1) and (2):

base = x + n × w × h × f    (1)
stride = w × f    (2)

where x is the base address of the whole buffer area, w is the number of pixels of a single camera channel in the horizontal direction, h is the number of pixels of a single camera channel in the vertical direction, f is the number of bytes per pixel of the video format (for the V4L2_PIX_FMT_GREY format of this embodiment, f = 1; for V4L2_PIX_FMT_YUYV, f = 2), and n is the camera channel number (with 4 camera channels, n ranges over 0-3, corresponding to channel 0, channel 1, channel 2 and channel 3, respectively).
According to equations (1) and (2), when the output size of a single camera is 1280 × 720, that is, w = 1280, h = 720 and f = 1, and assuming the base address x of the whole buffer area is 0, the base addresses of the camera channels in the vertical-ordering layout shown in fig. 2 (a) are: channel 0 base (ch0 base) = 0, channel 1 base (ch1 base) = 1 × 1280 × 720 = 921600, channel 2 base (ch2 base) = 2 × 1280 × 720 = 1843200, channel 3 base (ch3 base) = 3 × 1280 × 720 = 2764800, and the offset address stride = 1280.
Thus, with 4 camera channels and a vertical-ordering layout on the display, the overall frame size is 1280 × 2880.
Still taking the video format V4L2_PIX_FMT_GREY as an example, when the 4 channels of camera data are laid out in the horizontal ordering of fig. 2 (b), the base address base and offset address stride of each camera channel can be calculated by the following equations (3) and (4):

base = x + n × w × f    (3)
stride = 4 × w × f    (4)

where x is the base address of the whole buffer area, w is the number of pixels of a single camera channel in the horizontal direction, f is the number of bytes per pixel of the video format (f = 1 for V4L2_PIX_FMT_GREY; f = 2 for V4L2_PIX_FMT_YUYV), and n is the camera channel number (0-3 for 4 camera channels, corresponding to channels 0, 1, 2 and 3, respectively).
According to equations (3) and (4), when the output size of a single camera is 1280 × 720, that is, w = 1280, h = 720 and f = 1, and assuming the base address x of the whole buffer area is 0, the base addresses of the camera channels shown in fig. 2 (b) are: ch0 base = 0, ch1 base = 1280, ch2 base = 2560, ch3 base = 3840, and the offset address stride = 4 × 1280 = 5120.
Thus, with 4 camera channels and a horizontal-ordering layout on the display, the overall frame size is 5120 × 720.
In other embodiments, when the 4 channels of camera data are laid out in the field ordering of fig. 2 (c), the base address base and offset address stride of each camera channel can be calculated by the following equations (5) and (6):

base = x + (n % 2) × w × f + ⌊n / 2⌋ × 2 × w × h × f    (5)
stride = 2 × w × f    (6)

where x is the base address of the whole buffer area, w is the number of pixels of a single camera channel in the horizontal direction, h is the number of pixels in the vertical direction, f is the number of bytes per pixel of the video format (f = 1 for V4L2_PIX_FMT_GREY; f = 2 for V4L2_PIX_FMT_YUYV), and n is the camera channel number (0-3 for 4 camera channels); (n % 2) denotes the remainder of n divided by 2, which is 0 when n is divisible by 2 and 1 otherwise, and ⌊n / 2⌋ denotes n / 2 rounded down.
According to equations (5) and (6), when the output size of a single camera is 1280 × 720, that is, w = 1280, h = 720 and f = 1, and assuming the base address x of the whole buffer area is 0, the base addresses of the camera channels shown in fig. 2 (c) are: ch0 base = 0, ch1 base = 1280, ch2 base = 2 × 1280 × 720 = 1843200, ch3 base = 1843200 + 1280 = 1844480, and the offset address stride = 2 × 1280 = 2560.
Thus, with 4 camera channels and a field-ordering layout on the display, the overall frame size is 2560 × 1440.
Thus, when the output size of a single camera is 1280 × 720, the frame size enumeration structure returns 3 frame sizes corresponding to the 3 layout modes, namely: vertical ordering 1280 × 2880, horizontal ordering 5120 × 720, and field ordering 2560 × 1440.
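To make equations (1) to (6) concrete, the following small illustrative helper (the names enum layout and channel_addr are invented for this sketch; it is not code from the patent) computes base and stride for each layout mode, hard-coding the 4-camera arrangement of the example above:

#include <stdint.h>
#include <stdio.h>

enum layout { VERTICAL, HORIZONTAL, FIELD };

/* Compute the base address and stride of camera channel n per equations (1)-(6):
 * x is the buffer base address, w/h the per-camera width/height in pixels,
 * f the bytes per pixel (1 for GREY, 2 for YUYV). */
static void channel_addr(enum layout l, uint32_t x, uint32_t w, uint32_t h,
                         uint32_t f, uint32_t n,
                         uint32_t *base, uint32_t *stride)
{
	switch (l) {
	case VERTICAL:   /* equations (1)/(2): 4 images stacked in one column */
		*base = x + n * w * h * f;
		*stride = w * f;
		break;
	case HORIZONTAL: /* equations (3)/(4): 4 images side by side in one row */
		*base = x + n * w * f;
		*stride = 4 * w * f;
		break;
	case FIELD:      /* equations (5)/(6): 2-row by 2-column grid */
		*base = x + (n % 2) * w * f + (n / 2) * 2 * w * h * f;
		*stride = 2 * w * f;
		break;
	}
}

int main(void)
{
	for (uint32_t n = 0; n < 4; n++) {
		uint32_t base, stride;
		channel_addr(FIELD, 0, 1280, 720, 1, n, &base, &stride);
		printf("ch%u base=%u stride=%u\n", n, base, stride);
	}
	return 0;
}

Running it for the field ordering reproduces the values above: ch0 base = 0, ch1 base = 1280, ch2 base = 1843200, ch3 base = 1844480, stride = 2560.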
It should be noted that the video format in the embodiments of the present application is not limited to V4L2_PIX_FMT_GREY or V4L2_PIX_FMT_YUYV; other formats such as V4L2_PIX_FMT_NV12 may also be used, in which case the base and stride values of each camera channel can be calculated according to the data storage layout of the actual video format, which is not described further here.
After the base address base and offset address stride of each camera channel under each layout mode have been calculated, a buffer area for storing the camera data of all camera channels can be applied for. In some embodiments, where different layout modes require different amounts of storage, the buffer size may be determined from the layout mode requiring the most storage among the various layout modes, so as to ensure that enough buffer space is available when switching between layout modes without having to re-apply for a buffer.
In some embodiments, besides multi-view display, it may be necessary to display a 360-degree panorama based on the multi-camera data, or other views generated by fusing at least some of the channels' camera data. In that case, still based on the camera data read sequentially from the buffer area by the application program, the camera data of each camera channel to be used can be parsed out according to the current layout mode, and then stitched, fused, or otherwise processed into the 360-degree panorama or other view to be displayed on the corresponding display. Hence, when independent multi-channel views and a panoramic view must be displayed at the same time, image processing is needed only for the panoramic view, which greatly reduces the demand on computing resources.
In the following, taking a vehicle-mounted display system with 4 camera channels as an example, the SoC hardware structure and display flow of the present application are illustrated with reference to fig. 3. Fig. 3 shows a hardware configuration diagram of an SoC and its peripheral components according to an embodiment of the present application.
The vehicle-mounted display system shown in fig. 3 has 4 camera channels, namely camera channel 0 to camera channel 3, connected one-to-one to serializer 0 to serializer 3. The serializers work together with the deserializer to convert the RGB or YUV output of each camera into an RGB data format acceptable to a standard display. The deserializer and the SoC exchange control and data through a MIPI-CSI interface, a camera-dedicated interface conforming to the MIPI CSI-2 protocol (a sub-protocol of the MIPI Alliance protocols) and provided with a MIPI-CSI clock lane and data lanes 0 to 3 (dlane0-dlane3). In addition, a bus control interface such as I2C clock (I2C clk)/I2C data (I2C data) is provided between the SoC and the deserializer, I2C being a bidirectional two-wire synchronous serial bus. The SoC exchanges control and data with the display through a MIPI-DSI interface, which follows the MIPI DSI protocol (a sub-protocol of the MIPI Alliance protocols that defines a high-speed serial interface between a processor and a display module). In some embodiments, the display may be an LCD display or any other suitable display, which the present application does not limit. Moreover, the number of camera channels the system can support is not limited to the 4 channels shown in fig. 3; it may be 8, 16 or more, which the present application does not limit either.
In connection with the hardware architecture shown in fig. 3, an exemplary flow for displaying multi-camera data on the display is as follows. First, the camera low-level driver registers a V4L2 camera device node in the HAL layer; then the upper-layer application opens the device node and enumerates the frame sizes based on the single-camera data size, the number of camera channels, and the various layout modes, and the driver returns the frame size parameters corresponding to the different layout modes (3 frame size parameters in this embodiment).
Next, the MIPI-CSI interface calculates the base address base and offset address stride of the camera channels under the various layout modes from the frame size parameters, and stores each camera channel's base and stride for the current layout mode in CSI internal registers, for later use in computing the cameras' data storage addresses.
Further, the driver calculates the size of the buffer area required for buffering the image frames and opens up a frame buffer of corresponding size; the video stream is then started, with capture controlled through the MIPI-CSI interface and display through the MIPI-DSI interface.
After camera channels 0 to 3 acquire image data, the data undergo the corresponding format conversion in the serializers and deserializer and are sent to the SoC through the MIPI-CSI interface. Then, according to the display configuration information of the MIPI-CSI interface, the layout mode of the current display interface (vertical ordering, horizontal ordering, field ordering, etc.) is obtained, and each channel's acquired camera data are stored directly at the corresponding position of the allocated frame buffer using that channel's own base address base and stride held in the registers. Furthermore, when the layout mode of the display changes, the storage arrangement of the camera data in the frame buffer can be changed accordingly by modifying the base and stride in the CSI internal registers.
Once the CSI has received a complete group of camera data in the current layout mode, for example after the images of all 4 camera channels in this application have been filled in, the frame buffer can be pushed to the application program, and the application program can display the multi-channel camera data in the preset layout mode simply by reading the camera data in the buffer area in sequence and outputting them to the display. Thus, with the method of the embodiments of the present application, when only the independent multi-camera views need to be displayed, no additional storage, stitching or fusion of the acquired camera data is required, which greatly saves SoC storage and computing resources while shortening image processing time, so display performance and user experience are improved.
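For illustration, the following minimal sketch assumes the composed frame is exposed to the application through the standard V4L2 mmap streaming interface on the single device node; run_capture and show_on_display are invented names, and show_on_display merely stands in for the MIPI-DSI display path.

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

/* Hypothetical stand-in for handing a composed frame to the display path. */
static void show_on_display(const void *frame, size_t len)
{
	(void)frame;
	fprintf(stderr, "composed frame of %zu bytes ready\n", len);
}

static int run_capture(int fd)
{
	struct v4l2_requestbuffers req;
	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_MMAP;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) return -1;
	if (req.count > 4) req.count = 4;

	void *mem[4];
	for (unsigned i = 0; i < req.count; i++) {
		struct v4l2_buffer buf;
		memset(&buf, 0, sizeof(buf));
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_MMAP;
		buf.index = i;
		if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) return -1;
		mem[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
		              MAP_SHARED, fd, buf.m.offset);
		if (mem[i] == MAP_FAILED) return -1;
		if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) return -1;
	}

	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) return -1;

	for (;;) {
		struct v4l2_buffer buf;
		memset(&buf, 0, sizeof(buf));
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_MMAP;
		/* Blocks until one composed frame (all channels filled) is ready. */
		if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) break;
		show_on_display(mem[buf.index], buf.bytesused);
		if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) break;
	}

	ioctl(fd, VIDIOC_STREAMOFF, &type);
	return 0;
}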
If a panoramic image such as an AVM (Around View Monitoring) image needs to be displayed, the image data of the relevant camera channels are read from the frame buffer in combination with the display configuration information of the MIPI-CSI interface, the required image is then generated through image fusion and other processing, and the fused image is output to the corresponding display.
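As a sketch of the parsing step (an illustrative helper, not code from the patent), each channel's sub-image can be copied out of the composed frame buffer using that channel's base and stride from equations (1) to (6):

#include <stdint.h>
#include <string.h>

/* Copy one camera channel out of the composed frame buffer fb into a tightly
 * packed w*h*f image, given that channel's base address and stride. */
static void extract_channel(const uint8_t *fb, uint8_t *out,
                            uint32_t base, uint32_t stride,
                            uint32_t w, uint32_t h, uint32_t f)
{
	for (uint32_t row = 0; row < h; row++)
		memcpy(out + (size_t)row * w * f,
		       fb + base + (size_t)row * stride,
		       (size_t)w * f);
}

For example, with the field-ordering values above, extract_channel(fb, out, 1843200, 2560, 1280, 720, 1) recovers channel 2's 1280 × 720 image before fusion.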
In the various embodiments of the present application, the sequence numbers of the steps or processes do not imply an order of execution; the order in which the steps are executed should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Furthermore, although exemplary embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations based on the present application. The elements in the claims are to be construed broadly based on the language employed in the claims and are not limited to the examples described in this specification or during the prosecution of the application, which examples are to be construed as non-exclusive. Accordingly, the specification and examples are intended to be regarded as illustrative only; variations or alternatives within the scope of the present disclosure will be apparent to those skilled in the art, and the true scope and spirit are indicated by the following claims together with their full scope of equivalents.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other, and other embodiments may be devised by those of ordinary skill in the art upon reading the above description. In addition, in the detailed description above, various features may be grouped together to streamline the application. This is not to be interpreted as intending that an unclaimed disclosed feature is essential to any claim; rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with one another in various combinations or permutations. The scope of the application should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (6)

1. A method for invoking multi-camera data for a SoC, comprising:
opening, by an application program that calls the multi-camera data, a single camera device node created in advance;
determining storage positions of each channel's camera data in a buffer area according to display configuration information containing various layout modes of the multi-camera data on a display, wherein the buffer area is a unified buffer area shared by the camera data of all channels;
when the video stream is started and camera data are acquired through each camera channel, storing the camera data of each camera channel at the corresponding position of the buffer area according to the current layout mode; and
after a group of camera data for every channel has been filled in, sequentially reading, by the application program, the camera data in the buffer area and outputting them to the display for display.
2. The method according to claim 1, wherein determining the storage positions of each channel's camera data in the buffer area under each layout mode according to the display configuration information containing the various layout modes of the multi-camera data on the display specifically comprises:
defining a frame size enumeration structure according to the display configuration information, wherein the frame size enumeration structure contains the frame sizes corresponding to the various layout modes; and
calculating, based on the frame size enumeration structure, the base addresses and offset addresses in the buffer area of the multi-camera data corresponding to the various layout modes.
3. The method of claim 2, wherein the display configuration information further includes a single-camera data size and a number of cameras, the method further comprising:
determining the frame sizes corresponding to the various layout modes jointly from the single-camera data size, the number of cameras and the layout modes.
4. The method of claim 1 or 2, wherein the display configuration information further includes a single-camera data size and a number of cameras, the method further comprising:
configuring a buffer area of corresponding size for storing the camera data based on the single-camera data size and the number of cameras.
5. The method according to claim 1 or 2, wherein the layout modes include at least one of vertical ordering, horizontal ordering, and multi-row multi-column ordering.
6. The method according to claim 1 or 2, characterized in that the method further comprises:
when a 360-degree panoramic view needs to be displayed, parsing out the camera data of each camera channel based on the camera data sequentially read from the buffer area by the application program and the current layout mode, and fusing the camera data of the camera channels to obtain the 360-degree panoramic view to be displayed.
CN202410001817.4A 2024-01-02 2024-01-02 Method for calling multi-camera data for SoC Active CN117499601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410001817.4A CN117499601B (en) 2024-01-02 2024-01-02 Method for calling multi-camera data for SoC

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410001817.4A CN117499601B (en) 2024-01-02 2024-01-02 Method for calling multi-camera data for SoC

Publications (2)

Publication Number Publication Date
CN117499601A (en) 2024-02-02
CN117499601B (en) 2024-04-05

Family

ID=89683346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410001817.4A Active CN117499601B (en) 2024-01-02 2024-01-02 Method for calling multi-camera data for SoC

Country Status (1)

Country Link
CN (1) CN117499601B (en)


Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0949818A3 (en) * 1998-04-07 2000-10-25 Matsushita Electric Industrial Co., Ltd. On-vehicle image display apparatus, image transmission system, image transmission apparatus, and image capture apparatus
JP4971594B2 (en) * 2004-03-31 2012-07-11 キヤノン株式会社 Program and display control device
JP4561353B2 (en) * 2004-12-24 2010-10-13 日産自動車株式会社 Video signal processing apparatus and method, and in-vehicle camera system
US7782374B2 (en) * 2005-03-03 2010-08-24 Nissan Motor Co., Ltd. Processor and processing method for generating a panoramic image for a vehicle
US8564644B2 (en) * 2008-01-18 2013-10-22 Sony Corporation Method and apparatus for displaying and editing 3D imagery
KR101127962B1 (en) * 2008-12-22 2012-03-26 한국전자통신연구원 Apparatus for image processing and method for managing frame memory in image processing
US8952973B2 (en) * 2012-07-11 2015-02-10 Samsung Electronics Co., Ltd. Image signal processor and method of operating the same
US20140333779A1 (en) * 2013-05-13 2014-11-13 Electronics And Telecommunications Research Institute Apparatus for distributing bus traffic of multiple camera inputs of automotive system on chip and automotive system on chip using the same
KR102079918B1 (en) * 2013-12-26 2020-04-07 한화테크윈 주식회사 System and method for controlling video wall
JP6412337B2 (en) * 2014-05-08 2018-10-24 キヤノン株式会社 Management device, management method, and program
US10719286B2 (en) * 2018-03-29 2020-07-21 Microsoft Technology Licensing, Llc Mechanism to present in an atomic manner a single buffer that covers multiple displays

Patent Citations (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4691238A (en) * 1982-10-21 1987-09-01 Dainippon Screen Mfg. Co., Ltd. Method and apparatus of storing image data into a memory in a layout scanner system
US6125432A (en) * 1994-10-21 2000-09-26 Mitsubishi Denki Kabushiki Kaisha Image process apparatus having a storage device with a plurality of banks storing pixel data, and capable of precharging one bank while writing to another bank
JP2000165820A (en) * 1998-11-26 2000-06-16 Dainippon Printing Co Ltd Image recording and reproducing method and device
US6593960B1 (en) * 1999-08-18 2003-07-15 Matsushita Electric Industrial Co., Ltd. Multi-functional on-vehicle camera system and image display method for the same
CN1413414A (en) * 1999-12-24 2003-04-23 三洋电机株式会社 Digital camera, memory control device usable for it, image processing device and method
JP2004246187A (en) * 2003-02-14 2004-09-02 Mitsubishi Electric Corp Content display device
JP2005080077A (en) * 2003-09-02 2005-03-24 Mitsubishi Electric Corp Image reproducing device and frame memory control method of image reproducing device
JP2006065903A (en) * 2004-08-24 2006-03-09 Sharp Corp Storage reproducing device and image pickup device
JP2006277738A (en) * 2005-03-03 2006-10-12 Nissan Motor Co Ltd On-vehicle image processor and image processing method for vehicle
JP2006345489A (en) * 2005-05-10 2006-12-21 Canon Inc Image reproduction apparatus and image reproduction method
JP2007074136A (en) * 2005-09-05 2007-03-22 Seiko Epson Corp Layout editing device, method, and program, and server
JP2007336137A (en) * 2006-06-14 2007-12-27 Mitsubishi Electric Corp Multi-channel image transfer system
JP2008054061A (en) * 2006-08-24 2008-03-06 Ikegami Tsushinki Co Ltd Network camera repeater and camera monitoring system
CN101241420A (en) * 2008-03-20 2008-08-13 杭州华三通信技术有限公司 Method and memory apparatus for promoting write address incontinuous data storage efficiency
JP2011015353A (en) * 2009-07-06 2011-01-20 Toshiba Alpine Automotive Technology Corp Image display device for vehicle
KR101082845B1 (en) * 2011-07-05 2011-11-11 (주)씨앤에스아이 Image providing system for smart phone using ip camera
CN103782263A (en) * 2011-09-13 2014-05-07 索尼电脑娱乐公司 Information processing device, information processing method, content file data structure, GUI placement simulator, and GUI placement setting assistance method
CN104851076A (en) * 2015-05-27 2015-08-19 武汉理工大学 Panoramic 360-degree-view parking auxiliary system for commercial vehicle and pick-up head installation method
CN107396068A (en) * 2017-08-30 2017-11-24 广州杰赛科技股份有限公司 The synchronous tiled system of panoramic video, method and panoramic video display device
WO2020207403A1 (en) * 2019-04-10 2020-10-15 杭州海康威视数字技术股份有限公司 Image acquisition method and device
CN111818295A (en) * 2019-04-10 2020-10-23 杭州海康威视数字技术股份有限公司 Image acquisition method and device
CN111078168A (en) * 2019-11-13 2020-04-28 联想(北京)有限公司 Information processing method, first electronic equipment and storage medium
CN110944107A (en) * 2019-12-29 2020-03-31 徐书诚 Computer system for realizing single-screen restoration display of all-round network camera set
CN111107323A (en) * 2019-12-30 2020-05-05 徐书诚 Computer system for realizing panoramic image single-screen circular view window display
CN113556497A (en) * 2020-04-26 2021-10-26 北京君正集成电路股份有限公司 Method for transmitting multi-camera data
CN215474810U (en) * 2021-02-24 2022-01-11 三一汽车起重机械有限公司 Engineering vehicle panoramic display system and engineering vehicle
KR102239848B1 (en) * 2021-03-17 2021-04-13 (주)지비유 데이터링크스 System for providing and storing cctv video based on user customized video layout for securing installation space
KR102239850B1 (en) * 2021-03-17 2021-04-13 (주)지비유 데이터링크스 System for providing and storing cctv video based on recommended video layout considering the characteristics of monitoring target for saving electricity energy
CN113141487A (en) * 2021-04-13 2021-07-20 合肥宏晶微电子科技股份有限公司 Video transmission module, method, display device and electronic equipment
CN113033468A (en) * 2021-04-13 2021-06-25 中国计量大学 Specific person re-identification method based on multi-source image information
CN215793603U (en) * 2021-06-25 2022-02-11 深圳道可视科技有限公司 Wide dynamic panoramic driving auxiliary system
CN113891039A (en) * 2021-09-17 2022-01-04 长春一汽富晟集团有限公司 Image acquisition and processing system and method for vehicle-mounted all-round viewing system
WO2023093438A1 (en) * 2021-11-25 2023-06-01 Oppo广东移动通信有限公司 Image display method and apparatus, and electronic device and computer-readable storage medium
CN114598843A (en) * 2022-02-09 2022-06-07 上海赫千电子科技有限公司 Image processing system and method applied to multi-path cameras of large automobile
CN115988341A (en) * 2022-12-15 2023-04-18 杭州海康威视数字技术股份有限公司 Camera and image processing method based on Android system
CN117112083A (en) * 2023-10-23 2023-11-24 南京芯驰半导体科技有限公司 Method for calling camera data for multi-hardware-domain SoC and multi-hardware-domain SoC

Also Published As

Publication number Publication date
CN117499601A (en) 2024-02-02

Similar Documents

Publication Publication Date Title
US7868890B2 (en) Display processor for a wireless device
US7456804B2 (en) Display control apparatus and display control method
US20220030193A1 (en) Method and device of transmitting video signal, method and device of receiving video signal, and display device
US20140064637A1 (en) System for processing a digital image using two or more defined regions
CN113628304B (en) Image processing method, image processing device, electronic equipment and storage medium
US8511829B2 (en) Image processing apparatus, projection display apparatus, video display system, image processing method, and computer readable storage medium
US20140333838A1 (en) Image processing method
CN103618869B (en) Many picture video joining methods and device
CN113946301B (en) Tiled display system and image processing method thereof
WO2012113923A1 (en) Display list mechanism and scalable display engine structures
CN117112083B (en) Method for calling camera data for multi-hardware-domain SoC and multi-hardware-domain SoC
US9047846B2 (en) Screen synthesising device and screen synthesising method
CN117499601B (en) Method for calling multi-camera data for SoC
CN108540689B (en) Image signal processor, application processor and mobile device
CN116248956B (en) Method and device for flexibly optimizing bandwidth and superposing multiple OSD videos
US7502075B1 (en) Video processing subsystem architecture
CN106534839A (en) High-definition camera video processing system and method
US20220253183A1 (en) Display device and display method thereof
CN112367557B (en) Display method of LED television wall, television and computer readable storage medium
CN114554126B (en) Baseboard management control chip, video data transmission method and server
CN113573098A (en) Image transmission method and device and electronic equipment
US8488897B2 (en) Method and device for image filtering
KR20040082601A (en) Memory access control apparatus
CN108243293B (en) Image display method and system based on virtual reality equipment
KR20000073709A (en) Data processing apparatus and method usable software/hardware compounded method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant