CN114339070A - Multi-channel image processing and display interaction system based on MPSoC


Info

Publication number
CN114339070A
Authority
CN
China
Prior art keywords
module
video
decoding
video data
mpsoc
Prior art date
Legal status
Granted
Application number
CN202111629288.5A
Other languages
Chinese (zh)
Other versions
CN114339070B (en)
Inventor
刘鹏飞
杨炳伟
李连桂
张锋
吴佳彬
陈天
Current Assignee
Suzhou Changfeng Aviation Electronics Co Ltd
Original Assignee
Suzhou Changfeng Aviation Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Changfeng Aviation Electronics Co Ltd
Priority to CN202111629288.5A
Publication of CN114339070A
Application granted
Publication of CN114339070B
Legal status: Active

Abstract

The application provides an MPSoC-based multi-channel image processing and display interaction system, which comprises: a video acquisition module that converts different video sources into a unified video data format, each with its corresponding processing mode, to obtain first video data; a video scaling and splicing module, connected to the video acquisition module, that scales and splices the first video data to obtain second video data; a display interaction module, connected to the video scaling and splicing module, that displays the second video data and controls each module; a video processing module, connected to the video acquisition module and the display interaction module, that compresses and encapsulates the first video data under the control of the display interaction module to obtain third video data; and a video output module, connected to the video processing module and the display interaction module, that outputs the third video data under the control of the display interaction module. With this scheme, real-time display, encoded storage and transmission of multiple different video input sources are achieved.

Description

Multi-channel image processing and display interaction system based on MPSoC
Technical Field
The application relates to the technical field of image processing and display, and in particular to an MPSoC-based multi-channel image processing and display interaction system.
Background
With the development of video applications, application scenarios that require acquisition and encoded output from a variety of video input sources are increasing. In some important fields, such as airborne display, there is often a diverse need for real-time display, encoded storage and transmission of multiple different video input sources.
At present, common solutions include an FPGA + HiSilicon Hi3559 scheme, an FPGA + DSP scheme and the like. The FPGA + Hi3559 scheme has limited GPU performance and its desktop system is difficult to port; the FPGA + DSP scheme lacks an interactive system and is difficult to develop. Consequently, the requirements for real-time display, encoded storage and transmission of multiple different video input sources in fields such as airborne display remain only partly addressed.
Disclosure of Invention
In view of this, the embodiments of the present application provide an MPSoC-based multi-channel image processing and display interaction system, intended to meet the requirements of real-time display, encoded storage and transmission of multiple different video input sources in related fields such as airborne display.
An embodiment of the application provides an MPSoC-based multi-channel image processing and display interaction system, which comprises:
a video acquisition module, connected to the video sources, for converting different video sources into a unified video data format, each with its corresponding processing mode, to obtain first video data;
a video scaling and splicing module, whose input is connected to the video acquisition module, for scaling and splicing the first video data to obtain second video data;
a display interaction module, whose input is connected to the output of the video scaling and splicing module, for displaying the second video data and controlling each module connected to it;
a video processing module, whose inputs are connected to the output of the video acquisition module and the output of the display interaction module, for compressing and encapsulating the first video data under the control of the display interaction module to obtain third video data; and
a video output module, whose inputs are connected to the output of the video processing module and the output of the display interaction module, for outputting the third video data under the control of the display interaction module.
According to a specific implementation of the embodiment, the video acquisition module comprises a combination of one or more of a BT1120 decoding module, an HDMI decoding module, a Mipi decoding module, a BT656 decoding module, an SDI decoding module and a network stream decoding module, wherein
the BT1120 decoding module decodes an input BT1120 video signal and separates, in the FPGA, the line, field and enable signals required during video transmission;
the HDMI decoding module parses the HDMI video protocol;
the Mipi decoding module parses MIPI video;
the BT656 decoding module decodes an input BT656 video signal and separates, in the FPGA, the line, field and enable signals required during video transmission;
the SDI decoding module decodes an input SDI video signal;
the network stream decoding module decodes an input network video stream.
According to a specific implementation manner of the embodiment of the application, the BT656 decoding module adopts a TW9984 chip to implement PAL decoding, and generates a BT656 video signal.
According to a specific implementation of the embodiment, the output of the BT656 decoding module is connected to a Deinterlace module, which de-interlaces the PAL video, converting interlaced video to progressive video.
According to a specific implementation of the embodiment, the system is further provided with a VPSS module connected to the output of the video acquisition module; the VPSS module preprocesses the input image, the preprocessing comprising demosaicing, Gamma calibration and white balance.
According to a specific implementation manner of the embodiment of the application, the Display interaction module is provided with three output modes including a Display mode, a Record mode and a Stream mode,
the Display mode is used for selecting a displayed video and a Display arrangement mode;
the Record mode is used for setting compression parameters, packaging modes and storage parameters;
the Stream mode is used for setting the compression parameters, the encapsulation mode and the streaming media parameters.
According to a specific implementation of the embodiment, the compression parameters comprise the encoding mode, the bitrate control mode, the output bitrate, the number of B frames and the GOP length; the encapsulation formats comprise ts, mkv and mp4; the storage parameters comprise the choice of storage medium, the storage duration, the choice of storage-medium file system and formatting operations.
According to a specific implementation of the embodiment, each output mode, while running, displays the current video source, resolution, raw data format, compression bitrate, compression mode, IP information, port number and system resource utilization.
According to a specific implementation manner of the embodiment of the application, the video processing module includes an encoding module, a packaging module and a storage module, the encoding module is used for encoding and compressing the first video data, the packaging module is used for packaging the compressed first video data, and the storage module is used for caching the video data.
According to a specific implementation of the embodiment, the video output module comprises a video signal transmission module, a PCIE module, a Uart module and a DP module; the video signal transmission module transmits the third video data, the PCIE module connects to an NVMe solid-state drive (SSD), the Uart module is used to debug the whole system, and the DP module displays the video signal.
Advantageous effects
In the MPSoC-based multi-channel image processing and display interaction system, the multi-channel image processing, display and interaction functions are implemented on an MPSoC chip, combining a GPU-based display interaction function with PL-side scaling and splicing. Output modes, input sources and display layouts can all be switched, thereby meeting the requirements of real-time display, encoded storage and transmission of multiple different video input sources in related fields such as airborne display.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of an MPSoC-based multi-channel image processing and display interaction system according to an embodiment of the present invention;
Fig. 2 is a hardware block diagram of the MPSoC-based multi-channel image processing and display interaction system according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the display modes of the MPSoC-based multi-channel image processing and display interaction system according to an embodiment of the present invention.
Detailed Description
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present application, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present application, and the drawings only show the components related to the present application rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the application provides an MPSoC-based multi-channel image processing and display interaction system. The system is designed around an MPSoC chip, and each module in the system is described in detail below with reference to Figs. 1 to 3.
Referring to Fig. 1, the MPSoC-based multi-channel image processing and display interaction system comprises the following modules: a video acquisition module, a video scaling and splicing module, a display interaction module, a video processing module and a video output module.
The input of the video acquisition module is connected to the video sources and its output is connected to the input of the video scaling and splicing module. The video acquisition module converts different video sources into a unified video data format, each with its corresponding processing mode, to obtain first video data; the unified video data format is, for example, NV16 or NV12.
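As a quick orientation for the unified formats mentioned above, the following C sketch (illustrative only, not code from the patent) computes per-frame buffer sizes for NV12 (4:2:0 semi-planar) and NV16 (4:2:2 semi-planar):

```c
#include <stdio.h>

/* Illustrative helper (not from the patent): per-frame buffer sizes for the
 * unified capture formats named in the text.
 * NV12 = 4:2:0 semi-planar: Y plane w*h + interleaved UV plane w*h/2.
 * NV16 = 4:2:2 semi-planar: Y plane w*h + interleaved UV plane w*h.      */
static unsigned long nv12_frame_bytes(unsigned long w, unsigned long h) { return w * h + w * h / 2; }
static unsigned long nv16_frame_bytes(unsigned long w, unsigned long h) { return w * h * 2; }

int main(void)
{
    printf("1080p NV12 frame: %lu bytes\n", nv12_frame_bytes(1920, 1080)); /* 3110400 */
    printf("1080p NV16 frame: %lu bytes\n", nv16_frame_bytes(1920, 1080)); /* 4147200 */
    printf("PAL   NV16 frame: %lu bytes\n", nv16_frame_bytes(720, 576));   /* 829440  */
    return 0;
}
```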
The input of the video scaling and splicing module is connected to the video acquisition module. It scales and splices the first video data to obtain second video data, which is transmitted to the display interaction module for output.
The input of the display interaction module is connected to the output of the video scaling and splicing module. It receives and displays the second video data, controls each module connected to it, and serves as the master console for video acquisition, scaling, splicing, compression, encapsulation, storage and network transmission.
The display interaction module is implemented on the GPU. Since the GPU displays video at a fixed resolution (1080P is taken as an example), tiling several 1080P or 720 × 576 videos on the same display requires scaling and splicing them accordingly. The commonly used tiled layouts are shown in Fig. 3; there are four display modes: 2x1, 2x2, 3x2 and 4x2. In each display mode the resolution of every video tile is fixed, so the video scaling and splicing module must scale each selected video to the resolution of its target tile and splice the selected videos into one video according to the display mode.
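To make the scaling-and-splicing step concrete, the following C sketch computes the target rectangle of each tile for the 2x1/2x2/3x2/4x2 layouts on a fixed 1920x1080 canvas; the helper names and the assumption of equal-sized tiles are illustrative and not taken from the patent:

```c
#include <stdio.h>

/* Illustrative sketch: per-tile target rectangles for the 2x1 / 2x2 / 3x2 /
 * 4x2 layouts on a fixed 1920x1080 GPU canvas.  Each selected source would be
 * scaled to tile_w x tile_h before splicing. */
struct rect { int x, y, w, h; };

static struct rect tile_rect(int cols, int rows, int index, int canvas_w, int canvas_h)
{
    struct rect r;
    int tw = canvas_w / cols;
    int th = canvas_h / rows;
    r.x = (index % cols) * tw;   /* column position */
    r.y = (index / cols) * th;   /* row position    */
    r.w = tw;
    r.h = th;
    return r;
}

int main(void)
{
    /* 3x2 layout: six tiles of 640x540 on a 1080p canvas */
    for (int i = 0; i < 6; i++) {
        struct rect r = tile_rect(3, 2, i, 1920, 1080);
        printf("tile %d: x=%d y=%d w=%d h=%d\n", i, r.x, r.y, r.w, r.h);
    }
    return 0;
}
```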
The input end of the video processing module is respectively connected with the output end of the video acquisition module and the output end of the display interaction module, and is used for receiving control of the display interaction module, compressing and packaging the first video data according to the compression parameters and the packaging mode configured by the display interaction module, obtaining third video data and transmitting the third video data to the video output module.
The input end of the video output module is respectively connected with the output end of the video processing module and the output end of the display interaction module, and is used for receiving the control of the display interaction module and outputting the third video data according to the output parameters configured by the display interaction module.
In one embodiment, the video acquisition module comprises a combination of one or more of a TPG decoding module, a BT1120 decoding module, an HDMI decoding module, a Mipi decoding module, a BT656 decoding module, an SDI decoding module and a network stream decoding module.
The TPG decoding module decodes the input colour-bar test video signal; the BT1120 decoding module decodes an input BT1120 video signal and separates, in the FPGA, the line, field and enable signals required during video transmission; the HDMI decoding module parses the HDMI video protocol; the Mipi decoding module parses MIPI video; the BT656 decoding module decodes an input BT656 video signal and separates, in the FPGA, the line, field and enable signals required during video transmission; the SDI decoding module decodes an input SDI video signal; the network stream decoding module decodes an input network video stream.
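For readers unfamiliar with how the line, field and enable signals are recovered, the following C sketch models in software what the BT656/BT1120 decoding logic does in the FPGA: it parses the ITU-R BT.656 timing reference code FF 00 00 XY, whose XY byte carries the F (field), V (vertical blanking) and H (SAV/EAV) flags. The function and structure names are illustrative only:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative software model of timing-reference-code parsing (ITU-R BT.656:
 * XY = 1 F V H P3 P2 P1 P0); the hardware decode modules do the equivalent in
 * the FPGA. */
struct trc_flags { int field; int v_blank; int is_eav; };

static int parse_trc(const uint8_t *p, struct trc_flags *out)
{
    if (p[0] != 0xFF || p[1] != 0x00 || p[2] != 0x00)
        return -1;                       /* not a timing reference code */
    out->field   = (p[3] >> 6) & 1;      /* F: 0 = field 1, 1 = field 2 */
    out->v_blank = (p[3] >> 5) & 1;      /* V: 1 = vertical blanking    */
    out->is_eav  = (p[3] >> 4) & 1;      /* H: 0 = SAV, 1 = EAV         */
    return 0;
}

int main(void)
{
    const uint8_t sav_field1_active[4] = { 0xFF, 0x00, 0x00, 0x80 };
    struct trc_flags f;
    if (parse_trc(sav_field1_active, &f) == 0)
        printf("field=%d v_blank=%d %s\n", f.field, f.v_blank, f.is_eav ? "EAV" : "SAV");
    return 0;
}
```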
Further, the BT656 decoding module implements PAL decoding with a TW9984 chip and generates a BT656 video signal. The output of the BT656 decoding module is connected to a Deinterlace module, which de-interlaces the PAL video, converting interlaced video to progressive video.
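A minimal software sketch of the interlaced-to-progressive conversion is shown below; it implements simple line-doubling ("bob") de-interlacing of one 8-bit luma field, purely to illustrate the function, since the patent does not describe the PL implementation at this level:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative "bob" (line-doubling) de-interlacing of one 8-bit PAL luma
 * field; this only sketches the interlaced-to-progressive conversion that the
 * Deinterlace module performs in hardware. */
static void bob_deinterlace(const uint8_t *field, uint8_t *frame,
                            int width, int field_height)
{
    for (int y = 0; y < field_height; y++) {
        const uint8_t *src = field + (size_t)y * width;
        /* write each field line to two consecutive lines of the output frame */
        memcpy(frame + (size_t)(2 * y) * width,     src, (size_t)width);
        memcpy(frame + (size_t)(2 * y + 1) * width, src, (size_t)width);
    }
}

static uint8_t pal_field[288 * 720];   /* one PAL field: 720 x 288       */
static uint8_t pal_frame[576 * 720];   /* progressive frame: 720 x 576   */

int main(void)
{
    bob_deinterlace(pal_field, pal_frame, 720, 288);
    return 0;
}
```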
Preferably, the system is further provided with a VPSS module connected to the output of the video acquisition module; the VPSS module preprocesses the input image, the preprocessing comprising demosaicing, Gamma calibration and white balance.
In one embodiment, the Display interaction module provides three output modes: a Display mode, a Record mode and a Stream mode. In each mode, the current video source, resolution, raw data format, compression bitrate, compression mode, IP information, port number and system resource utilization are displayed while running. The Display mode is used to select the videos to be displayed and the display arrangement. The Record mode is used to set the compression parameters, encapsulation format and storage parameters; the compression parameters comprise the encoding mode (H264 or H265), the bitrate control mode (CBR, VBR or low-delay mode), the output bitrate, the number of B frames and the GOP length; the encapsulation formats comprise common containers such as ts, mkv and mp4; the storage parameters comprise the choice of storage medium, the storage duration, the choice of storage-medium file system, formatting operations and the like. The Stream mode is used to set the compression parameters, encapsulation format and streaming parameters; the compression parameters and encapsulation format are the same as in the Record mode, and the streaming parameters comprise the board IP, the server IP and the port number.
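The parameters exposed by the Record and Stream modes can be pictured as a configuration record like the following C sketch; the type and field names are hypothetical and are not taken from the patent or from any SDK:

```c
#include <stdio.h>

/* Hypothetical configuration record mirroring the Record/Stream-mode
 * parameters described above; names are illustrative only. */
enum codec     { CODEC_H264, CODEC_H265 };
enum rc_mode   { RC_CBR, RC_VBR, RC_LOW_LATENCY };
enum container { MUX_TS, MUX_MKV, MUX_MP4 };

struct encode_cfg {
    enum codec     codec;
    enum rc_mode   rc;
    unsigned       bitrate_kbps;   /* output bitrate       */
    unsigned       num_b_frames;   /* number of B frames   */
    unsigned       gop_length;     /* GOP length in frames */
    enum container mux;
};

struct stream_cfg {
    struct encode_cfg enc;
    char board_ip[16];
    char server_ip[16];
    unsigned short port;
};

int main(void)
{
    struct stream_cfg cfg = {
        .enc = { CODEC_H265, RC_CBR, 8000, 0, 30, MUX_TS },
        .board_ip = "192.168.1.10", .server_ip = "192.168.1.100", .port = 8554,
    };
    printf("H.%s @ %u kbps, GOP %u, push to %s:%u\n",
           cfg.enc.codec == CODEC_H265 ? "265" : "264",
           cfg.enc.bitrate_kbps, cfg.enc.gop_length, cfg.server_ip, cfg.port);
    return 0;
}
```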
In one embodiment, the video processing module includes an encoding module, a packaging module and a storage module, the encoding module is configured to encode the first video data, the packaging module is configured to package the encoded first video data, and the storage module is configured to cache the video data.
In one embodiment, the video output module comprises a video signal transmission module, a PCIE module, a Uart module and a DP module. The video signal transmission module transmits the third video data, the PCIE module connects to an NVMe solid-state drive (SSD), the Uart module is used to debug the whole system, and the DP module displays the video signal.
Preferably, the video signal transmission module comprises a network port module and a USB module.
In order to facilitate understanding of the MPSoC-based multi-channel image processing and display interaction system, its hardware configuration is described below through an embodiment; the hardware block diagram is shown in Fig. 2.
The hardware comprises a PL part and a PS part of an MPSoC chip. The PL part comprises a BT1120 decoding module, an HDMI decoding module, a Mipi decoding module, a VPSS module, a BT656 decoding module, a Deinterlace module, a Scaler module, a DMA module and a VCU module; the PS part comprises a USB module, a PCIE module, a network port module, a Uart module and a DP module. The PL-side modules and the PS-side modules are interconnected through an AXI bus.
The BT1120 decoding module, the HDMI decoding module, the Mipi decoding module, the VPSS module, the BT656 decoding module and the Deinterlace module all convert the input video into the unified video data format to obtain the first video data. The BT1120 decoding module decodes SDI with a Semtech GV7704 chip and generates a BT1120 video signal. The HDMI decoding module parses the HDMI video protocol, and the parsed video data is buffered by the DMA module in a block of DDR address space. The Mipi decoding module parses MIPI video, and the parsed video data is buffered in a block of DDR address space through the VPSS module and the DMA module. The VPSS module performs basic image processing on the output of the Mipi decoding module, including demosaicing, Gamma calibration, white balance and similar algorithms. The BT656 decoding module implements PAL decoding with an Intersil TW9984 chip, generating a BT656 video signal. The Deinterlace module de-interlaces the PAL video, converting interlaced video to progressive video.
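As an illustration of the Gamma-calibration step performed by the VPSS module, the following C sketch builds an 8-bit gamma look-up table; the gamma value of 2.2 and the LUT approach are assumptions for illustration only:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch of Gamma calibration: build an 8-bit look-up table and
 * apply it per pixel (compile with -lm).  The gamma value is an assumption. */
static void build_gamma_lut(uint8_t lut[256], double gamma)
{
    for (int i = 0; i < 256; i++)
        lut[i] = (uint8_t)(255.0 * pow(i / 255.0, 1.0 / gamma) + 0.5);
}

int main(void)
{
    uint8_t lut[256];
    build_gamma_lut(lut, 2.2);
    printf("gamma(128) -> %u\n", lut[128]);  /* mid-grey lifted by the 1/2.2 curve */
    return 0;
}
```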
The Scaler module implements the scaling function of the video scaling and splicing module. It is connected to the BT1120 decoding module, the HDMI decoding module and the Deinterlace module, performs in-line scaling of the first video data, and has its scaling parameters configured by the PS side through an AXI-Lite bus, producing the second video data.
The DMA module buffers the second video data. It is connected to the outputs of the Scaler module and the VPSS module, converts the video signals into AXI bus signals, transmits them to the PS side, and interacts with the DDR through the internal PS bus to buffer the video. The DMA buffer region can be configured by the PS side through an AXI-Lite bus.
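A minimal sketch of such PS-side configuration over AXI-Lite is shown below, using the common /dev/mem mmap technique on embedded Linux; the base address and register offsets are placeholders, not values from the patent or any IP core datasheet:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Minimal sketch of PS-side programming of a PL AXI-Lite block (scaler output
 * size, DMA buffer address) through /dev/mem.  Base address and register
 * offsets are hypothetical placeholders. */
#define SCALER_BASE    0xA0000000UL  /* hypothetical AXI-Lite base    */
#define REG_OUT_WIDTH  0x10          /* hypothetical register offsets */
#define REG_OUT_HEIGHT 0x14

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, SCALER_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    regs[REG_OUT_WIDTH  / 4] = 640;  /* scale selected source to one 3x2 tile */
    regs[REG_OUT_HEIGHT / 4] = 540;

    munmap((void *)regs, 0x1000);
    close(fd);
    return 0;
}
```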
The VCU module encodes and compresses the second video data received over the AXI bus to form the third video data.
The network port module and the USB module on the PS side transmit the encoded video signal, the DP module displays the video signal, the PCIE module connects to an NVMe SSD, and the Uart module is used to debug the whole system.
In another embodiment, the use of the MPSoC-based multi-channel image processing and display interaction system is described taking 8 channels of video (2 channels of SDI, 2 channels of PAL, 1 channel of TPG, 1 channel of network video stream, 1 channel of MIPI and 1 channel of HDMI) as an example.
The workflow of the MPSoC display interaction system is as follows:
(1) Select the Display option on the interface of the display interaction module and choose one or more of the 8 video sources for output and display. The PL side of the MPSoC scales and splices the corresponding video sources according to the control information from the interactive interface, outputs the spliced video to the display, and simultaneously shows information such as the source, resolution and format of each video.
(2) Select the Record option on the interactive interface. The MPSoC sends the 8 channels of captured video to the VCU module on the PL (programmable logic) side through Frame buffer write for compression encoding, and the encoded video is finally encapsulated by the PS (processing system) side and stored on an SD card.
(3) Select the Stream option on the interactive interface. The MPSoC sends the 8 channels of captured video to the VCU module on the PL side through Frame buffer write (Fb write) for compression encoding, and the encoded video is finally encapsulated by the PS side and pushed to the client via RTSP.
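On the PS side, the Record path could be expressed as a GStreamer pipeline along the following lines; the element choices (v4l2src as the capture source, omxh264enc as the VCU encoder, mpegtsmux and filesink for ts encapsulation and SD-card storage) are assumptions typical of Xilinx VCU software stacks and are not specified in the patent. The Stream path would end in an RTP/RTSP sink instead of a filesink.

```c
#include <gst/gst.h>

/* Hedged sketch of the Record path as a PS-side GStreamer pipeline.  Element
 * names and the file location are assumptions, not details from the patent.
 * Rate control, GOP length and B-frame settings from the Record mode would be
 * applied as encoder properties. */
int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! video/x-raw,format=NV12,width=1920,height=1080 "
        "! omxh264enc ! h264parse ! mpegtsmux ! filesink location=/media/sd/record.ts",
        &err);
    if (pipeline == NULL) {
        g_printerr("failed to build pipeline: %s\n", err ? err->message : "unknown");
        if (err) g_error_free(err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_usleep(10 * G_USEC_PER_SEC);               /* record for ~10 seconds */
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```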
According to the embodiments provided by the invention, the Zynq UltraScale+ MPSoC EV compression platform supports a variety of commonly used video input interfaces, such as PAL, SDI, MIPI and network interfaces, and the GPU-rendered desktop interactive system can configure tiled display, compression encoding, storage and network stream pushing of the video, providing strong operability and interactivity. Therefore, a Zynq UltraScale+ MPSoC EV series chip is selected to implement the multi-channel image processing, display and interaction functions, combining the GPU display interaction function with PL-side scaling and splicing. Output modes, input sources and display layouts can all be switched, meeting the requirements of real-time display, encoded storage and transmission of multiple different video input sources in related fields such as airborne display.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An MPSoC-based multi-channel image processing and display interaction system, characterized in that the system is designed around an MPSoC chip and comprises:
a video acquisition module, connected to the video sources, for converting different video sources into a unified video data format, each with its corresponding processing mode, to obtain first video data;
a video scaling and splicing module, whose input is connected to the video acquisition module, for scaling and splicing the first video data to obtain second video data;
a display interaction module, whose input is connected to the output of the video scaling and splicing module, for displaying the second video data and controlling each module connected to it;
a video processing module, whose inputs are connected to the output of the video acquisition module and the output of the display interaction module, for compressing and encapsulating the first video data under the control of the display interaction module to obtain third video data; and
a video output module, whose inputs are connected to the output of the video processing module and the output of the display interaction module, for outputting the third video data under the control of the display interaction module.
2. The MPSoC-based multi-channel image processing and display interaction system of claim 1, wherein the video acquisition module comprises a combination of one or more of a BT1120 decoding module, an HDMI decoding module, a Mipi decoding module, a BT656 decoding module, an SDI decoding module and a network stream decoding module, wherein
the BT1120 decoding module decodes an input BT1120 video signal and separates, in the FPGA, the line, field and enable signals required during video transmission;
the HDMI decoding module parses the HDMI video protocol;
the Mipi decoding module parses MIPI video;
the BT656 decoding module decodes an input BT656 video signal and separates, in the FPGA, the line, field and enable signals required during video transmission;
the SDI decoding module decodes an input SDI video signal;
the network stream decoding module decodes an input network video stream.
3. The MPSoC-based multi-channel image processing display interaction system of claim 2, wherein the BT656 decoding module adopts a TW9984 chip to realize PAL decoding, and generates BT656 video signals.
4. The MPSoC-based multi-channel image processing and display interaction system of claim 3, wherein the output of the BT656 decoding module is connected to a Deinterlace module, which de-interlaces the PAL video, converting interlaced video to progressive video.
5. The MPSoC-based multi-channel image processing and display interaction system of claim 1, further comprising a VPSS module connected to the output of the video acquisition module for preprocessing the input image, the preprocessing comprising demosaicing, Gamma calibration and white balance.
6. The MPSoC-based multi-channel image processing and display interaction system of claim 1, wherein the Display interaction module is provided with three output modes, including a Display mode, a Record mode and a Stream mode,
the Display mode is used for selecting a displayed video and a Display arrangement mode;
the Record mode is used for setting compression parameters, packaging modes and storage parameters;
the Stream mode is used for setting the compression parameters, the encapsulation mode and the streaming media parameters.
7. The MPSoC-based multi-channel image processing and display interaction system of claim 6, wherein the compression parameters comprise the encoding mode, the bitrate control mode, the output bitrate, the number of B frames and the GOP length; the encapsulation formats comprise ts, mkv and mp4; the storage parameters comprise the choice of storage medium, the storage duration, the choice of storage-medium file system and formatting operations.
8. The MPSoC-based multi-channel image processing and display interaction system of claim 6, wherein each output mode, while running, displays the current video source, resolution, raw data format, compression bitrate, compression mode, IP information, port number and system resource utilization.
9. The MPSoC-based multi-channel image processing and display interaction system of claim 1, wherein the video processing module comprises an encoding module, a packaging module and a storage module; the encoding module encodes and compresses the first video data, the packaging module packages the compressed first video data, and the storage module caches the video data.
10. The MPSoC-based multi-channel image processing and display interaction system of claim 1, wherein the video output module comprises a video signal transmission module, a PCIE module, a Uart module and a DP module; the video signal transmission module transmits the third video data, the PCIE module connects to an NVMe solid-state drive (SSD), the Uart module is used to debug the whole system, and the DP module displays the video signals.

Priority Applications (1)

Application Number: CN202111629288.5A; Priority date: 2021-12-28; Filing date: 2021-12-28; Title: MPSoC-based multipath image processing display interaction system (granted as CN114339070B)


Publications (2)

Publication Number Publication Date
CN114339070A true CN114339070A (en) 2022-04-12
CN114339070B CN114339070B (en) 2024-04-16

Family

ID=81014739

Family Applications (1)

Application Number: CN202111629288.5A; Priority/Filing date: 2021-12-28; Title: MPSoC-based multipath image processing display interaction system; Status: Active (CN114339070B)

Country Status (1)

CN (1): CN114339070B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115243072A (en) * 2022-07-19 2022-10-25 上海晨驭信息科技有限公司 Distributed video splicing system without switch

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080074497A1 (en) * 2006-09-21 2008-03-27 Ktech Telecommunications, Inc. Method and Apparatus for Determining and Displaying Signal Quality Information on a Television Display Screen
US20120185864A1 (en) * 2011-01-18 2012-07-19 Mindspeed Technologies, Inc. Integrated Environment for Execution Monitoring and Profiling of Applications Running on Multi-Processor System-on-Chip
US20160291917A1 (en) * 2013-11-18 2016-10-06 Hangzhou Hikvision Digital Technology Co., Ltd. Screen splicing system and video data stream processing method
CN108259829A (en) * 2018-01-24 2018-07-06 中国航空工业集团公司洛阳电光设备研究所 A kind of imaging processing device and display equipment
KR101869453B1 (en) * 2017-10-13 2018-07-23 주식회사 제노시스 Video wall system using multiple decoder devices
CN108769755A (en) * 2018-04-27 2018-11-06 湖北工业大学 High-resolution full view frequency live streaming camera system and method
CN109309788A (en) * 2018-10-18 2019-02-05 广州市盛光微电子有限公司 More lens image splicing apparatus and method
CN110691203A (en) * 2019-10-21 2020-01-14 湖南泽天智航电子技术有限公司 Multi-path panoramic video splicing display method and system based on texture mapping
CN112367537A (en) * 2020-11-02 2021-02-12 上海无线电设备研究所 Video acquisition-splicing-display system based on ZYNQ
CN214014396U (en) * 2020-12-25 2021-08-20 北京中科戎大科技股份有限公司 Multi-channel video image processing device


Also Published As

Publication number Publication date
CN114339070B (en) 2024-04-16


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant