CN115733940A - Multi-source heterogeneous video processing display device and method for ship system

Multi-source heterogeneous video processing display device and method for ship system

Info

Publication number
CN115733940A
CN115733940A (application CN202211377465.XA)
Authority
CN
China
Prior art keywords
layer
video
unit
superposition
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211377465.XA
Other languages
Chinese (zh)
Inventor
龙小军 (Long Xiaojun)
张正华 (Zhang Zhenghua)
万凯 (Wan Kai)
胡硕 (Hu Shuo)
郭浩 (Guo Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
709th Research Institute of CSSC
Original Assignee
709th Research Institute of CSSC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 709th Research Institute of CSSC
Priority to CN202211377465.XA
Publication of CN115733940A
Legal status: Pending

Abstract

The invention provides a multi-source heterogeneous video processing and display device and method for a ship system, belonging to the field of video data processing. A protocol offloading unit acquires uncompressed original radar video data and photoelectric video data; a video frame format encapsulation unit forms an original video layer; a radar scan conversion unit forms a radar video layer; a graphics distribution unit forms a graphics bottom layer; a layer separation unit forms a mouse plotting layer; a primary fusion superposition unit processes the radar video layer and the mouse plotting layer to form a primary fusion superposition layer; a secondary fusion superposition unit processes the graphics bottom layer and the primary fusion superposition layer to form a secondary fusion superposition layer; a coding and decoding unit forms a compressed video layer; and a video comprehensive processing unit superposes the secondary fusion superposition layer, the original video layer and the compressed video layer and outputs the result for display. The invention not only improves display quality and stability, but also realizes information sharing among display consoles while reducing the number of coding devices.

Description

Multi-source heterogeneous video processing display device and method for ship system
Technical Field
The invention belongs to the field of video data processing, and particularly relates to a multi-source heterogeneous video processing and displaying device and method for a ship system.
Background
The shipborne display console is a multi-source access device that can connect to signal sources such as optical-frequency/radio-frequency, satellite communication, and analog/digital systems, covering radar, sonar, infrared, photoelectric and other video or media resources. According to the requirements of the combat system, the shipborne display and control terminal must superpose graphics onto image video, perform fusion and transparency processing of graphics and radar video, and display multiple layers and windows such as a plotting layer and a mouse layer.
Same-window display of multi-source heterogeneous video is generally realized either on an FPGA (field programmable gate array) embedded platform or through software running on a CPU (central processing unit). The former must complete radar scan conversion, plotting-layer/mouse-layer separation, fusion and transparency processing of graphics and radar video, and display of multi-size asynchronous windows. Constrained by FPGA logic resources, timing control and similar factors, it is mostly used for video superposition and windowed display at lower resolutions and with fewer channels; it falls short of multi-window fusion and superposition display for more than two channels of ultra-high-resolution multi-source heterogeneous video, seriously affecting display quality and stability. The latter relies on CPU computing power, which imposes high computing requirements, occupies substantial CPU resources, weakens the CPU's processing of other functions, and reduces the timeliness of multi-window display.
Disclosure of Invention
Compared with the traditional scheme, the invention not only increases the number of on-screen video channels and the display stability of the display and control console, but also achieves network data sharing of each console's graphical interface while eliminating one coding device.
In order to achieve the above purpose, the invention provides a multi-source heterogeneous video processing and display device and method for a ship system. The overall idea is as follows:
An FPGA + ARM architecture is adopted, making full use of the parallel computing and pipeline processing capability of the FPGA and the low-power, high-performance characteristics of the ARM architecture. This solves the mixed superposition display of multiple channels of ultra-high-resolution uncompressed FC video (photoelectric video and radar video) and compressed network video, improves display quality and stability, and guarantees strong real-time performance of video display. Meanwhile, the invention can synchronously encode the fused and superposed output picture and send it to a network interface for data sharing, or receive network video stream information through the network interface, decode it, and superpose it on the graphical interface for display. More specifically:
In one aspect, the present invention provides a multi-source heterogeneous video processing and display device for a ship system, including: a protocol offloading unit, a video frame format encapsulation unit, a radar scan conversion unit, a graphics distribution unit, a layer separation unit, a primary fusion superposition unit, a secondary fusion superposition unit, a coding and decoding unit, and a video comprehensive processing unit. The protocol offloading unit, the video frame format encapsulation unit, the radar scan conversion unit, the primary fusion superposition unit and the layer separation unit are arranged on an FPGA processing module; the secondary fusion superposition unit, the coding and decoding unit and the video comprehensive processing unit are arranged on an ARM processing module.
The input end of the protocol offloading unit is connected to the fiber channel, and its output end is connected to the video frame format encapsulation unit and the radar scan conversion unit; it parses the FC or ten-gigabit protocol to acquire radar video data and photoelectric video data.
The output end of the video frame format encapsulation unit is connected to the video comprehensive processing unit and repackages the photoelectric video data to obtain an original video layer. The output end of the radar scan conversion unit is connected to the primary fusion superposition unit and performs multi-mode scan conversion on the radar video, together with various window configurations, afterglow and PPI trail function configurations, to obtain a radar video layer.
The input end of the graphics distribution unit is connected to a graphics input interface, and its output end is connected to the layer separation unit and the secondary fusion superposition unit; it parallelizes the input video interface signals, converting one path into RGB888 color space data sent to the layer separation unit, and the other path into YUV422 color space data sent to the secondary fusion superposition unit as the graphics bottom layer. The output end of the layer separation unit is connected to the primary fusion superposition unit; it performs layer separation on the RGB888 color space data, extracting the mouse layer and plotting layer and discarding the chart layer data, to obtain the mouse plotting layer.
The output end of the primary fusion superposition unit is connected to the secondary fusion superposition unit; it fuses and superposes the radar video layer and the mouse plotting layer to obtain a primary fusion superposition layer. The output end of the secondary fusion superposition unit is connected to the video comprehensive processing unit; it performs secondary fusion and superposition of the graphics bottom layer and the primary fusion superposition layer to form a secondary fusion superposition layer.
The coding and decoding unit is externally connected to an Ethernet channel and internally connected to the video comprehensive processing unit. It receives and decodes the video code stream sent over the Ethernet channel and sends the decoded data to the video comprehensive processing unit to form a compressed video layer; it also receives the mixed-window output video signal sent by the video comprehensive processing unit, encodes it, and sends it out through the Ethernet channel.
The video comprehensive processing unit superposes the secondary fusion superposition layer, the original video layer and the compressed video layer and displays them in windows, forming the mixed-window output video signal.
Further preferably, the fiber channel runs an FC protocol at 4.25G/8.5G rate, or a ten-gigabit network based on the Ethernet protocol. If the fiber channel runs the FC protocol, the protocol offloading unit supports the FC-AE-ASM protocol; if it runs the ten-gigabit network protocol, the protocol offloading unit is compatible with the IEEE 802.3ae/ap protocol.
Further preferably, the RGB color space data adopts the RGB888 format: the color depth of each of red, green and blue is 8 bits, so each pixel consists of 24 bits of data. The YUV422 color space data is obtained from the RGB888 format through color space conversion and interpolation.
Further preferably, the method for extracting the mouse layer and the plotting layer by the layer separating unit is as follows:
According to color key values set by the host computer, the layer separation unit traverses and checks each RGB888 color value in the RGB888 color space data, screening via a lookup table: color key values set by the host computer are retained, yielding the mouse layer and plotting layer, while color values not in the lookup table are zero-filled and thus rendered as black pixels, forming the mouse plotting layer.
Further preferably, the primary fusion superposition unit superposes pixel values on the principle that the mouse plotting layer information is on top and the radar video layer information underneath. If, at the same pixel, the mouse plotting layer value is M and the radar video layer value is N, the fused pixel value is P = λM + (1 − λ)N, where 0 ≤ λ ≤ 1; for example, with λ = 0.5 the plot symbol and the radar echo each contribute half of the fused value.
Further preferably, the secondary fusion superposition unit superposes the primary fusion superposition layer sent by the primary fusion superposition unit on top of the graphics bottom layer sent by the graphics distribution unit. The superposition rule is that, during secondary fusion, every black pixel in the primary fusion superposition layer is replaced by the corresponding pixel value of the graphics bottom layer, while non-black pixel values remain unchanged, forming the secondary fusion superposition layer.
Further preferably, the coding and decoding unit receives network video stream information and can decode up to 9 channels of 1920×1080@30Hz compressed code streams, sending the decoded compressed video layer information to the video comprehensive processing unit for the corresponding windowed display. It also receives the mixed-window output picture, encodes it, and sends it into the Ethernet channel. Different coding parameters can be configured through the host computer according to user requirements, and the compressed video layer is formed during decoding.
On the other hand, the invention provides a multi-source heterogeneous video processing and displaying method for a ship system, which comprises the following steps:
parsing the FC or ten-gigabit protocol according to the fiber channel configuration to obtain uncompressed original radar video data and photoelectric video data;
repackaging the photoelectric video data to form an original video layer; performing multi-mode scan conversion on the radar video together with various window, afterglow and PPI trail function configurations to form a radar video layer;
parallelizing the input video interface signals, converting one path into RGB888 color space data and the other path into YUV422 color space data to form a graphics bottom layer;
performing layer separation on the RGB888 color space data, extracting the mouse layer and plotting layer and discarding the chart layer data, to form a mouse plotting layer;
performing the first fusion and superposition on the radar video layer data and the mouse plotting layer data, with the mouse plotting layer on top and the radar video layer underneath, to form a primary fusion superposition layer;
fusing and superposing the graphics bottom layer and the primary fusion superposition layer once more, with the primary fusion superposition layer on top and the graphics bottom layer underneath; after fusion, the secondary fusion superposition layer is formed;
receiving compressed network video stream information through an Ethernet channel and decoding it to form a compressed video layer;
superposing the secondary fusion superposition layer, the original video layer and the compressed video layer for mixed windowed display, wherein the secondary fusion superposition layer is always displayed full-screen, and the original video layer and the compressed video layer are displayed full-screen or non-full-screen according to the window size and position information configured by host computer instruction;
and encoding the video information output by the mixed window to form a network video stream, which is sent out through the Ethernet channel.
Further preferably, the first fusion superposes pixel values on the principle that the plotting layer information is on top and the radar layer information underneath; if, at the same pixel, the mouse plotting layer value is M and the radar video layer value is N, the pixel value is P = λM + (1 − λ)N, where 0 ≤ λ ≤ 1.
Further preferably, the second fusion superposes the primary fusion superposition layer on top of the graphics bottom layer; the superposition rule is that, during secondary fusion, all black pixel values in the primary fusion superposition layer are replaced by the corresponding pixel values of the graphics bottom layer, while non-black pixel values remain unchanged.
In general, compared with the prior art, the above technical solution conceived by the present invention has the following beneficial effects:
For comprehensive display of multi-source heterogeneous video, the traditional approach uses either pure FPGA processing or CPU computing power with software programming. The FPGA has the advantages of concurrent processing, a pipeline mechanism, flexibility and customizability, but as integrated functions grow (ultra-high resolution, multi-channel parallel video processing), weaknesses in compilation efficiency and resource shortage appear, and the finally compiled design performs poorly and lacks stability. The CPU approach imposes high computing requirements, occupies substantial CPU resources, weakens the CPU's processing of other functions, and reduces the timeliness of multi-window display.
The invention adopts a function-partitioned FPGA + ARM architecture: the FPGA handles FC or ten-gigabit protocol offloading, video frame encapsulation, radar scan conversion, layer separation and primary fusion superposition; the ARM completes secondary fusion superposition, video encoding/decoding and video comprehensive processing. This makes full use of the FPGA's parallel computing and pipeline processing capability and the ARM architecture's low power consumption and high performance, achieving mixed superposition display of multiple channels of ultra-high-resolution uncompressed FC video (photoelectric video and radar video) and compressed network video, improving the display quality and stability of the console screen, and guaranteeing strong real-time video display.
The invention can synchronously encode the mixed-window display output and send it to the network interface for data sharing, or receive network video stream information through the network interface, decode it, and superpose it on the graphical interface for display. Compared with the traditional solution for sharing the console's graphical interface, this scheme eliminates one dedicated codec device while still meeting the need for network data sharing.
Drawings
Fig. 1 is a block diagram of a multi-source heterogeneous video processing display device of a ship system according to an embodiment of the present invention;
FIG. 2 is a functional block diagram of a graphics distribution unit provided by an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a primary fusion overlay process provided by an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a secondary fusion overlay process provided by an embodiment of the present invention;
fig. 5 is a schematic block diagram of video integration processing according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In one aspect, as shown in fig. 1, the present invention provides a multi-source heterogeneous video processing display device for a ship system, including: a protocol offloading unit, a video frame format encapsulation unit, a radar scan conversion unit, a graphics distribution unit, a layer separation unit, a primary fusion superposition unit, a secondary fusion superposition unit, a coding and decoding unit, and a video comprehensive processing unit. The protocol offloading unit, the video frame format encapsulation unit, the radar scan conversion unit, the primary fusion superposition unit and the layer separation unit are arranged on an FPGA processing module; the secondary fusion superposition unit, the coding and decoding unit and the video comprehensive processing unit are arranged on an ARM processing module. The device receives uncompressed original radar video data or infrared/photoelectric video data from the fiber channel and, according to local display requirements, completes multilayer fusion and superposition of the mouse layer, plotting layer, graphics layer and radar video layer, as well as superposed display of the graphics bottom layer with the primary fusion superposition layer, the original video layer and the compressed video layer in a multi-source mixed window.
(1) Protocol offloading unit
The input of the protocol offloading unit is the externally connected fiber channel. The fiber channel, as carrier, can run either an FC protocol at 4.25G/8.5G rate or a ten-gigabit network based on the Ethernet protocol; the two protocols can be dynamically loaded online according to user requirements. The protocol offloading unit is the functional unit that communicates with the outside and operates and maintains the protocol stack. If the FC protocol is run, the FC-AE-ASM protocol is supported, meeting the FC-FS and GJB 6411-2008 protocol specifications; if the ten-gigabit network protocol is run, the IEEE 802.3ae/ap protocol is supported, meeting the 10GBASE-SR specification.
more specifically:
The fiber channel carries a mix of several uncompressed original radar videos and several photoelectric videos. Video sources are distinguished by different groupings, and radar versus photoelectric video by different flag bits: for the FC protocol, radar and photoelectric video are distinguished by different SIDs; for the ten-gigabit network, different multicast addresses are used. The protocol offloading unit parses the FC/ten-gigabit protocol to obtain the payload video data, distributing the radar video data to the radar scan conversion unit and the photoelectric video data to the video frame format encapsulation unit. The format of each video frame is described in Table 1.
TABLE 1
[Table 1 is reproduced as an image in the original publication and is not recoverable here.]
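To make the routing concrete, here is a minimal C sketch of the dispatch rule described above, keyed on the FC source identifier; all type, constant and function names (fc_frame_t, SID_RADAR, radar_scan_convert_push, and so on) are illustrative assumptions rather than names from the patent, and the real device performs this step in FPGA logic:

```c
#include <stdint.h>

#define SID_RADAR   0x010000u  /* assumed FC source ID for radar video       */
#define SID_OPTICAL 0x020000u  /* assumed FC source ID for photoelectric video */

typedef struct {
    uint32_t       sid;        /* FC-AE-ASM source identifier */
    const uint8_t *payload;    /* de-encapsulated video payload */
    uint32_t       len;
} fc_frame_t;

/* Downstream units, declared here so the sketch is self-contained. */
void radar_scan_convert_push(const uint8_t *data, uint32_t len);
void frame_format_encap_push(const uint8_t *data, uint32_t len);

/* Route one de-encapsulated frame to the proper downstream unit. */
void dispatch_fc_frame(const fc_frame_t *f)
{
    if (f->sid == SID_RADAR)
        radar_scan_convert_push(f->payload, f->len);   /* radar scan conversion unit */
    else if (f->sid == SID_OPTICAL)
        frame_format_encap_push(f->payload, f->len);   /* video frame format unit */
    /* frames with unrecognized SIDs are dropped */
}
```

For the ten-gigabit case the same dispatch would key on the destination multicast address instead of the SID.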
(2) Video frame format encapsulation unit
The input end of the video frame format encapsulation unit is connected to the protocol offloading unit, and its output end to the video comprehensive processing unit. It receives the photoelectric video data delivered by the protocol offloading unit and repackages it so that the frame format meets the BT.1120 video frame format, i.e., a data structure conforming to the ITU-R (International Telecommunication Union Radiocommunication Sector) specification, forming the original video layer. In this embodiment, taking 3840×2160 resolution as an example, the packed data format is shown in Table 2:
TABLE 2
[Table 2 is reproduced as an image in the original publication and is not recoverable here.]
(3) Radar scan conversion unit
The input end of the radar scan conversion unit is connected to the protocol offloading unit, and its output end to the primary fusion superposition unit. It performs multi-mode scan conversion (P display, B display, E display, etc.) on the received radar video, configures various windows (PPI window, AR window, small window, etc.), and configures the afterglow and PPI trail functions, forming the radar video layer.
More specifically: the radar scan conversion unit first completes scan conversion of the radar video signal, then preprocesses the radar video window's size, position, afterglow, PPI trail, occlusion relations and the like according to the host computer's configuration information.
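The patent does not disclose the scan-conversion algorithm itself, so the following C sketch shows one conventional realization of a PPI conversion with afterglow: polar range/bearing samples are painted into a Cartesian buffer that is decayed a little on every antenna revolution. The buffer size, decay factor and brightest-wins rule are assumptions.

```c
#include <stdint.h>
#include <math.h>

#define PPI_W 2048
#define PPI_H 2048

static uint8_t ppi[PPI_H][PPI_W];   /* radar video layer, 8-bit intensity */

/* Fade the whole layer slightly each revolution to emulate afterglow. */
void afterglow_decay(void)
{
    for (int y = 0; y < PPI_H; y++)
        for (int x = 0; x < PPI_W; x++)
            ppi[y][x] = (uint8_t)((ppi[y][x] * 15) / 16);  /* ~6% decay */
}

/* Paint one sweep: bearing in radians, one intensity sample per range bin. */
void draw_sweep(double bearing, const uint8_t *bins, int nbins)
{
    double cx = PPI_W / 2.0, cy = PPI_H / 2.0;
    double scale = (PPI_W / 2.0) / nbins;          /* range bin -> pixels */
    for (int r = 0; r < nbins; r++) {
        int x = (int)(cx + r * scale * sin(bearing));
        int y = (int)(cy - r * scale * cos(bearing));
        if (x >= 0 && x < PPI_W && y >= 0 && y < PPI_H && bins[r] > ppi[y][x])
            ppi[y][x] = bins[r];                   /* keep the brighter value */
    }
}
```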
(4) Graphic distribution unit
The input end of the graphics distribution unit is connected to a graphics input interface, and its output end to the layer separation unit and the secondary fusion superposition unit. It parallelizes the input differential serial video signals, converting one path into RGB888 color values for the layer separation unit and the other into YUV422 color values for the secondary fusion superposition unit.
More specifically, data converted to the RGB color space adopts the RGB888 format: the color depth of each of red, green and blue is 8 bits, so each pixel consists of 24 bits of data. Data converted to the BT.1120 data format is YUV422, obtained from RGB888 through color space transformation and interpolation. In this embodiment, the graphics distribution unit must supply different video data formats according to the actual requirements of each downstream unit: the data sent to the layer separation unit is in RGB888 format, so no pixel value suffers data loss and the three color values of each pixel are fully represented, while the data sent to the secondary fusion superposition unit is in YUV422 format, meeting that unit's format requirement. Although their color characteristic values differ, the RGB888 and YUV422 data formats represent the same graphics signal. In this embodiment, the schematic block diagram of the graphics distribution unit is shown in FIG. 2.
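As a minimal sketch of the second conversion path, the following C fragment converts two adjacent RGB888 pixels into one YUV 4:2:2 (UYVY) pair, averaging the chroma across the pair as a simple stand-in for the interpolation mentioned above; the BT.601 fixed-point matrix is an assumption, since the patent does not state which conversion matrix is used:

```c
#include <stdint.h>

static inline uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

/* Two horizontally adjacent RGB pixels -> one UYVY pair:
   Y0 and Y1 are kept per pixel, U and V are averaged across the pair. */
void rgb888_to_yuv422_pair(const uint8_t rgb[2][3], uint8_t out[4])
{
    int y[2], u[2], v[2];
    for (int i = 0; i < 2; i++) {
        int r = rgb[i][0], g = rgb[i][1], b = rgb[i][2];
        y[i] = ( 66*r + 129*g +  25*b + 128) / 256 + 16;   /* BT.601 luma   */
        u[i] = (-38*r -  74*g + 112*b + 128) / 256 + 128;  /* BT.601 chroma */
        v[i] = (112*r -  94*g -  18*b + 128) / 256 + 128;
    }
    out[0] = clamp8((u[0] + u[1]) / 2);  /* U, shared  */
    out[1] = clamp8(y[0]);               /* Y0         */
    out[2] = clamp8((v[0] + v[1]) / 2);  /* V, shared  */
    out[3] = clamp8(y[1]);               /* Y1         */
}
```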
(5) Layer separating unit
The input end of the layer separation unit is connected to the graphics distribution unit, and its output end to the primary fusion superposition unit. It performs layer separation on the graphics input by the graphics distribution unit, extracting the mouse layer and plotting layer while discarding the chart layer data, to form the mouse plotting layer. More specifically, the layer separation unit receives RGB888 color space data containing multiple layers of information such as the mouse layer, plotting layer and chart layer, each identified by a different color space set; the mouse layer and plotting layer must be extracted and the chart layer data discarded. According to the color key values set by the host computer, the layer separation unit traverses and checks each RGB888 color value, screening via a lookup table: color key values set by the host computer are retained, yielding the mouse layer and plotting layer, while color values not in the lookup table are zero-filled and rendered as "black" pixels, forming the mouse plotting layer. The processed output retains only the mouse and plotting layers, with pixel position information and color values unchanged, so the whole frame keeps only mouse and plotting information while the remainder appears as "black" data (a black layer); if this processed frame were sent to a display unit, it would show only the mouse and plotting information on an all-black picture. For one line of the 3840×2160 image used in this embodiment, the data before processing is shown in Table 3;
TABLE 3
Column:  1    2    3          4    5    6    …   3837   3838       3839   3840
Data:    RGB  RGB  Color key  RGB  RGB  RGB  …   RGB    Color key  RGB    RGB
The processed data are shown in table 4;
TABLE 4
Column:  1    2    3          4    5    6    …   3837   3838       3839   3840
Data:    0    0    Color key  0    0    0    …   0      Color key  0      0
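The rule illustrated by tables 3 and 4 can be sketched in C as follows: a bitmap lookup table marks the host-configured color keys, key pixels pass through unchanged, and every other pixel is zero-filled to black. The 2 MiB bitmap and the packed 24-bit-in-32-bit pixel layout are assumptions; an FPGA implementation would more likely keep a short key list in block RAM.

```c
#include <stdint.h>
#include <stdbool.h>

/* One bit per 24-bit RGB888 value: 2^24 bits = 2 MiB key table. */
static uint8_t key_lut[1u << 21];

/* Register one host-configured color key. */
void set_color_key(uint32_t rgb888)
{
    key_lut[rgb888 >> 3] |= (uint8_t)(1u << (rgb888 & 7));
}

static inline bool is_key(uint32_t rgb888)
{
    return key_lut[rgb888 >> 3] & (1u << (rgb888 & 7));
}

/* In-place separation of one scan line of packed RGB888 pixels. */
void separate_line(uint32_t *px, int n)
{
    for (int i = 0; i < n; i++)
        if (!is_key(px[i] & 0xFFFFFFu))
            px[i] = 0;   /* non-key pixel -> black layer */
}
```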
(6) Primary fusion superposition unit
The input end of the primary fusion superposition unit is connected to the radar scan conversion unit and the layer separation unit, and its output end to the secondary fusion superposition unit; it superposes and fuses the radar video layer and the mouse plotting layer.
More specifically, the primary fusion superposition unit receives the radar video layer sent by the radar scan conversion unit and the mouse plotting layer data sent by the layer separation unit, superposing on the principle that the mouse plotting layer information is on top and the radar video layer information underneath. Because the two layers have the same resolution, each pixel value of the radar video layer must be processed against the corresponding pixel of the mouse plotting layer during superposition.
More specifically, the primary fusion superposition unit performs a traversal judgment on each pixel of the two layers. At the same pixel position, if the mouse plotting layer value is M and the radar video layer value is N, the pixel value P is computed as P = λM + (1 − λ)N, where 0 ≤ λ ≤ 1. The complete processed data frame is called the primary fusion superposition layer. The value of λ can be configured by host computer instruction to achieve different fusion effects. In this embodiment, the flow of the primary fusion superposition processing is shown in fig. 3;
If the radar video layer data is denoted RD, the primary fusion superposition data after processing can be represented as shown in Table 5;
TABLE 5
Column:  1    2    3          4    5    6    …   3837   3838       3839   3840
Data:    0    0    Color key  0    0    RD   …   0      Color key  RD     0
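A C sketch of this fusion rule, with λ held in 8-bit fixed point (λ = alpha/256) as an FPGA-friendly assumption; consistent with table 5, black (zero) plot pixels simply pass the radar video through:

```c
#include <stdint.h>

/* m: mouse/plot layer pixel, n: radar layer pixel, alpha: λ·256 (0..256). */
static inline uint8_t blend(uint8_t m, uint8_t n, uint16_t alpha)
{
    return (uint8_t)((alpha * m + (256 - alpha) * n) >> 8);
}

/* Fuse one scan line: only non-black plot pixels carry content, so black
   plot pixels pass the radar video through unchanged. */
void fuse_line(const uint8_t *plot, const uint8_t *radar,
               uint8_t *out, int n, uint16_t alpha)
{
    for (int i = 0; i < n; i++)
        out[i] = plot[i] ? blend(plot[i], radar[i], alpha) : radar[i];
}
```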
(7) Secondary fusion superposition unit
The input end of the secondary fusion superposition unit is connected to the primary fusion superposition unit and the graphics distribution unit, and its output end to the video comprehensive processing unit; it fuses and superposes the graphics bottom layer and the primary fusion superposition layer once more. More specifically, the layer sent by the primary fusion superposition unit has high priority and must sit on top; the video information it contains comprises, from top to bottom, the mouse layer, plotting layer, radar layer and black layer. The graphics bottom layer sent by the graphics distribution unit serves as the bottommost layer of the video output, so during secondary fusion the primary fusion superposition layer is superposed onto the graphics bottom layer. Because the "black layer" within the primary fusion superposition layer has an obvious characteristic (it consists entirely of data 0), every "black layer" pixel value can be replaced by the corresponding graphics bottom layer pixel value during superposition, while non-"black layer" pixel values remain unchanged. The fused data can thus be viewed as all layers of the primary fusion superposition layer except the "black layer" sitting on top, with the "black layer" replaced by the graphics bottom layer underneath. In this embodiment, the flow of the secondary fusion superposition processing is shown in fig. 4;
In the present embodiment, if the graphics bottom layer data is denoted PD, the data after secondary fusion superposition can be represented as shown in Table 6;
TABLE 6
Column:  1    2    3          4    5    6    …   3837   3838       3839   3840
Data:    PD   PD   Color key  PD   PD   RD   …   PD     Color key  RD     PD
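The replacement rule of table 6 reduces to a per-pixel selection, sketched below in C; the packed 24-bit pixel layout is an assumption:

```c
#include <stdint.h>

/* Secondary fusion of one scan line: every all-zero ("black") pixel of the
   primary fusion layer is replaced by the graphics bottom-layer pixel;
   all other pixels pass through unchanged. */
void secondary_fuse_line(const uint32_t *fused,  /* primary fusion layer   */
                         const uint32_t *base,   /* graphics bottom layer  */
                         uint32_t *out, int n)
{
    for (int i = 0; i < n; i++) {
        uint32_t p = fused[i] & 0xFFFFFFu;
        out[i] = (p == 0) ? base[i] : p;   /* black -> show bottom layer */
    }
}
```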
Further preferably, when the video comprehensive processing unit displays, the secondary fusion superposition layer is always displayed full-screen because it contains graphics information such as the mouse, plotting, radar video and chart; the original video layer and the compressed video layer are displayed full-screen or non-full-screen according to the window size and position information configured by host computer instruction.
(8) Coding and decoding unit
The coding and decoding unit is externally connected to an Ethernet interface and internally connected to the video comprehensive processing unit. It is mainly responsible for receiving and decoding the video code stream sent over Ethernet and sending the decoded data, which forms the compressed video layer, to the video comprehensive processing unit; it also receives the superposed display picture information sent by the video comprehensive processing unit, encodes it, and sends it out over the network.
the coding and decoding unit receives network video stream information, can decode up to 9 paths of 1920 x 1080@30HZ resolution compressed code streams at the same time, and sends the decoded compressed video layer information to the video comprehensive processing unit for corresponding windowing display; meanwhile, the coding and decoding unit can also receive the pictures output by the mixed window, carry out coding processing and send the pictures into an Ethernet channel. Meanwhile, the coding and decoding unit can configure different coding parameters, such as H.264/H.265 algorithm, code rate, frame rate and other information, through the upper computer according to the requirements of users.
(9) Video comprehensive processing unit
The input end of the video comprehensive processing unit is connected to the secondary fusion superposition unit, the video frame format encapsulation unit and the coding and decoding unit, and its output end to the display unit. It superposes the received secondary fusion superposition layer, original video layer and compressed video layer and performs mixed-window display. More specifically, the video comprehensive processing unit outputs the secondary fusion superposition layer as the graphics bottom layer, with output resolution and frame rate matching the input secondary fusion superposition layer. During display, the secondary fusion superposition layer is always shown full-screen because it contains graphics information such as the mouse, plotting, radar video and chart; the original video layer and the compressed video layer are displayed full-screen or non-full-screen according to the window size and position information configured by host computer instruction.
The original video layer and the compressed video layer carry independent video information; the position, size and stacking order of their windows can be adjusted as required, and the configured windowed pictures are superposed on the graphics bottom layer before being sent to the display terminal. To tune the display effect, parameters such as brightness, chroma and contrast of the output video can be adjusted. In this embodiment, a schematic block diagram of the video comprehensive processing unit is shown in fig. 5.
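The mixed-window composition can be sketched as a z-ordered copy over the full-screen base layer, as in the C fragment below; the window structure, the pre-sorted z-order, and the assumption that every window lies fully inside the frame are illustrative simplifications:

```c
#include <stdint.h>
#include <string.h>

typedef struct {
    const uint32_t *pixels;   /* source layer, w*h packed RGB888 pixels */
    int x, y, w, h;           /* window position and size               */
    int z;                    /* stacking order, higher = on top        */
    int enabled;
} window_t;

/* Compose the output frame: full-screen base, then windows bottom-up.
   Assumes wins[] is pre-sorted by ascending z and each window satisfies
   x + w <= fw (no horizontal clipping in this sketch). */
void compose(uint32_t *fb, int fw, int fh,
             const uint32_t *base,            /* secondary fusion layer */
             const window_t *wins, int nwins)
{
    memcpy(fb, base, (size_t)fw * fh * sizeof *fb);   /* full-screen base */
    for (int i = 0; i < nwins; i++) {
        const window_t *win = &wins[i];
        if (!win->enabled) continue;
        for (int r = 0; r < win->h && win->y + r < fh; r++)
            memcpy(&fb[(size_t)(win->y + r) * fw + win->x],
                   &win->pixels[(size_t)r * win->w],
                   (size_t)win->w * sizeof *fb);
    }
}
```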
On the other hand, the invention provides a ship system multi-source heterogeneous video processing and displaying method corresponding to the ship system multi-source heterogeneous video processing and displaying device, which comprises the following steps:
S1: the protocol offloading unit parses the uncompressed original video payload data sent over the fiber channel according to the configured protocol, sending the photoelectric video data to the video frame format encapsulation unit and the radar video data to the radar scan conversion unit;
S2: the video frame format encapsulation unit packages the photoelectric video data into the BT.1120 video frame format, forming the original video layer;
S3: the radar scan conversion unit performs multi-mode scan conversion (P display, B display, E display, etc.) on the received radar video, configures the position and size of various windows (PPI window, AR window, etc.), and configures the afterglow and PPI trail functions, forming the radar video layer;
S4: the graphics distribution unit performs color space conversion on the input graphics signals, converting one path to RGB888 and the other to YUV422, the latter forming the graphics bottom layer;
S5: the layer separation unit extracts the mouse layer and plotting layer while discarding the chart layer data, forming the mouse plotting layer;
S6: the primary fusion superposition unit superposes and fuses the two video layers delivered by S3 and S5, forming the primary fusion superposition layer;
S7: the secondary fusion superposition unit superposes and fuses the two video layers delivered by S6 and S4, forming the secondary fusion superposition layer;
S8: the coding and decoding unit decodes the received Ethernet channel video stream data to form the compressed video layer information and sends it to the video comprehensive processing unit;
S9: the video comprehensive processing unit superposes the received secondary fusion superposition layer, original video layer and compressed video layer and performs mixed-window output display; the position, size and stacking order of the windows can be adjusted as required, and to tune the display effect, parameters such as brightness, chroma and contrast of the output video can be adjusted;
S10: the coding and decoding unit receives the mixed-window output information, encodes it, and sends it into the Ethernet channel; it can also configure different coding parameters (such as the H.264/H.265 algorithm, bit rate and frame rate) through the host computer according to user requirements.
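Read as a software pipeline, steps S1 to S10 chain together as sketched below; every function name is illustrative, and in the actual device the earlier stages run in FPGA logic while the later ones run on the ARM module:

```c
/* Illustrative stage functions, one per step above. */
void dispatch_all_fc_frames(void);
void encapsulate_bt1120(void);
void radar_scan_convert(void);
void distribute_graphics(void);
void separate_layers(void);
void primary_fuse(void);
void secondary_fuse(void);
void decode_network_streams(void);
void compose_and_display(void);
void encode_mixed_output(void);

/* One pass of the processing chain for a single frame period. */
void process_one_frame(void)
{
    dispatch_all_fc_frames();   /* S1: protocol offload + routing        */
    encapsulate_bt1120();       /* S2: original video layer              */
    radar_scan_convert();       /* S3: radar video layer                 */
    distribute_graphics();      /* S4: RGB888 + YUV422 bottom layer      */
    separate_layers();          /* S5: mouse/plot layer extraction       */
    primary_fuse();             /* S6: plot over radar                   */
    secondary_fuse();           /* S7: result over graphics bottom layer */
    decode_network_streams();   /* S8: compressed video layers           */
    compose_and_display();      /* S9: mixed-window output               */
    encode_mixed_output();      /* S10: share via Ethernet               */
}
```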
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A ship system multi-source heterogeneous video processing display device, characterized by comprising: a protocol offloading unit, a video frame format encapsulation unit, a radar scan conversion unit, a graphics distribution unit, a layer separation unit, a primary fusion superposition unit, a secondary fusion superposition unit, a coding and decoding unit and a video comprehensive processing unit; wherein the protocol offloading unit, the video frame format encapsulation unit, the radar scan conversion unit, the primary fusion superposition unit and the layer separation unit are arranged on an FPGA processing module, and the secondary fusion superposition unit, the coding and decoding unit and the video comprehensive processing unit are arranged on an ARM processing module;
the input end of the protocol offloading unit is connected to the fiber channel, and its output end to the video frame format encapsulation unit and the radar scan conversion unit; it is used for parsing the FC or ten-gigabit protocol to acquire radar video data and photoelectric video data; the output end of the video frame format encapsulation unit is connected to the video comprehensive processing unit and is used for repackaging the photoelectric video data to obtain an original video layer; the output end of the radar scan conversion unit is connected to the primary fusion superposition unit and is used for performing multi-mode scan conversion on the radar video together with various window, afterglow and PPI trail function configurations to obtain a radar video layer;
the input end of the graphics distribution unit is connected to a graphics input interface, and its output end to the layer separation unit and the secondary fusion superposition unit; it is used for parallelizing input video interface signals, converting one path into RGB888 color space data sent to the layer separation unit, and the other path into YUV422 color space data sent to the secondary fusion superposition unit as the graphics bottom layer; the output end of the layer separation unit is connected to the primary fusion superposition unit and is used for performing layer separation on the RGB888 color space data, extracting the mouse layer and plotting layer and discarding the chart layer data, to obtain the mouse plotting layer; the output end of the primary fusion superposition unit is connected to the secondary fusion superposition unit and is used for fusing and superposing the radar video layer and the mouse plotting layer to obtain a primary fusion superposition layer;
the output end of the secondary fusion superposition unit is connected to the video comprehensive processing unit and is used for fusing and superposing the graphics bottom layer and the primary fusion superposition layer once more to form a secondary fusion superposition layer; the coding and decoding unit is externally connected to an Ethernet channel and internally connected to the video comprehensive processing unit; it is used for receiving and decoding the video code stream sent over the Ethernet channel and sending the decoded data to the video comprehensive processing unit to form a compressed video layer, and for receiving the mixed-window output video signal sent by the video comprehensive processing unit, encoding it, and sending it out through the Ethernet channel; and the video comprehensive processing unit is used for superposing and window-displaying the secondary fusion superposition layer, the original video layer and the compressed video layer to form the mixed-window output video signal.
2. The ship system multi-source heterogeneous video processing display device of claim 1, wherein the fiber channel runs an FC protocol at 4.25G/8.5G rate or a ten-gigabit network based on the Ethernet protocol; if the fiber channel runs the FC protocol, the protocol offloading unit supports the FC-AE-ASM protocol; if the fiber channel runs the ten-gigabit network protocol, the protocol offloading unit is compatible with the IEEE 802.3ae/ap protocol.
3. The ship system multi-source heterogeneous video processing display device of claim 1 or 2, wherein the RGB color space data is in RGB888 format, the color depth of each of red, green and blue is 8 bits, and each pixel consists of 24 bits of data; the YUV422 color space data is obtained from the RGB888 format through color space conversion and interpolation.
4. The multi-source heterogeneous video processing and displaying device of the ship system of claim 3, wherein the layer separation unit extracts a mouse layer and a plotting layer by:
according to color key values set by the host computer, the layer separation unit traverses and checks each RGB888 color value in the RGB888 color space data, screening via a lookup table: color key values set by the host computer are retained, yielding the mouse layer and plotting layer, while color values not in the lookup table are zero-filled and thus rendered as black pixels, forming the mouse plotting layer.
5. The ship system multi-source heterogeneous video processing display device of claim 4, wherein the primary fusion superposition unit superposes pixel values on the principle that the mouse plotting layer information is on top and the radar video layer underneath; if, at the same pixel, the mouse plotting layer value is M and the radar video layer value is N, the fused pixel value is P = λM + (1 − λ)N, where 0 ≤ λ ≤ 1.
6. The ship system multi-source heterogeneous video processing display device of claim 5, wherein the secondary fusion superposition unit superposes the primary fusion superposition layer sent by the primary fusion superposition unit on top of the graphics bottom layer sent by the graphics distribution unit; the superposition rule is that, during secondary fusion, every black pixel in the primary fusion superposition layer is replaced by the corresponding pixel value of the graphics bottom layer, while non-black pixel values remain unchanged, forming the secondary fusion superposition layer.
7. The ship system multi-source heterogeneous video processing display device of claim 1 or 2, wherein the coding and decoding unit is configured to receive network video stream information, decode up to 9 channels of 1920×1080@30Hz compressed code streams, and send the decoded compressed video layer information to the video comprehensive processing unit for the corresponding windowed display; to receive the mixed-window output picture, encode it, and send it into the Ethernet channel; and to configure different coding parameters through the host computer according to user requirements, the compressed video layer being formed during decoding.
8. A multi-source heterogeneous video processing and displaying method for a ship system is characterized by comprising the following steps:
parsing the FC or ten-gigabit protocol according to the fiber channel configuration to obtain uncompressed original radar video data and photoelectric video data;
repackaging the photoelectric video data to form an original video layer; performing multi-mode scan conversion on the radar video together with various window, afterglow and PPI trail function configurations to form a radar video layer;
parallelizing the input video interface signals, converting one path into RGB888 color space data and the other path into YUV422 color space data to form a graphics bottom layer;
performing layer separation processing on RGB888 color space data, extracting a mouse layer and a plotting layer, and discarding the chart layer data to form a mouse plotting layer;
performing the first fusion and superposition on the radar video layer data and the mouse plotting layer data, with the mouse plotting layer on top and the radar video layer underneath, to form a primary fusion superposition layer;
performing fusion and superposition of the graphics bottom layer and the primary fusion superposition layer once more, with the primary fusion superposition layer on top and the graphics bottom layer underneath; after fusion, the secondary fusion superposition layer is formed;
receiving compressed network video stream information through an Ethernet channel, and correspondingly decoding the compressed network video stream information to form a compressed video layer;
superposing the secondary fusion superposition layer, the original video layer and the compressed video layer for mixed windowed display, wherein the secondary fusion superposition layer is always displayed full-screen, and the original video layer and the compressed video layer are displayed full-screen or non-full-screen according to the window size and position information configured by host computer instruction;
and encoding the video information output by the mixed window to form a network video stream, which is sent out through the Ethernet channel.
9. The ship system multi-source heterogeneous video processing and display method of claim 8, wherein the first fusion superposes pixel values on the principle that the plotting layer information is on top and the radar layer information underneath; if, at the same pixel, the mouse plotting layer value is M and the radar video layer value is N, the pixel value is P = λM + (1 − λ)N, where 0 ≤ λ ≤ 1.
10. The ship system multi-source heterogeneous video processing and display method of claim 8, wherein the second fusion superposes the primary fusion superposition layer on top of the graphics bottom layer; the superposition rule is that, during secondary fusion, all black pixel values in the primary fusion superposition layer are replaced by the corresponding pixel values of the graphics bottom layer, while non-black pixel values remain unchanged.
Application CN202211377465.XA, filed 2022-11-04: Multi-source heterogeneous video processing display device and method for ship system (published as CN115733940A, pending)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211377465.XA CN115733940A (en) 2022-11-04 2022-11-04 Multi-source heterogeneous video processing display device and method for ship system


Publications (1)

Publication Number Publication Date
CN115733940A 2023-03-03

Family

ID=85294705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211377465.XA Pending CN115733940A (en) 2022-11-04 2022-11-04 Multi-source heterogeneous video processing display device and method for ship system

Country Status (1)

Country Link
CN (1) CN115733940A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117310679A (en) * 2023-11-28 2023-12-29 中国人民解放军空军工程大学 Gridding sensing system for detecting low-low aircraft
CN117310679B (en) * 2023-11-28 2024-02-20 中国人民解放军空军工程大学 Gridding sensing system and method for detecting low-low aircraft


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination