CN114584823A - Flexible windowing video splicing processor, LED display system and storage medium - Google Patents

Info

Publication number
CN114584823A
CN114584823A (application CN202011386445.XA)
Authority
CN
China
Prior art keywords
video source
source data
video
intercepting
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011386445.XA
Other languages
Chinese (zh)
Inventor
梁德斌
吴振志
吴涵渠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aoto Electronics Co Ltd
Original Assignee
Shenzhen Aoto Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Aoto Electronics Co Ltd filed Critical Shenzhen Aoto Electronics Co Ltd
Priority to CN202011386445.XA priority Critical patent/CN114584823A/en
Publication of CN114584823A publication Critical patent/CN114584823A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/22Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources
    • G09G3/30Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources using electroluminescent panels
    • G09G3/32Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4122Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4314Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention relates to a flexible windowing video splicing processor, an LED display system and a storage medium. In the video splicing processor, an output control unit determines a video source intercepting area according to the layer setting parameters and the connection relationship of the output ports; an input acquisition unit receives and acquires externally input video source data; a transmission intercepting unit intercepts the video source data of the corresponding area from the acquired video source data according to the received video source intercepting area; and an image processing unit processes the intercepted video source data according to a preset processing strategy and transmits it to the spliced screen for display through the corresponding output port. Because transmission and processing involve only video source data of a local area, the bandwidth requirement of the transmission link is greatly reduced and the number of layers becomes flexible; at the same time, the layer layout can be set arbitrarily, enabling arbitrary multi-layer windowing.

Description

Flexibly windowed video stitching processor, LED display system and storage medium
Technical Field
The invention relates to the field of LED display screens, in particular to a video splicing processor with flexible windowing, an LED display system and a storage medium.
Background
Because of their advantages of vivid color, high brightness, long service life and low energy consumption, LED display screens are widely used. In practice, especially in information-intensive scenarios such as command centers, monitoring centers and dispatching centers, an LED display screen may receive image data from multiple video sources simultaneously and splice them together for display. A video splicing processor is therefore needed: each layer corresponds to the image data of one video source, and multiple layers are superimposed to complete the spliced display of multiple video sources.
When an existing video stitching processor displays multiple layers, the transmission bandwidth of the processing link and the processing capability of the processing units limit the number of layers that can be created at an output port, and this number is usually fixed, for example one output supporting only 2 or 4 layers.
Moreover, even if an output port contains only a partial area of a layer, a conventional video stitching processor still transmits and processes that layer as a complete layer. It therefore cannot window flexibly: the number of layers cannot be configured freely, and windows cannot be opened at arbitrary positions. If more layers are needed to support arbitrary windowing, more output interfaces or layer processing units must be configured, which increases cost.
Disclosure of Invention
Therefore, it is necessary to provide a flexible windowing video stitching processor, an LED display system and a storage medium to solve the problem that conventional video stitching processors cannot create layers flexibly and cannot meet arbitrary windowing requirements.
The embodiment of the application provides a video splicing processor with flexible windowing, which is used for processing videos of a spliced screen and comprises an input acquisition unit, a transmission interception unit, an image processing unit and an output control unit; the output control unit is provided with a plurality of output ports which are respectively connected with different display areas of the spliced screen;
the image processing unit receives the layer setting parameters; the output control unit determines a video source intercepting area corresponding to the layer and the output port according to the layer setting parameters and the connection relation of the output port, and feeds the video source intercepting area back to the transmission intercepting unit;
the input acquisition unit receives and acquires externally input video source data to obtain acquired video source data and transmits the acquired video source data to the transmission interception unit;
the transmission intercepting unit intercepts video source data of a corresponding area from the acquired video source data according to the received video source intercepting area to obtain the intercepted video source data;
and the image processing unit processes the intercepted video source data according to a preset processing strategy and transmits the video source data to the spliced screen for display through the output port of the output control unit.
In some embodiments, the layer setting parameters include the number and the position of layers and the size of a layer area.
In some embodiments, the layer setting parameters further include an area correspondence between the layer and the acquired video source data, and a scaling coefficient between the layer and the acquired video source data.
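Purely as an illustration and not part of the patent, the layer setting parameters described above can be pictured as a small data structure. In the Python sketch below, the `Rect` and `LayerParams` names and all field names are hypothetical stand-ins for the number and position of the layers, the size of the layer area, the region correspondence with the acquired video source data, and the scaling coefficients.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int        # left edge, in pixels
    y: int        # top edge, in pixels
    width: int
    height: int

@dataclass
class LayerParams:
    layer_id: int
    area: Rect         # position and size of the layer on the tiled screen
    source_rect: Rect  # corresponding region of the acquired video source data

    # scaling coefficients between the layer area and its source region
    @property
    def scale_x(self) -> float:
        return self.source_rect.width / self.area.width

    @property
    def scale_y(self) -> float:
        return self.source_rect.height / self.area.height
```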
In some embodiments, the transmission interception unit has a plurality of interception channels, the image processing unit has a plurality of processing channels, and one interception channel, one processing channel, and one output port constitute one routing channel; the image processing unit is used for constructing a routing channel according to the layer setting parameters; and the video splicing processor transmits data according to the routing channel when transmitting and processing the acquired video source data.
In some embodiments, the input acquisition unit is provided with a plurality of input channels, each input channel receiving different video source data; the routing channel also includes an input channel.
In some embodiments, the image processing unit includes a plurality of image processing modules, each image processing module constitutes a processing channel, and is configured to process the intercepted video source data according to a preset processing policy, obtain port output data corresponding to each output port, and output the port output data to the corresponding output port for outward transmission.
In some embodiments, the transmission intercepting unit includes a plurality of transmission intercepting modules, each transmission intercepting module forms an intercepting channel for receiving a video source intercepting area transmitted by the image processing unit, and intercepts video source data of a corresponding area from the acquired video source data to obtain the intercepted video source data.
In some embodiments, the output control unit includes a plurality of output control modules, and each output control module is provided with an output port; the output control module is used for determining a video source intercepting area corresponding to the layer according to the layer setting parameters and the connection relation of the output ports, and feeding the video source intercepting area back to the image processing unit; and when receiving port output data transmitted by the image processing unit, outputting the port output data to the outside through an output port.
Another embodiment of the application discloses an LED display system, including an LED display screen and a video splicing processor. The LED display screen includes a plurality of display areas, the video splicing processor is used for controlling the display operation of the LED display screen, and the video splicing processor is the video splicing processor of any one of the foregoing embodiments.
An embodiment of the present application further discloses a video splicing processing method, which is applicable to the video splicing processor according to any one of the foregoing embodiments, and includes:
receiving layer setting parameters and the connection relation of output ports, and determining a video source intercepting area according to the layer setting parameters and the connection relation of the output ports;
receiving externally input video source data, and acquiring to obtain acquired video source data;
according to the video source intercepting area, intercepting video source data of a corresponding area from the acquired video source data to obtain intercepted video source data;
and processing the intercepted video source data according to a preset processing strategy, and outputting and displaying through a corresponding output port.
Another embodiment of the present application also discloses a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the video stitching processing method of the foregoing embodiment.
The video splicing processor provided by the embodiments of the application determines the distribution area of each layer within each output port according to the layer setting parameters and the connection relationship of the output ports, and cuts the corresponding area directly out of the acquired video source data for subsequent transmission and processing. Compared with the prior art, in which the video source data of the complete area must be transmitted and processed in full, the video splicing processor involves only video source data of a local area during transmission and processing. This greatly reduces the bandwidth requirement of the transmission link and improves the flexibility of the number of layers; at the same time, the layer layout can be set arbitrarily, so arbitrary multi-layer windowing can be realized.
Drawings
FIG. 1 is a block diagram of a video stitching processor according to an embodiment of the present application;
fig. 2 is a schematic distribution diagram of layers and output ports and display areas on a tiled screen according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a frame structure of an LED display system according to an embodiment of the present application;
FIG. 4 is a block diagram of a video stitching processor according to another embodiment of the present application;
fig. 5 is a schematic flowchart of a video stitching processing method according to an embodiment of the present application.
Detailed Description
To make the above objects, features and advantages of the present invention easier to understand, the invention is described in more detail below with reference to the embodiments illustrated in the accompanying drawings. In addition, the embodiments of the present application and the features in the embodiments may be combined with each other as long as they do not conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
As shown in fig. 1, an embodiment of the present application discloses a flexible windowing video stitching processor, which is used for processing videos of a stitched screen, and includes an input acquisition unit 100, a transmission capture unit 200, an image processing unit 300, and an output control unit 400; the output control unit 400 is provided with a plurality of output ports, which are respectively connected to different display areas of the tiled screen;
an image processing unit 300 that receives the layer setting parameter; the output control unit 400 determines a video source intercepting area corresponding to the layer and the output port according to the layer setting parameter and the connection relationship of the output port, and feeds the video source intercepting area back to the transmission intercepting unit 200;
the input acquisition unit 100 receives and acquires externally input video source data to obtain acquired video source data, and transmits the acquired video source data to the transmission interception unit 200;
the transmission intercepting unit 200 intercepts video source data of a corresponding area from the acquired video source data according to the received video source intercepting area to obtain the intercepted video source data;
the image processing unit 300 processes the captured video source data according to a preset processing policy, and transmits the processed video source data to the mosaic screen for display via the output port of the output control unit 400.
As shown in fig. 2, the scheme of the present embodiment is described below taking as an example an output control unit 400 provided with 4 output ports and a tiled LED display screen with 4 display areas. It can be understood that the number of output ports and the number of display areas on the tiled screen can be set to any other values according to actual needs.
As shown in fig. 2, the LED display screen may be provided with 4 display areas, indicated by the dotted lines, and the four output ports output1–output4 each correspond to one display area. As shown in FIG. 3, in use, the upper computer 30 may work together with the video stitching processor 20 to control the display of the stitched screen 10. A user configures the layer setting parameters, including the number and position of the layers and the size of each layer area, on the upper computer 30. The image processing unit 300 receives the layer setting parameters configured by the user from the upper computer 30 and forwards them to the output control unit 400. The output control unit 400 may then determine, according to the layer setting parameters and the connection relationship between each output port and its display area, the distribution area of each layer within the display area corresponding to each output port, and from that determine the video source intercepting area. The video source intercepting area is the area to be cut out of the acquired video source data; the interception itself is performed by the transmission intercepting unit 200.
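To make the determination just described concrete, here is a minimal Python sketch, assuming axis-aligned rectangles for layers, display areas and source regions. It reuses the hypothetical `Rect` and `LayerParams` structures introduced earlier and is not the patent's actual implementation: the idea is simply to intersect the layer's on-screen rectangle with the display area served by an output port, then map that intersection back into the acquired video source data using the layer-to-source correspondence and scaling coefficients.

```python
from typing import Optional

def intersect(a: Rect, b: Rect) -> Optional[Rect]:
    """Intersection of two screen rectangles, or None if they do not overlap."""
    x0, y0 = max(a.x, b.x), max(a.y, b.y)
    x1 = min(a.x + a.width, b.x + b.width)
    y1 = min(a.y + a.height, b.y + b.height)
    if x1 <= x0 or y1 <= y0:
        return None
    return Rect(x0, y0, x1 - x0, y1 - y0)

def source_intercept_area(layer: LayerParams, display_area: Rect) -> Optional[Rect]:
    """Map the part of a layer visible in one display area back to the video source."""
    visible = intersect(layer.area, display_area)
    if visible is None:
        return None  # this layer does not appear in this output port's area
    src = layer.source_rect
    return Rect(
        x=src.x + round((visible.x - layer.area.x) * layer.scale_x),
        y=src.y + round((visible.y - layer.area.y) * layer.scale_y),
        width=round(visible.width * layer.scale_x),
        height=round(visible.height * layer.scale_y),
    )
```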
After the transmission intercepting unit 200 performs the interception, the intercepted video source data is transmitted to the image processing unit 300 and processed according to a preset processing strategy, for example one or more of scaling, superimposing, color processing, brightness adjustment, image enhancement and other video processing methods. When processing the intercepted video source data, the image processing unit 300 may also fuse the layer data located in the same display area according to the distribution of the layers in each display area, forming the port output data corresponding to that output port. The output control unit 400 receives the port output data and transmits it to the corresponding display area for display. With all output ports working together, the multi-layer display of the spliced screen is completed.
The following describes the scheme of an embodiment of the present application by taking the processing of layers 1 and 3 in fig. 2 as an example. As shown in FIG. 2, the tiled display is divided into 4 display areas, display areas 1–4, corresponding to output ports output1–output4 respectively. Layer 1 spans the 4 display areas and is divided into 4 sub-areas: an upper-left area, an upper-right area, a lower-left area and a lower-right area. From the setting parameters of layer 1, the position and size of each sub-area and its proportional relationship to the complete area of layer 1 can be obtained. Taking the upper-left area as an example, the layer setting parameters give the rectangular area between (x0, y0) and (x1, y1), whose width and height are w1 and h1 respectively, while the complete area of layer 1 is the rectangular area between (x3, y3) and (x1, y1). From these, the positional and proportional relationship of the upper-left area relative to the whole of layer 1 is obtained, and, using the correspondence between layer 1 and the complete area of the acquired video source data, the corresponding area of the upper-left area in the acquired video source data, i.e. the video source intercepting area, is obtained. The transmission intercepting unit 200 intercepts the video source data of the corresponding area from the acquired video source data according to this video source intercepting area, obtains the intercepted video source data, and sends it to the image processing unit 300 for processing. The upper-right, lower-left and lower-right areas are processed similarly.
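As a worked illustration of the upper-left sub-area calculation just described (the coordinates x0, y0, x1, y1, x3, y3 and the dimensions w1, h1 are symbolic in the patent, so the numbers below are invented purely for this example), the `source_intercept_area` sketch given earlier could be exercised as follows:

```python
# Hypothetical layout: layer 1 is 1920x1080 on screen, its source is also 1920x1080,
# and display area 1 (output1) covers the top-left 1440x810 of the tiled screen.
layer1 = LayerParams(
    layer_id=1,
    area=Rect(x=480, y=270, width=1920, height=1080),     # layer 1 on the tiled screen
    source_rect=Rect(x=0, y=0, width=1920, height=1080),  # full source frame
)
display_area_1 = Rect(x=0, y=0, width=1440, height=810)   # area driven by output1

crop = source_intercept_area(layer1, display_area_1)
# Only the upper-left (1440-480) x (810-270) = 960x540 part of layer 1 falls in area 1,
# so only that 960x540 region of the source frame is transmitted and processed.
print(crop)   # Rect(x=0, y=0, width=960, height=540)
```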
As shown in fig. 2, the display area corresponding to output port output1 contains the upper-left portion of layer 1, the whole of layer 2, the left half of layer 3 and the upper half of layer 4. The upper-left portion of layer 1 is processed as described above, and the whole of layer 2, the left half of layer 3 and the upper half of layer 4 are processed similarly. After obtaining the data of each of these portions, the image processing unit 300 fuses them together to form the port output data for output port output1.
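The per-port fusion described above can be sketched as follows. This is again only an illustration: `grab_source`, `process` and `compose` are hypothetical callables standing in for the acquisition of the intercepted data, the preset processing strategy and the superposition of layer fragments, and the function is not defined by the patent.

```python
def build_port_output(display_area, layers, grab_source, process, compose):
    """Fuse the fragments of every layer that falls inside one output port's area.

    grab_source(layer, src_rect)    -> image data cropped from the acquired source
    process(fragment)               -> fragment after the preset processing strategy
                                       (scaling, color, brightness, enhancement, ...)
    compose(canvas, fragment, x, y) -> canvas with the fragment superimposed at (x, y)
    """
    canvas = None
    for layer in layers:                      # layers assumed ordered bottom-to-top
        src_rect = source_intercept_area(layer, display_area)
        if src_rect is None:
            continue                          # layer is not visible in this port's area
        fragment = process(grab_source(layer, src_rect))
        visible = intersect(layer.area, display_area)
        canvas = compose(canvas, fragment,
                         visible.x - display_area.x,   # position inside the port frame
                         visible.y - display_area.y)
    return canvas                             # port output data for this output port
```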
It can be understood that each image layer may correspond to a complete region of the acquired video source data, or may correspond to only a partial region of the acquired video source data. The layer setting parameters may include a region correspondence between the layer and the acquired video source data, and a scaling coefficient between the layer and the acquired video source data.
The video splicing processor provided by the embodiments of the application determines the distribution area of each layer within each output port according to the layer setting parameters and the connection relationship of the output ports, and cuts the corresponding area directly out of the acquired video source data for subsequent transmission and processing. Compared with the prior art, in which the video source data of the complete area must be transmitted and processed in full, the video splicing processor involves only video source data of a local area during transmission and processing. This greatly reduces the bandwidth requirement of the transmission link and improves the flexibility of the number of layers; at the same time, the layer layout can be set arbitrarily, so arbitrary multi-layer windowing can be realized.
In some embodiments, the transmission intercepting unit 200 may have a plurality of intercepting channels and the image processing unit 300 a plurality of processing channels; one intercepting channel, one processing channel and one output port form one routing channel. The image processing unit 300 may construct the routing channels according to the layer setting parameters, and when the video splicing processor transmits and processes the acquired video source data, the data are transmitted along these routing channels. In this way, a data path from the transmission intercepting unit 200 to an output port can be constructed freely.
Further, the input acquisition unit 100 may also be provided with a plurality of input channels, and each input channel may receive different video source data; the routing channel may comprise an input channel.
It will be appreciated that the routing channels may be configured by the user on the upper computer 30 and then transmitted to the image processing unit 300.
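A routing channel, as described above, simply ties one input channel, one intercepting channel, one processing channel and one output port together. The bookkeeping can be sketched as below; the tuple layout, the `build_routing_channels` name and the dictionary keys are assumptions for illustration, not the patent's implementation.

```python
from typing import List, NamedTuple

class RoutingChannel(NamedTuple):
    input_channel: int       # input acquisition module that receives the video source
    intercept_channel: int   # transmission intercepting module that crops it
    processing_channel: int  # image processing module that processes it
    output_port: int         # output port that drives the target display area

def build_routing_channels(assignments: List[dict]) -> List[RoutingChannel]:
    """Build one routing channel per assignment chosen on the upper computer,
    e.g. {"input": 0, "intercept": 2, "process": 2, "port": 1}."""
    return [
        RoutingChannel(a["input"], a["intercept"], a["process"], a["port"])
        for a in assignments
    ]
```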
In some embodiments, as shown in fig. 4, the output control unit 400 may include a plurality of output control modules 410, where each output control module 410 is correspondingly provided with an output port output; the output control module 410 is configured to determine a video source capture area corresponding to the layer according to the layer setting parameter and the connection relationship of the output ports, and feed the video source capture area back to the image processing unit 300; when receiving the port output data transmitted by the image processing unit 300, it is output externally through the output port output.
In some embodiments, as shown in fig. 4, the transmission intercepting unit 200 may include a plurality of transmission intercepting modules 210, where each transmission intercepting module 210 forms an intercepting channel for receiving a video source intercepting area transmitted by the image processing unit 300, and intercepts video source data in a corresponding area from the acquired video source data to obtain the intercepted video source data.
In some embodiments, as shown in fig. 4, the image processing unit 300 may include a plurality of image processing modules 310, where each image processing module 310 forms a processing channel, and is configured to process the intercepted video source data according to a preset processing policy, obtain port output data corresponding to each output port output, and output the port output data to the corresponding output port output for external transmission.
In some embodiments, as shown in fig. 4, the input acquisition unit 100 may include a plurality of input acquisition modules 110, where each input acquisition module 110 corresponds to one input channel and is configured to receive externally input video source data and acquire it to obtain the acquired video source data.
It is understood that the numbers of input acquisition modules 110, transmission interception modules 210, image processing modules 310 and output control modules 410 may be the same or different; the number of each type of module can be configured according to the required processing performance.
The video splicing processor provided by the embodiments of the application determines the distribution area of each layer within each output port according to the layer setting parameters and the connection relationship of the output ports, and cuts the corresponding area directly out of the acquired video source data for subsequent transmission and processing. Compared with the prior art, in which the video source data of the complete area must be transmitted and processed in full, the video splicing processor involves only video source data of a local area during transmission and processing. This greatly reduces the bandwidth requirement of the transmission link and improves the flexibility of the number of layers; at the same time, the layer layout can be set arbitrarily, so arbitrary multi-layer windowing can be realized.
As shown in fig. 3, an embodiment of the present application further provides an LED display system, which includes an LED display screen 10 and a video splicing processor 20, where the LED display screen 10 includes a plurality of display areas, the video splicing processor 20 is configured to control display operations of the LED display screen 10, and the video splicing processor 20 is the flexible windowing video splicing processor in the foregoing embodiment.
In some embodiments, the LED display system may further include an upper computer 30, on which a user configures the layer setting parameters. The upper computer 30 may also be used to establish the routing channels according to the layer setting parameters.
It will be appreciated that the upper computer 30 may also be used to provide video source data.
Since the LED display system adopts the video stitching processor of the foregoing embodiments, it obtains the same advantageous effects as those embodiments.
As shown in fig. 5, an embodiment of the present application further discloses a video splicing processing method, which is applied to the video splicing processor according to the foregoing embodiment, and includes:
s100, receiving the layer setting parameters and the connection relation of the output ports, and determining a video source intercepting area according to the layer setting parameters and the connection relation of the output ports;
s200, receiving externally input video source data, and collecting to obtain collected video source data;
s300, according to the video source intercepting area, intercepting video source data of a corresponding area from the collected video source data to obtain the intercepted video source data;
and S400, processing the intercepted video source data according to a preset processing strategy, and outputting and displaying through a corresponding output port.
The specific working manners of steps S100, S200, S300 and S400 can refer to the contents of the input acquisition unit 100, the transmission interception unit 200, the image processing unit 300 and the output control unit 400 in the foregoing embodiments, and are not described herein again.
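Putting steps S100–S400 together, one pass of the method can be summarized in a short Python sketch. Every helper used here (`determine_intercept_areas`, `acquire`, `crop`, `process`, `output`) is a placeholder for the corresponding unit of the processor described above, not an API defined by the patent.

```python
def video_splicing_pipeline(layer_params, port_connections, sources,
                            determine_intercept_areas, acquire, crop, process, output):
    """One illustrative pass of the method in FIG. 5."""
    # S100: derive a video source intercepting area per (source, output port) pair
    intercept_areas = determine_intercept_areas(layer_params, port_connections)

    # S200: receive externally input video source data and acquire it
    acquired = {source_id: acquire(frame) for source_id, frame in sources.items()}

    # S300: intercept only the corresponding area of each acquired source
    fragments = {
        (source_id, port): crop(acquired[source_id], area)
        for (source_id, port), area in intercept_areas.items()
    }

    # S400: process per the preset strategy and display via the matching output port
    for (source_id, port), fragment in fragments.items():
        output(port, process(fragment))
```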
According to the video splicing processing method provided by the embodiments of the application, the distribution area of each layer within each output port is determined according to the layer setting parameters and the connection relationship of the output ports, and the corresponding area is cut directly out of the acquired video source data for subsequent transmission and processing. Compared with the prior art, in which the video source data of the complete area must be transmitted and processed in full, only video source data of a local area is involved during transmission and processing. This greatly reduces the bandwidth requirement of the transmission link and improves the flexibility of the number of layers; at the same time, the layer layout can be set arbitrarily, so arbitrary multi-layer windowing can be realized.
Another embodiment of the present application further provides a storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the video stitching processing method according to any one of the above embodiments.
If the integrated components/modules/units of the system/computer device are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
In the several embodiments provided in the present invention, it should be understood that the disclosed system and method may be implemented in other ways. For example, the system embodiments described above are merely illustrative, and for example, the division of the components is only one logical division, and other divisions may be realized in practice.
In addition, each functional module/component in each embodiment of the present invention may be integrated into the same processing module/component, or each module/component may exist alone physically, or two or more modules/components may be integrated into the same module/component. The integrated modules/components can be implemented in the form of hardware, or can be implemented in the form of hardware plus software functional modules/components.
It will be evident to those skilled in the art that the embodiments of the present invention are not limited to the details of the foregoing illustrative embodiments, and that the embodiments of the present invention are capable of being embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the embodiments being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. Several units, modules or means recited in the system, device or terminal claims may also be implemented by one and the same unit, module or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (11)

1. A video splicing processor with flexible windowing is used for video processing of a spliced screen and is characterized by comprising an input acquisition unit, a transmission interception unit, an image processing unit and an output control unit; the output control unit is provided with a plurality of output ports which are respectively connected with different display areas of the spliced screen;
the image processing unit receives the layer setting parameters; the output control unit determines a video source intercepting area corresponding to the layer and the output port according to the layer setting parameters and the connection relation of the output port, and feeds the video source intercepting area back to the transmission intercepting unit;
the input acquisition unit receives and acquires externally input video source data to obtain acquired video source data and transmits the acquired video source data to the transmission interception unit;
the transmission intercepting unit intercepts video source data of a corresponding area from the acquired video source data according to the received video source intercepting area to obtain the intercepted video source data;
and the image processing unit processes the intercepted video source data according to a preset processing strategy and transmits the video source data to the spliced screen for display through the output port of the output control unit.
2. The video stitching processor according to claim 1, wherein the layer setting parameters include the number and position of layers and the size of layer areas.
3. The video stitching processor according to claim 2, wherein the layer setting parameters further include a region correspondence between layers and acquired video source data and a scaling factor therebetween.
4. The video stitching processor of claim 1, wherein the transmission clipping unit has a plurality of clipping channels, the image processing unit has a plurality of processing channels, and one clipping channel, one processing channel, and one output port constitute one routing channel; the image processing unit is used for constructing a routing channel according to the layer setting parameters; and the video splicing processor transmits data according to the routing channel when transmitting and processing the acquired video source data.
5. The video stitching processor of claim 3, wherein the input acquisition unit is provided with a plurality of input channels, each input channel receiving different video source data; the routing channel also includes an input channel.
6. The video splicing processor according to claim 5, wherein the image processing unit comprises a plurality of image processing modules, each image processing module constituting a processing channel for processing the intercepted video source data according to a preset processing policy to obtain port output data corresponding to each output port, and outputting the port output data to the corresponding output port for transmission.
7. The video splicing processor according to claim 6, wherein the transmission intercepting unit comprises a plurality of transmission intercepting modules, each transmission intercepting module forms an intercepting channel for receiving the intercepted area of the video source transmitted by the image processing unit, and intercepts the video source data of the corresponding area from the acquired video source data to obtain the intercepted video source data.
8. The video stitching processor of claim 7, wherein the output control unit includes a plurality of output control modules, and each output control module is provided with an output port; the output control module is used for determining a video source intercepting area corresponding to the layer according to the layer setting parameters and the connection relation of the output ports, and feeding the video source intercepting area back to the image processing unit; and when receiving port output data transmitted by the image processing unit, outputting the port output data to the outside through an output port.
9. An LED display system, comprising an LED display screen and a video stitching processor, wherein the LED display screen comprises a plurality of display areas, the video stitching processor is used for controlling the display work of the LED display screen, and the video stitching processor is the video stitching processor as claimed in any one of claims 1 to 8.
10. A video stitching processing method applied to the video stitching processor according to any one of claims 1 to 8, comprising:
receiving layer setting parameters and the connection relation of output ports, and determining a video source intercepting area according to the layer setting parameters and the connection relation of the output ports;
receiving externally input video source data, and acquiring to obtain acquired video source data;
according to the video source intercepting area, intercepting video source data of a corresponding area from the acquired video source data to obtain intercepted video source data;
and processing the intercepted video source data according to a preset processing strategy, and outputting and displaying through a corresponding output port.
11. A storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the video stitching processing method of claim 10.
CN202011386445.XA 2020-12-01 2020-12-01 Flexible windowing video splicing processor, LED display system and storage medium Pending CN114584823A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011386445.XA CN114584823A (en) 2020-12-01 2020-12-01 Flexible windowing video splicing processor, LED display system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011386445.XA CN114584823A (en) 2020-12-01 2020-12-01 Flexible windowing video splicing processor, LED display system and storage medium

Publications (1)

Publication Number Publication Date
CN114584823A true CN114584823A (en) 2022-06-03

Family

ID=81768139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011386445.XA Pending CN114584823A (en) 2020-12-01 2020-12-01 Flexible windowing video splicing processor, LED display system and storage medium

Country Status (1)

Country Link
CN (1) CN114584823A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030231259A1 (en) * 2002-04-01 2003-12-18 Hideaki Yui Multi-screen synthesis apparatus, method of controlling the apparatus, and program for controlling the apparatus
CN103680470A (en) * 2012-09-03 2014-03-26 杭州海康威视数字技术股份有限公司 Large screen control image display method and system
CN103677701A (en) * 2012-09-17 2014-03-26 杭州海康威视数字技术股份有限公司 Large screen synchronous display method and system
US20160300549A1 (en) * 2015-04-10 2016-10-13 Boe Technology Group Co., Ltd. Splicer, splicing display system and splicing display method
CN107071331A (en) * 2017-03-08 2017-08-18 苏睿 Method for displaying image, device and system, storage medium and processor
CN106708459A (en) * 2017-03-17 2017-05-24 深圳市东明炬创电子有限公司 Large screen splicing controller
CN107172368A (en) * 2017-04-21 2017-09-15 西安诺瓦电子科技有限公司 Many video source splicing display methods and processing unit and application, PLD
CN109218656A (en) * 2017-06-30 2019-01-15 杭州海康威视数字技术股份有限公司 Image display method, apparatus and system
CN107807807A (en) * 2017-11-16 2018-03-16 威创集团股份有限公司 The signal source Zoom method and system of display window
CN110099224A (en) * 2018-01-31 2019-08-06 杭州海康威视数字技术股份有限公司 Premonitoring display methods, apparatus and system, computer equipment and storage medium
CN109274901A (en) * 2018-11-07 2019-01-25 深圳市灵星雨科技开发有限公司 A kind of video processor and video display control system
CN109640026A (en) * 2018-12-26 2019-04-16 威创集团股份有限公司 A kind of high-resolution signal source spell wall display methods, device and equipment
CN209517289U (en) * 2019-01-22 2019-10-18 西安诺瓦星云科技股份有限公司 Video display system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115348469A (en) * 2022-07-05 2022-11-15 西安诺瓦星云科技股份有限公司 Picture display method and device, video processing equipment and storage medium
CN115348469B (en) * 2022-07-05 2024-03-15 西安诺瓦星云科技股份有限公司 Picture display method, device, video processing equipment and storage medium
CN115880156A (en) * 2022-12-30 2023-03-31 芯动微电子科技(武汉)有限公司 Multi-layer splicing display control method and device
CN115880156B (en) * 2022-12-30 2023-07-25 芯动微电子科技(武汉)有限公司 Multi-layer spliced display control method and device

Similar Documents

Publication Publication Date Title
CN114584823A (en) Flexible windowing video splicing processor, LED display system and storage medium
CN105047141B (en) Split screen detection method, device and the LCD TV of multi partition dynamic backlight
CN110233998A (en) A kind of method of transmitting video data, device, equipment and storage medium
WO2005072431A2 (en) A method and apparatus for combining a plurality of images
CN107172368A (en) Many video source splicing display methods and processing unit and application, PLD
CN105975238A (en) Display parameter adjusting method and terminal device
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
US10832621B2 (en) Backlight adjustment method of display panel, backlight adjustment device, and display device reducing red pixel color shift
CN106791551A (en) A kind of method of automatic regulating signal source resolution ratio, mosaic screen and system
CN1466678A (en) Image processing inspection system
CN209517289U (en) Video display system
CN107623833B (en) Control method, device and system for video conference
CN110691203B (en) Multi-path panoramic video splicing display method and system based on texture mapping
CN108600675A (en) Channel way extended method, equipment, Network Personal Video Recorder and storage medium
US11622169B1 (en) Picture processing method in embedded system
CN103674270A (en) Thermal image information recording device and thermal image information recording method
CN107147861A (en) Video record and processing system and method
WO2021063319A1 (en) Method and device for implementing 3d display, and 3d display terminal
DE112020006528T5 (en) Bidirectional multiband frequency manager for a wireless microphone system
CN112585939A (en) Image processing method, control method, equipment and storage medium
KR20200053879A (en) Apparatus and method for simultaneous control of heterogeneous cameras, and camera control client
CN104811627A (en) Method and device for photographing and previewing
US11501418B2 (en) Multi-level lookup tables for control point processing and histogram collection
CN104918010A (en) Signal redisplay method and system of spliced wall
CN110418059B (en) Image processing method and device applied to electronic equipment, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination