CN113301411B - Video processing method, device and system and video processing equipment - Google Patents

Video processing method, device and system and video processing equipment

Info

Publication number
CN113301411B
CN113301411B (application CN202010107917.7A)
Authority
CN
China
Prior art keywords
sub
information
images
image
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010107917.7A
Other languages
Chinese (zh)
Other versions
CN113301411A (en)
Inventor
孙立停
周晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pixelhue Technology Ltd
Original Assignee
Xian Novastar Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Novastar Electronic Technology Co Ltd filed Critical Xian Novastar Electronic Technology Co Ltd
Priority to CN202010107917.7A priority Critical patent/CN113301411B/en
Publication of CN113301411A publication Critical patent/CN113301411A/en
Application granted granted Critical
Publication of CN113301411B publication Critical patent/CN113301411B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39Control of the bit-mapped memory
    • G09G5/391Resolution modifying circuits, e.g. variable screen formats

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the invention discloses a video processing method, a video processing device, a video processing system and video processing equipment. The video processing method comprises the following steps: acquiring interface information, wherein the interface information comprises interface coordinate information, interface width and height information and interface point distance information of a plurality of virtual interfaces, and the plurality of virtual interfaces are in one-to-one correspondence with a plurality of physical interfaces; acquiring layer information, wherein the layer information comprises layer coordinate information, layer width and height information and resolution information of a video frame corresponding to a layer; determining, according to the interface information and the layer information, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-images of which the layer is respectively located in the plurality of virtual interfaces; and acquiring corresponding image data in the video frame according to the respective sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images, obtaining a plurality of processed sub-images based on the plurality of sub-images, and outputting the plurality of processed sub-images from the plurality of physical interfaces respectively.

Description

Video processing method, device and system and video processing equipment
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video processing method, a video processing apparatus, a video processing system, and a video processing device.
Background
Display screens such as LED display screens are widely used in various fields as one of the main devices for displaying images. LED display screens with various dot pitches are available on the market, where the dot pitch generally refers to the distance (usually in mm) between two adjacent pixels of a display screen. However, in the prior art, when a display screen formed by splicing sub-display screens with different dot pitches displays an image, the size of the displayed image is inconsistent; that is, the image displayed by a sub-display screen with a large dot pitch appears larger than the image displayed by a sub-display screen with a small dot pitch, so the image display effect is poor and the user experience is affected.
Therefore, a technical problem to be solved is how to improve the image display effect and improve the user experience.
Disclosure of Invention
Accordingly, to overcome the defects and shortcomings in the prior art, embodiments of the present invention provide a video processing method, a video processing apparatus, a video processing system, a video processing device, and a computer readable storage medium.
In one aspect, a video processing method provided in an embodiment of the present invention includes: acquiring interface information, wherein the interface information comprises interface coordinate information, interface width and height information and interface point distance information of a plurality of virtual interfaces, and the virtual interfaces are in one-to-one correspondence with a plurality of physical interfaces; acquiring layer information, wherein the layer information comprises layer coordinate information, layer width and height information and resolution information of a video frame corresponding to the layer; determining, according to the interface information and the layer information, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-images of which the layer is respectively located in the plurality of virtual interfaces; and acquiring corresponding image data in the video frame according to the respective sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images, and obtaining a plurality of processed sub-images based on the plurality of sub-images to be output from the plurality of physical interfaces respectively.
In the above solution, the video processing method first determines, according to the interface information and the layer information, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-images of which the layer is respectively located in the plurality of virtual interfaces; it then acquires, according to the respective sub-image parameters, the corresponding image data in the video frame to obtain the plurality of sub-images, and obtains a plurality of processed sub-images based thereon to be output from the plurality of physical interfaces respectively. Because the sub-image parameters are determined from the layer information together with the interface information, which includes the interface point distance information, the processed sub-images have the same display effect when subsequently displayed. The video processing method of this embodiment is therefore suitable not only for display on sub-display screens with different dot pitches but also for display on sub-display screens with the same dot pitch.
In an embodiment of the present invention, the determining, according to the interface information and the layer information, sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-images of which the layers are located in the plurality of virtual interfaces respectively includes: determining sub-graph coordinate information and sub-graph width and height information of the plurality of sub-graphs of which the layer is respectively positioned in the plurality of virtual interfaces according to the interface coordinate information and the interface width and height information of the plurality of virtual interfaces, the layer coordinate information and the layer width and height information; determining sub-graph relative layer coordinate information of the sub-graphs according to the sub-graph coordinate information of the sub-graphs and the layer coordinate information; and determining sub-image parameters of a plurality of sub-images corresponding to the plurality of virtual interfaces in the video frame according to the sub-image relative layer coordinate information and sub-image width and height information of the plurality of sub-images, the resolution information, the layer width and height information and the interface point distance information of the plurality of virtual interfaces.
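The intersection step described above can be sketched in a few lines. This is an illustrative sketch only (the patent provides no code); the function and parameter names are invented, and rectangles are assumed to be axis-aligned in common canvas units:

```python
def sub_graph(layer, iface):
    """Intersect the layer rectangle with a virtual-interface rectangle.

    Rectangles are (x, y, w, h) in canvas units, with (x, y) the top-left
    vertex.  Returns the sub-graph rectangle plus its coordinates relative
    to the layer, or None when the layer does not overlap the interface.
    """
    lx, ly, lw, lh = layer
    ix, iy, iw, ih = iface
    x0, y0 = max(lx, ix), max(ly, iy)
    x1, y1 = min(lx + lw, ix + iw), min(ly + lh, iy + ih)
    if x1 <= x0 or y1 <= y0:
        return None
    rect = (x0, y0, x1 - x0, y1 - y0)   # sub-graph coordinate + width/height info
    rel = (x0 - lx, y0 - ly)            # sub-graph relative layer coordinate info
    return rect, rel
```

Running the same intersection once per virtual interface yields the per-interface sub-graphs the claim refers to.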
In an embodiment of the present invention, the sub-image parameters of the plurality of sub-images include sub-image coordinate pixel information, sub-image width and height pixel information, and a sub-image scaling ratio of the plurality of sub-images; the determining of the sub-image parameters of the plurality of sub-images corresponding to the plurality of virtual interfaces in the video frame according to the sub-image relative layer coordinate information and sub-image width and height information of the plurality of sub-images, the resolution information, the layer width and height information, and the interface point distance information of each of the plurality of virtual interfaces specifically includes: determining sub-image coordinate pixel information of the sub-images according to the sub-image relative layer coordinate information, the layer width and height information and the resolution information of the sub-images; determining sub-image width and height pixel information of the sub-images (i.e., their size within the video frame) according to the sub-image width and height information, the layer width and height information and the resolution information; determining sub-image target width and height pixel information of the sub-images (i.e., their size on the display) according to the sub-image width and height information of each of the plurality of sub-images and the interface point distance information of each of the plurality of virtual interfaces; and determining the sub-image scaling ratio of the sub-images according to the sub-image width and height pixel information and the sub-image target width and height pixel information.
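Under the reading that the scaling ratio relates a sub-image's size in the video frame (derived from the layer's resolution) to its target size on the display (derived from the interface's dot pitch), the computation might look like the following sketch. All names, and the proportional-mapping assumption, are ours rather than the patent's:

```python
def pixel_params(sub, layer, frame_res, dot_pitch):
    """Derive one sub-image's pixel parameters from its canvas-unit rectangle.

    sub and layer are (x, y, w, h) in the same canvas units (e.g. mm);
    frame_res is the video-frame resolution (X, Y) in pixels; dot_pitch is
    the virtual interface's dot pitch in canvas units per display pixel.
    """
    sx, sy, sw, sh = sub
    lx, ly, lw, lh = layer
    X, Y = frame_res
    # sub-image coordinate / width-height pixel info inside the video frame
    src = (round((sx - lx) / lw * X), round((sy - ly) / lh * Y),
           round(sw / lw * X), round(sh / lh * Y))
    # target width-height pixel info on the display: one pixel per dot pitch
    dst = (round(sw / dot_pitch), round(sh / dot_pitch))
    # sub-image scaling ratio: target pixels over source pixels
    scale = (dst[0] / src[2], dst[1] / src[3])
    return src, dst, scale
```

For example, a 200x100 sub-image cut from a 400x100 layer backed by a 1920x1080 frame occupies 960x1080 source pixels; with a 2.0 mm dot pitch its target is 100x50 display pixels, giving the scaling ratio directly.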
In an embodiment of the present invention, the acquiring, according to the sub-image parameters of the plurality of sub-images, the corresponding image data in the video frame to obtain the plurality of sub-images, and obtaining a plurality of processed sub-images based on the plurality of sub-images, and outputting the processed sub-images from the plurality of physical interfaces respectively includes: acquiring corresponding image data in the video frame according to the sub-image coordinate pixel information and the sub-image width and height pixel information of the sub-images to obtain the sub-images; and scaling the plurality of sub-images according to the respective sub-image scaling ratios of the plurality of sub-images to obtain the plurality of processed sub-images.
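A minimal software sketch of the crop-and-scale step follows. The patent performs this on a programmable logic device; the nearest-neighbor scaler here is only an assumed stand-in, and the frame is modeled as a list of pixel rows:

```python
def crop_and_scale(frame, src, dst):
    """Crop src = (x, y, w, h) pixels from a frame (list of rows) and
    nearest-neighbor scale the crop to dst = (w, h) pixels."""
    x, y, w, h = src
    dw, dh = dst
    crop = [row[x:x + w] for row in frame[y:y + h]]
    return [[crop[j * h // dh][i * w // dw] for i in range(dw)]
            for j in range(dh)]
```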
In an embodiment of the present invention, before the acquiring of the corresponding image data in the video frame according to the respective sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images and the obtaining of the plurality of processed sub-images to be output from the plurality of physical interfaces respectively, the video processing method further includes: determining sub-graph relative interface coordinate information of each of the plurality of sub-graphs according to the interface coordinate information of each of the plurality of virtual interfaces and the sub-graph coordinate information of each of the plurality of sub-graphs; and determining sub-graph relative interface coordinate pixel information of each of the plurality of sub-graphs according to the sub-graph relative interface coordinate information of each of the plurality of sub-graphs and the interface point distance information of each of the plurality of virtual interfaces. After the acquiring and the obtaining, the video processing method further includes: outputting the sub-graph relative interface coordinate pixel information of each sub-graph from the plurality of physical interfaces.
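The relative-interface pixel offset described above divides the canvas-unit offset by the interface's dot pitch. A hypothetical helper (names are not from the patent):

```python
def rel_iface_pixels(sub_xy, iface_xy, dot_pitch):
    """Pixel offset of a sub-graph inside its virtual interface:
    (sub-graph coordinates - interface coordinates) / dot pitch."""
    (sx, sy), (ix, iy) = sub_xy, iface_xy
    return round((sx - ix) / dot_pitch), round((sy - iy) / dot_pitch)
```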
In another aspect, an embodiment of the present invention provides a video processing apparatus, including: a first acquisition module, configured to acquire interface information, where the interface information includes interface coordinate information, interface width and height information and interface point distance information of a plurality of virtual interfaces, and the plurality of virtual interfaces are in one-to-one correspondence with a plurality of physical interfaces; a second acquisition module, configured to acquire layer information, where the layer information includes layer coordinate information, layer width and height information and resolution information of a video frame corresponding to the layer; a sub-image parameter determining module, configured to determine, according to the interface information and the layer information, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-images of which the layer is respectively located in the plurality of virtual interfaces; and an image acquisition and output module, configured to acquire corresponding image data in the video frame according to the respective sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images, and to obtain a plurality of processed sub-images based on the plurality of sub-images to be output from the plurality of physical interfaces respectively.
In the above solution, the video processing apparatus determines, by the sub-image parameter determining module, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-images of which the layer is respectively located in the plurality of virtual interfaces; the image acquisition and output module then acquires, according to the respective sub-image parameters, the corresponding image data in the video frame to obtain the plurality of sub-images, and obtains a plurality of processed sub-images based thereon to be output from the plurality of physical interfaces respectively. In this way, the sub-image parameters of the sub-images corresponding to the plurality of virtual interfaces are determined according to the interface information, which includes the interface point distance information, and the layer information; each sub-image is acquired according to its sub-image parameters and processed to obtain a processed sub-image, so that the processed sub-images have the same display effect when subsequently displayed.
In an embodiment of the present invention, the sub-image parameter determining module specifically includes: a sub-graph information determining unit, configured to determine, according to the interface coordinate information and the interface width and height information of each of the multiple virtual interfaces, the layer coordinate information and the layer width and height information, sub-graph coordinate information and sub-graph width and height information of each of the multiple sub-graphs, where the layer is located in the multiple virtual interfaces, respectively; a relative coordinate information determining unit, configured to determine, according to the sub-graph coordinate information and the layer coordinate information of each of the multiple sub-graphs, sub-graph relative layer coordinate information of each of the multiple sub-graphs; and a sub-image parameter determining unit, configured to determine sub-image parameters of multiple sub-images corresponding to the multiple virtual interfaces in the video frame according to the sub-image relative layer coordinate information and sub-image width height information of the multiple sub-images, the resolution information, the layer width height information, and the interface point distance information of the multiple virtual interfaces.
In an embodiment of the present invention, the sub-image parameters of the plurality of sub-images include sub-image coordinate pixel information, sub-image width and height pixel information, and a sub-image scaling ratio of the plurality of sub-images; the sub-image parameter determining unit is specifically configured to: determine sub-image coordinate pixel information of the sub-images according to the sub-image relative layer coordinate information, the layer width and height information and the resolution information of the sub-images; determine sub-image width and height pixel information of the sub-images (i.e., their size within the video frame) according to the sub-image width and height information, the layer width and height information and the resolution information; determine sub-image target width and height pixel information of the sub-images (i.e., their size on the display) according to the sub-image width and height information of each of the plurality of sub-images and the interface point distance information of each of the plurality of virtual interfaces; and determine the sub-image scaling ratio of the sub-images according to the sub-image width and height pixel information and the sub-image target width and height pixel information.
In another aspect, an embodiment of the present invention provides a video processing system, including: a processor and a memory coupled to the processor; wherein the memory stores instructions for execution by the processor, and the instructions cause the processor to perform operations to perform any of the video processing methods described above.
In another aspect, an embodiment of the present invention provides a video processing apparatus, including: an embedded processor; a programmable logic device electrically connected to the embedded processor; a memory electrically connected to the programmable logic device; and a plurality of physical interfaces electrically connected to the programmable logic device. The embedded processor is configured to: acquire interface information, where the interface information includes interface coordinate information, interface width and height information and interface point distance information of a plurality of virtual interfaces, and the plurality of virtual interfaces are in one-to-one correspondence with the plurality of physical interfaces; acquire layer information, where the layer information includes layer coordinate information, layer width and height information and resolution information of a video frame corresponding to the layer; and determine, according to the interface information and the layer information, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-images of which the layer is respectively located in the plurality of virtual interfaces. The programmable logic device is configured to: acquire corresponding image data in the video frame from the memory according to the respective sub-image parameters of the sub-images to obtain the sub-images, and obtain a plurality of processed sub-images based on the sub-images to be output from the physical interfaces respectively.
In the above technical solution, the video processing device, through the embedded processor and the programmable logic device included therein, processes the corresponding sub-images in the video frame according to the interface information and the layer information, so that the processed sub-images respectively output from the plurality of physical interfaces of the video processing device have the same display effect when subsequently displayed.
In yet another aspect, the present invention provides a computer-readable storage medium storing computer-executable instructions for performing any one of the above-mentioned video processing methods.
One or more of the above technical solutions may have the following advantages or beneficial effects: the video processing method, the video processing apparatus, the video processing system, the video processing device and the computer readable storage medium provided by the embodiments process the corresponding sub-images in the video frame according to the interface information and the layer information, so that the processed sub-images respectively output from the plurality of physical interfaces have the same display effect when subsequently displayed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart illustrating a video processing method according to a first embodiment of the present invention.
Fig. 2 is a detailed flowchart of step S106 in fig. 1.
Fig. 3 is a schematic structural diagram of a system applying the video processing method according to the first embodiment of the present invention.
Fig. 4A to 4D are schematic diagrams illustrating processing procedures involved in the video processing method according to the first embodiment of the present invention.
Fig. 5 is a schematic block diagram of a video processing apparatus according to a second embodiment of the present invention.
Fig. 6 is a schematic diagram of a unit structure of the sub-image parameter determining module in fig. 5.
Fig. 7 is a schematic diagram of a unit structure of the image acquisition and output module in fig. 5.
Fig. 8 is a schematic structural diagram of a video processing system according to a third embodiment of the present invention.
Fig. 9 is a schematic structural diagram of a computer-readable storage medium according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
[First Embodiment]
Referring to fig. 1, a video processing method of a first embodiment of the present invention is shown. The video processing method comprises the following steps:
S102, interface information is obtained, wherein the interface information comprises interface coordinate information, interface width and height information and interface point distance information of a plurality of virtual interfaces, and the virtual interfaces are in one-to-one correspondence with the physical interfaces;
S104, layer information is obtained, wherein the layer information comprises layer coordinate information, layer width and height information and resolution information of a video frame corresponding to the layer;
S106, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-images of which the layer is respectively located in the plurality of virtual interfaces are determined according to the interface information and the layer information; and
S108, corresponding image data in the video frame is obtained according to the sub-image parameters of the sub-images to obtain the sub-images, and a plurality of processed sub-images are obtained based on the sub-images to be output from the physical interfaces respectively.
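Steps S102 to S108 of fig. 1 can be strung together in one compact sketch. Everything here (the names, the proportional mapping and the nearest-neighbor scaler) is an illustrative assumption, not the patent's FPGA implementation:

```python
def process(frame, layer, ifaces):
    """For each virtual interface: intersect it with the layer, map the
    intersection into video-frame pixels, crop the frame, and scale the
    crop to the interface's dot-pitch resolution.

    layer  = (x, y, w, h, res_x, res_y): canvas units plus frame pixels;
    ifaces = [(x, y, w, h, dot_pitch), ...] in the same canvas units.
    """
    lx, ly, lw, lh, X, Y = layer
    out = []
    for ix, iy, iw, ih, pitch in ifaces:
        x0, y0 = max(lx, ix), max(ly, iy)
        x1, y1 = min(lx + lw, ix + iw), min(ly + lh, iy + ih)
        if x1 <= x0 or y1 <= y0:          # layer misses this interface
            out.append(None)
            continue
        # sub-image parameters: position and size in video-frame pixels
        sx, sy = round((x0 - lx) / lw * X), round((y0 - ly) / lh * Y)
        sw, sh = round((x1 - x0) / lw * X), round((y1 - y0) / lh * Y)
        crop = [row[sx:sx + sw] for row in frame[sy:sy + sh]]
        # target size in display pixels: one pixel per dot pitch
        dw, dh = round((x1 - x0) / pitch), round((y1 - y0) / pitch)
        out.append([[crop[j * sh // dh][i * sw // dw] for i in range(dw)]
                    for j in range(dh)])
    return out
```

Each entry of the result corresponds to one physical interface; a None entry means the layer does not reach that interface's virtual screen.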
To facilitate understanding of the present embodiment, the video processing method of the present embodiment will be specifically described below with reference to fig. 2, fig. 3, and fig. 4A to fig. 4D.
The video processing method provided in the embodiment of the present invention may be implemented by, for example, a video processing device such as a video stitching processor, and may also be implemented by other video processing devices, which is not limited in this embodiment.
The structure of the video processing apparatus will be specifically described below with reference to fig. 3. Fig. 3 shows a video processing device 20, which is a card-inserted video processing device, for example, including a plurality of input cards 22 (only one input card 22 is shown in fig. 3 for illustrative purposes), a master card 24, a plurality of output cards 26 (only one output card 26 is shown in fig. 3 for illustrative purposes), and a switch backplane 28.
In light of the above, the input card 22, the master card 24, and the output card 26 are electrically connected to the switch backplane 28; the master card 24 is provided with an embedded processor 240; the switch backplane 28 includes a matrix switch module 280 such as a cross-point switch chip; and the output card 26 is provided with a programmable logic device 260, a plurality of physical interfaces 262, and a memory 264. The embedded processor 240 is electrically connected to the programmable logic device 260 through the switch backplane 28, the plurality of physical interfaces 262 are respectively electrically connected to the programmable logic device 260, and the memory 264 is electrically connected to the programmable logic device 260. The embedded processor 240 may be, for example, an MCU (Microcontroller Unit) or an ARM (Advanced RISC Machines) processor; the programmable logic device 260 may be, for example, an FPGA (Field-Programmable Gate Array); the physical interface 262 may be, for example, a DVI (Digital Visual Interface) interface, an HDMI (High-Definition Multimedia Interface) interface, or a network port such as an RJ45 port; and the memory 264 may be, for example, a volatile memory such as a DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory). The master card 24 may be used to control the matrix switch module 280 to dispatch a specific input card 22 to a specific output card 26, so as to achieve flexible association between the input cards 22 and the output cards 26, which is not limited in this embodiment.
As mentioned above, the input card 22 may include a plurality of input interfaces (not shown) such as DVI interfaces, HDMI interfaces, etc. The input card 22 is configured to obtain video frames from an external source (e.g., a video source such as a PC) through at least one of the plurality of input interfaces included therein, and send the video frames to the output card 26 via the switch backplane 28, so that the output card 26 buffers the received video frames to the memory 264.
Of course, the video processing method of the present embodiment may also be applied to a non-card-insertion type video processing apparatus, which typically includes an embedded processor, a programmable logic device, a memory, and a plurality of physical interfaces. The embedded processor, the memory, and the plurality of physical interfaces are electrically connected to the programmable logic device, and for the description of the embedded processor, the programmable logic device, the memory, and the physical interfaces, reference may be made to the description of the card-inserted video processing device, which is not described herein again.
In the present embodiment, the video processing device 20 performs the relevant processing in response to receiving the relevant parameter information (interface information and layer information) transmitted by the upper computer 10. The processing procedure of the upper computer 10 will be specifically described below.
First, in response to a canvas creation operation by a user, a canvas is generated in the upper computer software of the upper computer 10 (as shown in fig. 4A). Then, in response to an interface creation operation by the user, the upper computer software generates the same number of virtual interfaces as the physical interfaces 262 (for illustrative purposes, fig. 4A to 4D illustrate two virtual interfaces, i.e., virtual interface 1 and virtual interface 2, as shown in fig. 4A), where the virtual interfaces serve as media for displaying layers and are equivalent to display screens. As shown in fig. 4A, the coordinate information of the top-left vertex A of the virtual interface 1, i.e., the interface coordinate information of the virtual interface 1, is (x_A, y_A); the interface width and height information of the virtual interface 1, i.e., the width and height of the virtual interface 1, are W1 and H1, respectively; and the interface dot pitch information of the virtual interface 1 (i.e., the dot pitch of the virtual interface 1) is D1 (not shown in the drawing). The coordinate information of the top-left vertex B of the virtual interface 2, i.e., the interface coordinate information of the virtual interface 2, is (x_B, y_B); the interface width and height information of the virtual interface 2, i.e., the width and height of the virtual interface 2, are W2 and H2, respectively; and the interface dot pitch information of the virtual interface 2 (i.e., the dot pitch of the virtual interface 2) is D2 (not shown in the drawing).
Thereafter, in response to a user operation through the upper computer software, the original layer (which is, for example, prestored on the upper computer 10 by the user and is not shown in the drawings) is acquired from the upper computer 10 and resized as needed in the upper computer software (for example, by adjusting the width and height of the original layer), and the upper computer software finally displays the adjusted original layer (hereinafter referred to as the layer) on the canvas. The layer corresponds to the video frame input by the input card 22. It should be noted that "corresponding" here indicates that the resolution information of the layer corresponds to the resolution information of the image obtained after the programmable logic device 260 subsequently processes the video frame; that is, in this embodiment, the resolution of the image obtained after the video frame is processed is determined by the resolution of the layer. The resolution information of the video frame, i.e., the width and height of the video frame, is assumed to be X pixels by Y pixels. As shown in fig. 4B, the coordinate information of the top-left vertex C of the layer, i.e., the layer coordinate information, is (x_C, y_C), and the layer width and height information, namely the width and height of the layer, are W3 and H3, respectively.
In response to the user generating the virtual interface 1, the virtual interface 2, and the layer on the canvas, the upper computer 10 (specifically, the upper computer software) obtains the interface information of the virtual interface 1 and the virtual interface 2, that is, the interface coordinate information, interface width and height information, and interface dot pitch information of each virtual interface, as well as the layer information, that is, the layer coordinate information, layer width and height information, and resolution information, and sends this information to the embedded processor 240 of the main control card 24 of the video processing apparatus 20.
It is worth mentioning that, in this embodiment, the top-left vertex O of the canvas is the origin of coordinates, i.e., x_O = 0 and y_O = 0. The units of the above coordinate information (including the interface coordinate information and the layer coordinate information) and width and height information (including the interface width and height information and the layer width and height information) are mm × pixel; that is, in the present embodiment, the coordinate unit of the coordinate system of the canvas is mm × pixel. Furthermore, the unit of the interface dot pitch information is mm. In addition, the coordinate information may be coordinate information of a vertex of the interface or layer other than its top-left vertex, which is not specifically limited herein.
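To make the parameter conventions above concrete, the canvas objects can be modeled as plain records. This is an illustrative sketch only (the type and field names and the sample numbers are assumptions, not from the patent): coordinates and widths/heights are in the canvas unit mm × pixel, dot pitch is in mm, and resolution is in pixels.

```python
from dataclasses import dataclass

@dataclass
class VirtualInterface:
    x: float          # interface coordinate information: top-left x (mm * pixel)
    y: float          # top-left y (mm * pixel)
    width: float      # interface width information (mm * pixel)
    height: float     # interface height information (mm * pixel)
    dot_pitch: float  # interface dot pitch information (mm)

@dataclass
class Layer:
    x: float          # layer coordinate information: top-left x (mm * pixel)
    y: float          # top-left y (mm * pixel)
    width: float      # layer width information (mm * pixel)
    height: float     # layer height information (mm * pixel)
    res_w: int        # width X of the corresponding video frame (pixels)
    res_h: int        # height Y of the corresponding video frame (pixels)

# Two virtual interfaces and one layer spanning both, loosely mirroring Figs. 4A-4D:
vi1 = VirtualInterface(x=0.0, y=0.0, width=800.0, height=600.0, dot_pitch=2.5)
vi2 = VirtualInterface(x=800.0, y=0.0, width=640.0, height=600.0, dot_pitch=4.0)
layer = Layer(x=0.0, y=0.0, width=1000.0, height=600.0, res_w=1920, res_h=1080)
```

The interface information and layer information sent to the embedded processor 240 consist of essentially these fields, and the later calculations in this section consume only them.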
After acquiring the parameter information, the embedded processor 240 determines, according to the interface information and the layer information, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-graphs of the layer located respectively in the plurality of virtual interfaces (see step S106 in fig. 1). In one embodiment, as shown in fig. 2, step S106 may, for example, include the following steps: S1062, determining sub-graph coordinate information and sub-graph width and height information of the sub-graphs of the layer located respectively in the virtual interfaces, according to the interface coordinate information and interface width and height information of each of the virtual interfaces and the layer coordinate information and layer width and height information; S1064, determining the sub-graph relative layer coordinate information of each of the sub-graphs according to the sub-graph coordinate information of each of the sub-graphs and the layer coordinate information; and S1066, determining the sub-image parameters of the plurality of sub-images corresponding to the plurality of virtual interfaces in the video frame, according to the sub-graph relative layer coordinate information and sub-graph width and height information of each of the sub-graphs, the resolution information, the layer width and height information, and the interface dot pitch information of each of the virtual interfaces.
Specifically, regarding step S1062, referring to fig. 4C, the sub-graph of the layer located in virtual interface 1 is sub-graph 1; the sub-graph coordinate information of sub-graph 1, i.e., the top-left vertex coordinate of sub-graph 1, is (x_C, y_C), and the sub-graph width and height information of sub-graph 1, i.e., the width and height of sub-graph 1, are W4 and H3, respectively. Referring to fig. 4D, the sub-graph of the layer located in virtual interface 2 is sub-graph 2; the sub-graph coordinate information of sub-graph 2, i.e., the coordinate of the top-left vertex D of sub-graph 2, is (x_D, y_D), and the sub-graph width and height information of sub-graph 2, i.e., the width and height of sub-graph 2, are W5 and H3, respectively. The embodiment of the present invention is not specifically limited herein, as long as the coordinate information and width and height information of each sub-graph can be determined from the interface coordinate information, the interface width and height information, the layer coordinate information, and the layer width and height information. It should be noted that the unit of the above sub-graph coordinate information and sub-graph width and height information is mm × pixel.
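Geometrically, step S1062 is the intersection of two axis-aligned rectangles on the canvas: the layer and each virtual interface. A minimal sketch of that intersection (the function name, tuple convention, and sample numbers are illustrative assumptions, not from the patent):

```python
def subgraph_rect(layer_rect, iface_rect):
    """Intersect a layer rectangle with an interface rectangle on the canvas.

    Both rectangles are (x, y, w, h) in canvas units (mm * pixel).
    Returns the sub-graph rectangle (x, y, w, h), or None when the layer
    does not overlap the interface at all.
    """
    lx, ly, lw, lh = layer_rect
    ix, iy, iw, ih = iface_rect
    x0 = max(lx, ix)
    y0 = max(ly, iy)
    x1 = min(lx + lw, ix + iw)
    y1 = min(ly + lh, iy + ih)
    if x1 <= x0 or y1 <= y0:
        return None  # no sub-graph in this virtual interface
    return (x0, y0, x1 - x0, y1 - y0)

# A layer spanning both interfaces, as in Figs. 4C/4D:
layer = (0, 0, 1000, 600)
sub1 = subgraph_rect(layer, (0, 0, 800, 600))    # sub-graph 1: (0, 0, 800, 600)
sub2 = subgraph_rect(layer, (800, 0, 640, 600))  # sub-graph 2: (800, 0, 200, 600)
```

With these sample numbers, sub-graph 1 starts at the layer's own top-left vertex (matching the example in the text, where the two vertices coincide), while sub-graph 2 covers only the part of the layer that crosses into virtual interface 2.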
Regarding step S1064, the sub-graph relative layer coordinate information of each of the sub-graphs is the offset of the top-left vertex coordinate of the sub-graph from the top-left vertex coordinate of the layer. For example, as shown in fig. 4C, the sub-graph relative layer coordinate information of sub-graph 1 is obtained by subtracting the layer coordinate information from the sub-graph coordinate information of sub-graph 1; in this example, since the top-left vertex coordinate of sub-graph 1 coincides with that of the layer, the sub-graph relative layer coordinate information of sub-graph 1 is (0, 0). As shown in fig. 4D, the sub-graph relative layer coordinate information of sub-graph 2 is the sub-graph coordinate information of sub-graph 2 minus the layer coordinate information; in this example, the top-left vertex coordinate of sub-graph 2 is (x_D, y_D) and the top-left vertex coordinate of the layer is (x_C, y_C), so the sub-graph relative layer coordinate information of sub-graph 2 is (x_D − x_C, y_D − y_C). In this example, y_D = y_C, and therefore the sub-graph relative layer coordinate information of sub-graph 2 is (x_D − x_C, 0).
With respect to step S1066, the sub-image parameters of each of the plurality of sub-images may include, for example, the sub-image coordinate pixel information, the sub-image width and height pixel information, and the sub-image scaling ratio of each of the plurality of sub-images. Correspondingly, step S1066 specifically includes: determining the sub-image coordinate pixel information of each of the sub-images according to the sub-graph relative layer coordinate information of each of the sub-graphs, the layer width and height information, and the resolution information; determining the sub-image width and height pixel information of each of the sub-images according to the sub-graph width and height information of each of the sub-graphs, the layer width and height information, and the resolution information; determining the sub-graph width and height pixel information of each of the sub-graphs according to the sub-graph width and height information of each of the sub-graphs and the interface dot pitch information of each of the virtual interfaces; and determining the sub-image scaling ratio of each of the sub-images according to the sub-image width and height pixel information of each of the sub-images and the sub-graph width and height pixel information of each of the sub-graphs. Note that the unit of the sub-image coordinate pixel information and the sub-image width and height pixel information is pixel (not mm × pixel); they represent the coordinate information and width and height information, within the video frame, of the sub-image corresponding to each sub-graph.
In the above, specifically, the abscissa of the sub-image coordinate pixel information is the product of the width corresponding to the resolution information and the abscissa of the sub-graph relative layer coordinate information, divided by the width of the layer width and height information; in this example, the abscissa of the sub-image coordinate pixel information of the sub-image corresponding to sub-graph 1 is (X × 0)/W3, that is, 0. The ordinate of the sub-image coordinate pixel information is the product of the height corresponding to the resolution information and the ordinate of the sub-graph relative layer coordinate information, divided by the height of the layer width and height information; in this example, the ordinate of the sub-image coordinate pixel information of the sub-image corresponding to sub-graph 1 is (Y × 0)/H3, that is, 0. The width of the sub-image width and height pixel information is the product of the width corresponding to the resolution information and the width of the sub-graph width and height information, divided by the width of the layer width and height information; in this example, the width of the sub-image width and height pixel information of the sub-image corresponding to sub-graph 1 is (X × W4)/W3. The height of the sub-image width and height pixel information is the product of the height corresponding to the resolution information and the height of the sub-graph width and height information, divided by the height of the layer width and height information; in this example, the height of the sub-image width and height pixel information of the sub-image corresponding to sub-graph 1 is (Y × H3)/H3, that is, Y.
The sub-graph width and height pixel information is the sub-graph width and height information divided by the corresponding interface dot pitch information; that is, the height of the sub-graph width and height pixel information is the height of the sub-graph width and height information divided by the dot pitch information of the virtual interface corresponding to the sub-graph, and the width of the sub-graph width and height pixel information is the width of the sub-graph width and height information divided by that same dot pitch information. In this example, the width of the sub-graph width and height pixel information of sub-graph 1 is W4/D1, and its height is H3/D1. The sub-image scaling ratio includes a sub-image horizontal scaling ratio and a sub-image vertical scaling ratio: the sub-image horizontal scaling ratio is the ratio between the width of the sub-image width and height pixel information and the width of the sub-graph width and height pixel information, and correspondingly, the sub-image vertical scaling ratio is the ratio between the height of the sub-image width and height pixel information and the height of the sub-graph width and height pixel information. In this example, the sub-image horizontal scaling ratio corresponding to sub-graph 1 is the ratio of (X × W4)/W3 to W4/D1, and the sub-image vertical scaling ratio corresponding to sub-graph 1 is the ratio of (Y × H3)/H3 to H3/D1. The process of calculating the related information of sub-graph 2 is not repeated here; reference may be made to the calculation for sub-graph 1.
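Collecting the formulas of step S1066, the sub-image parameters for one sub-graph can be computed as follows. This is a hedged sketch: the names and sample numbers are illustrative, and the scaling ratio is expressed here as display size over crop size, whereas the text only fixes it as the ratio between the two quantities.

```python
def sub_image_params(rel_x, rel_y, sub_w, sub_h,   # sub-graph rel. coords and size (mm * pixel)
                     layer_w, layer_h,             # layer size (mm * pixel)
                     res_x, res_y,                 # video-frame resolution X x Y (pixels)
                     dot_pitch):                   # dot pitch D of the matching interface (mm)
    # Sub-image coordinate pixel information: crop origin inside the video frame.
    px = res_x * rel_x / layer_w
    py = res_y * rel_y / layer_h
    # Sub-image width and height pixel information: crop size inside the video frame.
    crop_w = res_x * sub_w / layer_w
    crop_h = res_y * sub_h / layer_h
    # Sub-graph width and height pixel information: target size on the sub-screen.
    disp_w = sub_w / dot_pitch
    disp_h = sub_h / dot_pitch
    # Scaling ratio between the crop size and the display size.
    return (px, py), (crop_w, crop_h), (disp_w, disp_h), (disp_w / crop_w, disp_h / crop_h)

# Sub-graph 1 of Fig. 4C with made-up numbers: rel. coords (0, 0),
# size W4 x H3 = 800 x 600 (mm * pixel), layer 1000 x 600 (mm * pixel),
# frame 1920 x 1080 pixels, dot pitch D1 = 2.5 mm.
origin, crop, disp, scale = sub_image_params(0, 0, 800, 600, 1000, 600, 1920, 1080, 2.5)
```

With these numbers the crop is 1536 × 1080 pixels of the frame ((X × W4)/W3 by (Y × H3)/H3) and the display target is 320 × 240 pixels (W4/D1 by H3/D1), so the scaler shrinks the crop in both directions.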
Finally, the embedded processor 240 sends the sub-image parameters of each of the plurality of sub-images to the programmable logic device 260 via the switch backplane 28, and the programmable logic device 260 acquires and processes the corresponding image data in the video frame from the memory according to the sub-image parameters of each of the plurality of sub-images to obtain the plurality of sub-images, obtains a plurality of processed sub-images based on the plurality of sub-images, and outputs them from the plurality of physical interfaces 262, respectively (see step S108 in fig. 1). Specifically, step S108 includes: acquiring the corresponding image data in the video frame according to the sub-image coordinate pixel information and sub-image width and height pixel information of each of the sub-images to obtain the plurality of sub-images; and scaling the plurality of sub-images according to their respective sub-image scaling ratios to obtain the plurality of processed sub-images. In this embodiment, the plurality of processed sub-images are output to the corresponding sub-display screens through the physical interfaces, and the dot pitches of the sub-display screens correspond one-to-one to the dot pitches of the virtual interfaces. With the above technical solution, when each processed sub-image output via each physical interface of the video processing device 20 is displayed on a sub-screen having the same dot pitch as the corresponding virtual interface, the image display effect on the sub-screens is consistent, and the problem of inconsistent display effects due to different dot pitches does not occur.
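In software terms, step S108 is a crop of the video frame by the sub-image pixel parameters followed by a resample to the target size. The sketch below uses nearest-neighbour resampling on a frame stored as a list of rows purely for illustration; in the actual device this work is done by the programmable logic device 260 on image data in memory, and the resampling method is not specified in the text.

```python
def crop_and_scale(frame, px, py, crop_w, crop_h, out_w, out_h):
    """Crop a crop_w x crop_h region at (px, py) and scale it to out_w x out_h.

    `frame` is a list of rows of pixel values; nearest-neighbour sampling
    stands in for the device's real scaler.
    """
    cropped = [row[px:px + crop_w] for row in frame[py:py + crop_h]]
    return [
        [cropped[r * crop_h // out_h][c * crop_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

# A 4x4 test frame; crop its left 2x4 half and scale it up to 4x4,
# as the scaler would when the display target is wider than the crop.
frame = [[10 * r + c for c in range(4)] for r in range(4)]
out = crop_and_scale(frame, 0, 0, 2, 4, 4, 4)
```

Each physical interface 262 would carry the result of one such crop-and-scale, sized for the dot pitch of its sub-display screen.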
Optionally, before step S108, the video processing method may further include, for example: determining the sub-graph relative interface coordinate information of each of the sub-graphs according to the interface coordinate information of each of the virtual interfaces and the sub-graph coordinate information of each of the sub-graphs; and determining the sub-graph relative interface coordinate pixel information of each of the sub-graphs according to the sub-graph relative interface coordinate information of each of the sub-graphs and the interface dot pitch information of each of the virtual interfaces. Here, the sub-graph relative interface coordinate information of each sub-graph is the difference between the sub-graph coordinate information of the sub-graph and the interface coordinate information of the corresponding virtual interface. In this example, the sub-graph relative interface coordinate information of sub-graph 1 is the difference between the sub-graph coordinate information of sub-graph 1 and the interface coordinate information of virtual interface 1, namely (x_C − x_A, y_C − y_A), and the sub-graph relative interface coordinate pixel information of sub-graph 1 is the ratio of (x_C − x_A, y_C − y_A) to the dot pitch information D1 of the virtual interface 1 corresponding to sub-graph 1, i.e., (x_C − x_A, y_C − y_A)/D1. Correspondingly, after step S108, the video processing method further includes: outputting the sub-graph relative interface coordinate pixel information of each of the sub-graphs from the plurality of physical interfaces, respectively.
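The optional bookkeeping above reduces to a coordinate subtraction followed by a division by the dot pitch. A small sketch (function name and sample numbers are illustrative assumptions):

```python
def rel_interface_pixel_coords(sub_x, sub_y, iface_x, iface_y, dot_pitch):
    """Position of a sub-graph inside its virtual interface, in screen pixels.

    (sub_x, sub_y) and (iface_x, iface_y) are canvas coordinates in
    mm * pixel; dot_pitch is the interface's dot pitch in mm.
    """
    return ((sub_x - iface_x) / dot_pitch, (sub_y - iface_y) / dot_pitch)

# A sub-graph offset 100 mm*pixel into an interface with a 2.5 mm dot pitch
# lands 40 screen pixels in from the interface's top-left corner.
print(rel_interface_pixel_coords(100.0, 0.0, 0.0, 0.0, 2.5))  # (40.0, 0.0)
```

This per-interface pixel offset is what lets downstream sending and receiving cards place each processed sub-image at the right position on the physical screen.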
In this way, devices downstream of the video processing device 20 (e.g., a sending card and a receiving card) can obtain the relative interface coordinate information of each sub-image, so that the sub-images output by the physical interfaces 262 can subsequently be presented on the physical display screen according to the position of each sub-graph within its corresponding virtual interface on the upper computer, and the display effect (display scale and display position) of the sub-graphs on the virtual interfaces of the upper computer 10 is completely consistent with the display effect of the images on the display screen, such as an LED display screen, electrically connected to the video processing device 20 (specifically, to its physical interfaces 262).
In summary, the video processing method first determines, according to the interface information and the layer information, the respective sub-image parameters of the plurality of sub-images in the video frame corresponding to the plurality of sub-graphs of the layer located respectively in the plurality of virtual interfaces; it then acquires the corresponding image data in the video frame according to those sub-image parameters to obtain the plurality of sub-images, and obtains, based on them, a plurality of processed sub-images that are output from the plurality of physical interfaces, respectively. Because the sub-image parameters are determined from interface information that includes the interface dot pitch information, the processed sub-images obtained from them subsequently have a consistent display effect. The video processing method provided by this embodiment is therefore suitable not only for display on sub-display screens with different dot pitches, but also for display on sub-display screens with the same dot pitch.
[ second embodiment ]
As shown in fig. 5, a video processing apparatus 200 is provided according to a second embodiment of the present invention. The video processing apparatus 200 includes, for example, a first acquisition module 202, a second acquisition module 204, a sub-image parameter determination module 206, and an image acquisition and output module 208.
Specifically, the first obtaining module 202 is configured to obtain interface information, where the interface information includes interface coordinate information, interface width and height information, and interface dot pitch information of each of a plurality of virtual interfaces, and the plurality of virtual interfaces correspond one-to-one to a plurality of physical interfaces;
the second obtaining module 204 is configured to obtain layer information, where the layer information includes layer coordinate information, layer width and height information, and resolution information of a video frame corresponding to the layer;
the sub-image parameter determining module 206 is configured to determine, according to the interface information and the layer information, sub-image parameters of each of a plurality of sub-images in the video frame corresponding to a plurality of sub-graphs of the layer located respectively in the plurality of virtual interfaces; and
the image obtaining and outputting module 208 is configured to obtain corresponding image data in the video frame according to respective sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images, obtain a plurality of processed sub-images based on the plurality of sub-images, and output the processed sub-images from the plurality of physical interfaces respectively.
In one embodiment of the present invention, as shown in fig. 6, the sub-image parameter determining module 206 specifically includes a sub-graph information determining unit 2062, a relative coordinate information determining unit 2064, and a sub-image parameter determining unit 2066. The sub-graph information determining unit 2062 is configured to determine, according to the interface coordinate information and interface width and height information of each of the multiple virtual interfaces and the layer coordinate information and layer width and height information, the sub-graph coordinate information and sub-graph width and height information of each of the multiple sub-graphs of the layer located respectively in the multiple virtual interfaces. The relative coordinate information determining unit 2064 is configured to determine the sub-graph relative layer coordinate information of each of the multiple sub-graphs according to the sub-graph coordinate information of each of the multiple sub-graphs and the layer coordinate information. The sub-image parameter determining unit 2066 is configured to determine, according to the sub-graph relative layer coordinate information and sub-graph width and height information of each of the multiple sub-graphs, the resolution information, the layer width and height information, and the interface dot pitch information of each of the multiple virtual interfaces, the sub-image parameters of each of the multiple sub-images corresponding to the multiple virtual interfaces in the video frame.
In an embodiment of the present invention, the sub-image parameters of the plurality of sub-images include the sub-image coordinate pixel information, sub-image width and height pixel information, and sub-image scaling ratio of each of the plurality of sub-images. The sub-image parameter determining unit 2066 is specifically configured to: determine the sub-image coordinate pixel information of each of the plurality of sub-images according to the sub-graph relative layer coordinate information of each of the sub-graphs, the layer width and height information, and the resolution information; determine the sub-image width and height pixel information of each of the plurality of sub-images according to the sub-graph width and height information of each of the sub-graphs, the layer width and height information, and the resolution information; determine the sub-graph width and height pixel information of each of the sub-graphs according to the sub-graph width and height information of each of the sub-graphs and the interface dot pitch information of each of the virtual interfaces; and determine the sub-image scaling ratio of each of the plurality of sub-images according to the sub-image width and height pixel information of each of the sub-images and the sub-graph width and height pixel information of each of the sub-graphs.
In one embodiment of the present invention, as shown in fig. 7, the image acquisition and output module 208 specifically includes, for example, a sub-image obtaining unit 2082 and a processed sub-image obtaining unit 2084. The sub-image obtaining unit 2082 is configured to acquire the corresponding image data in the video frame according to the sub-image coordinate pixel information and sub-image width and height pixel information of the plurality of sub-images, so as to obtain the plurality of sub-images. The processed sub-image obtaining unit 2084 is configured to scale the plurality of sub-images according to the sub-image scaling ratios of the plurality of sub-images, so as to obtain the plurality of processed sub-images.
In an embodiment of the present invention, the relative coordinate information determining unit 2064 is further configured to: determine the sub-graph relative interface coordinate information of each of the multiple sub-graphs according to the interface coordinate information of each of the multiple virtual interfaces and the sub-graph coordinate information of each of the multiple sub-graphs; and determine the sub-graph relative interface coordinate pixel information of each of the multiple sub-graphs according to the sub-graph relative interface coordinate information of each of the multiple sub-graphs and the interface dot pitch information of each of the multiple virtual interfaces. The image acquisition and output module 208 is further configured to output the sub-graph relative interface coordinate pixel information of each of the sub-graphs from the plurality of physical interfaces, respectively.
For specific working processes and technical effects among the modules in the video processing apparatus 200 in this embodiment, reference is made to the description of the relevant steps in the foregoing first embodiment, and details are not repeated here.
[ third embodiment ]
As shown in fig. 8, a video processing system 300 is provided according to a third embodiment of the present invention. The video processing system 300 includes, for example, a processor 330 and a memory 310 coupled to the processor 330. The memory 310 may be, for example, a non-volatile memory having stored thereon instructions 311 for execution by the processor 330. The processor 330 may, for example, comprise an embedded processor. The processor 330, when executing the instructions 311, performs the video processing method provided by the first embodiment.
[ fourth embodiment ]
As shown in FIG. 9, a fourth embodiment of the invention provides a computer-readable storage medium 400 having computer-executable instructions 410 stored thereon. The computer-executable instructions 410 are for performing the video processing method described in the first embodiment above. The computer-readable storage medium 400 is, for example, a non-volatile memory, including, for example: magnetic media (e.g., hard disks, floppy disks, and magnetic tape), optical media (e.g., CD-ROM disks and DVDs), magneto-optical media, and hardware devices specially constructed for storing and executing computer-executable instructions (e.g., read-only memories (ROMs), random access memories (RAMs), flash memories, etc.). The computer-executable instructions 410 in the computer-readable storage medium 400 may be executed by one or more processors or processing devices.
In addition, it should be understood that the foregoing embodiments are merely exemplary illustrations of the present invention; the technical solutions of the embodiments can be combined and matched arbitrarily, provided there is no conflict between technical features or structural contradiction, and such combinations do not violate the purpose of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only a logical division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may also be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit/module in the embodiments of the present invention may be integrated into one processing unit/module, or each unit/module may exist alone physically, or two or more units/modules may be integrated into one unit/module. The integrated units/modules may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units/modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A video processing method, comprising:
acquiring interface information, wherein the interface information comprises interface coordinate information, interface width and height information, and interface dot pitch information of each of a plurality of virtual interfaces, and the plurality of virtual interfaces are in one-to-one correspondence with a plurality of physical interfaces;
acquiring layer information, wherein the layer information comprises layer coordinate information, layer width and height information and resolution information of a video frame corresponding to the layer;
determining, according to the interface information and the layer information, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-graphs of the layer located respectively in the plurality of virtual interfaces; and
acquiring corresponding image data in the video frame according to the respective sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images, obtaining a plurality of processed sub-images based on the plurality of sub-images, and outputting the plurality of processed sub-images from the plurality of physical interfaces, respectively;
wherein determining, according to the interface information and the layer information, the respective sub-image parameters of the plurality of sub-images in the video frame corresponding to the plurality of sub-graphs of the layer located respectively in the plurality of virtual interfaces specifically comprises:
determining sub-graph coordinate information and sub-graph width and height information of the plurality of sub-graphs of the layer located respectively in the plurality of virtual interfaces according to the interface coordinate information and interface width and height information of each of the plurality of virtual interfaces and the layer coordinate information and layer width and height information;
determining sub-graph relative layer coordinate information of the sub-graphs according to the sub-graph coordinate information of the sub-graphs and the layer coordinate information; and
determining sub-image parameters of the plurality of sub-images corresponding to the plurality of virtual interfaces in the video frame according to the sub-graph relative layer coordinate information and sub-graph width and height information of each of the plurality of sub-graphs, the resolution information, the layer width and height information, and the interface dot pitch information of each of the plurality of virtual interfaces.
2. The video processing method of claim 1, wherein the sub-image parameters of the plurality of sub-images comprise sub-image coordinate pixel information, sub-image width and height pixel information, and a sub-image scaling ratio of each of the plurality of sub-images;
determining sub-image parameters of a plurality of sub-images corresponding to the plurality of virtual interfaces respectively in the video frame according to the sub-image relative layer coordinate information and sub-image width and height information of the plurality of sub-images, the resolution information, the layer width and height information, and the interface point distance information of the plurality of virtual interfaces respectively, specifically including:
determining the sub-image coordinate pixel information of each of the plurality of sub-images according to the relative layer coordinate information of the sub-image, the layer width and height information, and the resolution information;
determining the sub-image width and height pixel information of each of the plurality of sub-images according to the sub-image width and height information of the sub-image, the layer width and height information, and the resolution information;
determining output width and height pixel information of each of the plurality of sub-images according to the sub-image width and height information of the sub-image and the interface point distance information of the corresponding virtual interface; and
determining the sub-image scaling ratio of each of the plurality of sub-images according to the sub-image width and height pixel information and the output width and height pixel information of the sub-image.
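The determinations in claim 2 amount to two unit conversions and a ratio: layer-relative coordinates are mapped to source-frame pixels via the layer size and frame resolution, the required display pixel count follows from the interface point distance (pixel pitch), and the scaling ratio compares the two. A sketch under those assumptions — all names are hypothetical, and physical sizes and pitch are taken to be in the same unit:

```python
def sub_image_params(rel_x, rel_y, sub_w, sub_h,
                     layer_w, layer_h, res_w, res_h, pitch):
    # Pixel coordinates of the crop inside the video frame: the physical
    # offset within the layer, scaled to the frame resolution.
    src_x = round(rel_x / layer_w * res_w)
    src_y = round(rel_y / layer_h * res_h)
    # Pixel width/height of the crop in the video frame.
    src_w = round(sub_w / layer_w * res_w)
    src_h = round(sub_h / layer_h * res_h)
    # Pixel width/height required on the display, from the point distance.
    out_w = round(sub_w / pitch)
    out_h = round(sub_h / pitch)
    # Scaling ratio between cropped pixels and output pixels.
    scale_x = out_w / src_w
    scale_y = out_h / src_h
    return (src_x, src_y, src_w, src_h), (out_w, out_h), (scale_x, scale_y)
```

For example, a 1000x500 sub-image on a 2000x1000 layer carrying a 1920x1080 frame crops 960x540 source pixels; with a 2.5 point pitch it must fill 400x200 display pixels, giving a downscaling ratio of roughly 0.42 x 0.37.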
3. The video processing method according to claim 2,
the obtaining, according to the sub-image parameters of the plurality of sub-images, corresponding image data in the video frame to obtain the plurality of sub-images, and obtaining, based on the plurality of sub-images, a plurality of processed sub-images to be output from the plurality of physical interfaces respectively specifically includes:
acquiring corresponding image data in the video frame according to the sub-image coordinate pixel information and the sub-image width and height pixel information of the sub-images to obtain the sub-images; and
scaling the plurality of sub-images according to the sub-image scaling ratios of each of the plurality of sub-images to obtain the plurality of processed sub-images.
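Claim 3's acquisition step is a crop followed by a scale. Below is a toy illustration on a frame stored as a list of pixel rows, using nearest-neighbour sampling in place of the device's hardware scaler; the function name and representation are assumptions for illustration only:

```python
def crop_and_scale(frame, src_x, src_y, src_w, src_h, out_w, out_h):
    # Crop the sub-image region out of the frame using the coordinate and
    # width/height pixel information, then resample it to the output size.
    crop = [row[src_x:src_x + src_w] for row in frame[src_y:src_y + src_h]]
    # Nearest-neighbour scaling to out_w x out_h.
    return [[crop[min(src_h - 1, y * src_h // out_h)]
                 [min(src_w - 1, x * src_w // out_w)]
             for x in range(out_w)]
            for y in range(out_h)]
```

A real implementation would apply a filtered scaler per sub-image; this only shows the data flow from sub-image parameters to a processed sub-image.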
4. The video processing method according to any one of claims 1 to 3,
before the obtaining, according to the sub-image parameters of the plurality of sub-images, corresponding image data in the video frame to obtain the plurality of sub-images, and obtaining, based on the plurality of sub-images, a plurality of processed sub-images to be output from the plurality of physical interfaces, respectively, the video processing method further includes:
determining relative interface coordinate information of each of the plurality of sub-images according to the interface coordinate information of each of the plurality of virtual interfaces and the sub-image coordinate information of each of the plurality of sub-images; and
determining relative interface coordinate pixel information of each of the plurality of sub-images according to the relative interface coordinate information of each of the plurality of sub-images and the interface point distance information of each of the plurality of virtual interfaces;
after the obtaining, according to the sub-image parameters of the plurality of sub-images, corresponding image data in the video frame to obtain the plurality of sub-images, and obtaining a plurality of processed sub-images based on the plurality of sub-images and outputting the processed sub-images from the plurality of physical interfaces, respectively, the video processing method further includes:
outputting the relative interface coordinate pixel information of each of the plurality of sub-images from the plurality of physical interfaces respectively.
5. A video processing apparatus, comprising:
a first acquisition module, configured to acquire interface information, wherein the interface information comprises interface coordinate information, interface width and height information, and interface point distance information of a plurality of virtual interfaces, and the plurality of virtual interfaces are in one-to-one correspondence with a plurality of physical interfaces;
a second acquisition module, configured to acquire layer information, wherein the layer information comprises layer coordinate information, layer width and height information, and resolution information of a video frame corresponding to the layer;
a sub-image parameter determining module, configured to determine, according to the interface information and the layer information, sub-image parameters of each of a plurality of sub-images in the video frame, the plurality of sub-images corresponding to the portions of the layer located in the plurality of virtual interfaces respectively; and
an image acquisition and output module, configured to acquire corresponding image data in the video frame according to the sub-image parameters of each of the plurality of sub-images to obtain the plurality of sub-images, and to obtain a plurality of processed sub-images based on the plurality of sub-images and output the processed sub-images from the plurality of physical interfaces respectively;
the sub-image parameter determining module specifically includes:
a sub-image information determining unit, configured to determine, according to the interface coordinate information and interface width and height information of each of the plurality of virtual interfaces, the layer coordinate information, and the layer width and height information, sub-image coordinate information and sub-image width and height information of each of the plurality of sub-images in which the layer is located in the plurality of virtual interfaces respectively;
a relative coordinate information determining unit, configured to determine relative layer coordinate information of each of the plurality of sub-images according to the sub-image coordinate information of each of the plurality of sub-images and the layer coordinate information; and
a sub-image parameter determining unit, configured to determine sub-image parameters of the plurality of sub-images corresponding to the plurality of virtual interfaces in the video frame according to the relative layer coordinate information and sub-image width and height information of each of the plurality of sub-images, the resolution information, the layer width and height information, and the interface point distance information of each of the plurality of virtual interfaces.
6. The video processing apparatus of claim 5, wherein the sub-image parameters of each sub-image comprise sub-image coordinate pixel information, sub-image width and height pixel information, and a sub-image scaling ratio;
the sub-image parameter determining unit is specifically configured to:
determining the sub-image coordinate pixel information of each of the plurality of sub-images according to the relative layer coordinate information of the sub-image, the layer width and height information, and the resolution information;
determining the sub-image width and height pixel information of each of the plurality of sub-images according to the sub-image width and height information of the sub-image, the layer width and height information, and the resolution information;
determining output width and height pixel information of each of the plurality of sub-images according to the sub-image width and height information of the sub-image and the interface point distance information of the corresponding virtual interface; and
determining the sub-image scaling ratio of each of the plurality of sub-images according to the sub-image width and height pixel information and the output width and height pixel information of the sub-image.
7. A video processing system, comprising: a processor and a memory coupled to the processor; wherein the memory stores instructions which, when executed by the processor, cause the processor to perform the video processing method of any one of claims 1 to 4.
8. A video processing apparatus, comprising:
an embedded processor;
a programmable logic device electrically connected to the embedded processor;
a memory electrically connected to the programmable logic device; and
a plurality of physical interfaces electrically connected to the programmable logic device;
wherein the embedded processor is configured to:
acquire interface information, wherein the interface information comprises interface coordinate information, interface width and height information, and interface point distance information of a plurality of virtual interfaces, and the plurality of virtual interfaces are in one-to-one correspondence with the plurality of physical interfaces;
acquire layer information, wherein the layer information comprises layer coordinate information, layer width and height information, and resolution information of a video frame corresponding to the layer; and
determine, according to the interface information and the layer information, sub-image parameters of each of a plurality of sub-images in the video frame, the plurality of sub-images corresponding to the portions of the layer located in the plurality of virtual interfaces respectively;
the programmable logic device is configured to:
acquire corresponding image data in the video frame from the memory according to the sub-image parameters of each of the plurality of sub-images to obtain the plurality of sub-images, and obtain a plurality of processed sub-images based on the plurality of sub-images and output the processed sub-images from the plurality of physical interfaces respectively;
wherein the programmable logic device is further configured to:
determine sub-image coordinate information and sub-image width and height information of each of the plurality of sub-images in which the layer is located in the plurality of virtual interfaces respectively, according to the interface coordinate information and interface width and height information of each of the plurality of virtual interfaces, the layer coordinate information, and the layer width and height information; determine relative layer coordinate information of each of the plurality of sub-images according to the sub-image coordinate information of each of the plurality of sub-images and the layer coordinate information; and determine sub-image parameters of the plurality of sub-images corresponding to the plurality of virtual interfaces in the video frame according to the relative layer coordinate information and sub-image width and height information of each of the plurality of sub-images, the resolution information, the layer width and height information, and the interface point distance information of each of the plurality of virtual interfaces.
CN202010107917.7A 2020-02-21 2020-02-21 Video processing method, device and system and video processing equipment Active CN113301411B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010107917.7A CN113301411B (en) 2020-02-21 2020-02-21 Video processing method, device and system and video processing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010107917.7A CN113301411B (en) 2020-02-21 2020-02-21 Video processing method, device and system and video processing equipment

Publications (2)

Publication Number Publication Date
CN113301411A CN113301411A (en) 2021-08-24
CN113301411B true CN113301411B (en) 2023-03-14

Family

ID=77317460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010107917.7A Active CN113301411B (en) 2020-02-21 2020-02-21 Video processing method, device and system and video processing equipment

Country Status (1)

Country Link
CN (1) CN113301411B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096576A (en) * 2011-01-14 2011-06-15 深圳市创凯电子有限公司 Quick tiled display method for light-emitting diode (LED) screens with different dot pitches and different dot matrixes
CN102682734A (en) * 2010-06-21 2012-09-19 应骏 Image correction method for display panel
CN106303225A (en) * 2016-07-29 2017-01-04 努比亚技术有限公司 A kind of image processing method and electronic equipment
CN106683576A (en) * 2016-11-14 2017-05-17 深圳市大族元亨光电股份有限公司 Making method of LED display screen
CN110362286A (en) * 2019-07-10 2019-10-22 苏州铭威天欣信息科技有限公司 Multi-screen splicing dynamic configuration method and configuration device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7002589B2 (en) * 2000-03-17 2006-02-21 Sun Microsystems, Inc. Blending the edges of multiple overlapping screen images
JP4510853B2 (en) * 2007-07-05 2010-07-28 シャープ株式会社 Image data display device, image data output device, image data display method, image data output method and program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682734A (en) * 2010-06-21 2012-09-19 应骏 Image correction method for display panel
CN102096576A (en) * 2011-01-14 2011-06-15 深圳市创凯电子有限公司 Quick tiled display method for light-emitting diode (LED) screens with different dot pitches and different dot matrixes
CN106303225A (en) * 2016-07-29 2017-01-04 努比亚技术有限公司 A kind of image processing method and electronic equipment
CN106683576A (en) * 2016-11-14 2017-05-17 深圳市大族元亨光电股份有限公司 Making method of LED display screen
CN110362286A (en) * 2019-07-10 2019-10-22 苏州铭威天欣信息科技有限公司 Multi-screen splicing dynamic configuration method and configuration device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and implementation of fine pitch COB LED display; Nick G. M. Yang et al.; 2013 10th China International Forum on Solid State Lighting (ChinaSSL); 2015-08-06; full text *
Design of a multi-video-source system for an LED/LCD video splicer; Jiang Yong; Microcontrollers & Embedded Systems; 2016-07-01 (Issue 07); full text *

Also Published As

Publication number Publication date
CN113301411A (en) 2021-08-24

Similar Documents

Publication Publication Date Title
US10489948B2 (en) Image synthesis method, image chip, and image device
CN109636885B (en) Sequential frame animation production method and system for H5 page
US11783445B2 (en) Image processing method, device and apparatus, image fitting method and device, display method and apparatus, and computer readable medium
TW201439969A (en) Rendering method and device
CN112650460B (en) Media display method and media display device
CN110099224B (en) Pre-monitoring display method, device and system, computer equipment and storage medium
TW202025080A (en) Methods and devices for graphics processing
CN212137804U (en) Point-to-point video splicing system
CN106951204B (en) Image synchronization method based on computer cluster visualization system
CN113094010A (en) Image display method, device and system
CN113301411B (en) Video processing method, device and system and video processing equipment
CN105808184A (en) Method and device for displaying Android 2D (Two-dimensional) application image, and headset equipment
CN112540735B (en) Multi-screen synchronous display method, device and system and computer storage medium
CN110364127A (en) The adaptive display method and device of intelligent large screen system
JP7289390B2 (en) Image processing device for display wall system and display control method
CN115880156B (en) Multi-layer spliced display control method and device
CN113132556B (en) Video processing method, device and system and video processing equipment
CN110597577A (en) Head-mounted visual equipment and split-screen display method and device thereof
CN104639969A (en) Full screen page display method and device
US9998723B2 (en) Filling disparity holes based on resolution decoupling
CN104951260A (en) Implementation method of mixed interface based on Qt under embedded-type Linux platform
CN111158618A (en) Method and equipment for displaying pattern based on mobile terminal
CN112584217A (en) Picture identification method, device and system
CN113741836B (en) Media processing method, device and system
CN114371820A (en) Method and device for realizing special-shaped layer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240710

Address after: 01A, 12th Floor, No. 8 Caihefang Road, Haidian District, Beijing, 100000

Patentee after: PIXELHUE TECHNOLOGY Ltd.

Country or region after: China

Address before: 710075 DEF101, Zero One Square, Xi'an Software Park, No. 72 Zhangbajie Science and Technology Second Road, Xi'an High-tech Zone, Shaanxi Province

Patentee before: XI'AN NOVASTAR TECH Co.,Ltd.

Country or region before: China