CN113132556B - Video processing method, device and system and video processing equipment

Publication number
CN113132556B
Authority
CN
China
Prior art keywords
sub-images, information, layer, image
Legal status
Active
Application number
CN202010048892.8A
Other languages
Chinese (zh)
Other versions
CN113132556A (en)
Inventor
孙立停
周晶晶
Current Assignee
Xian Novastar Electronic Technology Co Ltd
Original Assignee
Xian Novastar Electronic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xian Novastar Electronic Technology Co Ltd
Priority to CN202010048892.8A
Publication of CN113132556A
Application granted
Publication of CN113132556B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region

Abstract

The embodiments of the invention disclose a video processing method, a video processing apparatus, a video processing system and video processing equipment. The video processing method includes, for example: acquiring interface information, where the interface information includes interface coordinate information and interface resolution information of each of a plurality of virtual interfaces, and the plurality of virtual interfaces correspond one-to-one to a plurality of physical interfaces; acquiring layer information, where the layer information includes layer coordinate information, layer resolution information, and video frame resolution information of a video frame corresponding to the layer; determining, according to the interface information and the layer information, sub-image parameters of a plurality of sub-images in the video frame that correspond to the sub-graphs of the layer located in the respective virtual interfaces; and acquiring corresponding image data in the video frame according to the sub-image parameters of each sub-image to obtain the plurality of sub-images, obtaining a plurality of processed sub-images based on the plurality of sub-images, and outputting the processed sub-images from the plurality of physical interfaces respectively.

Description

Video processing method, device and system and video processing equipment
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video processing method, a video processing apparatus, a video processing system, and a video processing device.
Background
In the field of video processing technology, a video processing device is generally used to process images. Such a device generally includes a plurality of output interfaces (i.e., physical interfaces) for outputting images, and how to output images flexibly from these output interfaces is a problem that currently needs to be solved.
Disclosure of Invention
Accordingly, embodiments of the present invention provide a video processing method, a video processing apparatus, a video processing system, video processing equipment, and a computer-readable storage medium that enable flexible output of images.
In one aspect, a video processing method provided in an embodiment of the present invention includes: acquiring interface information, where the interface information includes interface coordinate information and interface resolution information of each of a plurality of virtual interfaces, and the plurality of virtual interfaces correspond one-to-one to a plurality of physical interfaces; acquiring layer information, where the layer information includes layer coordinate information, layer resolution information, and video frame resolution information of a video frame corresponding to the layer; determining, according to the interface information and the layer information, sub-image parameters of a plurality of sub-images in the video frame that correspond to the sub-graphs of the layer located in the respective virtual interfaces; and acquiring corresponding image data in the video frame according to the sub-image parameters of each sub-image to obtain the plurality of sub-images, obtaining a plurality of processed sub-images based on the plurality of sub-images, and outputting them from the plurality of physical interfaces respectively.
In the above scheme, the video processing method determines, according to the interface information and the layer information, the sub-image parameters of the sub-images in the video frame that correspond to the sub-graphs of the layer located in the respective virtual interfaces, acquires and processes the corresponding image data in the video frame according to those sub-image parameters to obtain a plurality of processed sub-images, and outputs the processed sub-images from the plurality of physical interfaces respectively, thereby achieving flexible image output through the plurality of physical interfaces according to the relationship between the virtual interfaces and the layer.
In an embodiment of the present invention, the determining, according to the interface information and the layer information, of the sub-image parameters of the sub-images in the video frame that correspond to the sub-graphs of the layer located in the respective virtual interfaces specifically includes: determining, according to the interface information, the layer coordinate information, and the layer resolution information, the sub-graph coordinate information and sub-graph resolution information of each of the sub-graphs of the layer located in the respective virtual interfaces; determining the sub-graph relative layer coordinate information of each sub-graph according to the sub-graph coordinate information of each sub-graph and the layer coordinate information; and determining the sub-image parameters of the sub-images in the video frame corresponding to the respective virtual interfaces according to the video frame resolution information, the layer resolution information, the sub-graph relative layer coordinate information of each sub-graph, and the sub-graph resolution information of each sub-graph.
In one embodiment of the present invention, the sub-image parameters of the sub-images include the sub-image coordinate information and sub-image resolution information of each sub-image; and the determining of the sub-image parameters of the sub-images in the video frame corresponding to the respective virtual interfaces according to the video frame resolution information, the layer resolution information, the sub-graph relative layer coordinate information of each sub-graph, and the sub-graph resolution information of each sub-graph specifically includes: determining a video frame scaling ratio according to the video frame resolution information and the layer resolution information; and determining the sub-image coordinate information and sub-image resolution information of each sub-image according to the sub-graph relative layer coordinate information of the corresponding sub-graph, the sub-graph resolution information of the corresponding sub-graph, and the video frame scaling ratio.
In an embodiment of the present invention, the sub-image parameters of the sub-images include the sub-image coordinate information, sub-image resolution information, and sub-image scaling ratio of each sub-image; and the determining of the sub-image parameters of the sub-images in the video frame corresponding to the respective virtual interfaces according to the video frame resolution information, the layer resolution information, the sub-graph relative layer coordinate information of each sub-graph, and the sub-graph resolution information of each sub-graph specifically includes: determining a video frame scaling ratio according to the video frame resolution information and the layer resolution information, where the sub-image scaling ratio of each sub-image is equal to the video frame scaling ratio; and determining the sub-image coordinate information and sub-image resolution information of each sub-image according to the sub-graph relative layer coordinate information of the corresponding sub-graph, the sub-graph resolution information of the corresponding sub-graph, and the sub-image scaling ratio of each sub-image. Before the acquiring of the corresponding image data in the video frame according to the sub-image parameters of each sub-image to obtain the plurality of sub-images and the obtaining of the plurality of processed sub-images to be output from the plurality of physical interfaces respectively, the video processing method further includes: determining the sub-graph relative interface coordinate information of each sub-graph according to the sub-graph coordinate information of each sub-graph and the interface coordinate information of the corresponding virtual interface. After the acquiring of the corresponding image data in the video frame according to the sub-image parameters of each sub-image to obtain the plurality of sub-images and the outputting of the plurality of processed sub-images from the plurality of physical interfaces respectively, the video processing method further includes: outputting the sub-graph relative interface coordinate information of each sub-graph from the plurality of physical interfaces respectively.
In an embodiment of the present invention, the acquiring of the corresponding image data in the video frame according to the sub-image parameters of each sub-image to obtain the plurality of sub-images and the obtaining of the plurality of processed sub-images based on the plurality of sub-images specifically includes: acquiring corresponding image data in the video frame according to the sub-image coordinate information and sub-image resolution information of each sub-image to obtain the plurality of sub-images; and scaling the plurality of sub-images according to the sub-image scaling ratio of each sub-image to obtain the plurality of processed sub-images.
In another aspect, an embodiment of the present invention provides a video processing apparatus, including: a first acquisition module configured to acquire interface information, where the interface information includes interface coordinate information and interface resolution information of each of a plurality of virtual interfaces, and the plurality of virtual interfaces correspond one-to-one to a plurality of physical interfaces; a second acquisition module configured to acquire layer information, where the layer information includes layer coordinate information, layer resolution information, and video frame resolution information of a video frame corresponding to the layer; a sub-image parameter determining module configured to determine, according to the interface information and the layer information, the sub-image parameters of a plurality of sub-images in the video frame that correspond to the sub-graphs of the layer located in the respective virtual interfaces; and an image acquisition and output module configured to acquire corresponding image data in the video frame according to the sub-image parameters of each sub-image to obtain the plurality of sub-images, and to obtain a plurality of processed sub-images based on the plurality of sub-images to be output from the plurality of physical interfaces respectively.
In the above scheme, the sub-image parameter determining module of the video processing apparatus determines, according to the interface information and the layer information, the sub-image parameters of the sub-images in the video frame that correspond to the sub-graphs of the layer located in the respective virtual interfaces, and the image acquisition and output module acquires and processes the corresponding image data in the video frame according to those sub-image parameters to obtain a plurality of processed sub-images and outputs them from the plurality of physical interfaces respectively, so that flexible image output through the plurality of physical interfaces according to the relationship between the virtual interfaces and the layer is achieved.
In an embodiment of the present invention, the sub-image parameter determining module specifically includes: a sub-graph information determining unit, configured to determine, according to the interface information, the layer coordinate information, and the layer resolution information, sub-graph coordinate information and sub-graph resolution information of each of the multiple sub-graphs that the layer is located in the multiple virtual interfaces, respectively; a relative coordinate information determining unit, configured to determine, according to the sub-graph coordinate information and the layer coordinate information of each of the multiple sub-graphs, sub-graph relative layer coordinate information of each of the multiple sub-graphs; and a sub-image parameter determining unit, configured to determine sub-image parameters of multiple sub-images corresponding to the multiple virtual interfaces in the video frame according to the video frame resolution information, the layer resolution information, sub-image relative layer coordinate information of the multiple sub-images, and sub-image resolution information of the multiple sub-images.
In another aspect, an embodiment of the present invention provides a video processing system, including: a processor and a memory coupled to the processor; wherein the memory stores instructions for execution by the processor, and the instructions cause the processor to perform operations to perform any of the video processing methods described above.
In another aspect, an embodiment of the present invention provides video processing equipment, including: an embedded processor; a programmable logic device electrically connected to the embedded processor; a memory electrically connected to the programmable logic device; and a plurality of physical interfaces electrically connected to the programmable logic device. The embedded processor is configured to: acquire interface information, where the interface information includes interface coordinate information and interface resolution information of each of a plurality of virtual interfaces, and the plurality of virtual interfaces correspond one-to-one to the plurality of physical interfaces; acquire layer information, where the layer information includes layer coordinate information, layer resolution information, and video frame resolution information of a video frame corresponding to the layer; and determine, according to the interface information and the layer information, the sub-image parameters of a plurality of sub-images in the video frame that correspond to the sub-graphs of the layer located in the respective virtual interfaces. The programmable logic device is configured to: acquire corresponding image data in the video frame from the memory according to the sub-image parameters of each sub-image to obtain the plurality of sub-images, obtain a plurality of processed sub-images based on the plurality of sub-images, and output them from the plurality of physical interfaces respectively.
In the above technical solution, the video processing device processes the image through the embedded processor and the programmable logic device included in the video processing device, so that the image is flexibly output from a plurality of physical interfaces of the video processing device.
In yet another aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer-executable instructions for performing any one of the above-mentioned video processing methods.
One or more of the above technical solutions may have the following advantages or beneficial effects: the video processing method, the video processing device, the video processing system, the video processing equipment and the computer readable storage medium provided by the embodiment realize the processing of the images and the flexible output of the images from a plurality of physical interfaces.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
Fig. 1 is a flowchart illustrating a video processing method according to a first embodiment of the present invention.
Fig. 2 is a detailed flowchart of step S106 in fig. 1.
Fig. 3 is a schematic structural diagram of a system applying the video processing method according to the first embodiment of the present invention.
Fig. 4A to 4D are schematic diagrams illustrating processing procedures involved in the video processing method according to the first embodiment of the present invention.
Fig. 5 is a schematic block diagram of a video processing apparatus according to a second embodiment of the present invention.
Fig. 6 is a schematic diagram of a unit structure of the sub-image parameter determining module in fig. 5.
Fig. 7 is a schematic structural diagram of a video processing system according to a third embodiment of the present invention.
Fig. 8 is a schematic structural diagram of a computer-readable storage medium according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
[ first embodiment ]
Referring to fig. 1, a video processing method of a first embodiment of the present invention is shown. The video processing method comprises the following steps:
S102, acquiring interface information, where the interface information includes interface coordinate information and interface resolution information of each of a plurality of virtual interfaces, and the plurality of virtual interfaces correspond one-to-one to a plurality of physical interfaces;
S104, acquiring layer information, where the layer information includes layer coordinate information, layer resolution information, and video frame resolution information of a video frame corresponding to the layer;
S106, determining, according to the interface information and the layer information, the sub-image parameters of a plurality of sub-images in the video frame that correspond to the sub-graphs of the layer located in the respective virtual interfaces; and
S108, acquiring corresponding image data in the video frame according to the sub-image parameters of each sub-image to obtain the plurality of sub-images, and obtaining a plurality of processed sub-images based on the plurality of sub-images to be output from the plurality of physical interfaces respectively.
To facilitate understanding of the present embodiment, the video processing method of the present embodiment will be specifically described below with reference to fig. 2, fig. 3, and fig. 4A to fig. 4D.
The video processing method provided in the embodiment of the present invention may be implemented by, for example, a video processing device such as a video stitching processor, and may also be implemented by other video processing devices, which is not limited in this embodiment.
The structure of the video processing device will be specifically described below with reference to fig. 3. Fig. 3 shows a video processing device 20, which is, for example, a card-based (plug-in card) video processing device including a plurality of input cards 22 (only one input card 22 is shown in fig. 3 for illustrative purposes), a main control card 24, a plurality of output cards 26 (only one output card 26 is shown in fig. 3 for illustrative purposes), and a switch backplane 28.
The input card 22, the main control card 24, and the output card 26 are each electrically connected to the switch backplane 28. The main control card 24 is provided with an embedded processor 240, the switch backplane 28 includes a matrix switch module 280 such as a cross-point switch chip, and the output card 26 is provided with a programmable logic device 260, a plurality of physical interfaces 262, and a memory 264. The embedded processor 240 is electrically connected to the programmable logic device 260 through the switch backplane 28, the plurality of physical interfaces 262 are each electrically connected to the programmable logic device 260, and the memory 264 is electrically connected to the programmable logic device 260. The embedded processor 240 may be, for example, an MCU (Microcontroller Unit) or an ARM (Advanced RISC Machines) processor; the programmable logic device 260 may be, for example, an FPGA (Field Programmable Gate Array); the physical interface 262 may be, for example, a DVI (Digital Visual Interface) interface, an HDMI (High Definition Multimedia Interface) interface, or a network port such as an RJ45 port; and the memory 264 may be, for example, a volatile memory such as DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory). The main control card 24 may be used to control the matrix switch module 280 to route a specific input card 22 to a specific output card 26, so as to achieve flexible association between the input cards 22 and the output cards 26, which is not limited in this embodiment.
As mentioned above, the input card 22 may include a plurality of input interfaces (not shown), such as DVI interfaces, HDMI interfaces, etc. The input card 22 is used for obtaining video frames from the outside (e.g., a video source such as a PC) through at least one of the plurality of input interfaces included therein and sending the video frames to the output card 26 via the switch backplane 28, so that the output card 26 buffers the received video frames to the memory 264.
Of course, the video processing method of the present embodiment may also be applied to a non-card-insertion type video processing apparatus, which typically includes an embedded processor, a programmable logic device, a memory, and a plurality of physical interfaces. The embedded processor, the memory and the plurality of physical interfaces are respectively electrically connected to the programmable logic device, and the description of the embedded processor, the programmable logic device, the memory and the physical interfaces can be referred to the description of the card-inserted video processing device, which is not repeated herein.
In the present embodiment, the video processing device 20 performs the relevant processing in response to receiving the parameter information (interface information and layer information) transmitted by the upper computer 10. The processing procedure of the upper computer 10 will be specifically described below.
First, in response to a canvas creation operation by a user, a canvas is generated in the upper computer software of the upper computer 10 (as shown in fig. 4A). Then, in response to an interface creation operation by the user, the upper computer software generates the same number of virtual interfaces as there are physical interfaces 262 (for illustrative purposes, fig. 4A to 4D show two virtual interfaces, namely virtual interface 1 and virtual interface 2, as shown in fig. 4A), where a virtual interface serves as the medium for displaying layers and is equivalent to a display screen. As shown in fig. 4A, the coordinate information of the top-left vertex A of virtual interface 1, i.e. the interface coordinate information of virtual interface 1, is (x_A, y_A), and the interface resolution information of virtual interface 1, i.e. its width and height, is W1 and H1 respectively. The coordinate information of the top-left vertex B of virtual interface 2, i.e. the interface coordinate information of virtual interface 2, is (x_B, y_B), and the interface resolution information of virtual interface 2, i.e. its width and height, is W2 and H2 respectively. Then, in response to the user obtaining the original layer (which is, for example, pre-stored on the upper computer 10 by the user) through the upper computer software and resizing the original layer in the upper computer software as required, the upper computer software finally displays the adjusted original layer (hereinafter referred to simply as the layer) on the canvas and obtains the relevant information of the layer. The layer corresponds to the video frame input by the input card 22; it should be noted that "corresponds" here means that the resolution information of the layer corresponds to the resolution information of the image obtained after the programmable logic device 260 subsequently processes the video frame. The video frame resolution information of the video frame is the width and height of the video frame. As shown in fig. 4B, the coordinate information of the top-left vertex C of the layer, i.e. the layer coordinate information, is (x_C, y_C), and the layer resolution information, i.e. the width and height of the layer, is W3 and H3 respectively. In response to the user generating virtual interface 1, virtual interface 2 and the layer on the canvas, the upper computer 10 (specifically, the upper computer software) obtains the interface information of virtual interface 1 and virtual interface 2, i.e. the interface coordinate information and interface resolution information of each virtual interface, and the layer information, i.e. the layer coordinate information, layer resolution information and video frame resolution information, and sends this information to the embedded processor 240 of the main control card 24 of the video processing device 20.
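To make the parameter exchange above concrete, the following sketch shows one possible data model for what the upper computer sends to the embedded processor 240. The class and field names (VirtualInterface, LayerInfo, frame_width and so on) and the numeric values are illustrative assumptions rather than identifiers taken from the patent; all units are pixels and all coordinates are top-left vertices relative to the canvas origin.

```python
# Hypothetical data model for the interface information and layer information
# sent by the upper computer; names and example numbers are assumptions.
from dataclasses import dataclass

@dataclass
class VirtualInterface:
    x: int       # interface coordinate information: top-left x (e.g. x_A)
    y: int       # top-left y (e.g. y_A)
    width: int   # interface resolution information: width (e.g. W1)
    height: int  # height (e.g. H1)

@dataclass
class LayerInfo:
    x: int             # layer coordinate information: top-left x (x_C)
    y: int             # top-left y (y_C)
    width: int         # layer resolution information: width (W3)
    height: int        # height (H3)
    frame_width: int   # video frame resolution information: width
    frame_height: int  # video frame resolution information: height

# Example roughly in the spirit of fig. 4A/4B: two side-by-side virtual
# interfaces and one layer spanning part of both.
interfaces = [
    VirtualInterface(x=0,    y=0, width=1920, height=1080),  # virtual interface 1
    VirtualInterface(x=1920, y=0, width=1920, height=1080),  # virtual interface 2
]
layer = LayerInfo(x=0, y=0, width=2880, height=1080,
                  frame_width=3840, frame_height=1080)
```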
It is worth mentioning that, in this embodiment, the vertex O at the upper left corner of the canvas is the origin of coordinates, i.e. x_O = 0 and y_O = 0. The coordinate information and the resolution information are both in units of pixels. In addition, the coordinate information may also be the coordinate information of a vertex other than the top-left vertex of the interface or layer, which is not specifically limited here.
As described above, after obtaining the parameter information, the embedded processor 240 determines, according to the interface information and the layer information, the sub-image parameters of the sub-images in the video frame that correspond to the sub-graphs of the layer located in the respective virtual interfaces (see step S106 in fig. 1). In one embodiment, as shown in fig. 2, step S106 may, for example, include the following steps: S1060, determining, according to the interface information, the layer coordinate information and the layer resolution information, the sub-graph coordinate information and sub-graph resolution information of each of the sub-graphs of the layer located in the respective virtual interfaces; S1062, determining the sub-graph relative layer coordinate information of each sub-graph according to the sub-graph coordinate information of each sub-graph and the layer coordinate information; and S1064, determining the sub-image parameters of the sub-images in the video frame corresponding to the respective virtual interfaces according to the video frame resolution information, the layer resolution information, the sub-graph relative layer coordinate information of each sub-graph, and the sub-graph resolution information of each sub-graph.
Specifically, regarding step S1060, referring to fig. 4C, the sub-graph of the layer located in virtual interface 1 is sub-graph 1; the sub-graph coordinate information of sub-graph 1, i.e. the top-left vertex coordinate of sub-graph 1, is (x_C, y_C), and the sub-graph resolution information of sub-graph 1, i.e. the width and height of sub-graph 1, is W4 and H4 respectively. Referring to fig. 4D, the sub-graph of the layer located in virtual interface 2 is sub-graph 2; the sub-graph coordinate information of sub-graph 2, i.e. the top-left vertex coordinate of sub-graph 2, is (x_B, y_B), and the sub-graph resolution information of sub-graph 2, i.e. the width and height of sub-graph 2, is W5 and H5 respectively. The embodiment of the present invention is not specifically limited here, as long as the coordinate information and resolution information of each sub-graph can be determined from the interface information, the layer coordinate information and the layer resolution information.
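Step S1060 amounts to a rectangle intersection: the sub-graph of the layer inside a virtual interface is the overlap of the layer rectangle and the interface rectangle in canvas coordinates. A minimal sketch follows, assuming top-left coordinates and pixel sizes as above; the function name and the example numbers are not from the patent.

```python
# Sketch of step S1060: intersect the layer rectangle with a virtual interface
# rectangle; returns None when the layer has no sub-graph in that interface.
def subgraph_in_interface(layer_x, layer_y, layer_w, layer_h,
                          itf_x, itf_y, itf_w, itf_h):
    left = max(layer_x, itf_x)
    top = max(layer_y, itf_y)
    right = min(layer_x + layer_w, itf_x + itf_w)
    bottom = min(layer_y + layer_h, itf_y + itf_h)
    if right <= left or bottom <= top:
        return None
    # (sub-graph coordinate information, sub-graph resolution information)
    return (left, top), (right - left, bottom - top)

# Sub-graph 1 of fig. 4C: layer and virtual interface 1 share the top-left vertex.
print(subgraph_in_interface(0, 0, 2880, 1080, 0, 0, 1920, 1080))
# ((0, 0), (1920, 1080)), i.e. (x_C, y_C) and (W4, H4)
```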
Regarding step S1062, the sub-graph relative layer coordinate information of each sub-graph is the value of the top-left vertex coordinate of that sub-graph relative to the top-left vertex coordinate of the layer. For example, as shown in fig. 4C, the sub-graph relative layer coordinate information of sub-graph 1 is obtained by subtracting the layer coordinate information from the sub-graph coordinate information of sub-graph 1; in this example, since the top-left vertex coordinate of sub-graph 1 coincides with the top-left vertex coordinate of the layer, the sub-graph relative layer coordinate information of sub-graph 1 is (0, 0). As shown in fig. 4D, the sub-graph relative layer coordinate information of sub-graph 2 is the sub-graph coordinate information of sub-graph 2 minus the layer coordinate information; in this example the top-left vertex coordinate of sub-graph 2 is (x_B, y_B) and the top-left vertex coordinate of the layer is (x_C, y_C), so the sub-graph relative layer coordinate information of sub-graph 2 is (x_B - x_C, y_B - y_C).
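Step S1062 is then a plain coordinate subtraction; the helper below is an illustrative sketch (names and numbers assumed), continuing the example values used above.

```python
# Sketch of step S1062: sub-graph coordinates relative to the layer are the
# sub-graph's canvas coordinates minus the layer's canvas coordinates.
def subgraph_relative_to_layer(sub_x, sub_y, layer_x, layer_y):
    return sub_x - layer_x, sub_y - layer_y

print(subgraph_relative_to_layer(0, 0, 0, 0))     # sub-graph 1 -> (0, 0)
print(subgraph_relative_to_layer(1920, 0, 0, 0))  # sub-graph 2 -> (x_B - x_C, y_B - y_C)
```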
With respect to step S1064, the sub-image parameters of each sub-image may include, for example, the sub-image coordinate information and sub-image resolution information of that sub-image. In this case, step S1064 specifically includes: determining a video frame scaling ratio according to the video frame resolution information and the layer resolution information; and determining the sub-image coordinate information and sub-image resolution information of each sub-image according to the sub-graph relative layer coordinate information of the corresponding sub-graph, the sub-graph resolution information of the corresponding sub-graph, and the video frame scaling ratio. Specifically, the video frame scaling ratio may be, for example, the ratio of the video frame resolution information to the layer resolution information; it includes a horizontal scaling ratio and a vertical scaling ratio, where the horizontal scaling ratio is the ratio of the video frame width to the layer width, and the vertical scaling ratio is the ratio of the video frame height to the layer height. The abscissa of each sub-image's coordinate information is the product of the abscissa of the corresponding sub-graph relative layer coordinate information and the horizontal scaling ratio; the ordinate is the product of the ordinate of the sub-graph relative layer coordinate information and the vertical scaling ratio; the width of each sub-image's resolution information is the product of the corresponding sub-graph's width and the horizontal scaling ratio; and the height is the product of the corresponding sub-graph's height and the vertical scaling ratio. It should be noted that obtaining the width and height of each sub-image's resolution information further includes precision processing, for example multiplying the results obtained above by 65536 to obtain the corresponding width and height. In this case, that is, when the sub-image parameters include the sub-image coordinate information and sub-image resolution information of each sub-image, the embedded processor 240 obtains the specific information, namely the coordinate information and resolution information, of each sub-image of the video frame corresponding to each virtual interface, so that the programmable logic device 260 can subsequently acquire the corresponding image data in the video frame from the memory 264 according to this coordinate information and resolution information to obtain the sub-images, process them to obtain the processed sub-images, and output the processed sub-images through the corresponding physical interfaces 262, thereby achieving flexible output of the video frame.
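A minimal sketch of this first form of step S1064 follows, under the assumptions that the scaling ratios are plain width and height quotients and that the 65536 factor is a 16.16 fixed-point representation of the sub-image width and height; the exact rounding behaviour is not specified in the text and is an assumption here, as are the function and variable names.

```python
# Sketch of step S1064 (first form): map a sub-graph back into the video frame
# using the horizontal/vertical video frame scaling ratios; width and height are
# additionally carried with a x65536 precision factor, as mentioned in the text.
def subimage_in_frame(rel_x, rel_y, sub_w, sub_h,
                      frame_w, frame_h, layer_w, layer_h):
    scale_x = frame_w / layer_w  # horizontal video frame scaling ratio
    scale_y = frame_h / layer_h  # vertical video frame scaling ratio
    sub_img_x = round(rel_x * scale_x)             # sub-image coordinate information
    sub_img_y = round(rel_y * scale_y)
    sub_img_w_fx = round(sub_w * scale_x * 65536)  # sub-image resolution information,
    sub_img_h_fx = round(sub_h * scale_y * 65536)  # precision-processed (x 65536)
    return (sub_img_x, sub_img_y), (sub_img_w_fx, sub_img_h_fx)

# Sub-graph 2 from the running example: relative coords (1920, 0), size 960 x 1080,
# layer 2880 x 1080, video frame 3840 x 1080 -> sub-image 2 starts at x = 2560.
print(subimage_in_frame(1920, 0, 960, 1080, 3840, 1080, 2880, 1080))
```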
Optionally, the sub-image parameters of each sub-image may include, for example, the sub-image coordinate information, sub-image resolution information, and sub-image scaling ratio of that sub-image. In this case, step S1064 specifically includes: determining a video frame scaling ratio according to the video frame resolution information and the layer resolution information, where the sub-image scaling ratio of each sub-image is equal to the video frame scaling ratio; and determining the sub-image coordinate information and sub-image resolution information of each sub-image according to the sub-graph relative layer coordinate information of the corresponding sub-graph, the sub-graph resolution information of the corresponding sub-graph, and the sub-image scaling ratio of each sub-image. In this case, since the sub-image parameter information includes the sub-image scaling ratio of each sub-image, the programmable logic device 260 subsequently acquires the corresponding image data in the video frame according to the coordinate information and resolution information to obtain each sub-image, scales each sub-image according to its sub-image scaling ratio and performs the corresponding processing, and finally outputs the result through the physical interfaces 262. This achieves flexible output of the video frame and, at the same time, makes the scale of the image output by a physical interface 262 consistent with the scale at which the layer is displayed on the upper computer 10, so that the image displayed on the upper computer 10 and the image output by the physical interface 262 have the same proportions.
Finally, the embedded processor 240 sends the sub-image parameter information of each sub-image to the programmable logic device 260 via the switch backplane 28, and the programmable logic device 260 acquires and processes the corresponding image data in the video frame from the memory according to the sub-image parameters of each sub-image to obtain the plurality of sub-images, obtains a plurality of processed sub-images based on the plurality of sub-images, and outputs them from the plurality of physical interfaces 262 respectively (see step S108 in fig. 1).
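The cropping and scaling itself is done in hardware by the programmable logic device 260; the snippet below is only a software stand-in to illustrate the data flow of step S108. The nearest-neighbour scaler, the scaling direction (frame-resolution sub-image scaled back to the sub-graph's display size), and all names are assumptions made for illustration.

```python
# Software stand-in for step S108: crop the sub-image out of the buffered video
# frame, then scale it by the sub-image scaling ratio to produce the processed
# sub-image that leaves one physical interface.
import numpy as np

def crop_and_scale(frame, x, y, w, h, scale_x, scale_y):
    sub_image = frame[y:y + h, x:x + w]          # image data fetched from memory 264
    out_w = round(w / scale_x)                   # back to sub-graph display width
    out_h = round(h / scale_y)                   # back to sub-graph display height
    xs = (np.arange(out_w) * scale_x).astype(int).clip(0, w - 1)
    ys = (np.arange(out_h) * scale_y).astype(int).clip(0, h - 1)
    return sub_image[np.ix_(ys, xs)]             # processed sub-image

frame = np.zeros((1080, 3840, 3), dtype=np.uint8)          # buffered video frame
out = crop_and_scale(frame, 2560, 0, 1280, 1080, 3840 / 2880, 1080 / 1080)
print(out.shape)  # (1080, 960, 3): processed sub-image for physical interface 2
```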
Optionally, before step S108, the video processing method may further include, for example: determining the sub-graph relative interface coordinate information of each sub-graph according to the sub-graph coordinate information of each sub-graph and the interface coordinate information of the corresponding virtual interface. The sub-graph relative interface coordinate information of each sub-graph is the difference between the sub-graph coordinate information of that sub-graph and the interface coordinate information of the corresponding interface. Correspondingly, after step S108, the video processing method further includes: outputting the sub-graph relative interface coordinate information of each sub-graph from the plurality of physical interfaces respectively. In this way, the devices downstream of the video processing device 20 (e.g., a sending card and a receiving card) can obtain the coordinate information of each sub-graph relative to its corresponding interface, so that the sub-image output by a physical interface 262 can be presented according to the position of the corresponding sub-graph in its virtual interface on the upper computer, and the display effect (display scale and display position) of the sub-graph in the virtual interface of the upper computer 10 is fully consistent with the display effect of the image on the display screen electrically connected to the video processing device 20, such as an LED display screen.
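The optional relative-interface coordinates are again a simple subtraction; the sketch below (names and numbers assumed) shows the value that would accompany each processed sub-image on its physical interface.

```python
# Sketch of the optional step around S108: sub-graph coordinates relative to the
# corresponding virtual interface, output together with the processed sub-image
# so that downstream devices can position it.
def subgraph_relative_to_interface(sub_x, sub_y, itf_x, itf_y):
    return sub_x - itf_x, sub_y - itf_y

print(subgraph_relative_to_interface(1920, 0, 1920, 0))  # sub-graph 2 -> (0, 0)
```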
It should be noted that, in this embodiment, the virtual interfaces on the upper computer 10 may be arranged arbitrarily; the virtual interfaces may overlap or be separated, and their sizes are not limited. The final arrangement of the virtual interfaces on the upper computer 10 may correspond exactly to the actual physical LED display screen, and the corresponding actual physical LED display screen may be a regular screen (i.e., the whole screen is a regular rectangle) or an irregularly shaped screen. In this embodiment, any area of the original layer's picture can be displayed at any size in a single virtual interface, and the output images can be arranged flexibly; for example, to have two physical interfaces output the same image, the two corresponding virtual interfaces on the upper computer 10 only need to be made to overlap.
In summary, the video processing method first determines, according to the interface information and the layer information, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-images of which the layers are respectively located in the plurality of virtual interfaces, then obtains and processes corresponding image data in the video frame according to the respective sub-image parameters of the plurality of sub-images to obtain a plurality of processed sub-images, and outputs the plurality of processed sub-images from the plurality of physical interfaces, so as to implement flexible output of images through the plurality of physical interfaces according to a relationship between the virtual interfaces and the layers.
[ second embodiment ]
As shown in fig. 5, a video processing apparatus 200 is provided according to a second embodiment of the present invention. The video processing apparatus 200 includes, for example, a first acquisition module 202, a second acquisition module 204, a sub-image parameter determination module 206, and an image acquisition and output module 208.
Specifically, the first obtaining module 202 is configured to obtain interface information, where the interface information includes interface coordinate information and interface resolution information of each of a plurality of virtual interfaces, and the plurality of virtual interfaces are in one-to-one correspondence with a plurality of physical interfaces;
the second obtaining module 204 is configured to obtain layer information, where the layer information includes layer coordinate information, layer resolution information, and video frame resolution information of a video frame corresponding to the layer;
the sub-image parameter determining module 206 is configured to determine, according to the interface information and the layer information, respective sub-image parameters of multiple sub-images in the video frame that correspond to multiple sub-images with layers that are located in the multiple virtual interfaces, respectively; and
the image obtaining and outputting module 208 is configured to obtain corresponding image data in the video frame according to respective sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images, and obtain a plurality of processing sub-images based on the plurality of sub-images to be output from the plurality of physical interfaces, respectively.
In one embodiment of the present invention, as shown in fig. 6, the sub-image parameter determining module 206 specifically includes a sub-image information determining unit 2060, a relative coordinate information determining unit 2062, and a sub-image parameter determining unit 2064. The sub-graph information determining unit 2060 is configured to determine, according to the interface information, the layer coordinate information, and the layer resolution information, sub-graph coordinate information and sub-graph resolution information of the multiple sub-graphs, where the layers are located in the multiple virtual interfaces, respectively. The relative coordinate information determination unit 2062 is configured to determine sub-graph relative layer coordinate information of each of the multiple sub-graphs according to the sub-graph coordinate information of each of the multiple sub-graphs and the layer coordinate information. The sub-image parameter determining unit 2064 is configured to determine sub-image parameters of a plurality of sub-images corresponding to the plurality of virtual interfaces in the video frame according to the video frame resolution information, the layer resolution information, sub-image relative layer coordinate information of each of the plurality of sub-images, and sub-image resolution information of each of the plurality of sub-images.
In one embodiment of the present invention, the sub-image parameters of the plurality of sub-images include sub-image coordinate information and sub-image resolution information of the plurality of sub-images; the sub-image parameter determining unit 2064 is specifically configured to: determining a video frame scaling ratio according to the video frame resolution information and the layer resolution information; and determining sub-image coordinate information and sub-image resolution information of the sub-images according to the sub-image relative layer coordinate information of the sub-images, the sub-image resolution information of the sub-images and the video frame scaling ratio.
In an embodiment of the present invention, the sub-image parameters of each sub-image include the sub-image coordinate information, sub-image resolution information, and sub-image scaling ratio of that sub-image. The sub-image parameter determining unit 2064 is specifically configured to: determine a video frame scaling ratio according to the video frame resolution information and the layer resolution information, where the sub-image scaling ratio of each sub-image is equal to the video frame scaling ratio; and determine the sub-image coordinate information and sub-image resolution information of each sub-image according to the sub-graph relative layer coordinate information of each sub-graph, the sub-graph resolution information of each sub-graph, and the sub-image scaling ratio of each sub-image. The relative coordinate information determining unit 2062 is further configured to determine the sub-graph relative interface coordinate information of each sub-graph according to the sub-graph coordinate information of each sub-graph and the interface coordinate information of each virtual interface. The image acquisition and output module 208 is further configured to output the sub-graph relative interface coordinate information of each sub-graph from the plurality of physical interfaces respectively.
For specific working processes and technical effects among the modules in the video processing apparatus 200 in this embodiment, reference is made to the description of the relevant steps in the foregoing first embodiment, and details are not repeated here.
[ third embodiment ]
As shown in fig. 7, a video processing system 300 is provided according to a third embodiment of the present invention. The video processing system 300 includes, for example, a processor 330 and a memory 310 coupled to the processor 330. The memory 310 may be, for example, a non-volatile memory having stored thereon instructions 311 for execution by the processor 330. The processor 330 may, for example, comprise an embedded processor. The processor 330, when executing the instructions 311, performs the video processing method provided in the foregoing first embodiment.
[ fourth embodiment ]
As shown in FIG. 8, a fourth embodiment of the invention provides a computer-readable storage medium 400 having stored thereon computer-executable instructions 410. The computer-executable instructions 410 are for performing the video processing method described in the foregoing first embodiment. The computer-readable storage medium 400 is, for example, a non-volatile memory, such as: magnetic media (e.g., hard disks, floppy disks, and magnetic tape), optical media (e.g., CD-ROM disks and DVDs), magneto-optical media, and hardware devices specially constructed for storing and executing computer-executable instructions (e.g., read-only memories (ROMs), random access memories (RAMs), flash memories, etc.). The computer-executable instructions 410 on the computer-readable storage medium 400 may be executed by one or more processors or processing devices.
In addition, it should be understood that the foregoing embodiments are merely exemplary of the present invention, and the technical solutions of the embodiments may be arbitrarily combined and used without conflict and contradiction in technical features and without departing from the purpose of the present invention.
In the several embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only a logical division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit/module in the embodiments of the present invention may be integrated into one processing unit/module, or each unit/module may exist alone physically, or two or more units/modules may be integrated into one unit/module. The integrated unit/module may be implemented in the form of hardware, or may be implemented in the form of hardware plus a software functional unit/module.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A video processing method, comprising:
acquiring interface information, wherein the interface information comprises interface coordinate information and interface resolution information of each of a plurality of virtual interfaces, and the plurality of virtual interfaces correspond one-to-one to a plurality of physical interfaces;
acquiring layer information, wherein the layer information comprises layer coordinate information, layer resolution information and video frame resolution information of a video frame corresponding to the layer;
according to the interface information and the layer information, determining respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-images of which the layers are respectively positioned in the plurality of virtual interfaces; and
acquiring corresponding image data in the video frame according to the sub-image parameters of the sub-images to obtain the sub-images, and obtaining a plurality of processing sub-images based on the sub-images and outputting the processing sub-images from the physical interfaces respectively;
wherein the determining, according to the interface information and the layer information, of the respective sub-image parameters of the plurality of sub-images in the video frame corresponding to the plurality of sub-graphs of the layer respectively located in the plurality of virtual interfaces specifically comprises:
determining sub-graph coordinate information and sub-graph resolution information of the plurality of sub-graphs of which the layers are respectively positioned in the plurality of virtual interfaces according to the interface information, the layer coordinate information and the layer resolution information;
determining sub-graph relative layer coordinate information of the sub-graphs according to the sub-graph coordinate information of the sub-graphs and the layer coordinate information; and
and determining sub-image parameters of a plurality of sub-images corresponding to the virtual interfaces in the video frame according to the video frame resolution information, the layer resolution information, the relative layer coordinate information of the sub-images and the sub-image resolution information of the sub-images.
2. The video processing method according to claim 1, wherein the sub-image parameters of each of the plurality of sub-images comprise sub-image coordinate information and sub-image resolution information of each of the plurality of sub-images;
determining sub-image parameters of a plurality of sub-images corresponding to the plurality of virtual interfaces in the video frame according to the video frame resolution information, the layer resolution information, the sub-image relative layer coordinate information of the plurality of sub-images, and the sub-image resolution information of the plurality of sub-images, specifically including:
determining a video frame scaling ratio according to the video frame resolution information and the layer resolution information; and
and determining sub-image coordinate information and sub-image resolution information of the sub-images according to the sub-image relative layer coordinate information of the sub-images, the sub-image resolution information of the sub-images and the video frame scaling ratio.
3. The video processing method according to claim 1, wherein the sub-image parameters of the sub-images comprise sub-image coordinate information, sub-image resolution information, and sub-image scaling ratio of the sub-images;
determining sub-image parameters of a plurality of sub-images corresponding to the plurality of virtual interfaces in the video frame according to the video frame resolution information, the layer resolution information, the sub-image relative layer coordinate information of the plurality of sub-images, and the sub-image resolution information of the plurality of sub-images, specifically including:
determining a video frame scaling ratio according to the video frame resolution information and the layer resolution information, wherein the sub-image scaling ratio of each of the plurality of sub-images is equal to the video frame scaling ratio; and
determining sub-image coordinate information and sub-image resolution information of the sub-images according to sub-image relative layer coordinate information of the sub-images, sub-image resolution information of the sub-images and sub-image scaling ratios of the sub-images;
before the obtaining, according to the sub-image parameters of the plurality of sub-images, corresponding image data in the video frame to obtain the plurality of sub-images, and obtaining, based on the plurality of sub-images, a plurality of processed sub-images to be output from the plurality of physical interfaces, respectively, the video processing method further includes: determining sub-graph relative interface coordinate information of each of the multiple sub-graphs according to sub-graph coordinate information of each of the multiple sub-graphs and interface coordinate information of each of the multiple virtual interfaces; and
after the obtaining of the corresponding image data in the video frame according to the sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images, and obtaining a plurality of processed sub-images based on the plurality of sub-images and outputting the processed sub-images from the plurality of physical interfaces, respectively, the video processing method further includes: and outputting the sub-graph relative interface coordinate information of each sub-graph from the plurality of physical interfaces respectively.
4. The video processing method according to claim 3,
the obtaining corresponding image data in the video frame according to the respective sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images, and obtaining a plurality of processed sub-images based on the plurality of sub-images specifically includes:
acquiring corresponding image data in the video frame according to the sub-image coordinate information and the sub-image resolution information of the sub-images to obtain the sub-images; and
scaling the plurality of sub-images according to the sub-image scaling ratios of the plurality of sub-images to obtain the plurality of processed sub-images.
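Claim 4 amounts to a crop followed by a scale. The sketch below assumes the video frame is available as a NumPy array and uses OpenCV purely as a stand-in; in the device claims this step is carried out by a programmable logic device rather than by any software library.

```python
import cv2  # assumption: frames are held in memory as NumPy arrays

def crop_and_scale(frame, x: int, y: int, w: int, h: int, out_w: int, out_h: int):
    # Cut the sub-image rectangle out of the video frame, then scale it to the
    # resolution it occupies on the display side.
    crop = frame[y:y + h, x:x + w]
    return cv2.resize(crop, (out_w, out_h))
```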
5. A video processing apparatus, comprising:
a first acquisition module, configured to acquire interface information, wherein the interface information comprises interface coordinate information and interface resolution information of a plurality of virtual interfaces, and the plurality of virtual interfaces are in one-to-one correspondence with a plurality of physical interfaces;
a second acquisition module, configured to acquire layer information, wherein the layer information comprises layer coordinate information, layer resolution information, and video frame resolution information of a video frame corresponding to the layer;
a sub-image parameter determining module, configured to determine, according to the interface information and the layer information, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-graphs of the layer located respectively in the plurality of virtual interfaces; and
an image acquisition and output module, configured to acquire corresponding image data in the video frame according to the respective sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images, to obtain a plurality of processed sub-images based on the plurality of sub-images, and to output the plurality of processed sub-images from the plurality of physical interfaces respectively;
wherein the sub-image parameter determining module specifically includes:
a sub-graph information determining unit, configured to determine, according to the interface information, the layer coordinate information, and the layer resolution information, sub-graph coordinate information and sub-graph resolution information of each of the plurality of sub-graphs of the layer located respectively in the plurality of virtual interfaces;
a relative coordinate information determining unit, configured to determine sub-graph relative layer coordinate information of each of the plurality of sub-graphs according to the sub-graph coordinate information of each of the plurality of sub-graphs and the layer coordinate information; and
a sub-image parameter determining unit, configured to determine the sub-image parameters of the plurality of sub-images in the video frame respectively corresponding to the plurality of virtual interfaces according to the video frame resolution information, the layer resolution information, the sub-graph relative layer coordinate information of the plurality of sub-graphs, and the sub-graph resolution information of the plurality of sub-graphs.
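One way to read the sub-graph information determining unit is as a rectangle intersection: for each virtual interface, the sub-graph is the part of the layer that falls inside that interface. The sketch below shows that reading; the patent does not spell out the geometry, so the names and the tuple layout are assumptions.

```python
def layer_interface_overlap(layer, iface):
    # layer and iface are (x, y, w, h) rectangles on the same virtual canvas.
    # Returns the overlapping rectangle, i.e. the sub-graph of the layer that
    # falls inside this interface, or None when the layer misses it entirely.
    x0 = max(layer[0], iface[0])
    y0 = max(layer[1], iface[1])
    x1 = min(layer[0] + layer[2], iface[0] + iface[2])
    y1 = min(layer[1] + layer[3], iface[1] + iface[3])
    if x1 <= x0 or y1 <= y0:
        return None
    return (x0, y0, x1 - x0, y1 - y0)
```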
6. A video processing system, comprising: a processor and a memory coupled to the processor; wherein the memory stores instructions executable by the processor, and the instructions, when executed, cause the processor to perform the video processing method of any one of claims 1 to 4.
7. A video processing device, characterized by comprising:
an embedded processor;
a programmable logic device electrically connected to the embedded processor;
a memory electrically connected to the programmable logic device; and
a plurality of physical interfaces electrically connected to the programmable logic device;
wherein the embedded processor is configured to:
acquire interface information, wherein the interface information comprises interface coordinate information and interface resolution information of a plurality of virtual interfaces, and the plurality of virtual interfaces are in one-to-one correspondence with the plurality of physical interfaces;
acquire layer information, wherein the layer information comprises layer coordinate information, layer resolution information, and video frame resolution information of a video frame corresponding to the layer; and
determine, according to the interface information and the layer information, respective sub-image parameters of a plurality of sub-images in the video frame corresponding to a plurality of sub-graphs of the layer located respectively in the plurality of virtual interfaces;
the programmable logic device is configured to:
acquire corresponding image data of the video frame from the memory according to the respective sub-image parameters of the plurality of sub-images to obtain the plurality of sub-images, and obtain a plurality of processed sub-images based on the plurality of sub-images to be output from the plurality of physical interfaces respectively;
wherein the programmable logic device is further configured to: determine sub-graph coordinate information and sub-graph resolution information of the plurality of sub-graphs of the layer located respectively in the plurality of virtual interfaces according to the interface information, the layer coordinate information, and the layer resolution information; determine sub-graph relative layer coordinate information of each of the plurality of sub-graphs according to the sub-graph coordinate information of each of the plurality of sub-graphs and the layer coordinate information; and determine the sub-image parameters of the plurality of sub-images in the video frame respectively corresponding to the plurality of virtual interfaces according to the video frame resolution information, the layer resolution information, the sub-graph relative layer coordinate information of the plurality of sub-graphs, and the sub-graph resolution information of the plurality of sub-graphs.
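Because the parameters are computed by one component and consumed by another, the device of claim 7 implies some serialized hand-off of per-sub-image parameters toward the programmable logic device. The sketch below packs one sub-image's crop rectangle and scaling ratio into a fixed-size descriptor; the field order, field widths, and the rational encoding of the ratio are assumptions, not the patent's register layout.

```python
import struct

def pack_sub_image_descriptor(x: int, y: int, w: int, h: int,
                              ratio_num: int, ratio_den: int) -> bytes:
    # Six little-endian 32-bit unsigned fields: the crop rectangle in the video
    # frame plus the scaling ratio expressed as a rational number.
    return struct.pack("<6I", x, y, w, h, ratio_num, ratio_den)
```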
8. The video processing device according to claim 7, further comprising: a main control card, a switching backplane comprising a matrix switching module, and an output card;
wherein the main control card and the output card are each electrically connected to the switching backplane; the embedded processor is disposed on the main control card; and the programmable logic device, the memory, and the plurality of physical interfaces are disposed on the output card.
CN202010048892.8A 2020-01-16 2020-01-16 Video processing method, device and system and video processing equipment Active CN113132556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010048892.8A CN113132556B (en) 2020-01-16 2020-01-16 Video processing method, device and system and video processing equipment

Publications (2)

Publication Number Publication Date
CN113132556A CN113132556A (en) 2021-07-16
CN113132556B CN113132556B (en) 2023-04-11

Family

ID=76771797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010048892.8A Active CN113132556B (en) 2020-01-16 2020-01-16 Video processing method, device and system and video processing equipment

Country Status (1)

Country Link
CN (1) CN113132556B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611213A (en) * 2016-01-04 2016-05-25 京东方科技集团股份有限公司 Image processing method, image play method and related device and system
CN108255454A (en) * 2018-02-01 2018-07-06 上海大视信息科技有限公司 A kind of virtual interactive interface method of splicing device and splicing device
CN109656654A (en) * 2018-11-30 2019-04-19 厦门亿力吉奥信息科技有限公司 The edit methods and computer readable storage medium of large-size screen monitors scene

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8068485B2 (en) * 2003-05-01 2011-11-29 Genesis Microchip Inc. Multimedia interface
US9554189B2 (en) * 2014-06-30 2017-01-24 Microsoft Technology Licensing, Llc Contextual remote control interface

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Adaptive resolution image acquisition using image mosaicing technique from video sequence; S. Takeuchi et al.; Proceedings 2000 International Conference on Image Processing (Cat. No.00CH37101); 2002-08-06; full text *
Design of the HD digital video display interface on the DM6446; 罗国柱 (Luo Guozhu) et al.; 现代电子技术 (Modern Electronics Technique); 2013-08-15, No. 16; full text *
Design and application of a large-screen display system; 林观养 (Lin Guanyang); 现代电视技术 (Modern Television Technology); 2008-09-15, No. 09; full text *

Also Published As

Publication number Publication date
CN113132556A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
US10832462B2 (en) Image synthesis method, image chip, and image device
US8401339B1 (en) Apparatus for partitioning and processing a digital image using two or more defined regions
JP2020533710A (en) Image stitching method and device, storage medium
TWI597685B (en) Rendering method and device
US11783445B2 (en) Image processing method, device and apparatus, image fitting method and device, display method and apparatus, and computer readable medium
CN109636885B (en) Sequential frame animation production method and system for H5 page
CN110099224B (en) Pre-monitoring display method, device and system, computer equipment and storage medium
CN109389558A (en) A kind of method and device for eliminating image border sawtooth
US20060203002A1 (en) Display controller enabling superposed display
CN212137804U (en) Point-to-point video splicing system
CN113132556B (en) Video processing method, device and system and video processing equipment
US10650488B2 (en) Apparatus, method, and computer program code for producing composite image
CN106951204B (en) Image synchronization method based on computer cluster visualization system
CN112540735B (en) Multi-screen synchronous display method, device and system and computer storage medium
CN112650460A (en) Media display method and media display device
CN113301411B (en) Video processing method, device and system and video processing equipment
JP7289390B2 (en) Image processing device for display wall system and display control method
CN110597577A (en) Head-mounted visual equipment and split-screen display method and device thereof
US20180295315A1 (en) Display device configuring multi display system and control method thereof
TW200832275A (en) Image revealing method
CN113094010A (en) Image display method, device and system
CN112099745A (en) Image display method, device and system
CN114371820A (en) Method and device for realizing special-shaped layer
WO2023230965A1 (en) Display method and device
CN104951260A (en) Implementation method of mixed interface based on Qt under embedded-type Linux platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant