CN112511896A - Video rendering method and device - Google Patents

Video rendering method and device

Info

Publication number
CN112511896A
Authority
CN
China
Prior art keywords
video
window
rendering
mask information
display
Prior art date
Legal status
Pending
Application number
CN202011223445.8A
Other languages
Chinese (zh)
Inventor
张宏
李永配
潘武
李浙伟
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4318 Generation of visual interfaces by altering the content in the rendering process, e.g. blanking, blurring or masking an image region

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention provides a video rendering method and a video rendering device. The method comprises: determining display areas and display layers of a plurality of video windows on a display device; determining, according to the display areas and the display layers, whether the pixel points in each video window are covered by other video windows, and generating video mask information corresponding to each video window according to the coverage condition of each pixel point; rendering the plurality of video windows according to the video mask information to obtain video rendering data corresponding to the uncovered pixel points; and splicing the video rendering data corresponding to the uncovered pixel points to obtain a spliced video. With this method, whether the pixel points of each window are written into the DDR is decided from video mask information computed in advance, which avoids the excessive system-resource consumption of computing region coverage in real time and improves the overall performance of the video rendering system.

Description

Video rendering method and device
Technical Field
The present invention relates to the field of video processing and display control, and in particular, to a video rendering method and apparatus.
Background
With the rapid development of liquid-crystal and fine-pitch LED display technology, large spliced display walls are being used ever more widely, with practical applications in safe-city command centers, public-security command centers, traffic-management command and dispatch centers, power dispatch centers, and similar fields.
Generally, such a display system consists mainly of a splicing control system and a display device. The splicing control system receives video streams in different formats from different video interfaces and, according to the user's settings for the display scene, compresses and otherwise processes the received streams so that they are shown at different positions on the display device. The display device may be an LED screen, a liquid-crystal screen, or a similar screen type, and may also combine multiple spliced screens to display one or more video windows jointly.
In such a display system the splicing control system plays a crucial role, since it determines the system's video-processing capability and display quality. As the video streams delivered over different interfaces become more complex, the number of input streams the splicing control system must handle grows, and ever-increasing video resolutions raise its processing-performance requirements as well. In the prior art, the spliced video is typically displayed as follows: according to the layout of the video windows on the display device, the complete video of every window is written into a DDR (Double Data Rate Synchronous Dynamic Random Access Memory) in display order, from the bottom layer to the top layer, and the video content is finally output from the DDR to the display device according to the display device's interface protocol and output requirements. This approach places a heavy demand on DDR bandwidth; because DDR bandwidth is currently a bottleneck, complex spliced-video-window scenes are difficult to process, which in turn constrains the display performance of the display device.
Disclosure of Invention
Embodiments of the invention provide a video rendering method and device to solve the problem that the DDR bandwidth bottleneck makes complex spliced-video-window scenes difficult to process and thereby constrains the display performance of the display device.
A first aspect of the present invention provides a video rendering method, the method comprising:
determining display areas and display layers of a plurality of video windows on a display device;
determining, according to the display areas and the display layers, whether the pixel points in each video window are covered by other video windows, and generating video mask information corresponding to each video window according to the coverage condition of each pixel point;
rendering the plurality of video windows according to the video mask information to obtain video rendering data corresponding to the uncovered pixel points;
and splicing the video rendering data corresponding to the uncovered pixel points to obtain a spliced video.
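The steps above can be sketched in plain Python. This is only a minimal illustration under assumed data structures — each window as a dict with `x`, `y`, `w`, `h`, and `layer` keys — not the patent's FPGA/DDR implementation:

```python
def build_masks(windows):
    """Derive per-window video mask information from display areas and
    display layers. windows: list of dicts with keys x, y, w, h, layer
    (a higher layer value means the window is displayed on top).
    Returns {window index: mask}, where mask[row][col] is True when that
    pixel point is NOT covered by a higher-layer window and must be rendered.
    """
    masks = {}
    for i, win in enumerate(windows):
        mask = [[True] * win["w"] for _ in range(win["h"])]
        for other in windows:
            if other["layer"] <= win["layer"]:
                continue  # only windows on higher layers can cover this one
            # intersect the two display areas in screen coordinates
            x0 = max(win["x"], other["x"])
            y0 = max(win["y"], other["y"])
            x1 = min(win["x"] + win["w"], other["x"] + other["w"])
            y1 = min(win["y"] + win["h"], other["y"] + other["h"])
            for y in range(y0, max(y0, y1)):
                for x in range(x0, max(x0, x1)):
                    mask[y - win["y"]][x - win["x"]] = False
        masks[i] = mask
    return masks
```

The masks only need to be recomputed when the window layout changes, which is the point of the method: the per-frame rendering path merely consults them.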
Optionally, generating the video mask information corresponding to each video window according to the coverage condition of each pixel point includes:
coding the pixel points of each video window that are not covered as a first preset value, and coding the covered pixel points as a second preset value, to obtain the video mask information corresponding to each video window;
and rendering the plurality of video windows according to the video mask information to obtain video rendering data corresponding to the uncovered pixel points includes:
rendering the pixel points coded as the first preset value according to the video mask information corresponding to each video window, to obtain the video rendering data corresponding to the uncovered pixel points.
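As a rough sketch of this optional rendering-and-splicing path, only pixel points whose mask code marks them as visible are written into the output frame (a Python list standing in for the DDR). The boolean mask layout (True = not covered, render) is an assumption for illustration, not the patent's bus-level write format:

```python
def splice(window_pixels, masks, windows, screen_w, screen_h, background=0):
    """Write only the un-covered pixel points of each video window into the
    spliced output frame; covered pixel points are simply skipped, which is
    what saves DDR write bandwidth in the scheme described above.

    window_pixels[i][r][c]: decoded pixel of window i.
    masks[i][r][c]: True when that pixel point is not covered (i.e. it
    carries the first preset value and must be rendered).
    """
    frame = [[background] * screen_w for _ in range(screen_h)]
    for i, win in enumerate(windows):
        for r in range(win["h"]):
            for c in range(win["w"]):
                if masks[i][r][c]:  # visible: write; covered: discard
                    frame[win["y"] + r][win["x"] + c] = window_pixels[i][r][c]
    return frame
```

Because covered pixel points are never written, the windows can be spliced in any order and the result is still correct, unlike the bottom-to-top overwrite of the prior art.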
Optionally, before rendering the plurality of video windows according to the video mask information, the method further includes:
and acquiring the video signal sources corresponding to the plurality of video windows, and scaling each video signal source according to the source resolution of its video and the resolution of its video window.
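A minimal sketch of this scaling step, assuming simple nearest-neighbour scaling over a 2-D list of pixels; a real splicing device would use a hardware scaler with proper filtering:

```python
def scale_nearest(src, dst_w, dst_h):
    """Scale a source picture (2-D pixel list) from its source resolution
    to the video window's resolution using nearest-neighbour sampling."""
    src_h, src_w = len(src), len(src[0])
    return [[src[r * src_h // dst_h][c * src_w // dst_w]
             for c in range(dst_w)]
            for r in range(dst_h)]
```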
Optionally, the method further comprises:
and if it is determined that the display device is formed by splicing a plurality of sub-display devices, dividing the spliced video at the splicing positions of the sub-display devices after the spliced video is obtained, and sending each divided video to the corresponding sub-display device according to that sub-display device's interface protocol type.
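The division step can be illustrated as plain rectangle slicing of the spliced frame; describing each sub-display's splicing position as an `(x, y, w, h)` tuple is an assumption for the example:

```python
def split_for_subdisplays(frame, tiles):
    """Cut the spliced frame at the splicing positions of the sub-display
    devices; each slice would then be sent out over that sub-display's own
    interface protocol."""
    return [[row[x:x + w] for row in frame[y:y + h]]
            for (x, y, w, h) in tiles]
```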
A second aspect of the present invention provides a video rendering apparatus, the apparatus comprising: a central processing unit (CPU), a video processing unit, and a double data rate synchronous dynamic random access memory (DDR SDRAM);
the central processing unit is configured to determine the display areas and display layers of the plurality of video windows on the display device, determine according to the display areas and display layers whether the pixel points in each video window are covered by other video windows, generate the video mask information corresponding to each video window according to the coverage condition of each pixel point, and send the video mask information to the DDR SDRAM;
the video processing unit is configured to acquire the video mask information from the DDR SDRAM, render the plurality of video windows according to the video mask information to obtain video rendering data corresponding to the uncovered pixel points, send the video rendering data to the DDR SDRAM, and receive the spliced video sent by the DDR SDRAM;
the DDR SDRAM is configured to receive and store the video mask information sent by the central processing unit, receive the video rendering data sent by the video processing unit while it renders the plurality of video windows, splice the video rendering data, and send the spliced video to the video processing unit.
Optionally, the video processing unit is any one or more of: a field-programmable gate array (FPGA), a graphics processing unit (GPU), and a central processing unit (CPU).
Optionally, the central processor is further configured to:
code the pixel points of each video window that are not covered as a first preset value, and code the covered pixel points as a second preset value, to obtain the video mask information corresponding to each video window;
and the video processing unit is further configured to render the pixel points coded as the first preset value according to the video mask information corresponding to each video window, to obtain the video rendering data corresponding to the uncovered pixel points.
Optionally, the video processing unit is further configured to:
before rendering the plurality of video windows according to the video mask information, acquire the video signal sources corresponding to the plurality of video windows, and scale each video signal source according to the source resolution of its video and the resolution of its video window.
Optionally, the video processing unit is further configured to:
and if it is determined that the display device is formed by splicing a plurality of sub-display devices, divide the spliced video at the splicing positions of the sub-display devices after the spliced video is obtained, and send each divided video to the corresponding sub-display device according to that sub-display device's interface protocol type.
A third aspect of the present invention provides a video rendering apparatus, comprising:
the window layout determining module is used for determining display areas and display layers of a plurality of video windows on the display equipment;
the mask information determining module is used for determining whether pixel points in each video window are covered by other video windows according to the display area and the display layer, and generating video mask information corresponding to each video window according to the covering condition of each pixel point;
the rendering module is used for rendering the video windows according to the video mask information to obtain video rendering data corresponding to uncovered pixel points;
and the video splicing module is used for splicing the video rendering data corresponding to the uncovered pixel points to obtain a spliced video.
Optionally, the mask information determining module generating the video mask information corresponding to each video window according to the coverage condition of each pixel point includes:
coding the pixel points of each video window that are not covered as a first preset value, and coding the covered pixel points as a second preset value, to obtain the video mask information corresponding to each video window;
and rendering the plurality of video windows according to the video mask information to obtain video rendering data corresponding to the uncovered pixel points includes:
rendering the pixel points coded as the first preset value according to the video mask information corresponding to each video window, to obtain the video rendering data corresponding to the uncovered pixel points.
Optionally, before the rendering module renders the plurality of video windows according to the video mask information, the rendering module further includes:
and acquiring video signal sources corresponding to the plurality of video windows, and scaling each video signal source according to the source resolution of the video and the resolution of the video window.
Optionally, the apparatus is further configured to: if it is determined that the display device is formed by splicing a plurality of sub-display devices, divide the spliced video at the splicing positions of the sub-display devices, and send each divided video to the corresponding sub-display device according to that sub-display device's interface protocol type.
A fourth aspect of the invention provides a computer storage medium having stored thereon a computer program which, when executed by a processor, performs any of the methods provided by the first aspect of the invention.
With the video rendering method provided by the invention, when splice write-back is performed, not all rendering data of every video window is written to memory: pixel point data that would ultimately be concealed by other video windows in the display area of the display device is discarded rather than written, saving DDR write bandwidth. Whether each pixel point of each video window is written to memory is determined by mask coding performed in advance, and the rendering area of each window is recalculated only when the video window layout changes, avoiding the system-resource consumption of judging in real time whether each pixel point is covered.
Drawings
FIG. 1 is a schematic diagram of a video rendering system;
FIG. 2a is a flow chart of the steps of a video rendering method;
FIG. 2b is a schematic illustration of a user selecting a window layout;
FIG. 3 is a flow chart of a complete method of video rendering;
FIG. 4 is a schematic diagram of a video rendering apparatus;
fig. 5 is a block diagram of a video rendering apparatus.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As the video streams delivered over different video interfaces grow more complex, the number of input streams that the splicing control system must process increases, and rising video resolutions raise its processing-performance requirements as well. In the prior art, the spliced video is typically displayed as follows: according to the layout of the video windows on the display device, the complete video of every window is written into the DDR SDRAM in display order, from the bottom layer to the top layer, and the DDR finally outputs the video content to the display device according to its interface protocol and output requirements. This approach demands high DDR bandwidth; because DDR bandwidth is currently a bottleneck, complex spliced-video-window scenes are difficult to process, which in turn limits the display performance of the display device. The embodiments of the present application therefore provide a video rendering method in which a mask marks whether the pixel points of each window are covered by other windows, and the mask information is stored in memory in advance. During operation, the system reads a window's mask information from memory to determine whether each pixel point is covered: a covered pixel point is not written to the DDR, while an uncovered one is. The mask information is generated only once, when the window layout is issued or changed, and is pre-stored in memory; it does not need to be regenerated for every frame, which avoids wasting system resources on judging in real time whether each pixel point is covered.
The mask information is encoded, the amount of encoded mask data is very small, and the bandwidth consumed by reading the mask from memory each time is negligible.
At present, image processing is mainly implemented with several kinds of processors, such as a CPU, a GPU, a digital signal processor (DSP), or a field-programmable gate array (FPGA). The embodiments of the invention are described in further detail below with reference to the drawings of the specification.
As shown in fig. 1, a video rendering system includes a splicing and rendering device 101 and a display device 102. The splicing and rendering device 101 is configured to collect various video signal sources, scale each video source according to its source resolution and target resolution, splice the video sources into the final video to be displayed according to the user's window layout, and finally output the result to the display device 102.
The display device 102 may be a fine-pitch LED screen, a liquid-crystal screen, or the like, and a liquid-crystal screen may itself consist of several screens spliced together. The display device 102 may comprise a spliced curtain wall and a splicer: a spliced curtain wall may include multiple display screens, each connected to an output channel of the splicer to show the video image output by that channel. Several video windows may be displayed on one display screen, and the user may set the picture layout according to the position and size of each window. While video is displayed in each screen window — that is, while picture splicing is carried out on the spliced curtain wall — the windows on a display screen may overlap. If the complete picture of every video window were rendered, the parts covered by other windows could never be displayed even after being rendered, so the computation required for picture splicing on the spliced curtain wall would be large and the splicing efficiency low.
An embodiment of the present application provides a video rendering method applied to the splicing and rendering device 101 described above; as shown in fig. 2a, the method includes the following steps:
step S201, determining display areas and display layers of a plurality of video windows on a display device;
First, video signal sources sent from different ports are selected and acquired. The input formats of the video signal sources may include rm, rmvb, mtv, dat, wmv, avi, 3gp, amv, dmv, flv, and so on. The video signal stream may also be a stream captured in real time by various kinds of cameras, such as professional cameras, CCD (Charge Coupled Device) cameras, network cameras, broadcast-grade cameras, business-grade cameras, consumer-grade cameras, studio/field cameras, camcorders, black-and-white cameras, color cameras, infrared cameras, X-ray cameras, or surveillance cameras, without limitation.
After selecting a video signal source, the user may choose a window layout, which defines the display area and display layer on the display device of the video window corresponding to that source. The user may choose the coverage relation and size of each video window, adjust a window's display area by manually dragging or stretching it, and determine the final layout by selecting the window's display layer. For example, as shown in fig. 2b, the default display layer of video window A is the first layer and that of video window B is the second layer; clicking video window B swaps the hierarchy of the two windows. The display area of a window can also be adjusted by dragging its frame, and as the display area changes, the resolution of the video window may change as well. One pixel point in a video window corresponds to one display point on the display device, and the pixel points of the window correspond one-to-one with the display points of the designated output display window.
In this embodiment, the display areas and display layers of the plurality of video windows on the display device may also be determined in other ways, for example by fixedly assigning a display area and display layer to each input port and deriving them from the port each video window arrives on. The approach is not limited to those described above; any video-layout method that a person skilled in the art can conceive may be applied here, and no limitation is imposed.
Step S202, determining whether pixel points in each video window are covered by other video windows according to the display area and the display layer, and generating video mask information corresponding to each video window according to the covering condition of each pixel point;
Specifically, each video window can be freely moved and scaled on the display device, so when several video windows are displayed at different positions at the same time, windows may cover one another. Even if a covered portion is rendered and output to the display device, its rendered pixel points can hardly be shown because they are hidden by other windows; rendering them therefore inflates the computation required for picture splicing on the spliced curtain wall and lowers display efficiency.
The window coverage relation of each video window can therefore be determined in advance, so that covered portions no longer enter the video rendering step and the final rendering workload is reduced; the coverage relation of the windows can be determined once the display area and display layer of each video window are known.
Specifically, the coverage determination may be: for each pixel point in a video window of a given layer, determine whether any other video window on a higher layer covers it; if so, that part is not rendered during video rendering. So that the rendering device can determine which parts to keep during rendering, in the embodiments of the present application each video window may be encoded in a chosen encoding manner to obtain the window's mask information; the specific encoding manner should be known to those skilled in the art and is not limited here. In addition, compression encoding may be used to reduce the data amount of the mask information: in a video window, many consecutive pixels are usually covered or displayed together, so continuous displayed or covered parts can be compression-encoded, for example with mask coding, CSR complete-state coding, or CSR coding. The amount of encoded mask data is very small, saving storage space.
In addition, a display priority may be set for each video window. When the windows are encoded, the display layer then need not be determined; instead the coded value is derived from each window's priority: if, at a pixel point position of a window, there are pixel points of another window with higher priority, that position is encoded with the value meaning "do not render". The priority may be set actively by the user, may follow the importance ordering of the video windows, or may itself represent the display level of a window; the specific way of setting priorities is not limited to the above and is not elaborated here.
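A sketch of this priority-based variant for a single screen position, using 0 for "render" and 1 for "covered, do not render"; the field names are assumptions for illustration:

```python
def code_pixel(px, py, this_priority, other_windows):
    """Return the mask code for one screen position of a window: 1 when any
    window with strictly higher priority covers it, else 0 (render)."""
    for win in other_windows:
        covers = (win["x"] <= px < win["x"] + win["w"]
                  and win["y"] <= py < win["y"] + win["h"])
        if covers and win["priority"] > this_priority:
            return 1  # covered by a higher-priority window: do not render
    return 0  # visible: render
```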
Step S203, rendering the plurality of video windows according to the video mask information to obtain video rendering data corresponding to uncovered pixel points;
Each time rendering is performed, a decoder in the splicing and rendering device renders the video sources of the plurality of video windows based on the encoded video mask information. Current video coding standards include Versatile Video Coding (VVC), the Joint Exploration Model (JEM), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), Moving Picture Experts Group (MPEG) coding, and others. In these schemes the video stream is compressed into a digital stream so that it can be transmitted over a network and stored — uncompressed, the data amount would be far too large — and the decoder restores the compressed stream for display. Video decoders are divided into hardware decoders and software decoders; a hardware decoder is usually implemented by a DSP, and a software decoder by a CPU.
A software decoder is generally used for video decoding, image restoration, and similar processes on computers and other processors; decoded images or videos can be displayed directly in a display window.
A hardware decoder performs the decoding work in dedicated hardware. When the hardware decoding is done by a GPU, it relieves the CPU of that workload and reduces power consumption, so playback stays smooth and a mobile device can play video for longer.
The decoder determines whether each pixel point in the video window should be rendered by reading the video mask information stored in external memory, and renders the uncovered pixel points to obtain the corresponding video rendering data.
And step S204, splicing the video rendering data corresponding to the uncovered pixel points to obtain a spliced video.
Splicing the video rendering data corresponding to the uncovered pixel points first requires writing the pixel point data at the specified positions into the DDR, where a specified position is the position in the image to be displayed corresponding to a display point in the area of the specified window that is not covered by other windows. The DDR write operation may take the write form defined by any of various buses, which is not limited here;
after the pixel data at the specified positions of the image to be displayed have been written into the DDR, a video timing can be generated according to the display-resolution requirement of the video window; the pixel data are read from the DDR according to this timing and output to the display device, so that the video window areas not covered by other windows are displayed.
Producing the video window output in this way writes only the data of the pixel points that actually need to be displayed into the DDR, which reduces the amount of data written, increases the data-processing speed, relieves the DDR bandwidth bottleneck and write-back loss, and accelerates the picture-splicing process, further improving the speed of picture splicing.
As an optional implementation manner, video mask information corresponding to each video window is generated according to the coverage condition of each pixel point.
Specifically, the video mask information contains a code value for each pixel coordinate of a video window: an unoccluded pixel in the video window is coded as a first preset value and an occluded pixel as a second preset value, yielding the video mask information corresponding to each video window. In this embodiment the first preset value is 0 and the second preset value is 1.
Specifically, in this embodiment it is necessary to determine whether each pixel in one window is covered by another video window. If it is, the code of that pixel of the video window is recorded as 1; otherwise it is recorded as 0. For example, if the resolution of the captured video is 1080P, i.e. 1920 × 1080 pixels, the video mask information of the first row of pixel points might be [1,1,1,1,1,1,1,1 … 0,0,0,0], containing the codes of the 1920 horizontal pixels.
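The per-window mask generation can be sketched as below. This is an illustrative sketch under assumed conventions (each window given as `(x, y, width, height, layer)` with higher layer values drawn on top); the function name and representation are not from the patent.

```python
# Build one mask per window: second preset value (1) where a pixel is
# covered by a higher-layer window, first preset value (0) where visible.

def window_masks(windows, first_preset=0, second_preset=1):
    masks = []
    for i, (x, y, w, h, layer) in enumerate(windows):
        mask = [[first_preset] * w for _ in range(h)]
        for j, (ox, oy, ow, oh, olayer) in enumerate(windows):
            if j == i or olayer <= layer:
                continue  # only windows on higher layers can cover this one
            # intersection of the two rectangles, in this window's coordinates
            x0, x1 = max(x, ox) - x, min(x + w, ox + ow) - x
            y0, y1 = max(y, oy) - y, min(y + h, oy + oh) - y
            for row in range(max(y0, 0), max(y1, 0)):
                for col in range(max(x0, 0), max(x1, 0)):
                    mask[row][col] = second_preset
        masks.append(mask)
    return masks
```

For two 4 × 4 windows where the second overlaps the bottom-right quarter of the first, the first window's mask is 1 in that quarter and 0 elsewhere, while the topmost window's mask is all 0.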
In addition, in order to reduce the data amount of the video mask information, the video mask information may be compressed with a suitable encoding. For example, since the code of each pixel point is either 0 or 1, the bit codes can be packed in groups, e.g. eight codes per byte (values 0 to 255) or into 32-bit words, reducing the data amount of the video mask information by a corresponding factor;
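As one hedged reading of the packing idea above, eight 0/1 codes can be grouped into a single byte. The function name and MSB-first bit order are assumptions for illustration only.

```python
# Pack a row of 0/1 mask codes into bytes, 8 codes per byte (MSB first).
# The tail group is zero-padded, shrinking the mask by a factor of 8.

def pack_mask_row(bits):
    out = bytearray()
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8]
        chunk = chunk + [0] * (8 - len(chunk))  # zero-pad the last group
        byte = 0
        for b in chunk:
            byte = (byte << 1) | (b & 1)
        out.append(byte)
    return bytes(out)
```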
Optionally, Run-Length Encoding (RLE) may also be used to code the video mask information. Run-length encoding represents each run of identical symbols as a (value, run length) pair, reducing the redundancy of the string; the video mask information above could be encoded as [1,8, … 0,4], which effectively reduces the data storage amount of the video mask information.
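A minimal run-length encoder for a mask row might look like the following; the function name and the (value, run length) tuple layout are illustrative assumptions.

```python
# Run-length encode a mask row as (value, run_length) pairs, so a row
# beginning with eight 1s and ending with four 0s becomes [(1, 8), ..., (0, 4)].

def rle_encode(codes):
    runs = []
    for c in codes:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([c, 1])  # start a new run
    return [tuple(r) for r in runs]
```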
And rendering the plurality of video windows according to the video mask information to obtain video rendering data corresponding to uncovered pixel points.
And rendering the pixel points coded into the first preset value according to the video mask information corresponding to each video window to obtain video rendering data corresponding to the uncovered pixel points.
When the decoder renders a video window, only the pixel points coded with the first preset value are stored; the pixel points coded with the second preset value are discarded, so that the video rendering data corresponding to the uncovered pixel points is obtained.
As an optional implementation manner, before the plurality of video windows are rendered according to the video mask information, the video signal sources corresponding to the plurality of video windows are obtained, and each video signal source is scaled according to the source resolution of the video and the resolution of its video window.
Specifically, after the video signal sources corresponding to the plurality of video windows are acquired, videos arriving over different interface protocols first need to be converted to a unified video interface. In addition, because the user may shrink or enlarge a window when choosing the window layout, the display resolution of a video may change with the window, so the video must be scaled from the source resolution to the window resolution. For example, if the original video resolution is 1080p but the display area on the display device only holds 720p worth of pixels, the original video needs to be scaled down to 720p.
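The patent does not specify a scaling algorithm; as one simple stand-in, a nearest-neighbour scaler that adapts a source grid to a window resolution can be sketched as follows (the function name is hypothetical).

```python
# Nearest-neighbour scaling of a 2-D pixel grid to a target window size,
# standing in for the scaler that adapts e.g. a 1080p source to a 720p window.

def scale_nearest(src, dst_w, dst_h):
    src_h, src_w = len(src), len(src[0])
    return [[src[r * src_h // dst_h][c * src_w // dst_w]
             for c in range(dst_w)]
            for r in range(dst_h)]
```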
As an optional implementation manner, if the display device is determined to be a spliced display device formed by splicing a plurality of sub-display devices, then after the spliced video is obtained it is divided along the splicing positions of the sub-display devices, and the divided videos are sent to the respective sub-display devices according to the interface protocol type of each sub-display device.
Specifically, the spliced video obtained by this embodiment of the application can be output to the spliced display device, where each sub-display device shows a partial area of the whole spliced video. The spliced video is divided according to the position of each sub-display device within the whole display device, and the divided videos are sent to the respective sub-display devices according to their interface protocol types.
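The per-sub-display division can be sketched as a set of rectangular crops of the stitched frame. This is an assumed representation (display id mapped to an `(x, y, width, height)` region), not the patent's interface.

```python
# Cut a stitched frame into per-sub-display crops. `tiles` maps a
# sub-display id to its (x, y, width, height) region on the video wall;
# each crop would then be sent over that sub-display's interface protocol.

def split_for_wall(frame, tiles):
    return {
        name: [row[x:x + w] for row in frame[y:y + h]]
        for name, (x, y, w, h) in tiles.items()
    }
```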
The invention further provides a complete video rendering method, as shown in fig. 3, comprising the following steps:
step S301, acquiring the video windows selected by the user and the window layout set by the user, so as to determine the display areas and display layers of a plurality of video windows on the display device. The user selects the video windows to be displayed from all available windows, then sets the size, position and other information of each window in which each video source is displayed, and finally obtains the window layout to be displayed;
step S302, determining, according to the display areas and display layers, whether the pixel points in each video window are covered by other video windows, and generating the video mask information corresponding to each video window according to the covering condition of each pixel point. For each pixel point of each video window it is judged whether it is covered by another window: if covered, the mask code of that pixel point is recorded as 1, otherwise as 0, and the mask is then encoded by a suitable coding scheme to reduce the data amount of the mask information.
Step S303, storing the video mask information into an external mask information table;
step S304, collecting multiple paths of videos of various video interface protocols, and converting each path into a video with a unified video protocol standard format;
step S305, scaling the collected and format-converted videos according to the size of each source video and the size of its display video window;
step S306, determining, according to the video mask information, whether each pixel point is written into the memory: if the corresponding mask code is 1, the video rendering data of that pixel point is discarded; otherwise it is written into the memory, obtaining the video rendering data corresponding to the uncovered pixel points;
and step S307, outputting the video rendering data to the display equipment according to the interface protocol and the requirement of the display equipment.
With the video rendering method provided by the invention, not all rendering data of every video window is written into the memory during splicing and write-back; only the pixel point data that will finally be displayed on the display device, i.e. the pixel points not covered by other video windows, is written into the memory, which saves DDR memory bandwidth.
An embodiment of the present invention provides a video rendering apparatus, as shown in fig. 4, comprising: a central processing unit (CPU) 401, a video processing unit 402, and a double data rate synchronous dynamic random access memory (DDR SDRAM) 403;
the central processing unit 401 is configured to determine display areas and display layers of a plurality of video windows on a display device, determine whether pixel points in each video window are covered by other video windows according to the display areas and the display layers, generate video mask information corresponding to each video window according to a covering condition of each pixel point, and send the video mask information to the DDR SDRAM;
the video processing unit 402 is configured to obtain the video mask information from the DDR SDRAM, render the multiple video windows according to the video mask information, obtain video rendering data corresponding to uncovered pixel points, send the video rendering data to the DDR SDRAM, and receive a spliced video sent by the DDR SDRAM;
the DDR SDRAM403 is configured to receive and store the video mask information sent by the central processing unit, receive video rendering data sent by the video processing unit when the video processing unit renders the multiple video windows, splice the video rendering data, and send the spliced video to the video processing unit.
As an optional implementation manner, the video processing unit 402 is any one or more of: a field programmable gate array (FPGA), a graphics processor (GPU), and a central processing unit (CPU).
As an optional implementation, the central processing unit 401 is further configured to:
coding the pixel points which are not shielded in the video windows into a first preset value, and coding the shielded pixel points into a second preset value to obtain video mask information corresponding to each video window;
the video processing unit 402 is further configured to render the pixel points coded with the first preset value according to the video mask information corresponding to each video window, so as to obtain the video rendering data corresponding to the uncovered pixel points.
As an optional implementation manner, the video processing unit 402 is further configured to:
before rendering the video windows according to the video mask information, obtaining video signal sources corresponding to the video windows, and scaling each video signal source according to the source resolution of the video and the resolution of the video window.
As an optional implementation manner, the video processing unit 402 is further configured to:
and if the display equipment is determined to be spliced by a plurality of sub-display equipment, after the spliced video is obtained, dividing the spliced video by using the splicing positions of the sub-display equipment, and respectively sending the divided video to each sub-display equipment according to the interface protocol type of each sub-display equipment.
An embodiment of the present invention provides a video rendering apparatus, as shown in fig. 5, the apparatus includes the following modules:
a window layout determining module 501, configured to determine display areas and display layers of multiple video windows on a display device;
a mask information determining module 502, configured to determine whether a pixel point in each video window is covered by another video window according to the display area and the display layer, and generate video mask information corresponding to each video window according to a covering condition of each pixel point;
a rendering module 503, configured to render the multiple video windows according to the video mask information, so as to obtain video rendering data corresponding to uncovered pixel points;
and the video splicing module 504 is configured to splice the video rendering data corresponding to the uncovered pixel points to obtain a spliced video.
Optionally, the mask information determining module 502 generates video mask information corresponding to each video window according to the coverage condition of each pixel, including:
coding the pixel points which are not shielded in the video windows into a first preset value, and coding the shielded pixel points into a second preset value to obtain video mask information corresponding to each video window;
rendering the plurality of video windows according to the video mask information to obtain video rendering data corresponding to uncovered pixel points, including:
and rendering the pixel points coded into the first preset value according to the video mask information corresponding to each video window to obtain video rendering data corresponding to the uncovered pixel points.
Optionally, before the rendering module 503 renders the plurality of video windows according to the video mask information, the method further includes:
and acquiring video signal sources corresponding to the plurality of video windows, and scaling each video signal source according to the source resolution of the video and the resolution of the video window.
Optionally, the apparatus is further configured to, after it is determined that the display device is a spliced display device formed by splicing a plurality of sub display devices, divide the spliced video by using the splicing positions of the sub display devices, and send the divided video to each of the sub display devices according to the interface protocol type of each of the sub display devices.
An embodiment of the present invention provides a computer storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the video rendering method provided by the above embodiment.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of video rendering, the method comprising:
determining display areas and display layers of a plurality of video windows on display equipment;
determining whether pixel points in each video window are covered by other video windows or not according to the display area and the display layer, and generating video mask information corresponding to each video window according to the covering condition of each pixel point;
rendering the plurality of video windows according to the video mask information to obtain video rendering data corresponding to uncovered pixel points;
and splicing the video rendering data corresponding to the uncovered pixel points to obtain a spliced video.
2. The method of claim 1, wherein generating video mask information corresponding to each video window according to the coverage of each pixel point comprises:
coding the pixel points which are not shielded in the video windows into a first preset value, and coding the shielded pixel points into a second preset value to obtain video mask information corresponding to each video window;
rendering the plurality of video windows according to the video mask information to obtain video rendering data corresponding to uncovered pixel points, including:
and rendering the pixel points coded into the first preset value according to the video mask information corresponding to each video window to obtain video rendering data corresponding to the uncovered pixel points.
3. The method of claim 1, wherein before rendering the plurality of video windows according to the video mask information, further comprising:
and acquiring video signal sources corresponding to the plurality of video windows, and scaling each video signal source according to the source resolution of the video and the resolution of the video window.
4. The method of claim 1, further comprising:
and if the display equipment is determined to be spliced by a plurality of sub-display equipment, after the spliced video is obtained, dividing the spliced video by using the splicing positions of the sub-display equipment, and respectively sending the divided video to each sub-display equipment according to the interface protocol type of each sub-display equipment.
5. A video rendering apparatus, characterized in that the apparatus comprises: a central processing unit (CPU), a video processing unit, and a double data rate synchronous dynamic random access memory (DDR SDRAM);
the central processing unit is used for determining display areas and display layers of a plurality of video windows on the display equipment, determining whether pixel points in each video window are covered by other video windows according to the display areas and the display layers, generating video mask information corresponding to each video window according to the covering condition of each pixel point, and sending the video mask information to the DDR SDRAM;
the video processing unit is used for acquiring the video mask information from the DDR SDRAM, rendering the video windows according to the video mask information to obtain video rendering data corresponding to uncovered pixel points, sending the video rendering data to the DDR SDRAM, and receiving a spliced video sent by the DDR SDRAM;
the DDR SDRAM is used for receiving and storing the video mask information sent by the central processing unit, receiving video rendering data sent by the video processing unit when the video processing unit renders the video windows, splicing the video rendering data, and sending the spliced video to the video processing unit.
6. The apparatus of claim 5, wherein the video processing unit is to: any one or more of a field programmable gate array FPGA, a graphic processor GPU and a central processing unit CPU.
7. The apparatus of claim 5, wherein the central processor is further configured to:
coding the pixel points which are not shielded in the video windows into a first preset value, and coding the shielded pixel points into a second preset value to obtain video mask information corresponding to each video window;
and the video processing unit is further used for rendering the pixel points coded into the first preset value according to the video mask information corresponding to each video window to obtain video rendering data corresponding to the uncovered coordinate position.
8. The apparatus of claim 5, wherein the video processing unit is further configured to:
before rendering the video windows according to the video mask information, obtaining video signal sources corresponding to the video windows, and scaling each video signal source according to the source resolution of the video and the resolution of the video window.
9. The apparatus of claim 5, wherein the video processing unit is further configured to:
and if the display equipment is determined to be spliced by a plurality of sub-display equipment, after the spliced video is obtained, dividing the spliced video by using the splicing positions of the sub-display equipment, and respectively sending the divided video to each sub-display equipment according to the interface protocol type of each sub-display equipment.
10. A computer storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out the method according to any one of claims 1-4.
CN202011223445.8A 2020-11-05 2020-11-05 Video rendering method and device Pending CN112511896A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011223445.8A CN112511896A (en) 2020-11-05 2020-11-05 Video rendering method and device

Publications (1)

Publication Number Publication Date
CN112511896A true CN112511896A (en) 2021-03-16

Family

ID=74955258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011223445.8A Pending CN112511896A (en) 2020-11-05 2020-11-05 Video rendering method and device

Country Status (1)

Country Link
CN (1) CN112511896A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103581505A (en) * 2012-07-30 2014-02-12 浙江大华技术股份有限公司 Digital video signal processing device and method
CN104281426A (en) * 2013-07-05 2015-01-14 浙江大华技术股份有限公司 Image display method and device
US20180160123A1 (en) * 2016-12-07 2018-06-07 Qualcomm Incorporated Systems and methods of signaling of regions of interest
CN111147770A (en) * 2019-12-18 2020-05-12 广州市保伦电子有限公司 Multi-channel video window overlapping display method, electronic equipment and storage medium


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113099182A (en) * 2021-04-08 2021-07-09 西安应用光学研究所 Multi-window real-time scaling method based on airborne parallel processing architecture
CN113099182B (en) * 2021-04-08 2022-11-22 西安应用光学研究所 Multi-window real-time scaling method based on airborne parallel processing architecture
CN113709563A (en) * 2021-10-27 2021-11-26 北京金山云网络技术有限公司 Video cover selecting method and device, storage medium and electronic equipment
CN113709563B (en) * 2021-10-27 2022-03-08 北京金山云网络技术有限公司 Video cover selecting method and device, storage medium and electronic equipment
CN114257704A (en) * 2021-12-17 2022-03-29 威创集团股份有限公司 FPGA-based video superposition method, device, equipment and medium
CN114257704B (en) * 2021-12-17 2023-10-10 威创集团股份有限公司 FPGA-based video superposition method, device, equipment and medium
CN114449245A (en) * 2022-01-28 2022-05-06 上海瞳观智能科技有限公司 Real-time two-way video processing system and method based on programmable chip
CN114449245B (en) * 2022-01-28 2024-04-05 上海瞳观智能科技有限公司 Real-time two-way video processing system and method based on programmable chip


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210316