CN116954541B - Video cutting method and system for spliced screen


Info

Publication number
CN116954541B
CN116954541B
Authority
CN
China
Prior art keywords
rendering
actual
video
displayed
standard
Prior art date
Legal status
Active
Application number
CN202311198741.0A
Other languages
Chinese (zh)
Other versions
CN116954541A (en)
Inventor
石金川
朱正辉
黄小强
Current Assignee
Guangdong Baolun Electronics Co., Ltd.
Original Assignee
Guangdong Baolun Electronics Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong Baolun Electronics Co., Ltd.
Priority to CN202311198741.0A
Publication of CN116954541A
Application granted
Publication of CN116954541B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Studio Circuits (AREA)

Abstract

The invention relates to the field of video display, and in particular to a video cutting method and system for a spliced screen. The method comprises: acquiring a video to be displayed and a spliced screen; determining the picture element distribution of any video frame of the video to be displayed in any splicing unit of the spliced screen; starting a processor corresponding to a rendering computing-power scheme and connecting the splicing unit to the processor; rendering the local video to be displayed in the splicing unit, determining the actual stutter value of the local video during rendering, adjusting the processor's rendering parameters for the local video, rendering according to the adjusted parameters, and displaying the rendered target local video in the splicing unit; and combining the target local videos of the splicing units into a target video displayed on the spliced screen. The invention improves the fluency of the spliced screen when displaying video content.

Description

Video cutting method and system for spliced screen
Technical Field
The invention relates to the field of video display, and in particular to a video cutting method and system for a spliced screen.
Background
Currently, distributed video processing is used for video processing of spliced screens. Unlike the centralized control mode, in the distributed mode each screen of the spliced display is equipped with a decoder unit that controls the display content of that screen. A server instructs each decoder unit what its screen needs to display. The decoder units read video data streams over the network and display them as required; since the display content of each screen differs, the server must calculate the video size, display position, and display offset for each decoder unit through a video cutting algorithm.
Patent document CN113470038A discloses a video cutting method that includes: obtaining a video frame to be processed from a target video, together with the information to be cut in that frame; determining an inclination angle corresponding to the video frame according to the information to be cut; correcting the video frame according to the inclination angle to obtain a corrected video frame; and cutting the corrected video frame according to the information to be cut to determine an image to be played.
However, existing video processing modes are limited in the conditions they consider, so the display of content on the spliced screen is constrained, delays occur, and the display effect is affected.
Disclosure of Invention
Therefore, the invention provides a video cutting method for a spliced screen, which solves the problem of delay when the spliced screen displays video content.
To achieve the above object, the invention provides a video cutting method for a spliced screen, comprising:
acquiring a video to be displayed and a spliced screen;
determining the picture element distribution of any video frame of the video to be displayed in any splicing unit of the spliced screen, selecting a rendering computing-power scheme according to the picture element distribution, and allocating a processor corresponding to the rendering computing-power scheme;
starting the processor corresponding to the rendering computing-power scheme, connecting the splicing unit to the processor, rendering the local video to be displayed in the splicing unit, determining the actual stutter value of the local video during rendering so as to adjust the processor's rendering parameters for the local video, rendering according to the adjusted rendering parameters, and displaying the rendered target local video in the splicing unit;
and combining the target local videos of the splicing units into a target video, and displaying the target video on the spliced screen.
Further, determining the picture element distribution of any video frame of the video to be displayed in any splicing unit of the spliced screen and selecting a rendering computing-power scheme according to the picture element distribution includes:
setting a plurality of decoders and determining the decoding range of each decoder; if the picture element distribution falls within a decoder's decoding range, that decoder identifies the picture element contour categories and counts the number of picture elements of each contour category;
obtaining the actual rendering complexity of the picture elements in any splicing unit from the picture element counts, and selecting a rendering computing-power scheme according to the actual rendering complexity;
the picture element contour categories are straight-line contours and curve contours, and the decoding ranges correspond one-to-one with the splicing units.
Further, selecting a rendering computing-power scheme according to the actual rendering complexity of the picture elements includes:
setting a standard picture element rendering complexity P0;
if the actual rendering complexity P of the picture elements is greater than the standard rendering complexity P0, determining that the actual rendering computing power exceeds the standard rendering computing-power requirement, and selecting the first rendering computing-power scheme;
if the actual rendering complexity P equals the standard rendering complexity P0, determining that the actual rendering computing power equals the standard rendering computing-power requirement, and selecting the standard rendering computing-power scheme;
if the actual rendering complexity P is less than the standard rendering complexity P0, determining that the actual rendering computing power is below the standard rendering computing-power requirement, and selecting the second rendering computing-power scheme.
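The three-way comparison above can be written as a small selector; a minimal sketch, assuming the schemes are simply labeled "first", "standard", and "second" as in the text:

```python
def select_scheme(P: float, P0: float) -> str:
    """Select a rendering computing-power scheme by comparing the actual
    picture element rendering complexity P with the standard complexity P0."""
    if P > P0:
        return "first"     # actual computing power exceeds the standard requirement
    if P < P0:
        return "second"    # actual computing power is below the standard requirement
    return "standard"      # P == P0: standard scheme
```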
Further, allocating a processor corresponding to the rendering computing-power scheme includes:
setting a standard processor number N0;
when the first rendering computing-power scheme is selected, adjusting the standard processor number N0 to a first processor number N1 = N0 × (1 − α) via the adjustment parameter α, and allocating with said first processor number N1;
when the standard rendering computing-power scheme is selected, allocating according to the standard processor number N0;
when the second rendering computing-power scheme is selected, adjusting the standard processor number N0 to a second processor number N2 = N0 × (1 + α) via the adjustment parameter α, and allocating with said second processor number N2.
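The allocation rule can be sketched as follows; the source does not say how a fractional processor count is rounded, so rounding to the nearest integer with a floor of 1 is an assumption here:

```python
def allocate_processors(scheme: str, N0: int, alpha: float) -> int:
    """Adjust the standard processor number N0 by the adjustment parameter alpha
    according to the selected rendering computing-power scheme.
    Rounding to the nearest integer (floor of 1) is an assumption."""
    if scheme == "first":        # N1 = N0 * (1 - alpha)
        n = N0 * (1 - alpha)
    elif scheme == "second":     # N2 = N0 * (1 + alpha)
        n = N0 * (1 + alpha)
    else:                        # standard scheme: allocate N0 unchanged
        return N0
    return max(1, round(n))
```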
Further, rendering the local video to be displayed in the splicing unit and determining the actual stutter value of the local video during rendering so as to adjust the processor's rendering parameters for the local video includes:
monitoring the actual rendering time of the local video during rendering, and comparing it with the standard rendering time to determine the actual stutter value of the local video;
comparing the actual stutter value with the standard stutter value; if the actual stutter value is greater than the standard stutter value, judging the local video to be a stuttering local video;
calculating the actual stutter value difference of the stuttering local video from its actual stutter value, comparing this difference with the standard stutter value difference, and adjusting the rendering parameters according to the comparison result.
Further, comparing the actual rendering time of the local video to be displayed with the standard rendering time to determine its actual stutter value includes:
setting a standard rendering time H0;
if the actual rendering time H is greater than the standard rendering time H0, calculating the actual stutter value K according to formula (1):
K = (sustained low frame rate of the video frames) × 0.8 + (jitter frame rate of the video frames) × 0.2  (1).
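A minimal sketch of formula (1); the two rate terms are assumed to be pre-computed measurements:

```python
def stutter_value(sustained_low_rate: float, jitter_rate: float) -> float:
    """Formula (1): weight the sustained-low-frame-rate term at 0.8 and the
    jitter-frame-rate term at 0.2, reflecting their impact on smoothness."""
    return sustained_low_rate * 0.8 + jitter_rate * 0.2
```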
Further, calculating the actual stutter value difference of the stuttering local video from its actual stutter value, comparing it with the standard stutter value difference, and adjusting the rendering parameters according to the comparison result includes:
setting a standard stutter value K0;
setting a first stutter value difference threshold ΔK1 and a second stutter value difference threshold ΔK2 for the stuttering local video;
calculating the actual stutter value difference ΔK of the stuttering local video, wherein K0 ∈ (Kmin, Kmax) and ΔK = K − Kmax;
when ΔK < ΔK1, selecting the first coefficient α1 to adjust the number of standard processors F0 to F1 = F0 × (1 + α1);
when ΔK1 ≤ ΔK < ΔK2, selecting the second coefficient α2 to adjust F0 to F2 = F0 × (1 + α2);
when ΔK ≥ ΔK2, selecting the third coefficient α3 to adjust F0 to F3 = F0 × (1 + α3);
wherein ΔK2 > ΔK1.
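The tiered adjustment can be sketched as below. The defining formulas for the coefficients α1, α2, α3 are not reproduced in this text, so they appear here as preset inputs (a1, a2, a3); rounding the result to an integer is also an assumption:

```python
def adjust_processor_count(delta_k: float, F0: int,
                           dk1: float, dk2: float,
                           a1: float, a2: float, a3: float) -> int:
    """Tiered adjustment of the standard processor number F0 by the stutter
    value difference delta_k, with thresholds dk2 > dk1. The coefficient
    values a1, a2, a3 are preset inputs (their formulas are not in the text)."""
    if delta_k < dk1:
        coeff = a1           # F1 = F0 * (1 + a1)
    elif delta_k < dk2:
        coeff = a2           # F2 = F0 * (1 + a2)
    else:
        coeff = a3           # F3 = F0 * (1 + a3)
    return round(F0 * (1 + coeff))
```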
Further, determining the picture element distribution of the video frame to be displayed for any splicing unit of the spliced screen includes:
establishing a two-dimensional rectangular coordinate system with the upper-left intersection of the spliced screen's horizontal and vertical edges as the origin, and placing the video frame to be displayed in this coordinate system to obtain the original coordinates of each picture element in the frame;
presetting a rectangular coordinate range (x, y, w, h) according to the parameters of the splicing unit, where x is the x-axis coordinate of the splicing unit's top-left vertex, y is its y-axis coordinate, w is the splicing unit's length, and h is its width;
cutting the video frame according to this coordinate range and converting the original coordinates of the picture elements in the cut frame into relative coordinates, thereby determining the picture element distribution.
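The cutting and coordinate conversion can be sketched as below; treating each picture element as a single point at its original coordinates is an illustrative assumption:

```python
def clip_elements(elements, x, y, w, h):
    """Keep the picture elements whose original coordinates fall inside the
    splicing unit's rectangle (x, y, w, h), and convert them to coordinates
    relative to the unit's top-left vertex."""
    out = []
    for (px, py) in elements:
        if x <= px < x + w and y <= py < y + h:
            out.append((px - x, py - y))   # relative coordinates
    return out
```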
Further, obtaining the actual rendering complexity of the picture elements in any splicing unit from the per-category picture element counts includes:
calculating the actual rendering complexity of the picture elements according to formula (2):
P = E/E0 + D/D0  (2),
where P is the actual rendering complexity of the picture elements, E is the number of straight lines, E0 is a preset standard straight-line count parameter, D is the number of curves, and D0 is a preset standard curve count parameter.
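A direct transcription of formula (2) as a one-line helper:

```python
def rendering_complexity(E: int, D: int, E0: int, D0: int) -> float:
    """Formula (2): P = E/E0 + D/D0, where E and D are the counts of
    straight-line and curve contours, and E0, D0 the standard parameters."""
    return E / E0 + D / D0
```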
In another aspect, an embodiment of the invention also provides a cutting system implementing the above video cutting method for a spliced screen, comprising:
an acquisition module for acquiring the video to be displayed and the spliced screen;
a decoder comprising a determining unit, a selecting unit, and an allocation unit, where the determining unit determines the picture element distribution of any video frame of the video to be displayed in any splicing unit of the spliced screen, the selecting unit selects a rendering computing-power scheme according to the picture element distribution, and the allocation unit allocates a processor corresponding to the scheme;
a starting module for starting the processor corresponding to the rendering computing-power scheme, connecting the splicing unit to the processor, rendering the local video to be displayed in the splicing unit, and determining the actual stutter value of the local video during rendering so as to adjust the processor's rendering parameters;
a processor comprising a rendering unit and a display unit, where the rendering unit renders according to the adjusted rendering parameters, and the display unit displays the rendered target local video in the splicing unit, combines the target local videos of the splicing units into a target video, and displays the target video on the spliced screen.
Compared with the prior art, the invention determines the picture element distribution of any video frame in any splicing unit of the spliced screen by converting the original coordinates of the picture elements into relative coordinates; selects a rendering computing-power scheme from the picture element distribution so as to allocate a corresponding processor; starts that processor and connects it to the splicing unit to render the local video to be displayed; adjusts the rendering parameters according to the actual stutter value of the local video; and, through the adjusted rendering parameters, improves the fluency of video content displayed on the spliced screen.
In particular, by setting a plurality of decoders and determining the decoding range of each, the picture element distribution in each splicing unit is identified, and the picture element counts are determined from that distribution.
In particular, by comparing the actual rendering complexity of the picture elements with the standard rendering complexity, the rendering computing-power scheme is determined, so that processors are allocated differentially and the computing load is balanced.
In particular, determining the rendering computing-power scheme enables the actual rendering computing power to be judged against the standard requirement, and the number of standard processors is adjusted according to the result, achieving tiered adjustment.
In particular, by rendering the picture elements in the splicing unit and determining the actual stutter value of the local video during rendering, the rendering parameters are adjusted.
In particular, by comparing the actual stutter value difference with the standard stutter value difference, the rendering parameters are adjusted, and rendering with the adjusted parameters improves display fluency on the spliced screen.
In particular, establishing the two-dimensional rectangular coordinate system and coordinate range enables cutting of the video frame to be displayed, and the picture element counts are determined from the picture element distribution.
Drawings
Fig. 1 is a schematic flowchart of a video cutting method for a spliced screen according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a video cutting system for a spliced screen according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a decoder according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a processor according to an embodiment of the present invention.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
It should be noted that, in the description of the present invention, terms such as "upper," "lower," "left," "right," "inner," "outer," and the like indicate directions or positional relationships based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the apparatus or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or internal communication between two elements. The specific meaning of these terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Referring to Fig. 1, which shows a flowchart of a video cutting method for a spliced screen according to an embodiment of the present invention, the method includes:
Step S100: acquiring a video to be displayed and a spliced screen;
Step S200: determining the picture element distribution of any video frame of the video to be displayed in any splicing unit of the spliced screen, selecting a rendering computing-power scheme according to the picture element distribution, and allocating a processor corresponding to the rendering computing-power scheme;
Step S300: starting the processor corresponding to the rendering computing-power scheme, connecting the splicing unit to the processor, rendering the local video to be displayed in the splicing unit, determining the actual stutter value of the local video during rendering so as to adjust the processor's rendering parameters for the local video, rendering according to the adjusted rendering parameters, and displaying the rendered target local video in the splicing unit;
Step S400: combining the target local videos of the splicing units into a target video, and displaying the target video on the spliced screen.
Specifically, in the embodiment of the invention, the picture element distribution of any video frame in any splicing unit of the spliced screen is determined by converting the original coordinates of the picture elements into relative coordinates; a rendering computing-power scheme is selected from the picture element distribution so that a corresponding processor can be allocated; that processor is started and connected to the splicing unit to render the local video to be displayed; the rendering parameters are adjusted according to the actual stutter value of the local video; and the adjusted rendering parameters improve the fluency of video content displayed on the spliced screen.
Specifically, determining the picture element distribution of any video frame of the video to be displayed in any splicing unit of the spliced screen and selecting a rendering computing-power scheme according to the picture element distribution includes:
setting a plurality of decoders and determining the decoding range of each decoder; if the picture element distribution falls within a decoder's decoding range, that decoder identifies the picture element contour categories and counts the number of picture elements of each contour category;
obtaining the actual rendering complexity of the picture elements in any splicing unit from the picture element counts, and selecting a rendering computing-power scheme according to the actual rendering complexity;
the picture element contour categories are straight-line contours and curve contours, and the decoding ranges correspond one-to-one with the splicing units.
Specifically, in the embodiment of the invention, by setting a plurality of decoders and determining the decoding range of each, the picture element distribution in each splicing unit is identified, and the picture element counts are determined from that distribution.
Specifically, selecting a rendering computing-power scheme according to the actual rendering complexity of the picture elements includes:
setting a standard picture element rendering complexity P0;
if the actual rendering complexity P of the picture elements is greater than the standard rendering complexity P0, determining that the actual rendering computing power exceeds the standard rendering computing-power requirement, and selecting the first rendering computing-power scheme;
if the actual rendering complexity P equals the standard rendering complexity P0, determining that the actual rendering computing power equals the standard rendering computing-power requirement, and selecting the standard rendering computing-power scheme;
if the actual rendering complexity P is less than the standard rendering complexity P0, determining that the actual rendering computing power is below the standard rendering computing-power requirement, and selecting the second rendering computing-power scheme.
Specifically, in the embodiment of the invention, by comparing the actual rendering complexity of the picture elements with the standard rendering complexity, the rendering computing-power scheme is determined, so that processors are allocated differentially and the computing load is balanced.
Specifically, allocating a processor corresponding to the rendering computing-power scheme includes:
setting a standard processor number N0;
when the first rendering computing-power scheme is selected, adjusting the standard processor number N0 to a first processor number N1 = N0 × (1 − α) via the adjustment parameter α, and allocating with said first processor number N1;
when the standard rendering computing-power scheme is selected, allocating according to the standard processor number N0;
when the second rendering computing-power scheme is selected, adjusting the standard processor number N0 to a second processor number N2 = N0 × (1 + α) via the adjustment parameter α, and allocating with said second processor number N2.
Specifically, the standard processor number N0 may be set within the range [5, 10]; the processors correspond one-to-one with the picture element counts, and the standard rendering computing-power requirement is the computing power that N0 standard processors can provide.
Specifically, those skilled in the art may set the adjustment parameter α within the interval [0, 20%].
Specifically, in the embodiment of the invention, determining the rendering computing-power scheme enables the actual rendering computing power to be judged against the standard requirement, and the number of standard processors is adjusted accordingly, achieving tiered adjustment.
Specifically, rendering the local video to be displayed in the splicing unit and determining the actual stutter value of the local video during rendering so as to adjust the processor's rendering parameters for the local video includes:
monitoring the actual rendering time of the local video during rendering, and comparing it with the standard rendering time to determine the actual stutter value of the local video;
comparing the actual stutter value with the standard stutter value; if the actual stutter value is greater than the standard stutter value, judging the local video to be a stuttering local video;
calculating the actual stutter value difference of the stuttering local video from its actual stutter value, comparing this difference with the standard stutter value difference, and adjusting the rendering parameters according to the comparison result.
Specifically, the rendering parameter in the invention, namely the noise threshold, may be set by those skilled in the art within the range [0.001, 0.003].
Specifically, in the embodiment of the invention, the rendering parameters are adjusted by rendering the picture elements in the splicing unit and determining the actual stutter value of the local video during rendering.
Specifically, comparing the actual rendering time of the local video to be displayed with the standard rendering time to determine its actual stutter value includes:
setting a standard rendering time H0;
if the actual rendering time H is greater than the standard rendering time H0, calculating the actual stutter value K according to formula (1):
K = (sustained low frame rate of the video frames) × 0.8 + (jitter frame rate of the video frames) × 0.2  (1).
Specifically, a sustained low frame rate means the frame rate is stable but too low: for example, when the video frame rate stays below 12 fps, the rendered display cannot appear continuous, and when it stays below 30 fps, display continuity is affected and flicker degrades the experience. The jitter frame rate of a video frame refers to an unstable frame rate: for example, alternating between 60 fps and 20 fps, where the inconsistent rendering display rate produces intermittent stutter. Accordingly, the sustained low frame rate is given the larger weight because of its influence on rendering display smoothness.
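The two frame-rate conditions described above can be separated by a simple check over per-second frame-rate samples; the thresholds used here (30 fps for "low", a 20 fps spread for "jitter") are illustrative assumptions, not values from the source:

```python
def classify_frame_rates(samples, low_fps=30, jitter_spread=20):
    """Distinguish a sustained low frame rate (every sample below low_fps)
    from a jittering frame rate (large spread between samples).
    Thresholds are illustrative assumptions."""
    sustained_low = all(s < low_fps for s in samples)
    jitter = (max(samples) - min(samples)) > jitter_spread
    return sustained_low, jitter
```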
Specifically, those skilled in the art may set the standard stutter value K0 within the range [60, 80], in units of fps.
Specifically, calculating the actual stutter difference of the stuttering local video according to its actual stutter value, comparing the actual stutter difference with the standard stutter difference, and adjusting the rendering parameters according to the comparison result includes:
setting a standard stutter value range K0 = [Kmin, Kmax];
setting a first actual stutter difference ΔK1 and a second actual stutter difference ΔK2 for the stuttering local video;
calculating the actual stutter difference ΔK of the stuttering local video, where ΔK = K − Kmax;
when ΔK < ΔK1, selecting the first coefficient α1 to adjust the standard processor number F0 to F1 = F0(1 + α1);
when ΔK1 ≤ ΔK < ΔK2, selecting the second coefficient α2 to adjust the standard processor number F0 to F2 = F0(1 + α2);
when ΔK ≥ ΔK2, selecting the third coefficient α3 to adjust the standard processor number F0 to F3 = F0(1 + α3);
wherein ΔK2 > ΔK1.
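The tiered adjustment above can be sketched as a simple threshold ladder. The coefficient values used here are placeholders — the text's ranges for α1, α2, α3 are elided — and the condition ordering assumes ΔK1 < ΔK2 as the text requires:

```python
def adjust_processor_count(delta_k, f0, dk1, dk2,
                           alpha1=0.1, alpha2=0.2, alpha3=0.3):
    """Scale the standard processor count F0 by a coefficient chosen
    from the stutter difference delta_k (ΔK = K - Kmax)."""
    assert dk1 < dk2, "the text requires ΔK2 > ΔK1"
    if delta_k < dk1:
        alpha = alpha1        # mild stutter -> smallest increase
    elif delta_k < dk2:       # i.e. ΔK1 <= ΔK < ΔK2
        alpha = alpha2
    else:                     # ΔK >= ΔK2 -> severe stutter
        alpha = alpha3
    return f0 * (1 + alpha)
```

With the placeholder coefficients, a severe stutter (ΔK ≥ ΔK2) grows the processor count by 30 % rather than 10 %, which is the monotone behaviour the three tiers imply.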
Specifically, in the embodiment of the invention, the rendering parameters are adjusted by comparing the actual stutter difference with the standard stutter difference, and rendering proceeds with the adjusted parameters, thereby improving the smoothness of display on the spliced screen.
Further, determining the picture element distribution of the video frame to be displayed for any splicing unit in the spliced screen includes:
establishing a two-dimensional rectangular coordinate system by taking the intersection point of the transverse edge and the longitudinal edge at the left upper part of the spliced screen as an origin, and placing the video frame to be displayed in the two-dimensional rectangular coordinate system to obtain the original coordinates of each picture element in the video frame to be displayed;
presetting a two-dimensional rectangular coordinate range (x, y, w, h) according to the parameter characteristics of the splicing unit, wherein x represents the x-axis coordinate of the top left vertex of the splicing unit, y represents the y-axis coordinate of the top left vertex of the splicing unit, w represents the length of the splicing unit, and h represents the width of the splicing unit;
cutting the video frame to be displayed according to the two-dimensional rectangular coordinate range, and converting the original coordinates of the picture elements in the cut video frame to be displayed into the relative coordinates of the picture elements so as to determine the picture element distribution.
Specifically, the coordinate conversion is to convert an original coordinate with an upper left vertex of the spliced screen as an origin, a length of the spliced screen and a width of the spliced screen as coordinate ranges into a relative coordinate with the upper left vertex of each spliced unit as the origin, the length of the spliced unit and the width of the spliced unit as coordinate ranges.
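The crop-and-convert step can be sketched in a few lines. This is an assumed point-based model — the text operates on picture elements rather than bare (x, y) points, and all names here are invented:

```python
def crop_and_localize(elements, tile):
    """Keep the picture elements whose original coordinates fall inside
    one splicing unit's rectangle (x, y, w, h), and convert them to
    relative coordinates with the tile's top-left vertex as the origin."""
    tx, ty, tw, th = tile
    local = []
    for ex, ey in elements:
        if tx <= ex < tx + tw and ty <= ey < ty + th:
            local.append((ex - tx, ey - ty))  # re-origin at tile vertex
    return local
```

For example, an element at screen coordinate (12, 3) inside the tile (10, 0, 10, 10) becomes (2, 3) in that tile's local coordinate range.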
Specifically, in the embodiment of the invention, the aim of cutting the video frame to be displayed is achieved by establishing a two-dimensional rectangular coordinate system and a two-dimensional rectangular coordinate range, and the number of picture elements is determined by determining the picture element distribution.
Specifically, obtaining the actual rendering complexity of the picture elements in any splicing unit according to the number of the picture elements of each category includes:
calculating the actual rendering complexity of the picture elements according to formula (2),
P = E/E0 + D/D0 (2),
wherein P represents the actual rendering complexity of the picture elements, E represents the number of straight lines, E0 represents the preset standard straight-line count parameter, D represents the number of curves, and D0 represents the preset standard curve count parameter.
Specifically, in this embodiment, E0 and D0 are calculated from sample data: a plurality of videos is acquired, the number of straight lines in each video frame is counted, and the average straight-line count across frames is set as the standard straight-line count parameter E0; likewise, the number of curves in each video frame is counted, and the average curve count across frames is set as the standard curve count parameter D0.
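Formula (2) and the scheme selection it drives are straightforward to sketch; the function and variable names below are illustrative only:

```python
def rendering_complexity(e, d, e0, d0):
    """Formula (2): P = E/E0 + D/D0, where E and D are the straight-line
    and curve counts of a tile's frame, and E0/D0 are the standard counts
    averaged over sample videos."""
    return e / e0 + d / d0

def pick_scheme(p, p0):
    """Select a rendering computing scheme from the complexity P,
    mirroring the three-way comparison in the text."""
    if p > p0:
        return "first"     # demand above the standard computing power
    if p < p0:
        return "second"    # demand below the standard computing power
    return "standard"
```

A frame with exactly the standard counts (E = E0, D = D0) lands at P = 2, so a natural choice of P0 for this sketch would be 2.0, though the text leaves P0 to the practitioner.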
Referring to fig. 2-4, which show a schematic structural diagram of a video cutting system for a spliced screen according to an embodiment of the present invention, the system includes:
the acquisition module 10 is used for acquiring the video to be displayed and the spliced screen;
the decoder 20 comprises a determining unit 21, a selecting unit 22 and an allocating unit 23, wherein the determining unit 21 is used for determining picture element distribution conditions of any splicing unit of any video frame in the video to be displayed in the spliced screen, the selecting unit 22 selects a rendering calculation scheme according to the picture element distribution conditions, and the allocating unit 23 allocates a processor 40 corresponding to the rendering calculation scheme;
the starting module 30 is configured to start the processor corresponding to the rendering computing scheme, connect the splicing unit with the processor, render the local video to be displayed in the splicing unit, and determine the actual stutter value of the local video to be displayed in the splicing unit during rendering, where the actual stutter value is used to adjust the processor's rendering parameters for the local video to be displayed;
the processor 40 includes a rendering unit 41 and a display unit 42, where the rendering unit 41 is configured to render according to the adjusted rendering parameters, and the display unit 42 is configured to display the rendered target local video in the splicing unit, aggregate the target local videos of the splicing units, and display the target video on the spliced screen.
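A much-simplified, hypothetical end-to-end sketch of the module layout above (acquire → determine → render → aggregate); the patent specifies modules, not an API, so every name here is invented:

```python
def inside(element, tile):
    """True if a picture element's (x, y) lies in the tile rect (x, y, w, h)."""
    ex, ey = element
    tx, ty, tw, th = tile
    return tx <= ex < tx + tw and ty <= ey < ty + th

def display_frame(frame_elements, tiles, render_tile):
    """Crop one video frame per splicing unit, render each local video,
    and aggregate the target local videos into the target frame."""
    target = {}
    for tile in tiles:
        local = [e for e in frame_elements if inside(e, tile)]  # determining unit
        target[tile] = render_tile(local)                       # rendering unit
    return target  # summarized result, ready for the display unit
```

Here `render_tile` stands in for a per-tile processor; in the patent it would render with the stutter-adjusted parameters rather than just transform a list.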
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.
The foregoing description is only of the preferred embodiments of the invention and is not intended to limit the invention; various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. The spliced screen video cutting method is characterized by comprising the following steps of:
acquiring a video to be displayed and a spliced screen;
determining picture element distribution conditions of any video frame in the video to be displayed in any splicing unit in the splicing screen, selecting a rendering computing scheme according to the picture element distribution conditions, and distributing a processor corresponding to the rendering computing scheme;
starting a processor corresponding to the rendering computing scheme, connecting the splicing unit and the processor, rendering the local video to be displayed in the splicing unit, judging the actual stutter value of the local video to be displayed in the splicing unit during rendering, adjusting the processor's rendering parameters for the local video to be displayed, rendering according to the adjusted rendering parameters, and displaying the rendered target local video in the splicing unit;
summarizing the target local videos in the splicing units to form target videos, and displaying the target videos on the splicing screen;
the method for determining the picture element distribution situation of any splicing unit of any video frame in the video to be displayed in the spliced screen comprises the following steps of:
setting a plurality of decoders and determining decoding ranges of the decoders, if the picture element distribution belongs to the decoding ranges of the decoders, the decoders identify picture element contour categories, and counting the number of picture elements of each picture element contour category according to the picture element contour categories;
obtaining the actual rendering complexity of the picture elements in any splicing unit according to the number of the picture elements, and selecting a rendering calculation scheme according to the actual rendering complexity of the picture elements;
the picture element profile categories are linear profiles and curve profiles, and the decoding ranges are in one-to-one correspondence with the splicing units;
the selecting a rendering computing scheme according to the actual rendering complexity of the picture elements comprises:
setting a picture element standard rendering complexity P0;
if the actual rendering complexity P of the picture elements is greater than the standard rendering complexity P0, judging that the actual rendering computing power demand is greater than the standard rendering computing power demand, and selecting a first rendering computing power scheme;
if the actual rendering complexity P of the picture elements is equal to the standard rendering complexity P0, judging that the actual rendering computing power demand is equal to the standard rendering computing power demand, and selecting a standard rendering computing power scheme;
if the actual rendering complexity P of the picture elements is smaller than the standard rendering complexity P0, judging that the actual rendering computing power demand is smaller than the standard rendering computing power demand, and selecting a second rendering computing power scheme;
the obtaining the actual rendering complexity of the picture elements in any splicing unit according to the number of the picture elements of each class comprises the following steps:
calculating the actual rendering complexity of the picture elements according to formula (2),
P=E/E0+D/D0(2),
wherein, P represents the actual rendering complexity of the picture elements, E represents the number of lines, E0 represents the preset standard number of lines parameter, D represents the number of curves, and D0 represents the preset standard number of curves parameter.
2. The spliced-screen video cutting method of claim 1, wherein assigning a processor corresponding to the rendering computing scheme comprises:
setting the number N0 of standard processors;
when the first rendering computing power scheme is selected, adjusting the standard processor number N0 to a first processor number N1 = N0(1 − α) via the adjustment parameter α, and allocating with said first processor number N1;
when the standard rendering computing power scheme is selected, allocating according to the standard processor number N0;
when the second rendering computing power scheme is selected, adjusting the standard processor number N0 to a second processor number N2 = N0(1 + α) via the adjustment parameter α, and allocating with said second processor number N2.
3. The spliced-screen video cutting method of claim 2, wherein rendering the local video to be displayed in the splicing unit and judging the actual stutter value of the local video to be displayed in the splicing unit during rendering, so as to adjust the processor's rendering parameters for the local video to be displayed, comprises:
monitoring the actual rendering time of the local video to be displayed during rendering, and comparing it with the standard rendering time to determine the actual stutter value of the local video to be displayed;
comparing the actual stutter value with a standard stutter value, and judging that the local video to be displayed is a stuttering local video if the actual stutter value is greater than the standard stutter value;
and calculating the actual stutter difference of the stuttering local video according to its actual stutter value, comparing the actual stutter difference with the standard stutter difference, and adjusting the rendering parameters according to the comparison result.
4. The spliced-screen video cutting method of claim 3, wherein determining the actual stutter value of the local video to be displayed by comparing its actual rendering time with the standard rendering time comprises:
setting a standard rendering time H0;
if the actual rendering time H is greater than the standard rendering time H0, calculating the actual stutter value K according to formula (1),
K = (video frame sustained-low frame rate) × 0.8 + (video frame jitter frame rate) × 0.2 (1).
5. The spliced-screen video cutting method of claim 4, wherein calculating the actual stutter difference of the stuttering local video according to its actual stutter value, comparing the actual stutter difference with the standard stutter difference, and adjusting the rendering parameters according to the comparison result comprises:
setting a standard stutter value range K0 = [Kmin, Kmax];
setting a first actual stutter difference ΔK1 and a second actual stutter difference ΔK2 for the stuttering local video;
calculating the actual stutter difference ΔK of the stuttering local video, wherein
ΔK = K − Kmax;
when ΔK < ΔK1, selecting the first coefficient α1 to adjust the standard processor number F0 to F1 = F0(1 + α1);
when ΔK1 ≤ ΔK < ΔK2, selecting the second coefficient α2 to adjust the standard processor number F0 to F2 = F0(1 + α2);
when ΔK ≥ ΔK2, selecting the third coefficient α3 to adjust the standard processor number F0 to F3 = F0(1 + α3);
wherein ΔK2 > ΔK1.
6. The spliced-screen video cutting method of claim 5, wherein determining the picture element distribution of the video frame to be displayed for any splicing unit in the spliced screen comprises:
establishing a two-dimensional rectangular coordinate system by taking the intersection point of the transverse edge and the longitudinal edge at the left upper part of the spliced screen as an origin, and placing the video frame to be displayed in the two-dimensional rectangular coordinate system to obtain the original coordinates of each picture element in the video frame to be displayed;
presetting a two-dimensional rectangular coordinate range (x, y, w, h) according to the parameter characteristics of the splicing unit, wherein x represents the x-axis coordinate of the top left vertex of the splicing unit, y represents the y-axis coordinate of the top left vertex of the splicing unit, w represents the length of the splicing unit, and h represents the width of the splicing unit;
cutting the video frame to be displayed according to the two-dimensional rectangular coordinate range, and converting the original coordinates of the picture elements in the cut video frame to be displayed into the relative coordinates of the picture elements so as to determine the picture element distribution.
7. A spliced-screen video cutting system implementing the method of any one of claims 1-6, characterized by comprising:
the acquisition module is used for acquiring the video to be displayed and the spliced screen;
the decoder comprises a determining unit, a selecting unit and an allocation unit, wherein the determining unit is used for determining picture element distribution conditions of any splicing unit of any video frame in the video to be displayed in the spliced screen, the selecting unit selects a rendering calculation scheme according to the picture element distribution conditions, and the allocation unit allocates a processor corresponding to the rendering calculation scheme;
the starting module is used for starting the processor corresponding to the rendering computing scheme, connecting the splicing unit and the processor, rendering the local video to be displayed in the splicing unit, and judging the actual stutter value of the local video to be displayed in the splicing unit during rendering, so as to adjust the processor's rendering parameters for the local video to be displayed;
the processor comprises a rendering unit and a display unit, wherein the rendering unit is used for rendering according to the adjusted rendering parameters, the display unit is used for displaying the target local videos subjected to rendering in the splicing unit, summarizing the target local videos in the splicing units to form target videos, and displaying the target videos on the splicing screen.
CN202311198741.0A 2023-09-18 2023-09-18 Video cutting method and system for spliced screen Active CN116954541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311198741.0A CN116954541B (en) 2023-09-18 2023-09-18 Video cutting method and system for spliced screen


Publications (2)

Publication Number Publication Date
CN116954541A CN116954541A (en) 2023-10-27
CN116954541B true CN116954541B (en) 2024-02-09

Family

ID=88456806


Country Status (1)

Country Link
CN (1) CN116954541B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104104888A (en) * 2014-07-01 2014-10-15 大连民族学院 Parallel multi-core FPGA digital image real-time zooming processing method and device
WO2017196582A1 (en) * 2016-05-11 2017-11-16 Advanced Micro Devices, Inc. System and method for dynamically stitching video streams
CN107958437A (en) * 2017-11-24 2018-04-24 中国航空工业集团公司西安航空计算技术研究所 A kind of big resolution ratio multi-screen figure block parallel rendering intents of more GPU
KR101973985B1 (en) * 2018-10-10 2019-04-30 주식회사 누리콘 System and method of image rendering through distributed parallel processing for high resolution display
CN116389831A (en) * 2023-06-06 2023-07-04 湖南马栏山视频先进技术研究院有限公司 Yun Yuansheng-based offline rendering system and method
CN116489457A (en) * 2023-05-26 2023-07-25 西安诺瓦星云科技股份有限公司 Video display control method, device, equipment, system and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7075541B2 (en) * 2003-08-18 2006-07-11 Nvidia Corporation Adaptive load balancing in a multi-processor graphics processing system
US11170461B2 (en) * 2020-02-03 2021-11-09 Sony Interactive Entertainment Inc. System and method for efficient multi-GPU rendering of geometry by performing geometry analysis while rendering




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant