CN114339401A - Video background processing method and device - Google Patents
Video background processing method and device
- Publication number
- CN114339401A (application CN202111652168.7A)
- Authority
- CN
- China
- Prior art keywords
- video
- background
- window
- display
- background image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention provides a video background processing method and device. The method comprises the following steps: respectively acquiring a background video and a background image, and respectively determining a window stacking sequence and window coordinates, on a display window, of each video frame in the background video and of the background image; performing fusion processing on the background video and the background image to obtain a fused video, based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and those corresponding to the background image; acquiring a real-time video and extracting a target object from the real-time video to obtain an extracted video containing the target object; and performing fusion processing with the extracted video as the foreground and the fused video as the background. The invention can improve the video quality of the fused video, realize customized multi-layer background scenes, and achieve real-time, rich display.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a video background processing method and device.
Background
During a live video lesson or a video conference, the shooting background captured by the anchor's or lecturer's camera frequently needs to be replaced in order to enrich the display. In particular, extracting the portrait from a complex background and then compositing it with a new background for display makes real-time, rich display possible in such video application scenarios.
At present, an extracted portrait can only be fused with a single static picture; it cannot be fused with a plurality of static pictures or with dynamic videos.
Disclosure of Invention
In view of the above, the present invention provides a video background processing method and apparatus, which can realize customized multi-layer background scenes and thereby achieve real-time, rich display.
In a first aspect, the present invention provides a video background processing method, including:
respectively acquiring a background video and a background image, and respectively determining a window stacking sequence and window coordinates of each video frame in the background video and the background image on a display window;
based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and the window stacking sequence and the window coordinates corresponding to the background image, performing fusion processing on the background video and the background image to obtain a fusion video;
acquiring a real-time video and extracting a target object in the real-time video to obtain an extracted video containing the target object;
and respectively taking the extracted video and the fused video as a foreground and a background for fusion processing.
In an embodiment, after determining the window stacking order and the window coordinates, on the display window, of each video frame in the background video and of the background image, the method further includes:
and modifying the window stacking sequence or the window coordinates on the display window.
In an embodiment, after determining the window stacking order and the window coordinates, on the display window, of each video frame in the background video and of the background image, the method further includes:
and respectively determining a display area of the background video on the display window and a display area of the background image on the display window.
In an embodiment, after respectively determining the display area of the background video and the display area of the background image on the display window, the method further includes:
and performing scaling processing on a display area of the background video on the display window or a display area of the background image on the display window.
In an embodiment, performing the fusion processing by using the extracted video and the fused video as a foreground and a background, respectively, includes:
determining the transparency of all pixel points on the target object in each video frame of the extracted video;
fusing the extracted video and the fused video based on the transparency of all the pixel points on the target object;
and the transparency of a pixel point is the degree of transparency, on a display window, of the corresponding pixel point of the target object in the extracted video.
In a second aspect, the present invention provides a video background processing apparatus, including:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for respectively acquiring a background video and a background image and respectively determining the window stacking sequence and window coordinates of each video frame in the background video and the background image on a display window;
the background unit is used for fusing the background video and the background image to obtain a fused video based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and the window stacking sequence and the window coordinates corresponding to the background image;
the extraction unit is used for acquiring a real-time video and extracting a target object in the real-time video to obtain an extracted video containing the target object;
and the fusion unit is used for respectively taking the extracted video and the fused video as a foreground and a background to perform fusion processing.
In one embodiment, the apparatus further comprises:
and the modifying unit is used for modifying the window stacking sequence or the window coordinates on the display window.
In one embodiment, the apparatus further comprises:
and the adjusting unit is used for respectively determining a display area of the background video on the display window and a display area of the background image on the display window.
In a third aspect, the present invention provides an electronic device, comprising: a processor, a memory, a communication interface, and a communication bus; the processor, the communication interface and the memory complete mutual communication through a communication bus;
the processor is used for calling the computer instructions in the memory to execute the steps of the video background processing method.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer instructions which, when executed, cause the computer to perform the steps of the video background processing method described above.
The video background processing method and the video background processing device respectively acquire a background video and a background image, and respectively determine the window stacking sequence and the window coordinates, on a display window, of each video frame in the background video and of the background image; perform fusion processing on the background video and the background image to obtain a fused video, based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and those corresponding to the background image; acquire a real-time video and extract a target object from the real-time video to obtain an extracted video containing the target object; and perform fusion processing with the extracted video as the foreground and the fused video as the background. Stitching artifacts can be reduced, the video quality of the fused video is improved, customized multi-layer background scenes are realized, and real-time, rich display is achieved.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate embodiments of the present invention or solutions in the prior art, the drawings that are needed in the embodiments or solutions in the prior art will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and are therefore not to be considered limiting of scope, and that other relevant drawings can be derived from those drawings without inventive effort for a person skilled in the art.
Fig. 1 is a first flowchart of a video background processing method according to the present invention;
fig. 2 is a second flowchart of a video background processing method according to the present invention;
fig. 3 is a third flowchart of a video background processing method according to the present invention;
fig. 4 is a schematic diagram of a fourth flowchart of a video background processing method according to the present invention;
fig. 5 is a schematic structural diagram of a video background processing apparatus according to the present invention;
fig. 6 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a video background processing method, which is shown in fig. 1 and specifically comprises the following contents:
S101: Respectively acquiring a background video and a background image, and respectively determining a window stacking sequence and window coordinates of each video frame in the background video and the background image on a display window;
in this step, the acquired background video is used as a dynamic input source, and may be a recorded video or a video acquired by a camera in real time.
The picture information of each frame in the background video is read through an SDK module corresponding to OpenGL. The picture information of each frame includes: the frame's image memory address in memory, its window stacking order (z-order), and its window coordinates.
The acquired background image is used as a static input source, and the picture information corresponding to the background image is determined. The picture information corresponding to the background image includes: the image memory address of the background image in memory, its window stacking order (z-order), and its window coordinates.
It should be noted that OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The interface consists of nearly 350 different function calls used to draw everything from simple graphics primitives to complex three-dimensional scenes.
The SDK module, i.e., a third-party "software development kit," is generally a collection of development tools used by software engineers to build application software for a particular software package, software framework, hardware platform, operating system, etc. Colloquially, it refers to a toolkit provided by a third-party service provider that implements certain functions of a software product.
The windows are always rectangular and are stacked on top of each other along an imaginary straight line perpendicular to the screen. The stacking order of these windows is called the z-order. Each window has a unique position in the z-order: a window that is forward in the z-order is in front of, or on top of, a window that is rearward in the z-order. The position of a window in the z-order therefore affects its appearance.
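For illustration, the per-layer picture information described above (image memory address, z-order, window coordinates) can be gathered into one record per input source and sorted bottom-to-top before drawing; this is also the kind of data set that gets handed to the fusion stage below. The structure and field names in this sketch are assumptions, not terms from the patent.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical record for one background layer (a video frame or a static
// image); field names are illustrative assumptions.
struct BackgroundLayer {
    const uint8_t* pixels;   // image memory address of the RGBA frame data
    int width;               // image width in pixels
    int height;              // image height in pixels
    int x;                   // window coordinates on the display window
    int y;
    int zOrder;              // window stacking order (higher = closer to viewer)
};

// Sort layers bottom-to-top so they can be drawn in z-order, as the
// fusion step described below requires.
inline void sortByZOrder(std::vector<BackgroundLayer>& layers) {
    std::sort(layers.begin(), layers.end(),
              [](const BackgroundLayer& a, const BackgroundLayer& b) {
                  return a.zOrder < b.zOrder;
              });
}
```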
S102: based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and the window stacking sequence and the window coordinates corresponding to the background image, performing fusion processing on the background video and the background image to obtain a fusion video;
In this step, the background image metadata of multiple channels (the metadata being the background image information obtained from the memory address of the image data), together with the size, coordinates, and z-order of each channel's background image, are combined into a data set and passed to the OpenGL service processing layer.
The OpenGL fusion layer draws each background layer in turn, from bottom to top, according to each layer's hierarchical relation (z-order) and coordinates. The main OpenGL fusion process comprises the following steps (sketched in code after step 3):
1. Initialize an OpenGL off-screen environment to serve as the target surface for background fusion drawing, and set the OpenGL alpha-blend related parameters.
2. Render the layers into the OpenGL off-screen environment in sequence, according to the background image combination parameters passed in by the application layer (z-order, image width/height, and coordinates) and the address of each image's raw data, using OpenGL alpha blending.
3. Read the fused background image back from video memory into system memory and return it to the calling application layer. Double PBOs (pixel buffer objects) may be used here to speed up the read-back from video memory to system memory.
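The following is a minimal C++ sketch of steps 1-3, assuming GLEW and an already-current OpenGL compatibility-profile context (for example one created with a hidden GLFW window). It uses a single PBO for the read-back rather than the double-PBO scheme mentioned above, and the Layer record and function name are illustrative assumptions.

```cpp
#include <GL/glew.h>      // assumes GLEW and a current OpenGL (compatibility) context
#include <algorithm>
#include <cstdint>
#include <vector>

struct Layer { const uint8_t* pixels; int w, h, x, y, z; };   // illustrative record

// Fuse the background layers bottom-to-top into an off-screen framebuffer of
// size outW x outH, then read the result back to 'out' (RGBA8) through a PBO.
void fuseBackgrounds(std::vector<Layer> layers, int outW, int outH,
                     std::vector<uint8_t>& out) {
    // Step 1: off-screen target plus alpha-blend state.
    GLuint fbo, colorTex;
    glGenFramebuffers(1, &fbo);
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, outW, outH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glViewport(0, 0, outW, outH);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glClearColor(0.f, 0.f, 0.f, 0.f);
    glClear(GL_COLOR_BUFFER_BIT);

    // Step 2: draw every layer as a textured quad, lowest z-order first.
    std::sort(layers.begin(), layers.end(),
              [](const Layer& a, const Layer& b) { return a.z < b.z; });
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, outW, 0, outH, -1, 1);               // pixel coordinates
    glEnable(GL_TEXTURE_2D);
    for (const Layer& l : layers) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, l.w, l.h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, l.pixels);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2i(l.x,       l.y);
        glTexCoord2f(1, 0); glVertex2i(l.x + l.w, l.y);
        glTexCoord2f(1, 1); glVertex2i(l.x + l.w, l.y + l.h);
        glTexCoord2f(0, 1); glVertex2i(l.x,       l.y + l.h);
        glEnd();
        glDeleteTextures(1, &tex);
    }

    // Step 3: read the fused image back through a pixel buffer object.
    out.resize(static_cast<size_t>(outW) * outH * 4);
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, out.size(), nullptr, GL_STREAM_READ);
    glReadPixels(0, 0, outW, outH, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    if (const void* p = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
        std::copy_n(static_cast<const uint8_t*>(p), out.size(), out.begin());
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &colorTex);
}
```

For continuously arriving frames, the double-PBO (ping-pong) variant mentioned in step 3 would overlap the glReadPixels of one frame with the glMapBuffer of the previous one, hiding most of the read-back latency.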
The multi-channel background fusion uses OpenGL hardware acceleration. Since OpenGL is a cross-platform GPU rendering technology, the same code can run directly on Windows and Mac operating systems, which reduces the development time and workload needed to adapt to different operating systems, as well as the subsequent maintenance cost.
S103: acquiring a real-time video and extracting a target object in the real-time video to obtain an extracted video containing the target object;
in the step, an AI matting software is used for extracting a target object in a real-time video to obtain an extracted video containing the target object.
In this embodiment, the target object is an anchor or a lecturer.
S104: and respectively taking the extracted video and the fused video as a foreground and a background for fusion processing.
In this step, after the fused video is obtained, the extracted video containing the score map generated by AI matting (the score map holds the transparency of the pixel points belonging to the portrait in the camera picture and the transparency of the pixel points belonging to the portrait's background) is superimposed on and fused with the synthesized background picture, so as to achieve rich display.
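A minimal CPU-side sketch of this overlay step, assuming the score map is stored as one alpha value in [0, 1] per pixel, aligned with same-sized RGBA foreground and fused-background frames; the function and parameter names are assumptions made for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Blend the extracted foreground over the fused background using the score
// map as per-pixel transparency: out = a * fg + (1 - a) * bg.
std::vector<uint8_t> overlayForeground(const std::vector<uint8_t>& fgRGBA,
                                       const std::vector<uint8_t>& bgRGBA,
                                       const std::vector<float>& scoreMap,
                                       int width, int height) {
    std::vector<uint8_t> out(static_cast<size_t>(width) * height * 4);
    for (size_t p = 0; p < scoreMap.size(); ++p) {
        const float a = scoreMap[p];          // 1.0 = fully opaque portrait pixel
        for (int c = 0; c < 4; ++c) {
            const size_t i = p * 4 + c;
            out[i] = static_cast<uint8_t>(a * fgRGBA[i] + (1.0f - a) * bgRGBA[i] + 0.5f);
        }
    }
    return out;
}
```

In practice this blend could equally be performed on the GPU, reusing the same OpenGL alpha-blend state already set up for the background fusion.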
As can be seen from the above description, the video background processing method provided in the embodiment of the present invention respectively acquires a background video and a background image, and respectively determines the window stacking sequence and the window coordinates, on a display window, of each video frame in the background video and of the background image; performs fusion processing on the background video and the background image to obtain a fused video, based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and those corresponding to the background image; acquires a real-time video and extracts a target object from the real-time video to obtain an extracted video containing the target object; and performs fusion processing with the extracted video as the foreground and the fused video as the background. Stitching artifacts can be reduced, the video quality of the fused video is improved, customized multi-layer background scenes are realized, and real-time, rich display is achieved.
In an embodiment of the present invention, referring to fig. 2, after step S101 in the embodiment of the video background processing method, step S105 is further included, which specifically includes the following contents:
S105: And modifying the window stacking sequence or the window coordinates on the display window.
In this embodiment, the modification process may be performed on the window stacking order or the window coordinates on the display window.
Changing the window stacking sequence alters the stacking order (z-order) of the background video or the background image on the display window, thereby adjusting the order in which the background video and the background image are displayed.
Changing the window coordinates moves the background video or the background image on the display window, which avoids the background video and the background image overlapping in the display and further improves the richness of the display.
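As a small illustration of these modification operations (with the placement record and function names assumed for the sketch), moving a layer changes its window coordinates, and bringing it to the front raises its z-order so that it is drawn last:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct LayerPlacement { int x, y, zOrder; };   // illustrative per-layer placement

// Change a layer's window coordinates on the display window.
inline void moveLayer(LayerPlacement& layer, int newX, int newY) {
    layer.x = newX;
    layer.y = newY;
}

// Bring one layer to the top of the stacking order so it is drawn on top.
inline void bringToFront(std::vector<LayerPlacement>& layers, size_t index) {
    int maxZ = 0;
    for (const LayerPlacement& l : layers) maxZ = std::max(maxZ, l.zOrder);
    layers[index].zOrder = maxZ + 1;
}
```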
In an embodiment of the present invention, referring to fig. 3, after step S101 in the embodiment of the video background processing method, the method further includes step S106 and step S107, which specifically include the following contents:
S106: And respectively determining a display area of the background video on the display window and a display area of the background image on the display window.
S107: and performing scaling processing on a display area of the background video on the display window or a display area of the background image on the display window.
In this embodiment, the display area of the background video or the background image on the display window may refer to the proportion of the display window occupied by the background video or the background image, or to the display frame of the background video or the background image on the display window.
The display area of the background video or the background image on the display window is zoomed, so that the richness of display can be further improved.
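A minimal sketch of the scaling processing, assuming the display area is tracked as a rectangle on the display window and scaled about its top-left corner (the anchor point is an assumption; the description does not fix one):

```cpp
struct DisplayArea { int x, y, width, height; };   // rectangle on the display window

// Scale a layer's display area by the given factor about its top-left corner,
// e.g. factor 0.5 shows the background video at half size.
inline DisplayArea scaleDisplayArea(const DisplayArea& area, float factor) {
    DisplayArea scaled = area;
    scaled.width  = static_cast<int>(area.width  * factor);
    scaled.height = static_cast<int>(area.height * factor);
    return scaled;
}
```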
In an embodiment of the present invention, a specific implementation manner of step S104 in the video background processing method is provided, and referring to fig. 4, the specific implementation manner specifically includes the following contents:
S1041: Determining the transparency of all pixel points on the target object in each video frame of the extracted video;
S1042: Fusing the extracted video and the fused video based on the transparency of all the pixel points on the target object;
In this step, the transparency of all pixel points corresponding to the target object on the display window is determined; the transparency of a pixel point is the degree of transparency, on the display window, of the corresponding pixel point of the target object in the extracted video. In this step, pixel points of the target object are opaque on the display window, that is, the target object is displayed completely on the display window.
In order to improve the video quality of the fused video, the confidence of each pixel point at the boundary between the target object and the background is determined, and that confidence is used as the transparency of the pixel point at the boundary. Based on this transparency (the confidence of each boundary pixel point), the proportions in which the target object and the background contribute to that pixel point on the display window are determined.
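Written as a formula, with notation assumed for illustration: let $s(p) \in [0, 1]$ be the confidence determined for pixel point $p$ at the boundary (used as its transparency), $F(p)$ the color of $p$ in the extracted video, and $B(p)$ the color of $p$ in the fused video. The composited pixel is then

```latex
C_{\text{out}}(p) = s(p)\,F(p) + \bigl(1 - s(p)\bigr)\,B(p),
\qquad s(p) = 1 \ \text{for pixel points strictly inside the target object.}
```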
An embodiment of the present invention provides a specific implementation manner of a video background processing apparatus capable of implementing all contents in the video background processing method, and referring to fig. 5, the video background processing apparatus specifically includes the following contents:
an acquiring unit 10, configured to acquire a background video and a background image, respectively, and determine a window stacking order and window coordinates of each video frame in the background video and the background image on a display window, respectively;
a background unit 20, configured to perform fusion processing on the background video and the background image to obtain a fusion video based on a window stacking sequence and a window coordinate corresponding to each video frame in the background video and a window stacking sequence and a window coordinate corresponding to the background image;
the extraction unit 30 is configured to acquire a real-time video and extract a target object in the real-time video to obtain an extracted video including the target object;
and the fusion unit 40 is configured to perform fusion processing on the extracted video and the fused video as a foreground and a background, respectively.
In an embodiment of the present invention, the apparatus further includes:
and the modifying unit is used for modifying the window stacking sequence or the window coordinates on the display window.
In an embodiment of the present invention, the apparatus further includes:
and the adjusting unit is used for respectively determining a display area of the background video on the display window and a display area of the background image on the display window.
In an embodiment of the present invention, the apparatus further includes:
and the zooming unit is used for zooming the display area of the background video or the display area of the background image on the display window.
In an embodiment of the present invention, the fusion unit 40 includes:
the pixel module is used for determining the transparency of all pixel points on the target object in each video frame of the extracted video;
the fusion module is used for fusing the extracted video and the fused video based on the transparency of all the pixel points on the target object;
and the transparency of a pixel point is the degree of transparency, on a display window, of the corresponding pixel point of the target object in the extracted video.
The embodiment of the video background processing apparatus provided in the present invention may be specifically configured to execute the processing procedure of the embodiment of the video background processing method in the foregoing embodiment, and the functions of the processing procedure are not described herein again, and refer to the detailed description of the embodiment of the method.
As can be seen from the foregoing description, the video background processing apparatus provided in the embodiment of the present invention respectively obtains a background video and a background image, and respectively determines the window stacking order and the window coordinates, on a display window, of each video frame in the background video and of the background image; performs fusion processing on the background video and the background image to obtain a fused video, based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and those corresponding to the background image; acquires a real-time video and extracts a target object from the real-time video to obtain an extracted video containing the target object; and performs fusion processing with the extracted video as the foreground and the fused video as the background. Stitching artifacts can be reduced, the video quality of the fused video is improved, customized multi-layer background scenes are realized, and real-time, rich display is achieved.
An embodiment of an electronic device for implementing all or part of contents in the video background processing method embodiment is provided in the embodiments of the present invention, and referring to fig. 6, the electronic device specifically includes the following contents:
a processor (processor)810, a communication Interface 820, a memory 830 and a communication bus 840, wherein the processor 810, the communication Interface 820 and the memory 830 communicate with each other via the communication bus 840. The processor 810 may call the computer instructions in the memory 830 to perform the following method:
respectively acquiring a background video and a background image, and respectively determining a window stacking sequence and window coordinates of each video frame in the background video and the background image on a display window;
based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and the window stacking sequence and the window coordinates corresponding to the background image, performing fusion processing on the background video and the background image to obtain a fusion video;
acquiring a real-time video and extracting a target object in the real-time video to obtain an extracted video containing the target object;
and respectively taking the extracted video and the fused video as a foreground and a background for fusion processing.
An embodiment of the present invention provides a computer-readable storage medium for implementing all or part of the contents in the embodiment of the video background processing method, where the computer-readable storage medium has stored thereon computer instructions, and when the computer instructions are executed, the computer instructions cause the computer to perform all the steps of the video background processing method in the above embodiment, for example, when the processor executes the computer instructions, the following steps are implemented:
respectively acquiring a background video and a background image, and respectively determining a window stacking sequence and window coordinates of each video frame in the background video and the background image on a display window;
based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and the window stacking sequence and the window coordinates corresponding to the background image, performing fusion processing on the background video and the background image to obtain a fusion video;
acquiring a real-time video and extracting a target object in the real-time video to obtain an extracted video containing the target object;
and respectively taking the extracted video and the fused video as a foreground and a background for fusion processing.
Although the present invention provides method steps as described in the examples or flowcharts, more or fewer steps may be included based on routine or non-inventive labor. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or client product executes, it may execute sequentially or in parallel (e.g., in the context of parallel processors or multi-threaded processing) according to the embodiments or methods shown in the figures.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus (system) embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to part of the description of the method embodiment for relevant points.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention is not limited to any single aspect, nor is it limited to any single embodiment, nor is it limited to any combination and/or permutation of these aspects and/or embodiments. Moreover, each aspect and/or embodiment of the present invention may be utilized alone or in combination with one or more other aspects and/or embodiments thereof.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A method for video background processing, comprising:
respectively acquiring a background video and a background image, and respectively determining a window stacking sequence and window coordinates of each video frame in the background video and the background image on a display window;
based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and the window stacking sequence and the window coordinates corresponding to the background image, performing fusion processing on the background video and the background image to obtain a fusion video;
acquiring a real-time video and extracting a target object in the real-time video to obtain an extracted video containing the target object;
and respectively taking the extracted video and the fused video as a foreground and a background for fusion processing.
2. The video background processing method according to claim 1, further comprising, after said determining a window stacking order and window coordinates, on a display window, of each video frame in the background video and of the background image:
and modifying the window stacking sequence or the window coordinates on the display window.
3. The video background processing method according to claim 1, further comprising, after said determining a window stacking order and window coordinates, on a display window, of each video frame in the background video and of the background image:
and respectively determining a display area of the background video on the display window and a display area of the background image on the display window.
4. The video background processing method according to claim 3, further comprising, after said separately determining a display area of the background video on the display window and a display area of the background image on the display window:
and performing scaling processing on a display area of the background video on the display window or a display area of the background image on the display window.
5. The video background processing method according to claim 1, wherein the fusing the extracted video and the fused video as a foreground and a background respectively comprises:
determining the transparency of all pixel points on the target object in each video frame of the extracted video;
fusing the extracted video and the fused video based on the transparency of all the pixel points on the target object;
and the transparency of a pixel point is the degree of transparency, on a display window, of the corresponding pixel point of the target object in the extracted video.
6. A video background processing apparatus, comprising:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for respectively acquiring a background video and a background image and respectively determining the window stacking sequence and window coordinates of each video frame in the background video and the background image on a display window;
the background unit is used for fusing the background video and the background image to obtain a fused video based on the window stacking sequence and the window coordinates corresponding to each video frame in the background video and the window stacking sequence and the window coordinates corresponding to the background image;
the extraction unit is used for acquiring a real-time video and extracting a target object in the real-time video to obtain an extracted video containing the target object;
and the fusion unit is used for respectively taking the extracted video and the fused video as a foreground and a background to perform fusion processing.
7. The video background processing apparatus according to claim 6, further comprising:
and the modifying unit is used for modifying the window stacking sequence or the window coordinates on the display window.
8. The video background processing apparatus according to claim 6, further comprising:
and the adjusting unit is used for respectively determining a display area of the background video on the display window and a display area of the background image on the display window.
9. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus; wherein,
the processor, the communication interface and the memory complete mutual communication through a communication bus;
the processor is adapted to invoke computer instructions in the memory to perform the steps of the video background processing method of any of claims 1 to 5.
10. A computer-readable storage medium storing computer instructions that, when executed, cause the computer to perform the steps of the video background processing method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111652168.7A CN114339401A (en) | 2021-12-30 | 2021-12-30 | Video background processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111652168.7A CN114339401A (en) | 2021-12-30 | 2021-12-30 | Video background processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114339401A true CN114339401A (en) | 2022-04-12 |
Family
ID=81018425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111652168.7A Pending CN114339401A (en) | 2021-12-30 | 2021-12-30 | Video background processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114339401A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006060289A (en) * | 2004-08-17 | 2006-03-02 | Oki Electric Ind Co Ltd | Video image distributing system |
US20080001970A1 (en) * | 2006-06-29 | 2008-01-03 | Jason Herrick | Method and system for mosaic mode display of video |
CN201690532U (en) * | 2010-02-08 | 2010-12-29 | 深圳市同洲电子股份有限公司 | Video processing device and digital television receiving terminal |
CN105744340A (en) * | 2016-02-26 | 2016-07-06 | 上海卓越睿新数码科技有限公司 | Real-time screen fusion method for live broadcast video and presentation file |
US20170039867A1 (en) * | 2013-03-15 | 2017-02-09 | Study Social, Inc. | Mobile video presentation, digital compositing, and streaming techniques implemented via a computer network |
EP3386204A1 (en) * | 2017-04-04 | 2018-10-10 | Thomson Licensing | Device and method for managing remotely displayed contents by augmented reality |
CN110290425A (en) * | 2019-07-29 | 2019-09-27 | 腾讯科技(深圳)有限公司 | A kind of method for processing video frequency, device and storage medium |
CN110996150A (en) * | 2019-11-18 | 2020-04-10 | 咪咕动漫有限公司 | Video fusion method, electronic device and storage medium |
CN112261434A (en) * | 2020-10-22 | 2021-01-22 | 广州华多网络科技有限公司 | Interface layout control and processing method and corresponding device, equipment and medium |
CN112839190A (en) * | 2021-01-22 | 2021-05-25 | 九天华纳(北京)科技有限公司 | Method for synchronously recording or live broadcasting video of virtual image and real scene |
- 2021-12-30: CN application CN202111652168.7A filed; published as CN114339401A (status: pending)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006060289A (en) * | 2004-08-17 | 2006-03-02 | Oki Electric Ind Co Ltd | Video image distributing system |
US20080001970A1 (en) * | 2006-06-29 | 2008-01-03 | Jason Herrick | Method and system for mosaic mode display of video |
CN201690532U (en) * | 2010-02-08 | 2010-12-29 | 深圳市同洲电子股份有限公司 | Video processing device and digital television receiving terminal |
US20170039867A1 (en) * | 2013-03-15 | 2017-02-09 | Study Social, Inc. | Mobile video presentation, digital compositing, and streaming techniques implemented via a computer network |
US10515561B1 (en) * | 2013-03-15 | 2019-12-24 | Study Social, Inc. | Video presentation, digital compositing, and streaming techniques implemented via a computer network |
CN105744340A (en) * | 2016-02-26 | 2016-07-06 | 上海卓越睿新数码科技有限公司 | Real-time screen fusion method for live broadcast video and presentation file |
EP3386204A1 (en) * | 2017-04-04 | 2018-10-10 | Thomson Licensing | Device and method for managing remotely displayed contents by augmented reality |
CN110290425A (en) * | 2019-07-29 | 2019-09-27 | 腾讯科技(深圳)有限公司 | A kind of method for processing video frequency, device and storage medium |
CN110996150A (en) * | 2019-11-18 | 2020-04-10 | 咪咕动漫有限公司 | Video fusion method, electronic device and storage medium |
CN112261434A (en) * | 2020-10-22 | 2021-01-22 | 广州华多网络科技有限公司 | Interface layout control and processing method and corresponding device, equipment and medium |
CN112839190A (en) * | 2021-01-22 | 2021-05-25 | 九天华纳(北京)科技有限公司 | Method for synchronously recording or live broadcasting video of virtual image and real scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |