CN111372013A - Video rapid synthesis method and device, computer equipment and storage medium

Info

Publication number
CN111372013A
Authority
CN
China
Prior art keywords
picture
video
scene
synthesized
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010183166.7A
Other languages
Chinese (zh)
Inventor
胡思伟
王梓彦
周婕
梁杰
纪亚忠
胡木火
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Qiutian Information Technology Co., Ltd.
Original Assignee
Guangzhou Qiutian Information Technology Co., Ltd.
Priority date: 2020-03-16 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2020-03-16
Publication date: 2020-07-03
Application filed by Guangzhou Qiutian Information Technology Co., Ltd.
Priority to CN202010183166.7A
Publication of CN111372013A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay

Abstract

The invention relates to the field of computer technology, and in particular to a video rapid synthesis method and device, computer equipment, and a storage medium. The method comprises the following steps: S10: if a scene interaction request is acquired, acquiring a corresponding scene material video from the scene interaction request; S20: acquiring a live-action shot picture in real time, and locating a foreground picture and a background picture within the live-action shot picture; S30: separating the foreground picture from the background picture to obtain a picture to be synthesized, and tracking the foreground picture in the picture to be synthesized; S40: synthesizing the picture to be synthesized with the scene material video to obtain interactive video data. The invention has the effect of enriching the interaction modes of user scene services.

Description

Video rapid synthesis method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for fast video synthesis, a computer device, and a storage medium.
Background
As people's quality of life continues to improve, entertainment options have become increasingly varied, and scene services have been incorporated into many parts of people's entertainment, work, and daily life, such as KTV, party activities, and wedding celebrations.
Existing scene services involve playing a video according to a user's or a scene's requirements, and the user can interact with the content being played. For example, in KTV, a user requests a song, plays its MV (music video), and sings along with it.
The above prior art has the following drawback: although existing scene services allow interaction between the user and the video, the user can only follow the content in the video and cannot participate in the video itself. For example, for a wedding celebration or a wedding photo shoot, the user may be limited by time, money, or other factors and unable to travel to a desired scenic spot for shooting, so the interaction mode remains single. The scene service therefore has room for improvement.
Disclosure of Invention
The invention aims to provide a video rapid synthesis method and device, computer equipment, and a storage medium that enrich the interaction modes of user scene services.
The above object of the present invention is achieved by the following technical solutions:
a video rapid synthesis method comprises the following steps:
S10: if a scene interaction request is acquired, acquiring a corresponding scene material video from the scene interaction request;
S20: acquiring a live-action shot picture in real time, and positioning a foreground picture and a background picture from the live-action shot picture;
S30: separating the foreground picture from the background picture to obtain a picture to be synthesized, and tracking the foreground picture in the picture to be synthesized;
S40: synthesizing the picture to be synthesized and the scene material video to obtain interactive video data.
By adopting this technical solution, after the scene interaction request is obtained, the corresponding scene material video is selected from a preset material library according to the user's requirements in the request, which satisfies the user's demand for scene-service interaction. Meanwhile, during scene-service interaction, the foreground picture in the live-action picture shot by the user in real time is separated from the background picture, the separated foreground picture is tracked, and the tracked foreground picture is composited into the scene material video. The user thereby obtains interactive video data, participates directly in the scene material video, and forms an interactive picture with it, which enriches the user's interaction modes in the scene service and helps improve the user experience.
The present invention in a preferred example may be further configured to: before step S10, the video fast synthesis method further includes the following steps:
S101: acquiring video data of a material to be processed;
S102: positioning the position of a picture to be synthesized from each piece of material video data to be processed to obtain the scene material video.
By adopting this technical solution, the material video data to be processed is obtained in advance, which enriches the user's choices when selecting a scene material video. A picture-to-be-synthesized position is located in each piece of material video data to be processed, and the foreground picture in the live-action shot picture is added at that position, so that when the foreground picture is composited into the corresponding material video data, the resulting interactive video data is more consistent with the content of the scene material video. This improves the interaction between the user and the scene material video and helps improve the user experience.
The present invention in a preferred example may be further configured to: step S20 specifically includes the following steps:
S21: acquiring background picture color data;
S22: searching a foreground picture outline from the live-action shot picture according to the background picture color data, and positioning the foreground picture according to the foreground picture outline.
By adopting this technical solution, the foreground picture can be located by finding the foreground picture outline, which facilitates separating the picture within the outline from the background color.
The present invention in a preferred example may be further configured to: step S30 specifically includes the following steps:
S31: extracting the background color of the background picture according to the background picture color data to obtain the picture to be synthesized;
S32: tracking the foreground picture in the picture to be synthesized through the foreground picture outline.
By adopting this technical solution, extracting the background picture color data yields a picture to be synthesized that contains only the content within the foreground picture outline, so the foreground picture can be tracked and the tracked picture conveniently added to the scene material video.
The present invention in a preferred example may be further configured to: step S40 specifically includes the following steps:
S41: acquiring the position of the picture to be synthesized from the scene material video;
S42: adding the foreground picture to the position of the picture to be synthesized to obtain the interactive video data.
By adopting this technical solution, the interactive video data can be obtained in real time by adding the tracked foreground picture at the picture-to-be-synthesized position.
The second object of the invention is achieved by the following technical solution:
a video fast compositing device, the video fast compositing device comprising:
the material video acquisition module is used for acquiring a corresponding scene material video from the scene interaction request if the scene interaction request is acquired;
the shooting module is used for acquiring a live-action shooting picture in real time and positioning a foreground picture and a background picture from the live-action shooting picture;
the real-time matting module is used for separating the foreground picture and the background picture to obtain a picture to be synthesized and tracking the foreground picture in the picture to be synthesized;
and the video synthesis module is used for synthesizing the picture to be synthesized and the scene material video to obtain interactive video data.
By adopting this technical solution, after the scene interaction request is obtained, the corresponding scene material video is selected from a preset material library according to the user's requirements in the request, which satisfies the user's demand for scene-service interaction. Meanwhile, during scene-service interaction, the foreground picture in the live-action picture shot by the user in real time is separated from the background picture, the separated foreground picture is tracked, and the tracked foreground picture is composited into the scene material video. The user thereby obtains interactive video data, participates directly in the scene material video, and forms an interactive picture with it, which enriches the user's interaction modes in the scene service and helps improve the user experience.
The third object of the invention is achieved by the following technical solution:
a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above video fast compositing method when executing the computer program.
The fourth object of the invention is achieved by the following technical solution:
a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of the above-described video fast compositing method.
In summary, the invention includes at least one of the following beneficial technical effects:
1. after a scene interaction request is acquired, the corresponding scene material video is selected from a preset material library according to the user's requirements in the request, which satisfies the user's demand for scene-service interaction; meanwhile, during scene-service interaction, the foreground picture in the live-action picture shot by the user in real time is separated from the background picture, the separated foreground picture is tracked, and the tracked foreground picture is composited into the scene material video, so that the user obtains the interactive video data, participates directly in the scene material video, and forms an interactive picture with it, enriching the user's interaction modes in the scene service and helping improve the user experience;
2. the material video data to be processed is obtained in advance, which enriches the user's choices when selecting a scene material video; a picture-to-be-synthesized position is located in each piece of material video data to be processed, and the foreground picture in the live-action shot picture is added at that position, so that when the foreground picture is composited into the corresponding material video data, the resulting interactive video data is more consistent with the content of the scene material video, improving the interaction between the user and the scene material video and helping improve the user experience;
3. adding the tracked foreground picture at the picture-to-be-synthesized position allows the interactive video data to be obtained in real time.
Drawings
FIG. 1 is a flow chart of a video fast compositing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another implementation of steps in a video fast synthesis method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an implementation of step S20 in the video fast synthesis method according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating an implementation of step S30 in the video fast synthesis method according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating an implementation of step S40 in the video fast synthesis method according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a video fast compositing apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Embodiment one:
In an embodiment, as shown in fig. 1, the present invention discloses a video fast synthesis method, which specifically includes the following steps:
S10: if the scene interaction request is acquired, acquiring the corresponding scene material video from the scene interaction request.
In this embodiment, the scene interaction request refers to a message triggered by a user to request scene interaction. For example, in a KTV scene, when a user wishes to interact with the MV of a requested song, the user triggers the scene interaction request; or when a user wants to re-walk the Long March road during a Party-member activity, the scene interaction request is triggered; or when wedding photos are taken at a wedding celebration, the user can select a desired scene and interact within it. The scene material video refers to a preset scene, which may be a virtual scene, a pre-recorded real scene, or video material combining a virtual scene and a recorded real scene.
Specifically, when a user requests to interact with a pre-recorded and pre-processed scene material video, the scene interaction request is triggered, and the scene material video that the user wants to interact with is acquired according to the request.
For example, in a KTV application scenario, a user selecting a song constitutes the scene interaction request. The user's permission to trigger the request is verified, for example whether the corresponding fee has been paid, and after verification the corresponding scene is selected from the request. The scene may be a preset virtual scene or the MV corresponding to the song, which is not limited here.
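As a purely illustrative sketch of step S10 (the patent does not prescribe an implementation, so the request fields, library contents, and function names below are hypothetical), the mapping from a scene interaction request to a scene material video can be modeled as a permission check followed by a lookup in a preset material library:

    # Illustrative sketch of step S10 only; the request fields and the material
    # library below are hypothetical, not the patent's API.
    from dataclasses import dataclass

    MATERIAL_LIBRARY = {
        "song_12345_mv": "materials/song_12345_mv.mp4",
        "long_march_road": "materials/long_march_road.mp4",
        "eiffel_tower": "materials/eiffel_tower.mp4",
    }

    @dataclass
    class SceneInteractionRequest:
        user_id: str
        scene_id: str           # which scene material video the user wants
        payment_verified: bool  # permission check, e.g. fee paid by scanning a QR code

    def get_scene_material_video(request: SceneInteractionRequest) -> str:
        """Verify the request and return the path of the corresponding scene material video."""
        if not request.payment_verified:
            raise PermissionError("scene interaction request not authorized")
        if request.scene_id not in MATERIAL_LIBRARY:
            raise ValueError(f"unknown scene material video: {request.scene_id}")
        return MATERIAL_LIBRARY[request.scene_id]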
S20: and acquiring a live-action shot picture in real time, and positioning a foreground picture and a background picture from the live-action shot picture.
In this embodiment, the live-action shot picture refers to the picture captured by the image pickup device while the user performs the scene interaction. The foreground picture is the part of the live-action shot picture containing the user, corresponding props, and the like. The background picture is the picture of the background during live-action shooting. Understandably, the foreground picture is the portion that the user wishes to incorporate into the scene material video.
Specifically, a corresponding background is set when the live-action picture is shot; to better locate the shot foreground picture, the background may be a green screen. The user interacts with the scene material video in front of the green-screen background, and the camera captures the user's picture during the interaction as the live-action shot picture. Existing matting software or plug-ins can then be used, such as the Chroma Key Kit plug-in: by importing the live-action shot picture into the plug-in, backgrounds of various colors can be removed (a green background is the preferred scheme in this embodiment), and the plug-in locates the corresponding foreground picture and background picture in the live-action shot picture.
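The patent delegates this step to the Chroma Key Kit plug-in; as a hedged illustration of the underlying idea only, the following OpenCV sketch locates foreground and background masks in a green-screen frame (the HSV thresholds are assumptions and would need calibration for real footage):

    # Illustrative chroma-key localization for step S20; this is not the Chroma
    # Key Kit API, and the green HSV range is an assumption that needs tuning.
    import cv2
    import numpy as np

    def locate_foreground_background(frame_bgr: np.ndarray):
        """Return (foreground_mask, background_mask) for a green-screen frame."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        lower_green = np.array([35, 60, 60])    # assumed lower bound of the backdrop color
        upper_green = np.array([85, 255, 255])  # assumed upper bound
        background_mask = cv2.inRange(hsv, lower_green, upper_green)
        foreground_mask = cv2.bitwise_not(background_mask)
        return foreground_mask, background_mask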
S30: and separating the foreground picture from the background picture to obtain a picture to be synthesized, and tracking the foreground picture in the picture to be synthesized.
In this embodiment, the picture to be synthesized is a picture that needs to be synthesized with the scene material video.
Specifically, the extracted foreground picture is processed by auxiliary software for compositing, tracking, 3D rendering, and the like to obtain the picture to be synthesized, and the foreground picture within the picture to be synthesized is tracked by that software. The foreground picture separated from the live-action shot picture is imported into the Unity3D software through the Chroma Key Kit plug-in.
S40: and synthesizing the picture to be synthesized and the scene material video to obtain interactive video data.
In this embodiment, the interactive video data refers to video data obtained by a user interacting with a scene material video.
Specifically, the picture to be synthesized is composited into the scene material video through the Unity3D software to obtain the interactive video data.
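The patent performs this compositing inside Unity3D; the following Python/OpenCV loop is only an assumed sketch of the same real-time pipeline (the file names, the 25 fps rate, and the simple pixel-overwrite composite are illustrative choices, not the patent's implementation):

    # Assumed real-time loop for step S40: key each live frame against the green
    # backdrop and paste the foreground pixels over the scene material frame.
    import cv2
    import numpy as np

    material = cv2.VideoCapture("scene_material.mp4")  # hypothetical material video
    camera = cv2.VideoCapture(0)                       # live green-screen camera
    writer = None

    while True:
        ok_scene, scene = material.read()
        ok_live, live = camera.read()
        if not (ok_scene and ok_live):
            break  # material finished or camera unavailable
        live = cv2.resize(live, (scene.shape[1], scene.shape[0]))
        hsv = cv2.cvtColor(live, cv2.COLOR_BGR2HSV)
        backdrop = cv2.inRange(hsv, np.array([35, 60, 60]), np.array([85, 255, 255]))
        foreground = cv2.bitwise_not(backdrop)          # picture to be synthesized
        composite = scene.copy()
        composite[foreground > 0] = live[foreground > 0]  # paste foreground pixels
        if writer is None:
            height, width = composite.shape[:2]
            writer = cv2.VideoWriter("interactive_video.mp4",
                                     cv2.VideoWriter_fourcc(*"mp4v"), 25, (width, height))
        writer.write(composite)           # accumulate the interactive video data
        cv2.imshow("preview", composite)  # real-time preview, as in the KTV example
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    material.release()
    camera.release()
    if writer is not None:
        writer.release()
    cv2.destroyAllWindows()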
In this embodiment, after the scene interaction request is acquired, the corresponding scene material video is selected from a preset material library according to the user's requirements in the request, which satisfies the user's demand for scene-service interaction. Meanwhile, during scene-service interaction, the foreground picture in the live-action picture shot by the user in real time is separated from the background picture, the separated foreground picture is tracked, and the tracked foreground picture is composited into the scene material video. The user thereby obtains the interactive video data, participates directly in the scene material video, and forms an interactive picture with it, which enriches the user's interaction modes in the scene service and helps improve the user experience.
In one embodiment, as shown in fig. 2, before step S10, the video fast compositing method further includes the following steps:
S101: acquiring video data of the material to be processed.
In this embodiment, the video data of the material to be processed refers to a video of the material that needs to be preprocessed.
Specifically, the video data of the material to be processed is acquired through means such as collecting network video data, live-action shooting, or virtual video production.
S102: and positioning the position of a picture to be synthesized from each piece of material video data to be processed to obtain a scene material video.
In this embodiment, the position of the picture to be synthesized refers to the specific position in the scene material video at which the foreground picture of the live-action shot picture needs to be composited.
Specifically, the position where the shot foreground picture should be spliced into each piece of material video data to be processed is set according to the specific content of that video. Preferably, after the picture-to-be-synthesized position is set, a scaling ratio for the foreground picture can also be set. The ratio can be chosen per piece of material video data according to the content at the position, and both the position and the ratio can be adjusted adaptively as the content of the material video changes. Once the scaling ratio and the picture-to-be-synthesized position have been set for each piece of material video data to be processed, the scene material video is obtained.
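A minimal sketch of what the preprocessing of step S102 might record, assuming a simple per-video configuration keyed by timestamp; the class and field names are hypothetical, not the patent's data model:

    # Hypothetical preprocessing record for step S102: where and at what scale the
    # foreground should be placed in each material video, keyed by timestamp so the
    # placement can follow changes in the material's content.
    from dataclasses import dataclass, field

    @dataclass
    class Placement:
        x: int        # top-left x of the picture-to-be-synthesized position
        y: int        # top-left y
        scale: float  # scaling ratio applied to the foreground picture

    @dataclass
    class SceneMaterialVideo:
        path: str
        placements: dict[float, Placement] = field(default_factory=dict)  # time (s) -> placement

        def placement_at(self, t: float) -> Placement:
            """Return the placement whose timestamp most recently precedes t."""
            times = sorted(ts for ts in self.placements if ts <= t)
            if not times:
                raise ValueError("no placement configured before this time")
            return self.placements[times[-1]]

    # Example: the foreground sits bottom-left for the first 30 s, then moves
    # to the right half of the frame at half size.
    material = SceneMaterialVideo(
        path="materials/eiffel_tower.mp4",
        placements={0.0: Placement(50, 400, 1.0), 30.0: Placement(700, 300, 0.5)},
    )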
In an embodiment, as shown in fig. 3, in step S20, the method includes the following steps:
S21: acquiring background picture color data.
In this embodiment, the background picture color data refers to the color of the background picture.
Specifically, the background picture color data is acquired by setting a background of a single color, for example, a green screen, at a place where the user takes the live-action picture.
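One assumed way to acquire the background picture color data of step S21 is to sample the backdrop color from the border pixels of a calibration frame, on the assumption that only the backdrop is visible near the frame edges; this sampling strategy is an illustration, not the patent's method:

    # Illustrative step S21: estimate the single backdrop color from the frame
    # border, assuming only the backdrop is visible near the edges.
    import numpy as np

    def sample_background_color(frame_bgr: np.ndarray, margin: int = 10) -> np.ndarray:
        """Return the median BGR color of a margin-wide border around the frame."""
        top = frame_bgr[:margin].reshape(-1, 3)
        bottom = frame_bgr[-margin:].reshape(-1, 3)
        left = frame_bgr[:, :margin].reshape(-1, 3)
        right = frame_bgr[:, -margin:].reshape(-1, 3)
        border = np.vstack([top, bottom, left, right])
        return np.median(border, axis=0).astype(np.uint8)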
S22: and according to the background color data, finding the foreground picture outline from the live-action shot picture, and positioning the foreground picture according to the foreground picture outline.
Specifically, since the background color data is a single color, the boundary of the foreground frame can be reversely selected according to the background color data to serve as the foreground frame contour, and the frame within the foreground frame contour can be used as the foreground frame.
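A hedged OpenCV sketch of the inverse selection described above: threshold the frame against the single background color, invert the result, and take the largest remaining contour as the foreground picture outline (the distance threshold is an assumed tuning parameter):

    # Illustrative step S22: find the foreground picture outline by inverse-
    # selecting the single-color background. The tolerance of 40 is an assumption.
    import cv2
    import numpy as np

    def find_foreground_contour(frame_bgr, background_color_bgr, tolerance=40):
        """Return the largest contour of the non-background region, or None."""
        diff = np.linalg.norm(frame_bgr.astype(np.int16) - background_color_bgr, axis=2)
        foreground_mask = (diff > tolerance).astype(np.uint8) * 255
        contours, _ = cv2.findContours(foreground_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return max(contours, key=cv2.contourArea)  # the foreground picture outline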
In an embodiment, as shown in fig. 4, in step S30, the method separates a foreground picture from a background picture to obtain a picture to be synthesized, and tracks the foreground picture in the picture to be synthesized, which specifically includes the following steps:
S31: extracting the background color of the background picture according to the background picture color data to obtain the picture to be synthesized.
Specifically, the background picture color data is selected in the background-color extraction function, and the background color is removed from the live-action shot picture to obtain the picture to be synthesized.
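An assumed sketch of step S31: turn the chroma mask into an alpha channel so that the picture to be synthesized keeps only the foreground (BGRA with the backdrop fully transparent):

    # Illustrative step S31: remove the backdrop color and keep the foreground as
    # a BGRA picture to be synthesized (alpha 0 where the backdrop was).
    import cv2
    import numpy as np

    def extract_picture_to_be_synthesized(frame_bgr, background_mask):
        """background_mask: 255 where the backdrop color is, 0 elsewhere."""
        alpha = cv2.bitwise_not(background_mask)  # foreground opaque, backdrop transparent
        bgra = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2BGRA)
        bgra[:, :, 3] = alpha
        return bgra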
S32: and tracking the foreground picture in the picture to be synthesized through the foreground picture outline.
Specifically, after the background picture is removed, the current picture is recorded as the foreground picture obtained by tracking.
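The patent delegates tracking to auxiliary software; as a minimal stand-in consistent with the description, the bounding box of the foreground mask can be recorded frame by frame:

    # Illustrative step S32: track the foreground picture frame by frame via the
    # bounding box of its mask. This is an assumed stand-in for the auxiliary
    # tracking software, not the patent's implementation.
    import cv2

    def track_foreground(foreground_mask):
        """Return (x, y, w, h) of the foreground in this frame, or None if empty."""
        contours, _ = cv2.findContours(foreground_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return cv2.boundingRect(max(contours, key=cv2.contourArea))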
In an embodiment, as shown in fig. 5, in step S40, the method for synthesizing the picture to be synthesized and the scene material video to obtain the interactive video data specifically includes the following steps:
S41: acquiring the position of a picture to be synthesized from the scene material video.
Specifically, according to the settings made in step S102, the picture-to-be-synthesized position is obtained from the scene material video specified in the scene interaction request triggered by the user.
S42: and adding the foreground picture to the position of the picture to be synthesized to obtain interactive video data.
Specifically, the foreground picture is scaled according to the scaling ratio set for the scene material video and then added to the scene material video at the picture-to-be-synthesized position to obtain the interactive video data.
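A hedged sketch of step S42: scale the BGRA foreground picture by the configured ratio and alpha-blend it onto the scene material frame at the preset position (the clipping behavior at frame edges is an assumption):

    # Illustrative step S42: scale the foreground picture and alpha-blend it onto
    # the scene material frame at the preset position.
    import cv2
    import numpy as np

    def add_foreground(scene_bgr, foreground_bgra, x, y, scale):
        """Scale the BGRA foreground and alpha-blend it onto the scene at (x, y)."""
        fg = cv2.resize(foreground_bgra, None, fx=scale, fy=scale)
        h = min(fg.shape[0], scene_bgr.shape[0] - y)  # clip to the scene frame
        w = min(fg.shape[1], scene_bgr.shape[1] - x)
        if h <= 0 or w <= 0:
            return scene_bgr  # position lies outside the scene frame
        fg = fg[:h, :w]
        alpha = fg[:, :, 3:4].astype(np.float32) / 255.0
        roi = scene_bgr[y:y + h, x:x + w].astype(np.float32)
        blended = alpha * fg[:, :, :3].astype(np.float32) + (1.0 - alpha) * roi
        scene_bgr[y:y + h, x:x + w] = blended.astype(np.uint8)
        return scene_bgr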
Some specific examples are given:
In a KTV song-requesting scene, the user selects a song to sing from the song library on the song-selection screen, selects a corresponding scene material video from the scene library, and pays the corresponding fee, for example by scanning a QR code. The scene material video is loaded onto the main screen, the user interacts with it in front of the green-screen backdrop, and the picture to be synthesized captured during the whole interaction is added to the scene material video in real time to obtain the interactive video data. The interactive video data can be previewed in real time on the song-requesting screen and retrieved on a mobile terminal, for example through a WeChat official account.
During a Party-member activity, for example re-walking the Long March road, preset pictures, such as photographs from the Long March, are made into the scene material video. Users taking part in the activity can interact with the Long March pictures in a designated area and generate the interactive video data in real time, so that participants can experience the scenes of those years.
During a wedding event, a participant selects a desired scene, which may be a virtual scene or another scenic spot, such as the Eiffel Tower, Mount Fuji, or the seaside, and interacts within the scene material video corresponding to that scene. The interactive video data obtained through the interaction can then be saved as a souvenir, or wedding photos can be shot using this method.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Embodiment two:
In one embodiment, a video fast synthesis apparatus is provided, which corresponds one-to-one to the video fast synthesis method in the above embodiment. As shown in fig. 6, the video fast synthesis apparatus includes a material video acquisition module 10, a shooting module 20, a real-time matting module 30, and a video synthesis module 40. The functional modules are described in detail as follows:
the material video acquiring module 10 is configured to, if a scene interaction request is acquired, acquire a corresponding scene material video from the scene interaction request;
the shooting module 20 is configured to obtain live-action shot pictures in real time, and position a foreground picture and a background picture from the live-action shot pictures;
the real-time matting module 30 is configured to separate the foreground picture from the background picture to obtain a picture to be synthesized, and to track the foreground picture in the picture to be synthesized;
and the video synthesis module 40 is configured to synthesize the picture to be synthesized and the scene material video to obtain interactive video data.
Preferably, the video fast synthesis apparatus further comprises:
the system comprises an original material acquisition module 101, a processing module and a processing module, wherein the original material acquisition module is used for acquiring video data of a material to be processed;
and the material video preprocessing module 102 is configured to obtain the scene material video after positioning a to-be-synthesized picture position from each to-be-processed material video data.
Preferably, the shooting module 20 includes:
a color obtaining sub-module 21, configured to obtain background picture color data;
and the foreground picture acquisition submodule 22, configured to search a foreground picture outline from the live-action shot picture according to the background picture color data, and to locate the foreground picture according to the foreground picture outline.
Preferably, the real-time matting module 30 includes:
the picture separation submodule is used for extracting the background color of the background picture according to the background picture color data to obtain the picture to be synthesized;
and the picture tracking submodule is used for tracking the foreground picture in the picture to be synthesized through the foreground picture outline.
Preferably, the video composition module 40 includes:
a position obtaining submodule 41, configured to obtain the position of the picture to be synthesized from the scene material video;
and the video synthesis submodule 42 is configured to add the foreground picture to the position of the picture to be synthesized, so as to obtain the interactive video data.
For specific limitations of the video fast synthesis apparatus, reference may be made to the above limitations of the video fast synthesis method, which are not repeated here. The modules in the video fast synthesis apparatus can be implemented wholly or partially by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
Embodiment three:
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing scene material videos. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method for fast video composition.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
S10: if a scene interaction request is acquired, acquiring a corresponding scene material video from the scene interaction request;
S20: acquiring a live-action shot picture in real time, and positioning a foreground picture and a background picture from the live-action shot picture;
S30: separating the foreground picture from the background picture to obtain a picture to be synthesized, and tracking the foreground picture in the picture to be synthesized;
S40: synthesizing the picture to be synthesized and the scene material video to obtain interactive video data.
Embodiment four:
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the following steps:
S10: if a scene interaction request is acquired, acquiring a corresponding scene material video from the scene interaction request;
S20: acquiring a live-action shot picture in real time, and positioning a foreground picture and a background picture from the live-action shot picture;
S30: separating the foreground picture from the background picture to obtain a picture to be synthesized, and tracking the foreground picture in the picture to be synthesized;
S40: synthesizing the picture to be synthesized and the scene material video to obtain interactive video data.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A video fast synthesis method is characterized by comprising the following steps:
S10: if a scene interaction request is acquired, acquiring a corresponding scene material video from the scene interaction request;
S20: acquiring a live-action shot picture in real time, and positioning a foreground picture and a background picture from the live-action shot picture;
S30: separating the foreground picture from the background picture to obtain a picture to be synthesized, and tracking the foreground picture in the picture to be synthesized;
S40: synthesizing the picture to be synthesized and the scene material video to obtain interactive video data.
2. The video fast synthesis method according to claim 1, wherein before step S10, the video fast synthesis method further comprises the steps of:
S101: acquiring video data of a material to be processed;
S102: positioning the position of a picture to be synthesized from each piece of material video data to be processed to obtain the scene material video.
3. The method for fast synthesizing video according to claim 1, wherein the step S20 specifically comprises the following steps:
S21: acquiring background picture color data;
S22: searching a foreground picture outline from the live-action shot picture according to the background picture color data, and positioning the foreground picture according to the foreground picture outline.
4. The method for fast synthesizing video according to claim 3, wherein the step S30 specifically comprises the following steps:
S31: extracting the background color of the background picture according to the background picture color data to obtain the picture to be synthesized;
S32: tracking the foreground picture in the picture to be synthesized through the foreground picture outline.
5. The method for fast synthesizing video according to claim 2, wherein the step S40 specifically comprises the following steps:
S41: acquiring the position of the picture to be synthesized from the scene material video;
S42: adding the foreground picture to the position of the picture to be synthesized to obtain the interactive video data.
6. A video fast synthesis apparatus, characterized in that the video fast synthesis apparatus comprises:
the material video acquisition module is used for acquiring a corresponding scene material video from the scene interaction request if the scene interaction request is acquired;
the shooting module is used for acquiring a live-action shooting picture in real time and positioning a foreground picture and a background picture from the live-action shooting picture;
the real-time matting module is used for separating the foreground picture and the background picture to obtain a picture to be synthesized and tracking the foreground picture in the picture to be synthesized;
and the video synthesis module is used for synthesizing the picture to be synthesized and the scene material video to obtain interactive video data.
7. The video fast synthesis apparatus according to claim 6, wherein the video fast synthesis apparatus further comprises:
the raw material acquisition module is used for acquiring video data of a material to be processed;
and the material video preprocessing module is used for positioning the position of a picture to be synthesized from each piece of material video data to be processed to obtain the scene material video.
8. The apparatus for fast video synthesis according to claim 6, wherein the real-time matting module comprises:
the color obtaining submodule is used for obtaining color data of the background picture;
and the foreground picture acquisition submodule, configured to search a foreground picture outline from the live-action shot picture according to the background picture color data, and to position the foreground picture according to the foreground picture outline.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the video fast compositing method according to any of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the video fast compositing method according to any one of claims 1 to 5.
CN202010183166.7A 2020-03-16 2020-03-16 Video rapid synthesis method and device, computer equipment and storage medium Pending CN111372013A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010183166.7A CN111372013A (en) 2020-03-16 2020-03-16 Video rapid synthesis method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111372013A 2020-07-03

Family

ID=71210747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010183166.7A Pending CN111372013A Video rapid synthesis method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111372013A

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130076790A1 (en) * 2007-01-22 2013-03-28 Total Immersion Method for augmenting a real scene
CN106303289A (en) * 2015-06-05 2017-01-04 福建凯米网络科技有限公司 A kind of real object and virtual scene are merged the method for display, Apparatus and system
US20180108110A1 (en) * 2016-10-19 2018-04-19 Microsoft Technology Licensing, Llc Stereoscopic virtual reality through caching and image based rendering
CN106792246A (en) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 A kind of interactive method and system of fusion type virtual scene
CN110176077A (en) * 2019-05-23 2019-08-27 北京悉见科技有限公司 The method, apparatus and computer storage medium that augmented reality is taken pictures

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200703)