CN114286164A - Video synthesis method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114286164A
CN114286164A
Authority
CN
China
Prior art keywords
video
recording
synthesis
duration
user
Prior art date
Legal status
Granted
Application number
CN202111623498.3A
Other languages
Chinese (zh)
Other versions
CN114286164B (en)
Inventor
王雷
王宇航
曾鹏轩
Current Assignee
Beijing Siming Qichuang Technology Co ltd
Original Assignee
Beijing Siming Qichuang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Siming Qichuang Technology Co ltd
Priority to CN202111623498.3A
Publication of CN114286164A
Application granted
Publication of CN114286164B
Legal status: Active

Landscapes

  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

The application belongs to the technical field of video synthesis and discloses a video synthesis method, apparatus, electronic device, and storage medium. The method comprises: obtaining course identification information when a video synthesis request message sent by a user is determined to have been received; obtaining the set of recording steps configured for the course identification information; obtaining the user's video synthesis materials in response to the recording operations performed by the user for each recording step in the set; and performing video synthesis on those materials to obtain a synthesized video. A set of video synthesis materials is thus obtained from the user's recording operation for each recording step, and the video is synthesized from those materials, which enriches the content of the synthesized video and improves the video synthesis effect.

Description

Video synthesis method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of video synthesis technologies, and in particular, to a method and an apparatus for video synthesis, an electronic device, and a storage medium.
Background
With the rapid development of multimedia technology, video synthesis is becoming increasingly common. For example, to reinforce students' mastery of course content, a video-creation activity is often included during or after a course's teaching tasks, in which students synthesize a video.
In the prior art, video composition is generally performed in a screen recording mode.
However, with this approach the content of the synthesized video is relatively monotonous, and personalized materials cannot be selected for synthesis, resulting in a poor video synthesis effect.
Therefore, how to synthesize a video from personalized materials and improve the video synthesis effect is a technical problem that remains to be solved.
Disclosure of Invention
The application aims to provide a video synthesis method, apparatus, electronic device, and storage medium that synthesize a video from personalized materials, thereby improving the video synthesis effect.
In one aspect, a method for video composition is provided, including:
obtaining course identification information of a target course in response to a user's video synthesis instruction for the target course;
obtaining a set of recording steps configured for the course identification information;
obtaining a set of the user's video synthesis materials, containing at least one video synthesis material, in response to the recording operations performed by the user for each recording step in the set;
and performing video synthesis on the set of video synthesis materials to obtain a synthesized video.
In this implementation, video synthesis materials are obtained according to the recording steps specified for the target course and the video is synthesized from them, so that through the recording steps the user can select personalized video materials for synthesis.
In one embodiment, obtaining the set of the user's video synthesis materials in response to the recording operations performed by the user for each recording step in the set comprises:
if the recording operation is a material-upload operation, obtaining the video synthesis material uploaded by the user in response to the upload operation performed for each recording step in the set;
and if the recording operation is a material-capture operation, capturing audio/video of the user in response to the capture operation performed for each recording step in the set, to obtain a video synthesis material.
In this implementation, audio and video can be captured from the user according to the user's capture operations, or materials uploaded by the user can be obtained, allowing personalized selection according to the user's preference and providing flexibility and diversity in material acquisition.
In one embodiment, before performing video synthesis on the set of video synthesis materials to obtain the synthesized video, the method further comprises:
if a recording step indicating acquisition of a specified material is determined to exist, obtaining the video synthesis material according to the material address information and material identification information contained in that recording step.
In this implementation, if the user terminal has cached the specified material locally, the video synthesis material is obtained directly from the material address information and material identification information in the recording step, so the material can be fetched locally, reducing the time cost of obtaining it.
In one embodiment, performing video synthesis on the set of video synthesis materials to obtain a synthesized video comprises:
determining a recording duration from the recording material contained in the set;
determining a video synthesis duration from the recording duration;
and performing video synthesis on the set according to the video synthesis duration to obtain the synthesized video.
In this implementation, the video synthesis duration is determined from the duration of the recording material, so the synthesized video blends better with the recording, further improving the synthesis effect.
In one embodiment, performing video synthesis on the set according to the video synthesis duration to obtain the synthesized video comprises:
if the set is determined to contain both recording material and video material, obtaining the video-material duration of the video material in the set;
if the video-material duration is not greater than the video synthesis duration, synthesizing the recording material and the video material in the set to obtain the synthesized video;
and if the video-material duration is greater than the video synthesis duration, segmenting the video material according to the video synthesis duration to obtain segmented video material whose duration equals the video synthesis duration, and synthesizing the recording material with the segmented video material to obtain the synthesized video.
In this implementation, when the set contains both recording material and video material, whether the video material needs to be segmented is judged from the recording duration, yielding a better synthesized video.
In one aspect, an apparatus for video synthesis is provided, comprising:
a response unit, configured to obtain course identification information of a target course in response to a user's video synthesis instruction for the target course;
an acquisition unit, configured to obtain a set of recording steps configured for the course identification information;
an obtaining unit, configured to obtain a set of the user's video synthesis materials, containing at least one video synthesis material, in response to the recording operations performed by the user for each recording step in the set;
and a synthesis unit, configured to perform video synthesis on the set of video synthesis materials to obtain a synthesized video.
In one embodiment, the obtaining unit is configured to:
if the recording operation is a material-upload operation, obtain the video synthesis material uploaded by the user in response to the upload operation performed for each recording step in the set;
and if the recording operation is a material-capture operation, capture audio and video of the user in response to the capture operation performed for each recording step in the set, to obtain a video synthesis material.
In one embodiment, the synthesis unit is configured to:
if a recording step indicating acquisition of a specified material is determined to exist, obtain the video synthesis material according to the material address information and material identification information contained in that recording step.
In one embodiment, the synthesis unit is configured to:
determine a recording duration from the recording material contained in the set;
determine a video synthesis duration from the recording duration;
and perform video synthesis on the set according to the video synthesis duration to obtain the synthesized video.
In one embodiment, the synthesis unit is specifically configured to:
if the set is determined to contain both recording material and video material, obtain the video-material duration of the video material in the set;
if the video-material duration is not greater than the video synthesis duration, synthesize the recording material and the video material in the set to obtain the synthesized video;
and if the video-material duration is greater than the video synthesis duration, segment the video material according to the video synthesis duration to obtain segmented video material whose duration equals the video synthesis duration, and synthesize the recording material with the segmented video material to obtain the synthesized video.
In one aspect, an electronic device is provided, comprising a processor and a memory, the memory storing computer readable instructions which, when executed by the processor, perform the steps of the method provided in any of the various alternative implementations of video compositing as described above.
In one aspect, a readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the steps of the method as provided in any of the various alternative implementations of video compositing as described above.
In one aspect, a computer program product is provided which, when run on a computer, causes the computer to perform the steps of the method as provided in any of the various alternative implementations of video compositing as described above.
In the embodiments of the application, when a video synthesis request message sent by a user is determined to have been received, course identification information is obtained, the set of recording steps configured for that identification information is obtained, the user's video synthesis materials are obtained in response to the recording operations the user performs for each recording step in the set, and those materials are synthesized into a video. The user can thus select personalized video synthesis materials according to the recording steps, achieving the desired video synthesis effect.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic architecture diagram of a video composition system according to an embodiment of the present application;
fig. 2 is a flowchart illustrating an implementation of a video composition method according to an embodiment of the present disclosure;
fig. 3 is a first diagram illustrating an example of video composition provided by an embodiment of the present application;
fig. 4 is a second exemplary diagram of video composition provided by an embodiment of the present application;
fig. 5 is a third exemplary diagram of video composition provided in an embodiment of the present application;
fig. 6 is a fourth exemplary diagram of video composition provided by an embodiment of the present application;
fig. 7 is a flowchart illustrating a detailed implementation of a method for video composition according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an apparatus for video composition according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
First, some terms referred to in the embodiments of the present application will be described to facilitate understanding by those skilled in the art.
The terminal equipment: may be a mobile terminal, a fixed terminal, or a portable terminal such as a mobile handset, station, unit, device, multimedia computer, multimedia tablet, internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system device, personal navigation device, personal digital assistant, audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It is also contemplated that the terminal device can support any type of interface to the user (e.g., wearable device), and the like.
A server: the cloud server can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and can also be a cloud server for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, network service, cloud communication, middleware service, domain name service, security service, big data and artificial intelligence platform and the like.
Multimedia materials: refers to various audio and visual tool materials used in multimedia courseware and multimedia related engineering. The multimedia material is a basic component of the multimedia courseware, is a basic unit for bearing teaching information, and comprises texts, graphs, images, animation, videos, audios and the like.
Identifier (ID): also called a serial number or account number, a code that is unique within a given system, serving as the "identity card" of a specific thing. An ID generally does not change; what it is used to identify is determined by the rules set by the designer.
iOS: Apple's mobile operating system, one of the platforms on which the video synthesis described below can be performed.
In order to synthesize a video according to personalized materials when the video is synthesized and achieve a desired video synthesis effect, embodiments of the present application provide a method and an apparatus for synthesizing a video, an electronic device, and a storage medium.
Fig. 1 is a schematic diagram of an architecture of a video composition system according to an embodiment of the present disclosure, where the video composition system includes a video composition device and a server, where the number of the servers may be 1 or n, where n is a positive integer, which is not limited herein.
The video synthesis device (user terminal): configured to obtain course identification information of a target course in response to a user's video synthesis instruction for that course, send a recording-step request message containing the course identification information to the server, receive the set of recording steps formulated by the server for that identification information, obtain the set of the user's video synthesis materials in response to the recording operations the user performs for each recording step in the set, and perform video synthesis on that set to obtain a synthesized video.
The server: configured to receive the recording-step request message containing the course identification information from the user terminal, formulate the set of recording steps according to the course identification information, and send the set to the user terminal.
In the embodiments of the present application, the execution subject may be the video synthesis device in the video synthesis system shown in fig. 1; in practical applications, the video synthesis device may be an electronic device such as a terminal device or a server, which is not limited herein.
Referring to fig. 2, an implementation flow chart of a method for video composition provided in an embodiment of the present application is shown, and with reference to the user terminal shown in fig. 1, a specific implementation flow of the method is as follows:
step 200: and responding to the video synthesis instruction of the user for the target course, and acquiring course identification information of the target course.
Specifically, the user terminal obtains, in response to a video composition instruction for the target course by the user, course identification information, that is, a course ID, of the target course.
Optionally, the course identification information may be a course number or a course code, which is not limited herein.
Therefore, when the user terminal executes the subsequent steps, the recording step set can be accurately acquired according to the course ID.
Step 201: and acquiring a recording step set aiming at the course identification information.
Specifically, the user terminal receives the set of recording steps formulated by the server based on the course ID.
In one embodiment, the user terminal sends a recording step request message containing course identification information to the server, and receives a recording step set returned by the server.
In one embodiment, the user terminal sends a request message containing the course ID to the server; the server determines from the course ID that the course is a programming course whose current content is stacking building blocks, formulates the set of programming steps required for stacking the blocks, and sends that set to the user terminal.
Therefore, the recorded video materials can be rapidly and efficiently obtained according to the recording step set made by the server based on the course identification information.
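As a sketch of the server-side behavior just described, the recording-step set can be modeled as a lookup keyed by course ID. All names, fields, and the example course here are illustrative assumptions, not the patent's actual data model:

```python
# Hypothetical server-side table mapping a course ID to the set of
# recording steps formulated for that course's current content.
COURSE_STEPS = {
    "prog-101": [  # a programming course whose current content is stacking blocks
        {"name": "record_narration", "operation": "capture", "kind": "audio"},
        {"name": "record_stacking_demo", "operation": "capture", "kind": "video"},
        {"name": "upload_result_picture", "operation": "upload", "kind": "picture"},
    ],
}

def get_recording_steps(course_id: str) -> list:
    """Return the recording-step set formulated for the given course ID."""
    if course_id not in COURSE_STEPS:
        raise KeyError(f"no recording steps configured for course {course_id!r}")
    return COURSE_STEPS[course_id]
```

In a deployment, the user terminal would obtain this set via the recording-step request message rather than a local table; the table only stands in for the server's per-course configuration.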
Step 202: and responding to the recording operation executed by the user aiming at each recording step in the recording step set respectively, and obtaining a video composite material set of the user.
It should be noted that the video composition material set includes at least one video composition material.
Specifically, when step 202 is executed, the following steps may be executed:
s2021: and if the recording operation is a material uploading operation, responding to the material uploading operation executed by the user aiming at each recording step in the recording step set respectively, and acquiring the video composite material uploaded by the user.
Specifically, if the user terminal determines that the materials required in each recording step need to be uploaded by the user, the video composite materials uploaded by the user in each recording step are acquired.
Therefore, the user terminal can obtain the video composite material uploaded by the user in each recording step, and when the video is subsequently synthesized, the personalized video is synthesized according to the video material uploaded by the user.
S2022: and if the recording operation is a material acquisition operation, responding to the material acquisition operation which is respectively executed by the user aiming at each recording step in the recording step set, and acquiring the material of the user to obtain a video composite material.
Specifically, the user terminal obtains the recording material of the user in response to the recording step and the recorded recording of the user in the recording step set, or obtains the video material of the user in response to the video recording step and the recorded video in the recording step set, or obtains the picture material of the user in response to the picture recording step in the recording step set.
It should be noted that the video composition material may also be other materials besides the recording material, the video material and the picture material, for example, an animation material, which is not limited herein.
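The S2021/S2022 branch amounts to dispatching on each recording step's operation type. A minimal sketch, where the step's dictionary fields are assumptions:

```python
def collect_material(step: dict) -> dict:
    """Dispatch one recording step to upload or capture handling (S2021/S2022).

    `step` is a hypothetical step description with "operation" and "kind"
    fields; the returned dict stands in for one video synthesis material.
    """
    op = step["operation"]
    if op == "upload":
        # S2021: take the material the user uploaded for this step.
        return {"source": "upload", "kind": step["kind"]}
    if op == "capture":
        # S2022: capture the user's audio/video/picture for this step.
        return {"source": "capture", "kind": step["kind"]}
    raise ValueError(f"unknown recording operation: {op!r}")
```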
Further, in performing step 202, the following steps may be performed:
and if the recording step for indicating the acquisition of the specified material is determined to exist, acquiring the video composite material according to the material address information and the material identification information contained in the recording step.
Specifically, if the user terminal determines that a recording step for instructing acquisition of the specified material exists, the video composite material is acquired according to material address information and material identification information included in the recording step.
In one embodiment, if the user terminal locally caches specified materials required by each recording step, the video composition material is selected according to a material cache address, such as a C-disc-material folder, and material identification information, such as a picture material, included in each recording step.
Therefore, the video composite material required in the step of recording the composite video can be directly obtained from the user terminal, the efficiency of obtaining the composite video material is improved, and the efficiency of video composite is further improved.
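The local-cache lookup above can be sketched as joining the material address information (a cache directory) with the material identification information (a file name). Both the path layout and the naming are assumptions for illustration:

```python
from pathlib import Path

def resolve_cached_material(material_address: str, material_id: str):
    """Look up a specified material in the user terminal's local cache.

    `material_address` is the cache directory from the recording step and
    `material_id` the material's file identifier. Returns the local path,
    or None when the material is absent (and must be fetched another way).
    """
    path = Path(material_address) / material_id
    return path if path.exists() else None
```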
Step 203: and carrying out video synthesis on the video synthesis material set to obtain a synthesized video.
Specifically, when step 203 is executed, the following steps may be executed:
s2031: and determining the recording duration according to the recording materials contained in the video synthesis material set.
S2032: and determining the video synthesis time length according to the recording time length.
In one embodiment, the recording duration of the recording material is used as the video composition duration of the composite video.
In one embodiment, if the video composition material set further includes an audio material, the video composition time length is determined according to the audio time length of the audio material and the recording time length of the recording material.
Optionally, the recording material may be the user's captured speech related to the course, and the audio material may be light music or a song.
In one embodiment, the sum of the audio duration, the recording duration, and the blank (silence) duration is used as the video synthesis duration.
For example, to give the synthesized video more impact, a segment of opening narration or theme music may be added before the recording material and a segment of closing music after it; the video synthesis duration is then the sum of the opening-narration (or theme-music) duration, the recording duration, and the closing-music duration.
In one embodiment, an opening narration is added before the recording material and a silent segment is appended at its end, and the sum of the opening-narration duration, the recording duration, and the silence duration is used as the video synthesis duration.
Further, the video composition time duration may also be a preset time duration, which is not limited herein.
Therefore, the video synthesis time length can be quickly and accurately determined according to the recording time length, and the video synthesis efficiency is further improved.
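The duration rules of S2031-S2032, including the optional opening and closing segments described above, reduce to a simple sum. Function and parameter names are illustrative:

```python
def video_synthesis_duration(recording_s: float,
                             opening_s: float = 0.0,
                             closing_s: float = 0.0) -> float:
    """Video synthesis duration per S2032: the recording duration plus any
    opening narration/theme music and any closing music or silence.
    With no extras, the recording duration itself is used."""
    if recording_s < 0 or opening_s < 0 or closing_s < 0:
        raise ValueError("durations must be non-negative")
    return opening_s + recording_s + closing_s
```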
S2033: and according to the video synthesis duration, carrying out video synthesis on the video synthesis material set to obtain a synthesized video.
Referring to fig. 3, in one embodiment, if the video synthesis material set is determined to contain picture material and recording material, the duration of the recording material in the set is obtained, the picture material is added to the video track of an empty template based on that duration, the video track of the picture material and the audio track of the recording material are each added to a composition object (AVMutableComposition), and the synthesized video is exported with an export module (AVAssetExportSession).
Specifically, when step S2033 is performed, the following steps may be performed:
step 1: and if the video composite material set is determined to be the set of the recording material and the video material, acquiring the video material duration of the video material in the video composite material set.
Step 2: and if the duration of the video material is not greater than the video synthesis duration, carrying out video synthesis on the recording material in the video synthesis material set and the video material in the video synthesis material set to obtain a synthesized video.
Referring to fig. 4, in one embodiment, if the duration of the video material equals the recording duration of the recording material (i.e., the video synthesis duration), or is less than it, the video material and the recording material are input to the iOS or Android platform for synthesis; on the iOS platform, using the AVFoundation video framework, the video track of the video material and the audio track of the recording material are each added to an AVMutableComposition, and the synthesized video is exported with an AVAssetExportSession.
In one embodiment, if the video synthesis material set is determined to contain video material, recording material, and picture material, the duration of the video material in the set is obtained. If that duration is not greater than the video synthesis duration, the remaining material duration is determined, the play-interval duration of each picture material is determined from the remaining duration and the number of picture materials in the set, and the video material, recording material, and picture materials are synthesized according to the video synthesis duration and the play-interval duration of each picture material to obtain the synthesized video.
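The play-interval computation in this embodiment divides the remaining material duration evenly across the picture materials; a sketch under that reading (names are assumptions):

```python
def picture_play_interval(synthesis_s: float, video_s: float,
                          n_pictures: int) -> float:
    """Per-picture play interval: the remaining material duration (video
    synthesis duration minus video-material duration) divided evenly over
    the picture materials in the set."""
    if video_s > synthesis_s:
        raise ValueError("video material exceeds the synthesis duration")
    if n_pictures <= 0:
        raise ValueError("at least one picture material is required")
    return (synthesis_s - video_s) / n_pictures
```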
Step 3: If the video material duration is greater than the video synthesis duration, segment the video material according to the video synthesis duration to obtain segmented video material.
It should be noted that the video material duration of the segmented video material equals the video synthesis duration; the recording material and the segmented video material are then video-synthesized to obtain a synthesized video.
Referring to fig. 5, in an embodiment, if the video material duration is greater than the video synthesis duration, the video track is segmented according to the video synthesis duration (i.e., the duration of the audio track) to obtain a segmented video track, and the recording track and the segmented video track are video-synthesized to obtain a synthesized video.
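The segmentation step corresponding to fig. 5 can be sketched as below; `segment_video_track` and its (start, end) return shape are illustrative helpers, not an API from the source:

```python
def segment_video_track(video_duration, synthesis_duration):
    """Return the (start, end) time range of the video track to keep.

    When the video material outlasts the video synthesis duration (i.e., the
    duration of the audio track), only the leading portion of that length is
    kept; otherwise the whole track is used.
    """
    return (0.0, min(video_duration, synthesis_duration))
```

A real implementation would pass the resulting time range to the platform's composition API when inserting the video track.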
Referring to fig. 6, in one embodiment, if the video synthesis material set is determined to be a set of video material, recording material and picture material, the durations are compared track by track. If video track 1 of the video material is longer than audio track 1 of the recording material, video track 1 is segmented, and the segmented video track 1 is video-synthesized with audio track 1 to obtain synthesized video 1. If video track 2 of the video material matches audio track 2 of the recording material in duration, video track 2 and audio track 2 are video-synthesized directly to obtain synthesized video 2. If picture track 3 of the picture material matches audio track 3 of the recording material in duration, picture track 3 and audio track 3 are video-synthesized directly to obtain synthesized video 3. Finally, synthesized video 1, synthesized video 2 and synthesized video 3 are video-synthesized to obtain the final synthesized video.
In this way, the video synthesis duration can be determined from the recording material duration of the recording material in the video synthesis material set, and it can then be judged whether the picture material or the video material needs to be adjusted, which improves video synthesis efficiency and makes the finally synthesized video smoother.
Referring to fig. 7, a detailed implementation flowchart of a method for video composition according to an embodiment of the present application is shown, and the detailed implementation flow of the method is as follows:
Step 700: In response to a video synthesis instruction of a user for a target course, obtain course identification information of the target course.
Step 701: Obtain the set of recording steps set for the course identification information.
Step 702: In response to the recording operations performed by the user for each recording step in the recording step set, obtain the video synthesis material set of the user.
Step 703: If the video synthesis material set is determined to be a set of recording material and video material, obtain the video material duration of the video material in the video synthesis material set.
Step 704: Judge whether the video material duration is greater than the video synthesis duration; if not, execute step 705; if so, execute step 706.
Step 705: Perform video synthesis on the recording material and the video material in the video synthesis material set to obtain a synthesized video.
Step 706: Segment the video material according to the video synthesis duration to obtain segmented video material.
Step 707: Perform video synthesis on the recording material and the segmented video material to obtain a synthesized video.
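Steps 703 to 707 reduce to a single duration comparison, sketched below; the function name and the flat-dictionary return shape are illustrative assumptions standing in for the platform's composition API:

```python
def plan_synthesis(recording_duration, video_duration):
    """Mirror steps 703-707: the video synthesis duration is taken from the
    recording material, and the video material is segmented only when it is
    longer than that duration. Durations are in seconds."""
    video_synthesis_duration = recording_duration
    if video_duration <= video_synthesis_duration:
        # Step 705: compose the recording and video material as-is.
        return {"segmented": False, "used_video_duration": video_duration}
    # Steps 706-707: trim the video material to the synthesis duration, then compose.
    return {"segmented": True, "used_video_duration": video_synthesis_duration}
```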
Specifically, for the implementation of steps 700 to 707, reference may be made to steps 200 to 203, which are not described herein again.
Referring to fig. 8, a schematic structural diagram of a video compositing apparatus according to an embodiment of the present application is shown, including:
a response unit 801, configured to respond to a video synthesis instruction of a user for a target course and acquire course identification information of the target course;
an acquisition unit 802, configured to acquire the set of recording steps set for the course identification information;
an obtaining unit 803, configured to obtain a video synthesis material set of the user in response to the recording operations performed by the user for each recording step in the recording step set, where the video synthesis material set includes at least one video synthesis material;
a synthesis unit 804, configured to perform video synthesis on the video synthesis material set to obtain a synthesized video.
In one embodiment, the response unit 801 is configured to:
if the recording operation is a material uploading operation, acquire the video synthesis material uploaded by the user in response to the material uploading operations performed by the user for each recording step in the recording step set;
if the recording operation is a material acquisition operation, perform audio and video acquisition on the user in response to the material acquisition operations performed by the user for each recording step in the recording step set, to obtain the video synthesis material.
In one embodiment, the synthesizing unit 804 is further configured to:
if it is determined that a recording step indicating acquisition of specified material exists, acquire the video synthesis material according to the material address information and the material identification information contained in that recording step.
In one embodiment, the synthesizing unit 804 is configured to:
determine a recording duration according to the recording material contained in the video synthesis material set;
determine the video synthesis duration according to the recording duration;
perform video synthesis on the video synthesis material set according to the video synthesis duration to obtain a synthesized video.
In one embodiment, the synthesizing unit 804 is configured to:
if the video synthesis material set is determined to be a set of recording material and video material, obtain the video material duration of the video material in the video synthesis material set;
if the video material duration is not greater than the video synthesis duration, perform video synthesis on the recording material and the video material in the video synthesis material set to obtain a synthesized video;
if the video material duration is greater than the video synthesis duration, segment the video material according to the video synthesis duration to obtain segmented video material, where the video material duration of the segmented video material equals the video synthesis duration, and perform video synthesis on the recording material and the segmented video material to obtain a synthesized video.
In the embodiment of the application, when it is determined that a video synthesis request message sent by the user has been received, the course identification information is obtained, the set of recording steps set for the course identification information is acquired, the video synthesis material of the user is obtained in response to the recording operations performed by the user for each recording step in the set, and the video synthesis material is video-synthesized to obtain a synthesized video. In this way, the user can select personalized video synthesis material according to the recording steps and synthesize a personalized video from the selected material, which enriches the content of the synthesized video so that the user can achieve the expected synthesis effect.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
The electronic device 9000 includes a processor 9090 and a memory 9020, and may further include a power supply 9030, a display unit 9040, and an input unit 9050.
The processor 9090 is the control center of the electronic device 9000; it connects the various components through various interfaces and lines, and executes software programs and/or data stored in the memory 9020 to perform the various functions of the electronic device 9000, thereby monitoring the device as a whole.
In this embodiment of the present application, when the processor 9090 calls the computer program stored in the memory 9020, a method for video composition provided in the embodiment shown in fig. 2 is performed.
Optionally, the processor 9090 may include one or more processing units. Preferably, the processor 9090 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may not be integrated into the processor 9090. In some embodiments, the processor and the memory may be implemented on a single chip, or may be implemented separately on independent chips.
The memory 9020 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, various applications, and the like, and the data storage area may store data created through the use of the electronic device 9000, and the like. Further, the memory 9020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The electronic device 9000 further comprises a power supply 9030 (e.g., a battery) for supplying power to the various components; the power supply may be logically connected to the processor 9090 via a power management system, so that charging, discharging, and power consumption are managed through the power management system.
The display unit 9040 may be used to display information input by the user or information provided to the user, the various menus of the electronic device 9000, and the like. In the embodiments of the present application, it is mainly used to display the display interface of each application in the electronic device 9000 and objects such as text and pictures shown in that interface. The display unit 9040 may include a display panel 9041, which may be configured as a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The input unit 9050 may be configured to receive information such as numbers or characters input by a user. The input unit 9050 may include a touch panel 9051 and other input devices 9052. Among other things, the touch panel 9051, also referred to as a touch screen, may collect touch operations by a user (e.g., operations by a user on or near the touch panel 9051 using any suitable object or accessory such as a finger, a touch pen, etc.).
Specifically, the touch panel 9051 may detect a touch operation of the user, detect signals caused by the touch operation, convert the signals into touch point coordinates, send the touch point coordinates to the processor 9090, receive a command sent from the processor 9090, and execute the command. In addition, the touch panel 9051 may be implemented by using various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. Other input devices 9052 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, power on/off keys, etc.), a trackball, a mouse, a joystick, and the like.
Of course, the touch panel 9051 may cover the display panel 9041; when the touch panel 9051 detects a touch operation on or near it, the touch operation is transmitted to the processor 9090 to determine the type of the touch event, and the processor 9090 then provides a corresponding visual output on the display panel 9041 according to that type. Although in fig. 9 the touch panel 9051 and the display panel 9041 are two separate components implementing the input and output functions of the electronic device 9000, in some embodiments they may be integrated to implement both functions.
The electronic device 9000 can also include one or more sensors, such as a pressure sensor, a gravitational acceleration sensor, a proximity light sensor, and the like. Of course, the electronic device 9000 may further comprise other components such as a camera, which are not shown in fig. 9 and will not be described in detail herein since these components are not the components used in this embodiment of the present application.
Those skilled in the art will appreciate that fig. 9 is merely an example of an electronic device and is not intended to limit the electronic device and may include more or fewer components than those shown, or some components may be combined, or different components.
In an embodiment of the present application, a readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the electronic device can perform the steps in the above embodiments.
For convenience of description, the above parts are separately described as modules (or units) according to functional division. Of course, the functionality of the various modules (or units) may be implemented in the same one or more pieces of software or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (12)

1. A method for video compositing, comprising:
responding to a video synthesis instruction of a user for a target course, and acquiring course identification information of the target course;
acquiring a recording step set aiming at the course identification information;
responding to the recording operation executed by the user aiming at each recording step in the recording step set respectively, and obtaining a video composite material set of the user, wherein the video composite material set comprises at least one video composite material;
and carrying out video synthesis on the video synthesis material set to obtain a synthesized video.
2. The method of claim 1, wherein obtaining the set of video composition material for the user in response to the user performing a recording operation for each respective recording step in the set of recording steps comprises:
if the recording operation is a material uploading operation, responding to the material uploading operation executed by the user aiming at each recording step in the recording step set respectively, and obtaining a video composite material uploaded by the user;
and if the recording operation is a material acquisition operation, responding to the material acquisition operation which is respectively executed by the user aiming at each recording step in the recording step set, and carrying out audio and video acquisition on the user to obtain a video synthesis material.
3. The method according to claim 1 or 2, wherein before said video compositing said set of video compositing material to obtain a composite video, further comprising:
and if the recording step for indicating the acquisition of the specified material is determined to exist, acquiring the video composite material according to the material address information and the material identification information contained in the recording step.
4. The method of claim 3, wherein video compositing the set of video compositing materials to obtain a composite video comprises:
determining recording duration according to recording materials contained in the video synthesis material set;
determining the video synthesis time length according to the recording time length;
and according to the video synthesis duration, carrying out video synthesis on the video synthesis material set to obtain a synthesized video.
5. The method of claim 4, wherein the video-composing the set of video composition material to obtain a composite video according to the video composition duration comprises:
if the video composite material set is determined to be the set of the recording material and the video material, acquiring the video material duration of the video material in the video composite material set;
if the video material duration is not greater than the video synthesis duration, performing video synthesis on the recording material in the video synthesis material set and the video material in the video synthesis material set to obtain a synthesized video;
if the video material duration is greater than the video synthesis duration, segmenting the video material according to the video synthesis duration to obtain segmented video materials, wherein the video material duration of the segmented video materials is the video synthesis duration, and carrying out video synthesis on the recording material and the segmented video materials to obtain the synthesized video.
6. An apparatus for video compositing, comprising:
the response unit is used for responding to a video synthesis instruction of a user for a target course, and acquiring course identification information of the target course;
an acquisition unit configured to acquire a set of recording steps set for the course identification information;
an obtaining unit, configured to obtain a video composition material set of the user in response to a recording operation that is respectively performed by the user for each recording step in the recording step set, where the video composition material set includes at least one video composition material;
and the synthesizing unit is used for carrying out video synthesis on the video synthesis material set to obtain a synthesized video.
7. The apparatus according to claim 6, wherein the response unit is specifically configured to:
if the recording operation is a material uploading operation, responding to the material uploading operation executed by the user aiming at each recording step in the recording step set respectively, and obtaining a video composite material uploaded by the user;
and if the recording operation is a material acquisition operation, responding to the material acquisition operation which is respectively executed by the user aiming at each recording step in the recording step set, and carrying out audio and video acquisition on the user to obtain a video synthesis material.
8. The apparatus of claim 6 or 7, wherein the synthesis unit is further configured to:
and if the recording step for indicating the acquisition of the specified material is determined to exist, acquiring the video composite material according to the material address information and the material identification information contained in the recording step.
9. The apparatus according to claim 8, wherein the synthesis unit is specifically configured to:
determining recording duration according to recording materials contained in the video synthesis material set;
determining the video synthesis time length according to the recording time length;
and according to the video synthesis duration, carrying out video synthesis on the video synthesis material set to obtain a synthesized video.
10. The apparatus according to claim 8, wherein the synthesis unit is specifically configured to:
if the video composite material set is determined to be the set of the recording material and the video material, acquiring the video material duration of the video material in the video composite material set;
if the video material duration is not greater than the video synthesis duration, performing video synthesis on the recording material in the video synthesis material set and the video material in the video synthesis material set to obtain a synthesized video;
if the video material duration is greater than the video synthesis duration, segmenting the video material according to the video synthesis duration to obtain segmented video materials, wherein the video material duration of the segmented video materials is the video synthesis duration, and carrying out video synthesis on the recording material and the segmented video materials to obtain the synthesized video.
11. An electronic device comprising a processor and a memory, the memory storing computer readable instructions that, when executed by the processor, perform the method of any of claims 1-5.
12. A storage medium on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the method according to any one of claims 1-5.
CN202111623498.3A 2021-12-28 2021-12-28 Video synthesis method and device, electronic equipment and storage medium Active CN114286164B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111623498.3A CN114286164B (en) 2021-12-28 2021-12-28 Video synthesis method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114286164A true CN114286164A (en) 2022-04-05
CN114286164B CN114286164B (en) 2024-02-09


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140032776A1 (en) * 2009-01-21 2014-01-30 Anantha Pradeep Methods and apparatus for providing personalized media in video
CN106572395A (en) * 2016-11-08 2017-04-19 广东小天才科技有限公司 Video processing method and device
KR101833644B1 (en) * 2016-11-25 2018-03-02 국방과학연구소 System and Method for recording screen shot combined with event information, graphic information, and template image
CN107770626A (en) * 2017-11-06 2018-03-06 腾讯科技(深圳)有限公司 Processing method, image synthesizing method, device and the storage medium of video material
WO2021008055A1 (en) * 2019-07-17 2021-01-21 广州酷狗计算机科技有限公司 Video synthesis method and apparatus, and terminal and storage medium
WO2021073315A1 (en) * 2019-10-14 2021-04-22 北京字节跳动网络技术有限公司 Video file generation method and device, terminal and storage medium
CN112822563A (en) * 2019-11-15 2021-05-18 北京字节跳动网络技术有限公司 Method, device, electronic equipment and computer readable medium for generating video
CN113055624A (en) * 2020-12-31 2021-06-29 创盛视联数码科技(北京)有限公司 Course playback method, server, client and electronic equipment
CN113132780A (en) * 2021-04-21 2021-07-16 北京乐学帮网络技术有限公司 Video synthesis method and device, electronic equipment and readable storage medium
CN113163229A (en) * 2021-03-05 2021-07-23 深圳点猫科技有限公司 Split screen recording and broadcasting method, device, system and medium based on online education
CN113838490A (en) * 2020-06-24 2021-12-24 华为技术有限公司 Video synthesis method and device, electronic equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant