WO2022143253A1 - Video generation method, apparatus, device, and storage medium - Google Patents

Video generation method, apparatus, device, and storage medium

Info

Publication number
WO2022143253A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
user
templates
image
interface
Prior art date
Application number
PCT/CN2021/139606
Other languages
English (en)
French (fr)
Inventor
王俊强
郑紫阳
关伟鸿
吕海涛
林婉铃
叶佳莉
林伟文
李杨
张展尘
曾颖雯
车欣
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Priority to JP2023535441A priority Critical patent/JP2023553622A/ja
Priority to EP21913994.6A priority patent/EP4243427A4/en
Publication of WO2022143253A1 publication Critical patent/WO2022143253A1/zh
Priority to US18/331,340 priority patent/US20230317117A1/en

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/036Insert-editing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Definitions

  • the embodiments of the present disclosure relate to the technical field of video processing, and in particular, to a video generation method, apparatus, device, and storage medium.
  • the video application provided by the related art can provide users with the functions of shooting and sharing videos. As more and more users shoot or share videos through video applications, how to improve the quality of the videos captured by the users, simplify the capturing operations of the users, and improve the fun of capturing videos is an urgent problem to be solved at present.
  • the embodiments of the present disclosure provide a video generation method, apparatus, device, and storage medium.
  • a first aspect of the embodiments of the present disclosure provides a video generation method, the method comprising:
  • obtaining a video theme configured by the user and a video production instruction; obtaining, according to the video production instruction, a user image and multiple video templates matching the video theme, wherein each video template includes preset scene material and a reserved position for the user image; embedding the user image in the reserved positions of at least some of the multiple video templates, so that the user image is combined with the scene material on those video templates to generate at least one video; obtaining a to-be-published video from the generated videos; and publishing the to-be-published video on a preset video playback platform.
  • a second aspect of the embodiments of the present disclosure provides a video generation apparatus, the apparatus comprising:
  • the first acquisition module is used to acquire the video theme and video production instructions configured by the user;
  • the second acquisition module is configured to acquire user images and multiple video templates matching the video theme according to the video production instruction, wherein the video templates include preset scene materials and reserved positions for user images;
  • a video generation module configured to embed the user image in the reserved positions of at least part of the video templates in the plurality of video templates, so that the user image is combined with the scene material on these video templates to generate at least one video;
  • the third acquisition module is used to acquire a to-be-published video from the at least one video;
  • the publishing module is used to publish the to-be-published video to the preset video playback platform.
  • a third aspect of the embodiments of the present disclosure provides a terminal device, the terminal device including a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor performs the method of the above first aspect.
  • a fourth aspect of the embodiments of the present disclosure provides a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, the processor can execute the method of the first aspect.
  • in the embodiments of the present disclosure, when the video theme configured by the user and the video production instruction are obtained, a user image and a plurality of video templates matching the video theme are obtained according to the video production instruction; the user image is embedded in the reserved positions of at least some of the video templates, so that the user image is combined with the scene material on the embedded video templates to generate at least one video; and a to-be-published video is obtained from the at least one video and published to the preset video playback platform.
  • the solution provided by the embodiments of the present disclosure presets multiple video templates for each theme, pre-designs corresponding scene materials in the video templates, and reserves an embedding position for the user image in each template. In this way, during video production, at least one video can be generated at a time simply by embedding the user image into multiple video templates, without requiring the user to shoot repeatedly, which simplifies user operations and improves video generation efficiency and the user experience. The pre-designed scene materials not only help users better express the subject content (such as the user's mood) and improve the quality and interest of the video, but also reduce the requirements on the user's shooting ability, help users better express the intended theme, and improve users' enthusiasm for making videos. For video consumers, the improvement in video quality also improves the viewing experience.
  • FIG. 1 is a flowchart of a method for generating a video according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a display interface of a video template provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a user image capturing interface according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic display diagram of a first display interface provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic display diagram of a third display interface provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic display diagram of an interactive interface provided by an embodiment of the present disclosure.
  • FIG. 7 is a flowchart of another video generation method provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic display diagram of a user information interface displaying a mood configuration button according to an embodiment of the present disclosure
  • FIG. 9 is a schematic display diagram of a mood configuration interface provided by an embodiment of the present disclosure.
  • FIG. 10 is a flowchart of a mood setting provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic display diagram of a second display interface according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic display diagram of a video publishing interface provided by an embodiment of the present disclosure.
  • FIG. 13 is a flowchart of another video generation method provided by an embodiment of the present disclosure.
  • FIG. 14 is a schematic structural diagram of a video generation apparatus according to an embodiment of the present disclosure.
  • FIG. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of a video generation method provided by an embodiment of the present disclosure, and the embodiment of the present disclosure can be applied to the situation of how to conveniently generate a video required by a user based on a user image.
  • the video generation method may be executed by a video generation apparatus, which may be implemented by software and/or hardware, and may be integrated on any terminal device, such as a mobile terminal, a tablet computer, and the like.
  • the video generating apparatus can be implemented as an independent application program, and can also be integrated into a video interactive application as a functional module.
  • the video generation method provided by the embodiment of the present disclosure may include:
  • the video production instruction is used to instruct the terminal device to generate a video on demand for the user.
  • the preset interface of the video interaction application includes a control or button for triggering a video production instruction, and the user can trigger the video production instruction by touching the control or button.
  • provided that good application interactivity and a good user experience are ensured, the preset interface can be any interface in the video interactive application, such as the main interface or the user information interface of the video interactive application, and the display positions of the controls or buttons on the preset interface can also be determined according to design requirements.
  • the video topics mentioned in the embodiments of the present disclosure are used to classify video templates or to classify videos to be generated.
  • the types of video topics may include a series of user moods (referring to the mood states presented by users in a virtual social space), Love series, office series, etc.
  • Different types of video themes correspond to different video templates.
  • the video templates can be further subdivided for different topic sub-categories.
  • the sub-categories corresponding to the user's mood may include but are not limited to: happy, sad, angry, ashamed, etc., and each sub-category may correspond to multiple video templates.
  • the user can configure the desired video theme before triggering the video production command, or trigger the video production command before completing the configuration of the video theme.
  • the acquired user image may be an image currently taken by the user, or may be an existing image acquired from the user's album according to the user's image selection or uploading operation, which is not specifically limited in the embodiments of the present disclosure; that is, the technical solutions of the embodiments of the present disclosure are widely applicable to user images from any source.
  • the user image refers to an arbitrary image including a human face.
  • the acquisition order of the user image and the video templates is not specifically limited in the embodiments of the present disclosure. The user image may be acquired after acquiring multiple (i.e., at least two) video templates matching the video theme configured by the user, or the multiple video templates matching the configured video theme may be acquired after acquiring the user image.
  • obtaining the user image includes: outputting a shooting interface; and obtaining a user image captured by the user based on the shooting interface.
  • the shooting interface can be entered by switching from the trigger interface of the video production instruction, or by switching from the display interface of the video template; both the trigger interface of the video production instruction and the display interface of the video template may display prompt information for guiding the user to enter the shooting interface, so as to enhance interface interactivity and improve the user experience.
  • FIG. 2 is a schematic diagram of a display interface of a video template provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a user image capture interface provided by an embodiment of the present disclosure.
  • the display interface displays a prompt message “go to take a picture” for guiding the user to enter the photographing interface.
  • the user can enter the user image photographing interface shown in FIG. 3 by touching the button 21 .
  • the user image shooting interface shown in FIG. 3 displays a shooting control 31, and at the same time displays a prompt message "facing the screen, try to fill the face frame with the face" for guiding the user to shoot.
  • the method provided by the embodiment of the present disclosure may further adjust the user expression on the user image based on a preset model, so that the user expression matches the video theme configured by the user.
  • the preset model is a pre-trained model with the function of adjusting the expression of the person on the image.
  • the training process of the preset model may include: obtaining a sample user image and an expression-adjusted target sample image, where the target sample image matches a preset theme; and training the preset model by using the sample user image as the input of model training and the target sample image as the output of model training.
  • the specific algorithm used in the training process of the preset model is not specifically limited in the embodiment of the present disclosure, and may be determined according to training requirements.
  • by adjusting the user expression on the user image with the preset model, the display effect of the user's expression can be optimized to ensure that the finally generated video matches the theme configured by the user.
  • at the same time, the requirement on the user's shooting ability is reduced: even if the expression on the user image does not match the theme, there is no need to replace the user image, which realizes intelligent adjustment of the user image.
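  • As an illustration of the paired training procedure described above, the following is a minimal sketch in Python/PyTorch. The network architecture (`ExpressionNet`), the L1 loss, and the tensor shapes are assumptions for illustration only; the disclosure does not specify a particular model or training algorithm.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class ExpressionNet(nn.Module):
    """Placeholder encoder-decoder mapping a face image to an expression-adjusted one."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


def train_preset_model(sample_images, target_images, epochs=10, lr=1e-4):
    """sample_images / target_images: float tensors of shape (N, 3, H, W) in [0, 1]."""
    model = ExpressionNet()
    loader = DataLoader(TensorDataset(sample_images, target_images),
                        batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for sample, target in loader:
            optimizer.zero_grad()
            # the sample user image is the training input; the expression-adjusted
            # target sample image supervises the output, as described above
            loss = loss_fn(model(sample), target)
            loss.backward()
            optimizer.step()
    return model
```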
  • S103 Embed the user image in the reserved positions of at least part of the video templates, so that the user image is combined with the scene material on at least part of the video templates to generate at least one video.
  • in the process of embedding the user image into the reserved positions of the multiple video templates, the user image may be embedded in the reserved position of every video template according to a preset strategy, or embedded in the reserved positions of only some of the video templates according to the preset strategy.
  • the preset strategy may include, but is not limited to: embedding the user image in the reserved positions of the video templates selected by the user according to the user's selection operation on the video templates; or embedding the user image in the reserved positions of a preset number of video templates according to the current performance information of the terminal device, where the preset number is determined by the current performance information of the terminal device, and the higher the current performance of the terminal device, the larger the preset number can be.
  • specifically, after the user image is obtained, a face recognition technology may be used to identify the face area on the user image, and then the face area is fused with the reserved position area of the video template, as sketched below.
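  • The fusion step can be sketched as follows, assuming OpenCV is used for face detection and blending, and assuming each template frame exposes its reserved position as a simple (x, y, w, h) rectangle; the rectangle representation and function names are illustrative, not part of the disclosure.

```python
import cv2
import numpy as np


def extract_face(user_image_bgr):
    """Detect and crop the largest face region on the user image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(user_image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found on the user image")
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    return user_image_bgr[y:y + h, x:x + w]


def embed_face(template_frame, face, reserved_rect):
    """Blend the detected face into the template's reserved position."""
    x, y, w, h = reserved_rect
    resized = cv2.resize(face, (w, h))
    center = (x + w // 2, y + h // 2)
    mask = 255 * np.ones(resized.shape[:2], dtype=np.uint8)
    # seamlessClone softens the boundary between the face and the scene material,
    # which is one way the "fusion" described above could be realized
    return cv2.seamlessClone(resized, template_frame, mask, center, cv2.NORMAL_CLONE)
```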
  • example 1: for the case where the user image is acquired after acquiring the multiple video templates matching the video theme, the multiple video templates can be displayed once they are acquired; at least one target template is determined according to the user's selection operation on the video templates; then a user image is acquired and embedded in the reserved position of the at least one target template to generate the video required by the user.
  • example 2: for the case where the multiple video templates matching the configured video theme are acquired after the user image, once the user image is acquired it can be directly embedded in the reserved position of every video template, or in the reserved positions of only some of the video templates, to generate at least one video from which the user can then select the desired video.
  • All of the generated at least one video may be stored locally on the device, or the video selected by the user may be stored locally on the device according to the user's video selection operation.
  • the video to be published in this embodiment may be understood as a video selected by the user from at least one video generated above. It can also be understood as a video generated based on a template selected by the user from the plurality of video templates obtained above.
  • the step of displaying the multiple video templates to the user, so that the user selects at least one video template from the video templates as the target template may also be included.
  • the user can continue to select one or more templates from the selected target templates, and the videos generated after embedding the user's image in these templates are regarded as the videos to be published.
  • alternatively, after the user selects at least one target template from the multiple video templates matching the video theme, the obtained user image can first be embedded in those target templates to generate at least one video, and the generated videos are then displayed to the user so that the user can select the video to be published from them.
  • for example, in one example, the videos generated based on the target templates can be displayed through a preset first display interface, on which the user can select the video to be published; the first display interface can also include a first button used by the user to trigger a video publishing instruction, and the position of the first button on the first display interface can be determined according to the interface layout; when the first button is triggered, the video selected by the user is published to the preset video playback platform.
  • FIG. 4 is a schematic display diagram of a first display interface provided by an embodiment of the present disclosure. As shown in FIG. 4, the first display interface can display the generated videos in the form of a list and supports the user sliding left and right to switch the video currently displayed on the interface; if the currently displayed video is the focus video (i.e., the selected video), the user can trigger the publishing operation of the focus video by touching the first button 41.
  • the video to be published may be a video selected by the user from videos generated based on the target template, or may be a video generated based on a template selected by the user from the target template.
  • the solution provided by the embodiments of the present disclosure presets multiple video templates for each video theme, pre-designs corresponding scene materials in the video templates, and reserves the embedding position of the user image in each video template (that is, the user's face information is fused with the video template). In this way, during video production, as long as the user image is embedded into multiple video templates, at least one video can be generated at a time without requiring the user to shoot repeatedly, which simplifies user operations.
  • the problem that the user needs to repeatedly shoot images when generating at least one video for the user in the existing solution is solved, and the video generation efficiency and user experience are improved.
  • pre-designed scene material can not only help users better express the theme content, but also improve the quality and interest of the video.
  • the method provided by the embodiment of the present disclosure further includes:
  • the video on the preset video playing platform is displayed to the user on the third display interface, wherein the video is also a video generated by the method in the embodiment of FIG. 1 above.
  • the third display interface may include a first icon; when it is detected that the user performs a preset touch operation on the first icon on the third display interface, the user is provided with an interactive interface for interacting with the publisher of the video;
  • the interaction information is generated based on the operations detected on the interaction interface for the preset options, and the interaction information is sent to the publisher of the video.
  • the preset options on the interactive interface may include at least one of the following: an option for sending a message, an option for greeting, and an option for viewing a video release record. These options can be used to trigger operations such as, but not limited to, sending a message to the video publisher, greeting the video publisher, and viewing the video publishing record of the video publisher (for example, a video used to express the user's historical mood).
  • the interactive interface can be implemented by being superimposed on the third display interface; alternatively, the interactive interface can be implemented as a new interface entered by switching from the third display interface; alternatively, the interactive interface can also be implemented by first switching from the third display interface to a new interface and then superimposing the interactive interface on that new interface.
  • the new interface may be the user information interface of the video publisher, and the first icon may be the user avatar icon of the video publisher. After the current user touches the first icon, in addition to triggering the display of the interactive interface, if the video publisher is not yet followed by the user, the touch can also make the user follow the video publisher.
  • the display position of the first icon on the third display interface may be determined based on the page design, and the shape of the first icon may also be determined flexibly.
  • the interactive interface supports the user's touch operation or information input operation, so that interaction information is generated according to the user's touch operation or information input operation.
  • for example, preset interactive sentences can be displayed on the interactive interface, and according to the user's selection operation on a sentence, the selected sentence is taken as the interaction information to be sent to the video publisher.
  • the interaction information may automatically trigger the sending operation to the video publisher once it is generated, or it may be sent after a sending instruction triggered by the user is received.
  • a confirm button and a cancel button may be displayed on the interactive interface, the confirm button is used for the user to trigger the sending of the instruction, and the cancel button is used for the user to trigger the cancellation of the sending of the instruction.
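  • A minimal sketch of turning a selected preset option into interaction information and sending it only after the confirm button is pressed is shown below; the option names, payload fields, and endpoint URL are hypothetical placeholders for whatever messaging API the platform actually exposes.

```python
import requests

# illustrative preset options corresponding to "send a message" and "greet"
PRESET_OPTIONS = {
    "message": "I liked your mood video!",
    "greet": "Hello, nice to meet you!",
    "view_history": None,  # handled client-side by opening the publishing record view
}


def send_interaction(publisher_id: str, option: str, confirmed: bool):
    """Build the interaction information and send it to the video publisher."""
    text = PRESET_OPTIONS.get(option)
    if text is None or not confirmed:  # cancel button pressed, or non-message option
        return None
    payload = {"to": publisher_id, "type": option, "text": text}
    # hypothetical platform endpoint; replace with the real messaging API
    return requests.post("https://example.com/api/interactions", json=payload, timeout=5)
```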
  • by switching from the third display interface used for playing videos to the interactive interface, flexible interaction between users and video publishers is realized, which enriches the ways users can interact with each other and makes the interaction more flexible.
  • FIG. 5 is a schematic display diagram of a third display interface provided by an embodiment of the present disclosure, that is, a video playback interface.
  • a first icon 51 is displayed on the third display interface.
  • after the user touches the first icon 51, the terminal device can display the interactive interface.
  • FIG. 6 is a schematic display diagram of an interactive interface provided by an embodiment of the present disclosure. Specifically, the interactive interface is implemented in the form of being superimposed and displayed on the user information interface of the video publisher after switching from the third display interface to the user information interface of the video publisher. As shown in Figure 6, the interactive interface supports sending messages, greetings, and viewing the other party's historically released video records. It should be noted that FIG. 6 is used as an example, and each interactive function supported on the interactive interface is implemented in the form of an independent interactive interface.
  • multiple interactive functions supported on the interactive interface can also be displayed in an integrated manner.
  • other information can also be displayed on the interactive interface, such as the video theme currently configured by the user (for example, the mood currently configured by the user) and the user's account name, which can be adjusted according to the interface design and is not specifically limited in the embodiments of the present disclosure.
  • FIG. 7 is a flowchart of another video generation method provided by an embodiment of the present disclosure, which is further optimized and expanded based on the foregoing technical solution, and may be combined with the foregoing optional implementation manners. Moreover, FIG. 7 exemplifies the technical solution of the embodiment of the present disclosure by taking the video theme as an example of the mood set by the user.
  • the video generation method provided by the embodiment of the present disclosure includes:
  • the user can trigger the mood configuration instruction through the mood configuration button on the user information interface provided by the video interaction application.
  • the mood configuration instruction can be used to instruct the terminal device to display the mood configuration interface.
  • the mood configuration interface includes buttons for sharing mood; when the user triggers the button, a video production instruction from the user is received.
  • FIG. 8 is a schematic display diagram of a user information interface displaying mood configuration buttons according to an embodiment of the present disclosure.
  • a mood configuration button 82 is displayed at the lower right corner of the user icon 81 ; after the user touches the mood configuration button 82 , the terminal device displays a mood configuration interface.
  • the mood configuration interface may be a single interface: after the user selects the current mood from the mood icons displayed on the interface, the user can trigger the video production instruction by touching the button for sharing the mood.
  • the mood configuration interface may also include a mood selection sub-interface (displaying a plurality of mood icons) and a mood configuration display sub-interface. The keys for sharing mood may be displayed on the mood configuration display sub-interface.
  • in this case, after the user touches the mood configuration button 82, the terminal device first displays the mood selection sub-interface, and after detecting that the user has completed the mood selection operation, it switches to display the mood configuration display sub-interface.
  • the mood configuration interface can be implemented in the form of being superimposed and displayed on the user information interface, or can be implemented in the form of a new interface switched from the user information interface.
  • FIG. 9 shows a specific mood configuration interface that is superimposed on the user information interface and includes a mood selection sub-interface and a mood configuration display sub-interface, with a button 91 for sharing the mood displayed on the mood configuration display sub-interface.
  • after the user selects the current mood on the mood selection sub-interface (for example, the current mood "super happy"), the terminal device can automatically switch to display the mood configuration display sub-interface according to the user's selection operation on the mood icon, and the user can then touch the button 91 to trigger the video production instruction.
  • the interface layout of the mood configuration interface shown in FIG. 9, the style of the mood icons, and the number of mood icons displayed are only examples; in practical applications they can be flexibly designed according to requirements and are not specifically limited in the embodiments of the present disclosure.
  • in addition, on the mood configuration interface, for example on the mood configuration display sub-interface, a user icon and prompt information indicating successful mood configuration may also be displayed.
  • S203 Obtain the mood configured by the user on the mood configuration interface.
  • the video production instruction is triggered after the user touches a button on the mood configuration interface for sharing mood.
  • the second display interface includes a third button, and the step of acquiring the user image is performed when the user triggers the third button.
  • the second display interface further includes prompt information for guiding the user to trigger the third button to enter the shooting interface.
  • as an example, the display of the second display interface can refer to FIG. 2: the terminal device determines at least one target template according to the user's selection operation on the second display interface; then, according to the user's touch operation on the third button of the second display interface (for example, the "take photo" button 21 shown in FIG. 2), it switches to the shooting interface and acquires the user image captured by the user on the shooting interface.
  • embedding the user image in the reserved positions of at least part of the video templates among the plurality of video templates includes: replacing the preset image in the target template with the user image.
  • the preset image refers to a preset image including the face region of a sample user during the video template generation process, such as a cartoon character image.
  • after the face on the user image is identified by face recognition technology, the face area on the preset image in the target template is replaced with the face area on the user image, thereby generating at least one video. Replacing the preset image in the target template with the user image improves the convenience of generating multiple videos from the user image at one time.
  • the terminal device may publish the focus video selected by the user to a preset video playing platform according to the user's selection operation on the at least one video.
  • according to the technical solution of the embodiment of the present disclosure, a mood configuration interface is displayed according to the user's mood configuration instruction, and the mood configured by the user on the mood configuration interface is obtained. After receiving the user's video production instruction, multiple video templates matching the configured mood are obtained and displayed; at least one target template is then determined according to the user's selection operation on the video templates; and finally the user image is embedded into the at least one target template to generate at least one video representing the user's mood.
  • in this way, during video production, as long as the user image is embedded in at least one target template, at least one video can be generated at a time, and the user does not need to repeatedly shoot images when generating different videos, which simplifies user operations and improves the user experience.
  • the pre-designed scene materials in the video templates not only help the user better express the current mood and improve the quality and interest of the video, but also reduce the requirements on the user's shooting ability: even if the shooting quality of the user image is poor, high-quality videos can still be generated for the user based on the video templates, which improves the user's enthusiasm for making videos.
  • for video consumers, the improvement in video quality also improves the viewing experience.
  • on the basis of the above technical solution, in one implementation, after the user's video production instruction is received, the method provided by the embodiment of the present disclosure further includes: determining whether the number of times the user has shared a mood within a preset duration exceeds a preset threshold; if so, outputting prompt information indicating that the current mood sharing cannot be performed; and if not, performing the operation of obtaining, according to the video production instruction, a user image and multiple video templates matching the theme configured by the user.
  • the value of the preset threshold is set according to the value of the preset duration, and the larger the value of the preset duration is, the larger the value of the preset threshold may be correspondingly.
  • for example, the preset duration may be 24 hours and the preset threshold may be 1, which means that the user can share a mood only once a day. By effectively controlling the number of times the user shares a mood within the preset duration, resource consumption on the video playback platform can be alleviated.
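  • The sharing-limit check can be sketched as below, using the 24-hour window and threshold of 1 from the example above; the in-memory log and function names are illustrative assumptions, not part of the disclosure.

```python
import time

PRESET_DURATION_S = 24 * 60 * 60   # preset duration: 24 hours
PRESET_THRESHOLD = 1               # preset threshold: at most one mood share per day

_share_log: dict[str, list[float]] = {}   # user id -> timestamps of past mood shares


def can_share_mood(user_id: str, now: float | None = None) -> bool:
    """Return True if the user's shares within the preset duration are below the threshold."""
    now = time.time() if now is None else now
    recent = [t for t in _share_log.get(user_id, []) if now - t < PRESET_DURATION_S]
    _share_log[user_id] = recent        # drop shares that fell outside the window
    return len(recent) < PRESET_THRESHOLD


def record_mood_share(user_id: str) -> None:
    """Record a successful mood share so later checks count it."""
    _share_log.setdefault(user_id, []).append(time.time())
```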
  • FIG. 10 is a flow chart of a mood setting provided by an embodiment of the present disclosure, which is used to illustrate the embodiment of the present disclosure and should not be construed as a specific limitation to the embodiment of the present disclosure.
  • as shown in FIG. 10, the mood setting process may include: the terminal device confirms the mood selected by the user according to the user's selection operation on a mood icon on the mood configuration interface; the terminal device detects whether the user currently has a mood state, and if not, sets the selected mood as the current mood, and if so, overwrites the current mood with the newly selected mood; after receiving the user's video production instruction, the terminal device checks whether the user has already published a mood video today, and if so, a prompt appears indicating that a mood video cannot be published today.
  • if not, the user can publish a mood video; that is, the terminal device executes the video production instruction, obtains multiple video templates matching the mood configured by the user, and displays the obtained templates so that the user can select at least one target template from them. When the target template includes the user's historical image (that is, a user image previously inserted into the target template), if the user's publishing instruction is received, the video on the template selected by the user from the target templates is published to the preset video playback platform, and the current video production ends.
  • if a shooting instruction from the user is received instead, the operation of acquiring the user image is performed, and a video is generated by replacing the historical image in the target template with the newly acquired user image.
  • when the target template does not include the user's historical image, the user image is directly captured according to the user's shooting instruction, and the captured user image is embedded in the reserved position of the target template to generate a video.
  • in this embodiment, video templates containing the user's historical images are displayed to the user, and when the user selects a template that already contains a historical image for publishing, the video generated by combining the historical image and that template can be published directly to the preset video playback platform, which improves the efficiency of video publishing.
  • FIG. 11 is a schematic display diagram of a second display interface provided by an embodiment of the present disclosure.
  • as shown in FIG. 11, taking the "super happy" mood configured by the user as an example, the currently displayed video template is a target template selected by the user that includes the user's historical image; the user touches the second button 111 on the second display interface to trigger the publishing operation for the video on the target template.
  • that is, if the user used the current target template to make a historical video, then when the user selects the current target template again, the historical video on it can be used directly as the current video. Taking the mood theme as an example, the historical video used to express the user's historical mood can be used as the video expressing the user's current mood (that is, the current mood is the same as the historical mood) and published to the video playback platform, thereby improving the sharing efficiency of the user's videos.
  • FIG. 12 is a schematic display diagram of a video publishing interface provided by an embodiment of the present disclosure.
  • the terminal device can switch to the video publishing interface, and the user can touch the publish button 1202 to publish the to-be-published video to the video playback platform.
  • the video publishing interface may further include a draft box button 1201. After the user touches the draft box button 1201, the to-be-published video can be stored in the draft box. When publishing a video next time, the user can directly publish the video stored in the draft box or perform editing operations based on it first, which helps improve the efficiency of video sharing.
  • FIG. 13 is a flowchart of another video generation method provided by an embodiment of the present disclosure, which is used to illustrate the embodiment of the present disclosure and should not be construed as a specific limitation to the embodiment of the present disclosure.
  • the video generation method may include: after determining that the user can publish videos and acquiring multiple video templates matching the mood configured by the user, the terminal device determines whether the user is a new user, where a new user refers to a user who has not embedded a face image into any video template to generate an exclusive video before the current time; if so, the user clicks the "take photo" button on the display interface of the video templates to enter the face shooting interface (that is, the aforementioned user image shooting interface), and the terminal device obtains the user's face image captured on the face shooting interface; the user's face image is then fused with the multiple video materials (that is, the video templates) to obtain at least one video containing the user's face; finally, the video to be published is determined according to the user's selection operation among the generated videos.
  • the face shooting interface will still be entered in that case, and the user's face image captured there is fused with the multiple video materials (that is, the video templates) to obtain at least one video containing the user's face.
  • FIG. 14 is a schematic structural diagram of a video generation apparatus 1400 according to an embodiment of the present disclosure.
  • the apparatus may be implemented by software and/or hardware, and may be integrated on any terminal device.
  • the video generating apparatus 1400 may include a first obtaining module 1401, a second obtaining module 1402, a video generation module 1403, a third obtaining module 1404, and a first publishing module 1405, wherein:
  • the first obtaining module 1401 is used to obtain the video theme and video production instruction configured by the user;
  • the second obtaining module 1402 is configured to obtain the user image and a plurality of video templates matching the video theme according to the video production instruction, wherein the video template includes preset scene materials and reserved positions of the user image;
  • a video generation module 1403, configured to embed the user image in the reserved positions of at least part of the video templates in the plurality of video templates, so that the user image is combined with the scene material on at least part of the video templates to generate at least one video;
  • the third obtaining module 1404 is configured to obtain the to-be-published video from the at least one video;
  • the first publishing module 1405 is configured to publish the to-be-published video to a preset video playing platform.
  • the video to be published includes a video selected by the user from the at least one video, or a video generated based on a template selected by the user from the plurality of video templates.
  • the video theme includes the mood configured by the user on the mood configuration interface.
  • the video generating apparatus 1400 provided by the embodiment of the present disclosure further includes:
  • a mood sharing number determination module which is used to determine whether the user's mood sharing times within a preset time period exceeds a preset threshold
  • the prompt information output module is used for outputting prompt information for indicating that the number of times of mood sharing exceeds the limit
  • the second obtaining module 1402 is specifically configured to, if the threshold is not exceeded, perform the operation of obtaining, according to the video production instruction, a user image and multiple video templates matching the video theme configured by the user.
  • in one implementation, the second obtaining module 1402 is specifically configured to acquire the user image after acquiring the multiple video templates matching the video theme configured by the user.
  • the video generating apparatus 1400 provided by the embodiment of the present disclosure further includes:
  • the second display module is configured to display multiple video templates to the user, so that the user can select at least one video template from the multiple video templates as a target template.
  • the second publishing module is configured to, when the reserved position of the target template includes the user's historical image and a publishing instruction from the user is received, publish the video of the template selected by the user from the target templates to the preset video playback platform and end the current video production.
  • the video generation module is configured to, when a shooting instruction from the user is received, perform the operation of acquiring the user image, and, in the operation of embedding the user image in the reserved positions of at least some of the multiple video templates, replace the historical image in the target template with the user image.
  • the video generating apparatus 1400 provided by the embodiment of the present disclosure further includes:
  • the expression adjustment module is used to adjust the user expression on the user image based on the preset model, so that the user expression matches the video theme configured by the user.
  • the video generating apparatus 1400 provided by the embodiment of the present disclosure further includes:
  • a video playback module for playing a video on a video playback platform on a display interface, wherein the video on the video playback platform refers to a video generated based on the above-mentioned video template;
  • the interactive interface display module provides an interactive interface for interacting with the publisher of the video when a preset touch operation is detected on the display interface;
  • the interactive information sending module is used to generate interactive information based on the user's operation on the options on the interactive interface, and send the interactive information to the publisher of the video.
  • the options on the interactive interface include at least one of the following: an option for sending a message, an option for greeting, and an option for viewing the video publishing record.
  • the video generation apparatus provided by the embodiments of the present disclosure can execute any video generation method provided by the embodiments of the present disclosure, and has functional modules and beneficial effects corresponding to the execution methods.
  • FIG. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 15 , the terminal device 1500 includes one or more processors 1501 and a memory 1502 .
  • the processor 1501 may be a central processing unit (CPU) or other form of processing unit with data processing capabilities and/or instruction execution capabilities, and may control other components in the terminal device 1500 to perform desired functions.
  • Memory 1502 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • Volatile memory may include, for example, random access memory (RAM) and/or cache memory, among others.
  • Non-volatile memory may include, for example, read only memory (ROM), hard disk, flash memory, and the like.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1501 may execute the program instructions to implement any video generation method provided by the embodiments of the present disclosure, and may also implement other desired functions.
  • Various contents such as input signals, signal components, noise components, etc. may also be stored in the computer-readable storage medium.
  • the terminal device 1500 may further include: an input device 1503 and an output device 1504, these components are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
  • the input device 1503 may also include, for example, a keyboard, a mouse, and the like.
  • the output device 1504 can output various information to the outside, including the determined distance information, direction information, and the like.
  • the output device 1504 may include, for example, displays, speakers, printers, and communication networks and their connected remote output devices, among others.
  • terminal device 1500 may also include any other appropriate components according to the specific application.
  • embodiments of the present disclosure may also be computer program products comprising computer program instructions that, when executed by a processor, cause the processor to execute any video generation method provided by the embodiments of the present disclosure.
  • the computer program product may write program code for performing operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages, such as Java, C++, etc., as well as conventional procedural programming language, such as "C" language or similar programming language.
  • the program code may execute entirely on the user terminal device, partly on the user device, as a stand-alone software package, partly on the user terminal device and partly on a remote terminal device, or entirely on a remote terminal device or server.
  • embodiments of the present disclosure may also be computer-readable storage media on which computer program instructions are stored, and when executed by the processor, the computer program instructions cause the processor to execute any video generation method provided by the embodiments of the present disclosure.
  • a computer-readable storage medium can employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may include, for example, but not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses or devices, or a combination of any of the above. More specific examples (non-exhaustive list) of readable storage media include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Circuits (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to a video generation method, apparatus, device, and storage medium. The method includes: when a video theme configured by a user and a video production instruction are obtained, obtaining, according to the video production instruction, a user image and multiple video templates matching the video theme; embedding the user image in the reserved positions of at least some of the video templates, so that the user image is combined with the scene material on the video templates to generate at least one video; and obtaining a to-be-published video from the at least one video and publishing it to a preset video playback platform. The embodiments of the present disclosure can achieve the effect that, during video production, at least one video can be generated at a time simply by embedding the user image into multiple video templates, without requiring the user to shoot repeatedly; moreover, the pre-designed scene materials not only help the user better express the subject content and improve the quality and interest of the video, but also reduce the requirements on the user's shooting ability.

Description

Video generation method, apparatus, device, and storage medium
The present disclosure claims priority to Chinese patent application No. 202011626264.X, entitled "Video generation method, apparatus, device, and storage medium", filed with the China National Intellectual Property Administration on December 31, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the technical field of video processing, and in particular to a video generation method, apparatus, device, and storage medium.
Background
Video applications provided by the related art can offer users functions for shooting and sharing videos. As more and more users shoot or share videos through video applications, how to improve the quality of the videos shot by users, simplify users' shooting operations, and make video shooting more interesting is a problem that urgently needs to be solved.
Summary
In order to solve the above technical problem, or at least partially solve the above technical problem, embodiments of the present disclosure provide a video generation method, apparatus, device, and storage medium.
A first aspect of the embodiments of the present disclosure provides a video generation method, the method including:
obtaining a video theme configured by a user and a video production instruction; obtaining, according to the video production instruction, a user image and multiple video templates matching the video theme, wherein each video template includes preset scene material and a reserved position for the user image; embedding the user image in the reserved positions of at least some of the multiple video templates, so that the user image is combined with the scene material on those video templates to generate at least one video; obtaining a to-be-published video from the generated videos; and publishing the to-be-published video on a preset video playback platform.
A second aspect of the embodiments of the present disclosure provides a video generation apparatus, the apparatus including:
a first obtaining module, configured to obtain a video theme configured by a user and a video production instruction;
a second obtaining module, configured to obtain, according to the video production instruction, a user image and multiple video templates matching the video theme, wherein each video template includes preset scene material and a reserved position for the user image;
a video generation module, configured to embed the user image in the reserved positions of at least some of the multiple video templates, so that the user image is combined with the scene material on those video templates to generate at least one video;
a third obtaining module, configured to obtain a to-be-published video from the at least one video; and
a publishing module, configured to publish the to-be-published video to a preset video playback platform.
A third aspect of the embodiments of the present disclosure provides a terminal device, the terminal device including a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor performs the method of the above first aspect.
A fourth aspect of the embodiments of the present disclosure provides a computer-readable storage medium, in which a computer program is stored; when the computer program is executed by a processor, the processor performs the method of the above first aspect.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages:
In the embodiments of the present disclosure, when the video theme configured by the user and the video production instruction are obtained, a user image and multiple video templates matching the video theme are obtained according to the video production instruction; by embedding the user image in the reserved positions of at least some of the video templates, the user image is combined with the scene material on the embedded video templates to generate at least one video; and a to-be-published video is obtained from the at least one video and published to a preset video playback platform. The solution provided by the embodiments of the present disclosure presets multiple video templates for each theme, pre-designs corresponding scene materials in the video templates, and reserves the embedding position of the user image in each video template, so that during video production at least one video can be generated at a time simply by embedding the user image into multiple video templates, without requiring the user to shoot repeatedly. This simplifies user operations and improves video generation efficiency and the user experience. Moreover, the pre-designed scene materials not only help users better express the subject content (such as the user's mood) and improve the quality and interest of the video, but also reduce the requirements on the user's shooting ability, help users better express the intended theme, and improve users' enthusiasm for making videos. In addition, for video consumers, the improvement in video quality also improves the viewing experience.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
In order to describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, those of ordinary skill in the art can also obtain other drawings based on these drawings without creative effort.
FIG. 1 is a flowchart of a video generation method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a display interface of a video template provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a user image shooting interface provided by an embodiment of the present disclosure;
FIG. 4 is a schematic display diagram of a first display interface provided by an embodiment of the present disclosure;
FIG. 5 is a schematic display diagram of a third display interface provided by an embodiment of the present disclosure;
FIG. 6 is a schematic display diagram of an interactive interface provided by an embodiment of the present disclosure;
FIG. 7 is a flowchart of another video generation method provided by an embodiment of the present disclosure;
FIG. 8 is a schematic display diagram of a user information interface displaying a mood configuration button provided by an embodiment of the present disclosure;
FIG. 9 is a schematic display diagram of a mood configuration interface provided by an embodiment of the present disclosure;
FIG. 10 is a flowchart of mood setting provided by an embodiment of the present disclosure;
FIG. 11 is a schematic display diagram of a second display interface provided by an embodiment of the present disclosure;
FIG. 12 is a schematic display diagram of a video publishing interface provided by an embodiment of the present disclosure;
FIG. 13 is a flowchart of another video generation method provided by an embodiment of the present disclosure;
FIG. 14 is a schematic structural diagram of a video generation apparatus provided by an embodiment of the present disclosure;
FIG. 15 is a schematic structural diagram of a terminal device provided by an embodiment of the present disclosure.
Detailed Description
In order to understand the above objects, features, and advantages of the present disclosure more clearly, the solutions of the present disclosure are further described below. It should be noted that, without conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure can also be implemented in other ways than those described here; obviously, the embodiments in the specification are only some, not all, of the embodiments of the present disclosure.
FIG. 1 is a flowchart of a video generation method provided by an embodiment of the present disclosure. The embodiment of the present disclosure is applicable to the case of conveniently generating, based on a user image, a video required by a user. The video generation method may be executed by a video generation apparatus, which may be implemented by software and/or hardware and may be integrated on any terminal device, such as a mobile terminal or a tablet computer. Moreover, the video generation apparatus may be implemented as an independent application, or may be integrated into a video interactive application as a functional module.
As shown in FIG. 1, the video generation method provided by the embodiment of the present disclosure may include:
S101. Obtain a video theme configured by a user and a video production instruction.
The video production instruction is used to instruct the terminal device to generate a required video for the user. Taking a video interactive application as an example, a preset interface of the video interactive application includes a control or button for triggering the video production instruction, and the user can trigger the video production instruction by touching the control or button. Provided that good application interactivity and a good user experience are ensured, the preset interface can be any interface in the video interactive application, such as the main interface or the user information interface of the video interactive application, and the display position of the control or button on the preset interface can also be determined according to design requirements.
The video theme mentioned in the embodiments of the present disclosure is used to classify video templates or to classify videos to be generated. For example, the types of video themes may include a user mood series (referring to the mood states presented by users in a virtual social space), a romance series, an office series, and so on. Different types of video themes correspond to different video templates. Moreover, under each type of video theme, the video templates may be further subdivided for different theme sub-categories. For example, the sub-categories corresponding to user moods may include, but are not limited to, happy, sad, angry, and jealous, and each sub-category may correspond to multiple video templates. The user may configure the desired video theme before triggering the video production instruction, or trigger the video production instruction before completing the configuration of the video theme.
S102. Obtain, according to the video production instruction, a user image and multiple video templates matching the video theme, wherein each video template includes preset scene material and a reserved position for the user image.
In the embodiments of the present disclosure, the acquired user image may be an image currently taken by the user, or may be an existing image acquired from the user's album according to the user's image selection or uploading operation; this is not specifically limited in the embodiments of the present disclosure, that is, the technical solutions of the embodiments of the present disclosure are widely applicable to user images from any source. The user image refers to any image including a human face.
The acquisition order of the user image and the video templates is not specifically limited in the embodiments of the present disclosure. For example, the user image may be acquired after acquiring multiple (i.e., at least two) video templates matching the video theme configured by the user, or the multiple video templates matching the configured video theme may be acquired after acquiring the user image.
Taking acquiring an image currently taken by the user as an example, acquiring the user image includes: outputting a shooting interface; and acquiring a user image captured by the user based on the shooting interface. The shooting interface may be entered by switching from the trigger interface of the video production instruction, or by switching from the display interface of the video template, and both the trigger interface of the video production instruction and the display interface of the video template may display prompt information for guiding the user to enter the shooting interface, so as to enhance interface interactivity and improve the user experience.
FIG. 2 is a schematic diagram of a display interface of a video template provided by an embodiment of the present disclosure, and FIG. 3 is a schematic diagram of a user image shooting interface provided by an embodiment of the present disclosure. As shown in FIG. 2, the display interface displays the prompt message "go to take a picture" for guiding the user to enter the shooting interface, and the user can enter the user image shooting interface shown in FIG. 3 by touching the button 21. The user image shooting interface shown in FIG. 3 displays a shooting control 31, together with the prompt message "face the screen and try to fill the face frame with your face" for guiding the user to shoot.
In one implementation, after the user image is acquired, the method provided by the embodiment of the present disclosure may further adjust the user expression on the user image based on a preset model, so that the user expression matches the video theme configured by the user. The preset model is a pre-trained model with the function of adjusting the expression of a person on an image. In one implementation, the training process of the preset model may include: obtaining a sample user image and an expression-adjusted target sample image, where the target sample image matches a preset theme; and training the preset model by using the sample user image as the input of model training and the target sample image as the output of model training. The specific algorithm used in the training process of the preset model is not specifically limited in the embodiments of the present disclosure and may be determined according to training requirements.
In the embodiments of the present disclosure, by adjusting the user expression on the user image with the preset model, the display effect of the user's expression can be optimized to ensure that the finally generated video matches the theme configured by the user. At the same time, the requirement on the user's shooting ability is reduced: even if the expression on the user image does not match the theme, there is no need to replace the user image, which realizes intelligent adjustment of the user image.
S103. Embed the user image in the reserved positions of at least some of the multiple video templates, so that the user image is combined with the scene material on those video templates to generate at least one video.
In the process of embedding the user image into the reserved positions of the multiple video templates, the user image may be embedded in the reserved position of every video template according to a preset strategy, or embedded in the reserved positions of only some of the video templates according to the preset strategy. The preset strategy may include, but is not limited to: embedding the user image in the reserved positions of the video templates selected by the user according to the user's selection operation on the video templates; or embedding the user image in the reserved positions of a preset number of video templates according to the current performance information of the terminal device, where the preset number is determined by the current performance information of the terminal device, and the higher the current performance of the terminal device, the larger the preset number can be. Specifically, after the user image is obtained, a face recognition technology may be used to identify the face area on the user image, and then the face area is fused with the reserved position area of the video template.
Example 1: for the case where the user image is acquired after acquiring the multiple video templates matching the video theme, the multiple video templates can be displayed once the templates matching the video theme configured by the user are acquired. At least one target template is determined according to the user's selection operation on the video templates. A user image is then acquired and embedded in the reserved position of the at least one target template to generate the video required by the user.
Example 2: for the case where the multiple video templates matching the configured video theme are acquired after the user image, once the user image is acquired it can be directly embedded in the reserved position of every video template, or in the reserved positions of only some of the video templates, to generate at least one video from which the user can then select the desired video.
All of the generated videos may be stored locally on the device, or the video selected by the user may be stored locally on the device according to the user's video selection operation.
S104. Obtain a to-be-published video from the at least one video.
The to-be-published video in this embodiment may be understood as a video selected by the user from the at least one video generated above, or as a video generated based on a template selected by the user from the multiple video templates obtained above. For example, in one implementation, after the multiple video templates are obtained, a step of displaying the multiple video templates to the user may also be included, so that the user selects at least one of them as a target template. On this basis, the user can continue to select one or more templates from the selected target templates, and the videos generated after embedding the user image into those templates are taken as the videos to be published. Alternatively, after the user selects at least one target template from the multiple video templates matching the video theme, the obtained user image may first be embedded in those target templates to generate at least one video, and the generated videos are then displayed to the user so that the user can select the video to be published from them. For example, in one example, the videos generated based on the target templates can be displayed through a preset first display interface, on which the user can select the video to be published; the first display interface may also include a first button used by the user to trigger a video publishing instruction, and the position of the first button on the first display interface can be determined according to the interface layout; when the first button is triggered, the video selected by the user is published to the preset video playback platform. For example, FIG. 4 is a schematic display diagram of a first display interface provided by an embodiment of the present disclosure. As shown in FIG. 4, the first display interface can display the generated videos in the form of a list and supports the user sliding left and right to switch the video currently displayed on the interface; if the currently displayed video is the focus video (i.e., the selected video), the user can trigger the publishing operation of the focus video by touching the first button 41. That is to say, in this embodiment the to-be-published video may be a video selected by the user from the videos generated based on the target templates, or a video generated based on a template selected by the user from the target templates.
S105. Publish the to-be-published video to a preset video playback platform.
The solution provided by the embodiments of the present disclosure presets multiple video templates for each video theme, pre-designs corresponding scene materials in the video templates, and reserves the embedding position of the user image in each video template (that is, the user's face information is fused with the video template). In this way, during video production, as long as the user image is embedded into multiple video templates, at least one video can be generated at a time without requiring the user to shoot repeatedly, which simplifies user operations. This solves the problem in existing solutions that the user needs to repeatedly shoot images when generating at least one video, and improves video generation efficiency and the user experience. Moreover, the pre-designed scene materials not only help users better express the subject content and improve the quality and interest of the video, but also reduce the requirements on the user's shooting ability and help users better express the intended theme: even if the shooting quality of the user image is poor, high-quality videos can be generated for the user based on the video templates, which improves users' enthusiasm for making videos and solves the problem in existing solutions that the shooting quality of the user image directly affects the quality of the generated video. In addition, for video consumers, the improvement in video quality also improves the viewing experience.
In one implementation, the method provided by the embodiment of the present disclosure further includes:
displaying, on a third display interface, a video on the preset video playback platform to the user, where the video is also a video generated by the method of the embodiment of FIG. 1 above.
In the embodiment of the present disclosure, the third display interface may include a first icon; when it is detected that the user performs a preset touch operation on the first icon on the third display interface, the user is provided with an interactive interface for interacting with the publisher of the video;
interaction information is generated based on an operation on a preset option detected on the interactive interface, and the interaction information is sent to the publisher of the video.
The preset options on the interactive interface may include at least one of the following: an option for sending a message, an option for greeting, and an option for viewing the video publishing record. These options can be used to trigger, but are not limited to, operations such as sending a message to the video publisher, greeting the video publisher, and viewing the video publishing record of the video publisher (for example, videos used to express the user's historical moods). The interactive interface may be implemented by being superimposed on the third display interface; alternatively, it may be implemented as a new interface entered by switching from the third display interface; alternatively, it may also be implemented by first switching from the third display interface to a new interface and then superimposing the interactive interface on that new interface. The new interface may be the user information interface of the video publisher, and the first icon may be the user avatar icon of the video publisher; after the current user touches the first icon, in addition to triggering the display of the interactive interface, if the video publisher is not yet followed by the user, the touch can also make the user follow the video publisher. The display position of the first icon on the third display interface may be determined based on the page design, and the shape of the first icon may also be determined flexibly.
The interactive interface supports the user's touch operation or information input operation, so that interaction information is generated according to the user's touch operation or information input operation. For example, preset interactive sentences may be displayed on the interactive interface, and according to the user's selection operation on a sentence, the selected sentence is taken as the interaction information to be sent to the video publisher. The interaction information may automatically trigger the sending operation to the video publisher once it is generated, or it may be sent after a sending instruction triggered by the user is received. For example, a confirm button and a cancel button may be displayed on the interactive interface, where the confirm button is used by the user to trigger the sending instruction and the cancel button is used by the user to trigger a cancel-sending instruction. In the embodiments of the present disclosure, by switching from the third display interface used for playing videos to the interactive interface, flexible interaction between users and video publishers is realized, which enriches the ways users can interact with each other and makes the interaction more flexible.
FIG. 5 is a schematic display diagram of a third display interface provided by an embodiment of the present disclosure, that is, a video playback interface. A first icon 51 is displayed on the third display interface; after the user touches the first icon 51, the terminal device can display the interactive interface. FIG. 6 is a schematic display diagram of an interactive interface provided by an embodiment of the present disclosure. Specifically, the interactive interface is implemented by switching from the third display interface to the user information interface of the video publisher and then being superimposed on that user information interface. As shown in FIG. 6, the interactive interface supports sending messages, greeting, and viewing the other party's historically published video records. It should be noted that FIG. 6 is only an example in which each interactive function supported on the interactive interface is implemented in the form of an independent interactive interface; it should be understood that the multiple interactive functions supported on the interactive interface may also be displayed in an integrated manner. In addition, other information may also be displayed on the interactive interface, such as the video theme currently configured by the user (for example, the mood currently configured by the user) and the user's account name, which can be adjusted according to the interface design and is not specifically limited in the embodiments of the present disclosure.
FIG. 7 is a flowchart of another video generation method provided by an embodiment of the present disclosure, which further optimizes and extends the above technical solution and can be combined with each of the optional implementations above. FIG. 7 illustrates the technical solution using, as an example, the case where the video theme is the mood set by the user.
As shown in FIG. 7, the video generation method provided by the embodiment of the present disclosure includes:
S201: Receive a mood configuration instruction triggered by the user.
For example, the user may trigger the mood configuration instruction through a mood configuration button on the user information interface provided by the video interaction application. The mood configuration instruction may instruct the terminal device to display a mood configuration interface.
S202: Output the mood configuration interface according to the mood configuration instruction.
The mood configuration interface includes a button for sharing the mood; when the user triggers the button, the user's video production instruction is received.
FIG. 8 is a schematic diagram of a user information interface displaying a mood configuration button. As shown in FIG. 8, the mood configuration button 82 is displayed at the lower right of the user icon 81; after the user touches button 82, the terminal device displays the mood configuration interface. This interface may be a single interface: after selecting the current mood from the displayed mood icons, the user triggers the video production instruction by touching the mood-sharing button. The interface may also consist of a mood selection sub-interface (showing multiple mood icons) and a mood configuration display sub-interface on which the sharing button is shown; in that case, after the user touches button 82, the terminal device first displays the mood selection sub-interface and switches to the mood configuration display sub-interface once the mood selection is completed. The mood configuration interface may be overlaid on the user information interface, or implemented as a new interface switched to from it.
FIG. 9 is a schematic diagram of a mood configuration interface, which is overlaid on the user information interface and consists of a mood selection sub-interface and a mood configuration display sub-interface, the latter showing a button 91 for sharing the mood. After the user selects the current mood on the selection sub-interface, for example "super happy", the terminal device may automatically switch to the configuration display sub-interface according to the user's selection of the mood icon, and the user then triggers the video production instruction by touching button 91. The layout of the mood configuration interface, the style of the mood icons, and the number of icons shown in FIG. 9 are only examples and can be designed flexibly as needed; the embodiments impose no specific limitation. In addition, the mood configuration interface, for example its configuration display sub-interface, may also show the user icon and a prompt indicating that the mood was configured successfully.
S203: Obtain the mood configured by the user on the mood configuration interface.
S204: Receive the user's video production instruction.
The video production instruction is triggered when the user touches the mood-sharing button on the mood configuration interface.
S205: According to the video production instruction, obtain multiple video templates matching the mood configured by the user, where each video template includes preset scenario material and a reserved position for the user image.
S206: Display the multiple video templates on a second display interface, so that the user selects at least one of them as a target template.
S207: Acquire the user image.
The second display interface includes a third button; the step of acquiring the user image is performed when the user triggers the third button. In one implementation, the second display interface also shows prompt information guiding the user to trigger the third button and enter the shooting interface.
As an example, FIG. 2 can serve as the schematic diagram of the second display interface: the terminal device determines at least one target template according to the user's selection on the second display interface, then switches to the shooting interface according to the user's touch on the third button (e.g., the "Take a photo" button 21 in FIG. 2) and obtains the user image captured by the user on that interface.
S208: Embed the user image into the reserved position of the target template, so that the user image is combined with the scenario material of the target template to generate at least one video.
In one implementation, embedding the user image into the reserved positions of at least some of the multiple video templates includes: replacing a preset image in the target template with the user image. The preset image is an image containing a sample user's face region, such as a cartoon character image, set in advance when the video template was created. After the face in the user image is identified by face recognition, the face region of the user image replaces the face region of the preset image in the target template, thereby generating at least one video. Replacing the preset image in the target template with the user image makes it convenient to generate multiple videos from one user image in a single pass.
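A hedged sketch of that replacement applied frame by frame to a template clip is shown below; the fixed reserved rectangle, the file paths, and the codec choice are assumptions, and the simple paste stands in for whatever fusion technique an implementation actually uses:

```python
# Illustrative only: paste the user's face crop over the template's reserved
# rectangle in every frame of the template clip and write out a new video.
import cv2

def render_video_from_template(template_path, output_path, face_region_bgr,
                               reserved_rect):
    x, y, w, h = reserved_rect                     # template's reserved position
    face_patch = cv2.resize(face_region_bgr, (w, h))
    capture = cv2.VideoCapture(template_path)
    fps = capture.get(cv2.CAP_PROP_FPS)
    size = (int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, size)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frame[y:y + h, x:x + w] = face_patch       # swap the placeholder region
        writer.write(frame)
    capture.release()
    writer.release()

# Hypothetical usage, one call per selected target template:
# render_video_from_template("template_happy.mp4", "out_happy.mp4",
#                            face_crop, (400, 120, 200, 200))
```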
After the at least one video is generated, the terminal device may publish the focus video selected by the user to the preset video playback platform according to the user's selection among the generated videos.
According to the technical solution of the embodiments of the present disclosure, the mood configuration interface is displayed in response to the user's mood configuration instruction and the mood configured on that interface is obtained; after the user's video production instruction is received, multiple video templates matching that mood are obtained and displayed; at least one target template is determined according to the user's template selection; and finally the user image is embedded into the target template(s) to generate at least one video expressing the user's mood. During video production, at least one video can thus be generated in a single pass simply by embedding the user image into the target templates, without the user having to shoot repeatedly for different videos, which simplifies operation and improves user experience. The pre-designed scenario material in the templates helps the user better express the current mood and improves the quality and appeal of the video, while also lowering the demands on the user's shooting skills: even if the user image is of mediocre quality, high-quality videos can still be generated from the templates, which encourages users to create videos; for video consumers, the improved quality also improves the viewing experience.
On the basis of the above technical solution, in one implementation, after the user's video production instruction is received, the method provided by the embodiments of the present disclosure further includes:
determining whether the number of times the user has shared a mood within a preset duration exceeds a preset threshold;
if so, outputting prompt information indicating that this mood sharing cannot be performed;
if not, performing the operation of obtaining, according to the video production instruction, the user image and the multiple video templates matching the theme configured by the user.
The value of the preset threshold is set according to the preset duration: the longer the duration, the larger the threshold may be. For example, if the preset duration is 24 hours, the threshold may be 1, meaning that the user can share a mood only once per day. Effectively controlling the number of mood shares within the preset duration relieves the resource consumption of the video playback platform and prevents the platform from being overwhelmed, or from responding more slowly to other requests, when it receives too many mood-sharing requests within that duration, which would otherwise degrade the user's video sharing experience.
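As a minimal illustration of this check (the in-memory timestamp store and the default parameter values are assumptions; the description only requires comparing the share count within a preset duration against a threshold):

```python
# Sketch of the mood-sharing limit: at most `threshold` shares per
# `window_seconds`. Timestamps are kept in memory purely for illustration.
import time
from collections import defaultdict

class MoodShareLimiter:
    def __init__(self, window_seconds=24 * 3600, threshold=1):
        self.window_seconds = window_seconds
        self.threshold = threshold
        self._history = defaultdict(list)  # user_id -> list of share times

    def can_share(self, user_id, now=None):
        now = time.time() if now is None else now
        recent = [t for t in self._history[user_id]
                  if now - t < self.window_seconds]
        self._history[user_id] = recent
        return len(recent) < self.threshold

    def record_share(self, user_id, now=None):
        self._history[user_id].append(time.time() if now is None else now)

# Hypothetical usage before starting video production:
# limiter = MoodShareLimiter()
# if not limiter.can_share("user-123"):
#     print("You cannot share a mood again today.")
```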
FIG. 10 is a flowchart of a mood setting process provided as an illustration of the embodiments of the present disclosure and should not be read as a specific limitation. As shown in FIG. 10, the mood setting process may include: the terminal device confirms the mood selected by the user according to the user's selection of a mood icon on the mood configuration interface; the terminal device checks whether the user currently has a mood state, and if not, sets the selected mood as the current mood, otherwise overwrites the current mood with the newly selected one. After the user's video production instruction is received, the device checks whether the user has already published a mood video today: if so, a prompt appears indicating that no mood video can be published today; if not, the user may publish one, i.e., the terminal device obtains, according to the video production instruction, the multiple video templates matching the configured mood and displays them so that the user selects at least one target template. If a target template contains the user's historical image (a user image previously inserted into that template) and a publishing instruction is received, the video of the template the user selects from the target templates is published to the preset video playback platform and this video production session ends. If a shooting instruction is received, the operation of acquiring the user image is performed and the video is generated by replacing the historical image in the target template with the newly acquired user image. If the target template contains no historical user image, the user image is captured directly according to the user's shooting instruction and embedded into the reserved position of the target template to generate the video.
In this embodiment, video templates that already contain the user's historical image are shown to the user; when the user chooses such a template for publishing, the video generated by combining the historical image with that template can be published directly to the preset video playback platform, improving publishing efficiency.
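Purely to illustrate the branching of FIG. 10, the sketch below strings the steps together in one function; every function, attribute, and callback name is hypothetical, since the actual flow is driven by the interfaces of FIGS. 10 to 12:

```python
# Hedged sketch of the FIG. 10 decision flow for one production session.
def run_mood_video_flow(user, selected_mood, limiter, get_templates,
                        capture_user_image, render_video, publish):
    user.current_mood = selected_mood            # set or overwrite the mood
    if not limiter.can_share(user.user_id):      # already shared today?
        return "You cannot publish a mood video again today."

    templates = get_templates(selected_mood)     # theme-matching templates
    target = user.pick_template(templates)       # user selects a target

    if target.historical_video is not None and user.wants_publish():
        publish(target.historical_video)         # reuse the earlier result
    else:
        image = capture_user_image()             # shooting-instruction path
        video = render_video(target, image)      # replaces any historical image
        publish(user.pick_video([video]))
    limiter.record_share(user.user_id)
    return "published"
```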
FIG. 11 is a schematic diagram of a second display interface provided by an embodiment of the present disclosure. As shown in FIG. 11, taking the user-configured mood "super happy" as an example, the currently displayed video template is a target template, selected by the user, that includes the user's historical image; by touching the second button 111 on the second display interface the user triggers the publishing operation for the video on that target template. That is, if the user produced a historical video with the current target template during earlier video production, then when the user selects that template again, the historical video on it can be used directly as the current video. Taking the mood theme as an example, a historical video expressing the user's past mood can serve as the video expressing the current mood (i.e., the current mood is the same as the past one) and be published to the video playback platform, which improves the efficiency of video sharing.
FIG. 12 is a schematic diagram of a video publishing interface. After the user touches the second button on the second display interface and triggers the publishing operation for the video on the target template, the terminal device may switch to the video publishing interface. As shown in FIG. 12, after finishing editing the publishing information, the user can touch the publish button 1202 to publish the pending video to the video playback platform. The publishing interface may also include a drafts button 1201; after touching it, the user can store the pending video in the drafts box. The next time a video is published, the video stored in the drafts box can be published directly or edited first, which helps improve sharing efficiency.
FIG. 13 is a flowchart of another video generation method provided as an illustration of the embodiments of the present disclosure and should not be read as a specific limitation. As shown in FIG. 13, the method may include: after determining that the user may publish a video and obtaining the multiple video templates matching the configured mood, the terminal device determines whether the user is a new user, i.e., a user who has never before embedded a face image into any video template to generate a personal video. If so, when the user taps the "Take a photo" button on the template display interface, the face shooting interface (the user image shooting interface described above) is entered and the user's face image captured there is obtained; the face image is then fused with the multiple video materials (i.e., video templates) to obtain at least one video carrying the user's face; finally, the video to be published is determined from the generated videos according to the user's selection and is published. If the user is not a new user, i.e., videos carrying the user's face image were generated before the current time, the video on a face-fused template selected by the user can be published directly to the preset video playback platform according to the user's publishing instruction. If such videos were generated before but the user still triggers the shooting instruction, the face shooting interface is entered anyway, and the user's face image is fused with the multiple video materials (i.e., video templates) to obtain at least one video carrying the user's face.
FIG. 14 is a schematic structural diagram of a video generation apparatus 1400 provided by an embodiment of the present disclosure. The apparatus may be implemented in software and/or hardware and may be integrated in any terminal device.
As shown in FIG. 14, the video generation apparatus 1400 provided by the embodiment of the present disclosure may include a first acquisition module 1401, a second acquisition module 1402, a video generation module 1403, a third acquisition module 1404, and a first publishing module 1405, where:
the first acquisition module 1401 is configured to obtain the video theme configured by the user and a video production instruction;
the second acquisition module 1402 is configured to obtain, according to the video production instruction, a user image and multiple video templates matching the video theme, where each video template includes preset scenario material and a reserved position for the user image;
the video generation module 1403 is configured to embed the user image into the reserved positions of at least some of the multiple video templates, so that the user image is respectively combined with the scenario material of those templates to generate at least one video;
the third acquisition module 1404 is configured to obtain the video to be published from the at least one video;
the first publishing module 1405 is configured to publish the video to be published to a preset video playback platform.
In a possible implementation, the video to be published includes a video selected by the user from the at least one video, or a video generated based on a template selected by the user from the multiple video templates.
In a possible implementation, the video theme includes a mood configured by the user on a mood configuration interface.
In a possible implementation, the video generation apparatus 1400 provided by the embodiment of the present disclosure further includes:
a mood sharing count determination module, configured to determine whether the number of times the user has shared a mood within a preset duration exceeds a preset threshold;
a prompt information output module, configured to, if so, output prompt information indicating that the mood sharing count has been exceeded;
the second acquisition module 1402 is specifically configured to, if not, perform the operation of obtaining, according to the video production instruction, the user image and the multiple video templates matching the video theme configured by the user.
In a possible implementation, the second acquisition module 1402 is specifically configured to:
acquire the user image after the multiple video templates matching the video theme configured by the user have been obtained.
In a possible implementation, the video generation apparatus 1400 provided by the embodiment of the present disclosure further includes:
a second display module, configured to display the multiple video templates to the user, so that the user selects at least one of them as a target template;
a second publishing module, configured to, when the reserved position of the target template contains the user's historical image and a publishing instruction is received from the user, publish the video of the template selected by the user from the target templates to the preset video playback platform and end this video production session;
the video generation module is configured to, when a shooting instruction is received from the user, perform the operation of acquiring the user image, and, in the operation of embedding the user image into the reserved positions of at least some of the multiple video templates, replace the historical image in the target template with the user image.
In a possible implementation, the video generation apparatus 1400 provided by the embodiment of the present disclosure further includes:
an expression adjustment module, configured to adjust the user's expression in the user image based on a preset model, so that the expression matches the video theme configured by the user.
In a possible implementation, the video generation apparatus 1400 provided by the embodiment of the present disclosure further includes:
a video playback module, configured to play, on a display interface, a video from the video playback platform, where the video on the platform is a video generated based on the above video templates;
an interaction interface display module, configured to provide, when a preset touch operation is detected on the display interface, an interaction interface for interacting with the publisher of the video;
an interaction information sending module, configured to generate interaction information based on the user's operation on an option of the interaction interface and send the interaction information to the publisher of the video, the options on the interaction interface including at least one of: an option for sending a message, an option for saying hello, and an option for viewing video publishing records.
The video generation apparatus provided by the embodiments of the present disclosure can perform any video generation method provided by the embodiments of the present disclosure and has the corresponding functional modules and beneficial effects. For details not described exhaustively in the apparatus embodiments, refer to the description in any method embodiment of the present disclosure.
FIG. 15 is a schematic structural diagram of a terminal device provided by an embodiment of the present disclosure. As shown in FIG. 15, the terminal device 1500 includes one or more processors 1501 and a memory 1502.
The processor 1501 may be a central processing unit (CPU) or another form of processing unit with data processing and/or instruction execution capabilities, and may control other components in the terminal device 1500 to perform desired functions.
The memory 1502 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache. Non-volatile memory may include, for example, read-only memory (ROM), hard disks, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1501 may run the program instructions to implement any video generation method provided by the embodiments of the present disclosure as well as other desired functions. Various contents such as input signals, signal components, and noise components may also be stored in the computer-readable storage medium.
In one example, the terminal device 1500 may further include an input device 1503 and an output device 1504, interconnected by a bus system and/or other forms of connection mechanisms (not shown).
In addition, the input device 1503 may include, for example, a keyboard and a mouse.
The output device 1504 may output various information to the outside, including determined distance information and direction information, and may include, for example, a display, a speaker, a printer, and a communication network and the remote output devices connected to it.
Of course, for simplicity, FIG. 15 shows only some of the components of the terminal device 1500 relevant to the present disclosure, omitting components such as buses and input/output interfaces. In addition, the terminal device 1500 may include any other appropriate components depending on the specific application.
Besides the above methods and devices, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when run by a processor, cause the processor to perform any video generation method provided by the embodiments of the present disclosure.
The computer program product may carry program code for performing the operations of the embodiments of the present disclosure written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's terminal device, partly on the user's device, as a stand-alone software package, partly on the user's terminal device and partly on a remote terminal device, or entirely on a remote terminal device or server.
In addition, embodiments of the present disclosure may also be a computer-readable storage medium storing computer program instructions that, when run by a processor, cause the processor to perform any video generation method provided by the embodiments of the present disclosure.
The computer-readable storage medium may use any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above are only specific implementations of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not limited to the embodiments described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

  1. A video generation method, comprising:
    obtaining a video theme configured by a user and a video production instruction;
    obtaining, according to the video production instruction, a user image and multiple video templates matching the video theme, wherein each video template comprises preset scenario material and a reserved position for the user image;
    embedding the user image into the reserved positions of at least some of the multiple video templates, so that the user image is respectively combined with the scenario material of the at least some video templates to generate at least one video;
    obtaining a video to be published from the at least one video; and
    publishing the video to be published to a preset video playback platform.
  2. The method according to claim 1, wherein the video to be published comprises a video selected by the user from the at least one video, or a video generated based on a template selected by the user from the multiple video templates.
  3. The method according to claim 1, wherein the video theme comprises a mood configured by the user on a mood configuration interface.
  4. The method according to claim 3, wherein after obtaining the video theme configured by the user and the video production instruction, the method further comprises:
    determining whether the number of times the user has shared a mood within a preset duration exceeds a preset threshold;
    wherein, if so, outputting prompt information indicating that the mood sharing count has been exceeded;
    if not, performing the step of obtaining, according to the video production instruction, the user image and the multiple video templates matching the video theme.
  5. The method according to claim 1, wherein obtaining, according to the video production instruction, the user image and the multiple video templates matching the video theme comprises:
    acquiring the user image after the multiple video templates matching the video theme have been obtained.
  6. The method according to claim 5, wherein after the multiple video templates matching the video theme have been obtained, the method further comprises:
    displaying the multiple video templates to the user, so that the user selects at least one of the multiple video templates as a target template;
    when the reserved position of the target template contains a historical image of the user, if a publishing instruction of the user is received, publishing the video of the template selected by the user from the target template to a preset video playback platform and ending this video production session;
    if a shooting instruction of the user is received, performing the operation of acquiring the user image, and, in the operation of embedding the user image into the reserved positions of at least some of the multiple video templates, replacing the historical image in the target template with the user image.
  7. The method according to any one of claims 1-6, wherein after acquiring the user image, the method further comprises:
    adjusting the user's expression in the user image based on a preset model, so that the expression matches the video theme.
  8. The method according to claim 1, further comprising:
    playing, on a display interface, a video from the video playback platform, wherein the video on the video playback platform is a video generated based on the video templates;
    when a preset touch operation is detected on the display interface, providing an interaction interface for interacting with a publisher of the video;
    generating interaction information based on an operation on a preset option detected on the interaction interface, and sending the interaction information to the publisher, the preset option comprising at least one of: an option for sending a message, an option for saying hello, and an option for viewing video publishing records.
  9. A video generation apparatus, comprising:
    a first acquisition module, configured to obtain a video theme configured by a user and a video production instruction;
    a second acquisition module, configured to obtain, according to the video production instruction, a user image and multiple video templates matching the video theme, wherein each video template comprises preset scenario material and a reserved position for the user image;
    a video generation module, configured to embed the user image into the reserved positions of at least some of the multiple video templates, so that the user image is respectively combined with the scenario material of the at least some video templates to generate at least one video;
    a third acquisition module, configured to obtain a video to be published from the at least one video; and
    a first publishing module, configured to publish the video to be published to a preset video playback platform.
  10. A terminal device, comprising a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the processor performs the video generation method according to any one of claims 1-8.
  11. A computer-readable storage medium, wherein the storage medium stores a computer program, and when the computer program is executed by a processor, the processor performs the video generation method according to any one of claims 1-8.
PCT/CN2021/139606 2020-12-31 2021-12-20 视频生成方法、装置、设备及存储介质 WO2022143253A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2023535441A JP2023553622A (ja) 2020-12-31 2021-12-20 ビデオ生成方法、装置、機器および記憶媒体
EP21913994.6A EP4243427A4 (en) 2020-12-31 2021-12-20 VIDEO PRODUCTION METHOD AND APPARATUS, APPARATUS AND STORAGE MEDIUM
US18/331,340 US20230317117A1 (en) 2020-12-31 2023-06-08 Video generation method and apparatus, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011626264.XA CN112866798B (zh) 2020-12-31 2020-12-31 视频生成方法、装置、设备及存储介质
CN202011626264.X 2020-12-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/331,340 Continuation US20230317117A1 (en) 2020-12-31 2023-06-08 Video generation method and apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022143253A1 true WO2022143253A1 (zh) 2022-07-07

Family

ID=75999456

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/139606 WO2022143253A1 (zh) 2020-12-31 2021-12-20 视频生成方法、装置、设备及存储介质

Country Status (5)

Country Link
US (1) US20230317117A1 (zh)
EP (1) EP4243427A4 (zh)
JP (1) JP2023553622A (zh)
CN (1) CN112866798B (zh)
WO (1) WO2022143253A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866798B (zh) * 2020-12-31 2023-05-05 北京字跳网络技术有限公司 视频生成方法、装置、设备及存储介质
CN113784058A (zh) * 2021-09-09 2021-12-10 上海来日梦信息科技有限公司 一种影像生成方法、装置、存储介质及电子设备
CN113852767B (zh) * 2021-09-23 2024-02-13 北京字跳网络技术有限公司 视频编辑方法、装置、设备及介质
CN115955594A (zh) * 2022-12-06 2023-04-11 北京字跳网络技术有限公司 一种图像处理方法及装置
CN116074594A (zh) * 2023-01-05 2023-05-05 北京达佳互联信息技术有限公司 视频处理方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549655A (zh) * 2018-03-09 2018-09-18 阿里巴巴集团控股有限公司 一种影视作品的制作方法、装置及设备
WO2018225968A1 (ko) * 2017-06-08 2018-12-13 주식회사안그라픽스 동영상 템플릿의 조합 시스템 및 그 방법
CN110266971A (zh) * 2019-05-31 2019-09-20 上海萌鱼网络科技有限公司 一种短视频制作方法和系统
CN110825912A (zh) * 2019-10-30 2020-02-21 北京达佳互联信息技术有限公司 视频生成方法、装置、电子设备及存储介质
CN111541950A (zh) * 2020-05-07 2020-08-14 腾讯科技(深圳)有限公司 表情的生成方法、装置、电子设备及存储介质
CN111935504A (zh) * 2020-07-29 2020-11-13 广州华多网络科技有限公司 视频制作方法、装置、设备及存储介质
CN112866798A (zh) * 2020-12-31 2021-05-28 北京字跳网络技术有限公司 视频生成方法、装置、设备及存储介质

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100153520A1 (en) * 2008-12-16 2010-06-17 Michael Daun Methods, systems, and media for creating, producing, and distributing video templates and video clips
US9247306B2 (en) * 2012-05-21 2016-01-26 Intellectual Ventures Fund 83 Llc Forming a multimedia product using video chat
CN104349175A (zh) * 2014-08-18 2015-02-11 周敏燕 一种基于手机终端的视频制作系统及方法
US10964078B2 (en) * 2016-08-10 2021-03-30 Zeekit Online Shopping Ltd. System, device, and method of virtual dressing utilizing image processing, machine learning, and computer vision
US9998796B1 (en) * 2016-12-12 2018-06-12 Facebook, Inc. Enhancing live video streams using themed experiences
WO2020150692A1 (en) * 2019-01-18 2020-07-23 Snap Inc. Systems and methods for template-based generation of personalized videos
US11394888B2 (en) * 2019-01-18 2022-07-19 Snap Inc. Personalized videos
CN109769141B (zh) * 2019-01-31 2020-07-14 北京字节跳动网络技术有限公司 一种视频生成方法、装置、电子设备及存储介质
US11151794B1 (en) * 2019-06-28 2021-10-19 Snap Inc. Messaging system with augmented reality messages
CN111243632B (zh) * 2020-01-02 2022-06-24 北京达佳互联信息技术有限公司 多媒体资源的生成方法、装置、设备及存储介质
US10943371B1 (en) * 2020-03-31 2021-03-09 Snap Inc. Customizing soundtracks and hairstyles in modifiable videos of multimedia messaging application
US11263260B2 (en) * 2020-03-31 2022-03-01 Snap Inc. Searching and ranking modifiable videos in multimedia messaging application
US11514203B2 (en) * 2020-05-18 2022-11-29 Best Apps, Llc Computer aided systems and methods for creating custom products
US11212383B2 (en) * 2020-05-20 2021-12-28 Snap Inc. Customizing text messages in modifiable videos of multimedia messaging application
US11704851B2 (en) * 2020-05-27 2023-07-18 Snap Inc. Personalized videos using selfies and stock videos
CN112073649B (zh) * 2020-09-04 2022-12-13 北京字节跳动网络技术有限公司 多媒体数据的处理方法、生成方法及相关设备
US20220100351A1 (en) * 2020-09-30 2022-03-31 Snap Inc. Media content transmission and management

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018225968A1 (ko) * 2017-06-08 2018-12-13 주식회사안그라픽스 동영상 템플릿의 조합 시스템 및 그 방법
CN108549655A (zh) * 2018-03-09 2018-09-18 阿里巴巴集团控股有限公司 一种影视作品的制作方法、装置及设备
CN110266971A (zh) * 2019-05-31 2019-09-20 上海萌鱼网络科技有限公司 一种短视频制作方法和系统
CN110825912A (zh) * 2019-10-30 2020-02-21 北京达佳互联信息技术有限公司 视频生成方法、装置、电子设备及存储介质
CN111541950A (zh) * 2020-05-07 2020-08-14 腾讯科技(深圳)有限公司 表情的生成方法、装置、电子设备及存储介质
CN111935504A (zh) * 2020-07-29 2020-11-13 广州华多网络科技有限公司 视频制作方法、装置、设备及存储介质
CN112866798A (zh) * 2020-12-31 2021-05-28 北京字跳网络技术有限公司 视频生成方法、装置、设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4243427A4

Also Published As

Publication number Publication date
CN112866798A (zh) 2021-05-28
EP4243427A4 (en) 2024-03-06
EP4243427A1 (en) 2023-09-13
CN112866798B (zh) 2023-05-05
JP2023553622A (ja) 2023-12-25
US20230317117A1 (en) 2023-10-05

Similar Documents

Publication Publication Date Title
WO2022143253A1 (zh) 视频生成方法、装置、设备及存储介质
WO2019086037A1 (zh) 视频素材的处理方法、视频合成方法、终端设备及存储介质
CN110312169B (zh) 视频数据处理方法、电子设备及存储介质
US9888207B2 (en) Automatic camera selection
WO2020029525A1 (zh) 视频封面生成方法、装置、电子设备及存储介质
US20150172238A1 (en) Sharing content on devices with reduced user actions
US20150264303A1 (en) Stop Recording and Send Using a Single Action
US10178346B2 (en) Highlighting unread messages
US20190246064A1 (en) Automatic camera selection
US9749585B2 (en) Highlighting unread messages
US20160205427A1 (en) User terminal apparatus, system, and control method thereof
CN110798615A (zh) 一种拍摄方法、装置、存储介质以及终端
US20150264309A1 (en) Playback of Interconnected Videos
WO2015142601A1 (en) Stop recording and send using a single action
CN106464976B (zh) 显示设备、用户终端设备、服务器及其控制方法
JP2024529251A (ja) メディアファイル処理方法、装置、デバイス、可読記憶媒体および製品
WO2023125159A1 (zh) 视频生成电路、方法和电子设备
CN110162350A (zh) 通知栏信息的显示方法、装置、服务器及存储介质
CN114007128A (zh) 一种显示设备及配网方法
CN114866636B (zh) 一种留言显示方法、终端设备、智能设备及服务器
CN117177066B (zh) 一种拍摄方法及相关设备
WO2023061057A1 (zh) 直播内容展示方法、装置、电子设备及可读存储介质
WO2021052115A1 (zh) 演唱作品的生成方法、发布方法和显示设备
CN117956216A (zh) 一种字幕的展示方法、装置、电子设备和存储介质
CN116471450A (zh) 一种视频内容生成方法、装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913994

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023535441

Country of ref document: JP

ENP Entry into the national phase

Ref document number: 2021913994

Country of ref document: EP

Effective date: 20230608

NENP Non-entry into the national phase

Ref country code: DE