CN116095388A - Video generation method, video playing method and related equipment - Google Patents

Video generation method, video playing method and related equipment

Info

Publication number
CN116095388A
CN116095388A
Authority
CN
China
Prior art keywords
video
effect
target
user
frame image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310101241.4A
Other languages
Chinese (zh)
Inventor
汪杰 (Wang Jie)
姜海洋 (Jiang Haiyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202310101241.4A
Publication of CN116095388A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 Recording operations
    • H04N21/439 Processing of audio elementary streams
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The disclosure relates to a video generation method, a video playing method, and related equipment. The video generation method includes: displaying a first video, wherein the first video comprises multiple frames of images; and displaying a second video in response to a preset operation performed by a user on the first video, wherein the second video is obtained by inserting an effect video segment into the first video, the preset operation triggers the selection of a target frame image in the first video and the addition of a target display effect, and the effect video segment is a video segment that is generated from the target frame image, displays the target display effect, and lasts for a preset playing duration. By automatically generating a video work containing the effect video segment in response to the user's operations of selecting a video frame image and adding a display effect, effect segments can be added to a video quickly and with simple operations, which lowers the professional threshold for video producers and meets the video editing needs of ordinary users.

Description

Video generation method, video playing method and related equipment
Technical Field
The disclosure relates to the technical field of video processing, and in particular to a video generation method, a video playing method, and related equipment.
Background
With the advent of various video platforms, publishing video works has become an important way for people to entertain themselves and share their stories. However, as video works proliferate on the network, people's quality expectations for video works keep rising. How to make one's own video work more attractive to viewers is a matter of great concern to video producers.
At present, schemes for adding effects to videos in the related art involve complex operations and demand a high level of expertise from video producers, making it difficult to meet the needs of ordinary users.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure provides a video generation method, a video playing method, and related equipment, so as to at least solve the technical problem that schemes for adding effects to videos in the related art are complex to operate. The technical solution of the present disclosure is as follows:
According to a first aspect of an embodiment of the present disclosure, there is provided a video generating method including: displaying a first video, wherein the first video comprises multiple frames of images; and displaying a second video in response to a preset operation performed by a user on the first video, wherein the second video is a video obtained by inserting an effect video segment into the first video, the preset operation is used to trigger the selection of a target frame image in the first video and the addition of a target display effect, and the effect video segment is a video segment that is generated from the target frame image, displays the target display effect, and lasts for a preset playing duration.
In some embodiments, displaying a second video in response to a preset operation performed by the user on the first video includes: determining the target frame image in response to a user's selection operation on any frame image in the first video; determining the target display effect to be added in response to an effect adding operation by the user; generating a corresponding effect video segment according to the target frame image and the target display effect; inserting the generated effect video segment into the first video at the position corresponding to the target frame image to obtain the second video; and displaying the second video.
In some embodiments, determining the target frame image in response to a user selection operation of any one frame image in the first video includes: displaying multi-frame images contained in the first video on a video time axis; and determining the target frame image in response to a user selection operation of the target moment on the video time axis.
In some embodiments, determining the target frame image in response to a user's selection operation on a target moment on the video time axis includes: displaying a time cursor on the video time axis; and determining the target frame image in response to the user's sliding operation of the time cursor on the video time axis.
In some embodiments, determining a target display effect to be added in response to an effect adding operation by a user includes: displaying one or more effect materials, wherein each effect material corresponds to one display effect; and determining the target display effect to be added in response to a user's selection operation on a target effect material, wherein the target effect material is any one of the displayed effect materials.
In some embodiments, after the generated effect video segment is inserted into the first video at the position corresponding to the target frame image to obtain the second video, the method further includes: displaying the multi-frame images contained in the second video on the video time axis; displaying the target effect material in a selected state in response to the user sliding the time cursor into a target time period on the video time axis, wherein the target time period is the time period during which the target display effect corresponding to the target effect material is displayed in the second video; and displaying the target effect material in a non-selected state in response to the user sliding the time cursor outside the target time period on the video time axis.
In some embodiments, the method further comprises: if the time cursor is located within a target time period on the video time axis, canceling the selected state of the target effect material in response to a user's selection operation on another effect material, and replacing the display effect in the target time period with the display effect corresponding to the other effect material; and if the time cursor is located outside the target time period on the video time axis, determining the display effect to be added for the image at the corresponding moment in response to the user's selection operation on any effect material.
In some embodiments, after determining the target display effect to be added in response to a user's selection operation on the target effect material, the method further comprises: acquiring a first playing duration of the target effect material; and generating an effect video segment with the first playing duration according to the target frame image and the target effect material.
In some embodiments, after determining the target display effect to be added in response to a user's selection operation on the target effect material, the method further comprises: acquiring a second playing duration custom-configured by the user; and generating an effect video segment with the second playing duration according to the target frame image and the target effect material.
In some embodiments, in the case that the target effect material includes text information, the method further includes: replacing the text information contained in the target effect material in response to a user's replacement operation on the text information in the target effect material.
In some embodiments, after replacing the text information contained in the target effect material in response to a user's replacement operation on the text information in the target effect material, the method further comprises: caching the replaced text information of the target effect material; and generating an effect video segment with the replaced text information in response to a secondary selection operation on the target effect material by the user, wherein the secondary selection operation is a selection operation performed again on the target effect material after the user has replaced the text information contained in the target effect material during a single video editing session.
In some embodiments, in the case that the target display effect corresponding to the target effect material includes a face effect, the method further includes: recognizing a face image contained in the target frame image; and generating a corresponding effect video segment according to the target frame image and the target effect material based on the face image recognition result.
In some embodiments, generating a corresponding effect video segment according to the target frame image and the target display effect includes: transmitting the target frame image to a server, wherein the server is configured to generate a corresponding effect video segment according to the target frame image and display information of the target display effect; and receiving the effect video segment returned by the server.
In some embodiments, generating a corresponding effect video segment according to the target frame image and the target display effect includes: acquiring display information of the target display effect; and generating the corresponding effect video segment according to the target frame image and the display information of the target display effect.
In some embodiments, after generating the corresponding effect video segment according to the target frame image and the target display effect, the method further comprises: adding corresponding audio and/or text to the effect video segment in response to the user's audio and/or text adding operation.
According to a second aspect of the embodiments of the present disclosure, there is also provided a video playing method, including: displaying a video, wherein the video comprises an effect video segment, and the effect video segment is generated from a target frame image selected from the video, displays an added target display effect, and lasts for a preset playing duration.
In some embodiments, the video played by the video playing method may be a video generated by any one of the video generating methods described above.
According to a third aspect of the embodiments of the present disclosure, there is also provided a video generating apparatus including: a first video presentation module configured to present a first video, wherein the first video comprises a plurality of frames of images; the second video display module is configured to display a second video in response to a preset operation of a user on the first video, wherein the second video is a video obtained after an effect video segment is inserted into the first video, the preset operation is used for triggering selection of a target frame image in the first video and addition of a target display effect, and the effect video segment is a video segment which is generated according to the target frame image, displays the target display effect and lasts for a preset playing duration.
In some embodiments, the second video display module may specifically include: a frame image selection module configured to determine the target frame image in response to a user selection operation of any one frame image in the first video; an effect adding module configured to determine a target display effect to be added in response to an effect adding operation of a user; the effect video segment generation module is configured to generate corresponding effect video segments according to the target frame image and the target display effect; the video generation module is configured to insert the generated effect video clip into a position corresponding to the target frame image in the first video to obtain the second video; and the display module is configured to display the second video.
In some embodiments, the frame image selecting module may specifically include: a video timeline presentation module configured to present a multi-frame image contained by the first video on a video timeline; and the video time axis selection module is configured to respond to the selection operation of a user on the target moment on the video time axis and determine the target frame image.
In some embodiments, the frame image selecting module may further include: a time cursor display module configured to display a time cursor on the video timeline; the video timeline selection module is further configured to determine the target frame image in response to a user sliding operation of a time cursor on a video timeline.
In some embodiments, the effect adding module may specifically include: an effect material display module configured to display one or more effect materials, wherein each effect material corresponds to one display effect; and an effect material selection module configured to determine the target display effect to be added in response to a user's selection operation on a target effect material, wherein the target effect material is any one of the displayed effect materials.
In some embodiments, the display module may specifically include: a first display sub-module configured to display the multi-frame images contained in the second video on the video time axis; a second display sub-module configured to display the target effect material in a selected state in response to the user sliding the time cursor into a target time period on the video time axis, wherein the target time period is the time period during which the target display effect corresponding to the target effect material is displayed in the second video; and a third display sub-module configured to display the target effect material in a non-selected state in response to the user sliding the time cursor outside the target time period on the video time axis.
In some embodiments, the display module may further include: a fourth display sub-module configured to, if the time cursor is located within a target time period on the video time axis, cancel the selected state of the target effect material in response to a user's selection operation on another effect material, and replace the display effect in the target time period with the display effect corresponding to the other effect material; and a fifth display sub-module configured to, if the time cursor is located outside the target time period on the video time axis, determine the display effect to be added for the image at the corresponding moment in response to a user's selection operation on any effect material.
In some embodiments, the aforementioned effect video clip generation module may specifically include: the effect video segment duration acquisition module is configured to acquire a first playing duration of the target effect material; and the first effect video segment generation module is configured to generate the effect video segment with the first playing duration according to the target frame image and the target effect material.
In some embodiments, the aforementioned effect video clip generation module may specifically include: the effect video clip duration configuration module is configured to acquire second playing duration configured by user definition; and the second effect video segment generation module is configured to generate the effect video segment with the second playing duration according to the target frame image and the target effect material.
In some embodiments, in the case that the target effect material includes text information, the video generating device may further include: and the effect material text information replacing module is configured to respond to the replacing operation of the user on the text information in the target effect material and replace the text information contained in the target effect material.
In some embodiments, in the case that the target effect material includes text information, the video generating device may further include: an effect material text information caching module configured to cache the replaced text information of the target effect material; the effect video segment generation module is further configured to generate an effect video segment with the replaced text information in response to a secondary selection operation on the target effect material by the user, where the secondary selection operation is a selection operation performed again on the target effect material after the user has replaced the text information contained in the target effect material during a single video editing session.
In some embodiments, the video generating apparatus may further include: a face image recognition module configured to recognize a face image contained in the target frame image; the effect video segment generating module is further configured to generate a corresponding effect video segment according to the target frame image and the target effect material based on the face image recognition result.
In some embodiments, the aforementioned effect video clip generation module is further configured to: transmit the target frame image to a server, wherein the server is configured to generate a corresponding effect video segment according to the target frame image and display information of the target display effect; and receive the effect video segment returned by the server.
In some embodiments, the aforementioned effect video clip generation module is further configured to: acquire display information of the target display effect; and generate the corresponding effect video segment according to the target frame image and the display information of the target display effect.
In some embodiments, the video generating apparatus may further include: and the effect video clip editing module is configured to respond to the adding operation of the user on the audio and/or the text, and add the corresponding audio and/or the text in the effect video clip.
According to a fourth aspect of embodiments of the present disclosure, there is also provided a video playback apparatus including: a video playing module configured to display a video, wherein the video comprises an effect video segment, and the effect video segment is generated from a target frame image selected from the video, displays an added target display effect, and lasts for a preset playing duration.
In some embodiments, the video played by the video playing device may be a video generated by any one of the video generating devices.
According to a fifth aspect of embodiments of the present disclosure, there is also provided an electronic device including: a processor; a memory configured to store the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video generation method of any one of the above, or the video playing method of the above.
According to a sixth aspect of embodiments of the present disclosure, there is also provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform any one of the video generation methods described above, or the video playback method described above.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program/instruction which, when executed by a processor, implements the video generation method of any one of the above, or the video playback method of the above.
According to the scheme provided by the embodiments of the disclosure, after a video work is displayed, a video work with an effect video segment is automatically generated in response to the user's triggering operations of selecting a video frame image and adding a display effect, so that effect segments can be added to a video rapidly with simple operations, which lowers the professional threshold for video producers and meets the video editing needs of ordinary users.
Further, the scheme provided by the embodiments of the disclosure not only supports adding multiple effect video segments to a video, but also supports modifying the text or audio in an effect video segment, and provides a friendly user interaction interface during video editing, thereby greatly improving the experience of video producers.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an exemplary system architecture to which the video generation/playback method of embodiments of the present disclosure may be applied;
FIG. 2 is a flowchart illustrating a video generation method according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating the addition of a red-eye effect to a video frame image, in accordance with an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating an effect addition triggering operation, according to an example embodiment;
FIG. 5 is a schematic diagram illustrating yet another effect addition triggering operation, according to an example embodiment;
FIG. 6 is a schematic diagram illustrating another effect addition triggering operation, according to an example embodiment;
FIG. 7 is a schematic diagram of an effects addition page shown in accordance with an exemplary embodiment;
FIG. 8 is a flowchart illustrating an alternative video generation method according to an example embodiment;
FIG. 9 is a flowchart illustrating one method of generating an effect video clip according to an exemplary embodiment;
FIG. 10 is a diagram illustrating replacement of the text information of an effect material, according to an example embodiment;
FIG. 11 is a schematic diagram illustrating a replacement effect material according to an example embodiment;
FIG. 12 is a schematic diagram illustrating a secondary addition effect according to an example embodiment;
FIG. 13 is a flowchart illustrating an addition of material effects to a video in accordance with an exemplary embodiment;
FIG. 14 is a flowchart illustrating a video playback method according to an exemplary embodiment;
FIG. 15 is a schematic diagram of a video generating apparatus according to an example embodiment;
FIG. 16 is a schematic diagram of a video playback device according to an exemplary embodiment; and
FIG. 17 is a schematic diagram of an electronic device, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 1 is a schematic diagram of an exemplary system architecture to which the video generation/playback method in an embodiment of the present disclosure may be applied. As shown in fig. 1, the system architecture may include a terminal device 101, a network 102, and a server 103. The network 102 may be a wired network or a wireless network.
The terminal device 101 may be any of a variety of electronic devices, including but not limited to a cell phone, tablet computer, laptop computer, desktop computer, wearable device, vehicle-mounted device, augmented reality device, virtual reality device, smart television, etc. The operating system employed by the terminal device 101 may be, but is not limited to: the Android system, Linux system, Windows system, iOS system, etc. The style of the user interface displayed for the same application may vary across operating systems.
The server 103 may be a server providing various services, which responds to requests initiated from the terminal device 101 and returns response results to the terminal device 101. Optionally, the server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms.
Those skilled in the art will appreciate that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative, and that any number of terminal devices, networks, and servers may be provided as desired. The present disclosure is not limited in this regard.
First, in an embodiment of the present disclosure, a video generating method is provided, which may be performed by any electronic device having computing processing capabilities.
In some embodiments, the video generating method provided in the embodiments of the present disclosure may be performed by a terminal device in the above system architecture; in other embodiments, the video generating method provided in the embodiments of the present disclosure may be implemented by the terminal device and the server in the system architecture in an interactive manner. The present disclosure is not limited in this regard.
Fig. 2 is a flowchart illustrating a video generating method according to an exemplary embodiment. As shown in fig. 2, the video generating method provided in an embodiment of the present disclosure may include the following steps:
s202, displaying a first video, wherein the first video comprises multi-frame images;
s204, responding to a preset operation of a user on the first video, and displaying a second video, wherein the second video is obtained after inserting an effect video segment into the first video, the preset operation is used for triggering the selection of a target frame image in the first video and the addition of a target display effect, and the effect video segment is a video segment which is generated according to the target frame image, displays the target display effect and lasts for a preset playing duration.
It should be noted that, in the embodiments of the present disclosure, the first video refers to the video to which an effect is to be added, and the second video refers to the video after the effect has been added. In some embodiments, when effects need to be added to the first video multiple times, the second video may be the video after one effect has been added, or the video after all effects have been added. The first video in the embodiments of the disclosure may be any video containing multiple frames of images; it may be a video recorded by real-time shooting, a video stored on any electronic device, or a video played over a network. The frames contained in the first video may be identical to or different from one another. It should also be noted that, in the embodiments of the present disclosure, the frame image may be an actual frame image in the video, or any image presented as a frame of the video.
In some embodiments, the video generating method provided in the embodiments of the present disclosure may also be used to perform effect addition on a certain image, and generate a video clip corresponding to the image.
For some dynamic effects, a user often wants to add the effect to just one picture; however, the pictures in a video change continuously, so the related art cannot freeze an effect on a particular picture and keep the picture, with the effect added, on screen for a period of time. The video generation method provided by the embodiments of the disclosure therefore selects one frame image from the original video and, after the corresponding display effect is added, lets that frame last for a period of time.
As shown in fig. 3, for a first video containing 6 frame images, after the 2nd frame image is selected as the target frame image to which an effect is to be added, if the effect added for the target frame image is the "red-eye effect", an effect video segment displaying the red-eye effect may be generated from the 2nd frame image and inserted into the first video at the position corresponding to the target frame image, so as to obtain the second video. Assuming the playing duration of the red-eye effect is 3 s, the effect video segment generated from the 2nd frame image plays for 3 s, and the playing duration of the second video is 3 s longer than that of the first video.
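For illustration only, the following Python sketch shows one way such a frozen-frame splice could be implemented with OpenCV; the function name and the `effect_fn` callback are hypothetical, not part of the disclosure, and audio handling is omitted for brevity.

```python
import cv2

def insert_effect_segment(src_path, dst_path, target_idx, effect_fn, effect_secs=3.0):
    """Splice an effect segment, built from one frozen frame, into a video.

    Mirrors the fig. 3 example: the target frame, with the display effect
    applied, is repeated for effect_secs seconds right after its position.
    """
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(frame)
        if idx == target_idx:
            styled = effect_fn(frame)  # frame with the target display effect applied
            for _ in range(int(round(effect_secs * fps))):
                out.write(styled)      # freeze the styled frame for the duration
        idx += 1
    cap.release()
    out.release()
```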
In some embodiments, the formats of the first video and the second video in the embodiments of the present disclosure may be, but are not limited to, any one of the following: MP4 (MPEG-4 Part 14), AVI (Audio Video Interleave), ASF (Advanced Systems Format), WMV (Windows Media Video), and the like. The embodiments of the present disclosure do not specifically limit the format of the video.
In addition, it should be noted that, in the embodiments of the present disclosure, the preset operation for triggering the selection of the target frame image in the first video and the addition of the target display effect may be one operation or several operations, and the embodiments of the present disclosure do not limit the specific form of the operation. For example, in some embodiments, as shown in fig. 4, the target frame image may be selected by pausing the first video on a certain picture, and simple display effects (such as background blurring) generated from the target frame image can then be added and shown for a period of time. In yet other embodiments, as shown in fig. 5, after the target frame image is selected by pausing the playback picture, one or more buttons for adding display effects (such as effect 1, effect 2, effect 3, and effect 4 shown in fig. 5) may be displayed on the interface displaying the first video, and different display effects can be added by clicking the corresponding buttons (e.g., clicking the effect 1 button adds effect 1 to the currently selected target frame image).
In other embodiments, as shown in fig. 6, an effect adding trigger button (such as the special-effect button shown by icon 601) may be added to the interface displaying the first video. After the user clicks the effect adding trigger button, an effect adding page for the corresponding video is entered; as shown in fig. 7, this page may display the multi-frame images contained in the first video (the area shown by icon 602) and one or more effect materials (the area shown by icon 603), so that the user can add the corresponding effect by selecting an effect material. In the embodiments of the disclosure, various rich effects can be added to a video by providing a large number of effect materials.
In some embodiments, as shown in fig. 8, S204 may be implemented specifically by the following steps:
s802, determining a target frame image in response to a selection operation of any frame image in the first video by a user.
In one embodiment of the present disclosure, the target frame image to which the effect is to be added may be selected by: displaying multi-frame images contained in a first video on a video time axis; and determining a target frame image in response to a selection operation of a target time on the video time axis by a user. As shown in fig. 7, the multi-frame image contained in the first video may be presented through a video time axis (as shown by an icon 602) so that the user selects any one of the frame images in the first video as a target frame image to be added with an effect.
In the above embodiment, the multi-frame images contained in the first video are displayed on the video time axis, so that the user can conveniently select any frame image as the target frame image to which an effect is to be added.
In another embodiment of the present disclosure, the target frame image to which the effect is to be added may be selected by: displaying multi-frame images contained in a first video on a video time axis, and displaying a time cursor on the video time axis; the target frame image is determined in response to a sliding operation of the time cursor by the user on the video time axis. As shown in fig. 7, after the multi-frame images included in the first video are displayed through the video time axis, a time cursor (as shown by an icon 604) may be displayed on the video time axis, so that the user selects any one of the frame images in the first video as a target frame image of an effect to be added by sliding the time cursor.
In the above embodiment, the target frame image to which an effect is to be added is selected with the time cursor, and the user can select the image at any moment for effect addition.
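Assuming a constant frame rate, the mapping from the cursor position to the target frame image implied here is a simple rounding; the following is a minimal sketch with illustrative names.

```python
def frame_index_at(cursor_secs: float, fps: float, total_frames: int) -> int:
    """Map the time cursor position on the video time axis to a frame index."""
    idx = int(round(cursor_secs * fps))
    return max(0, min(idx, total_frames - 1))  # clamp to the valid frame range
```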
S804, in response to the effect adding operation of the user, determining a target display effect to be added.
Note that the effect adding operation in S804 may be, but is not limited to, any one of the manners shown in figs. 4 to 6, among which the manner shown in fig. 6 supports adding richer display effects.
S806, generating a corresponding effect video segment according to the target frame image and the target display effect.
In implementation, after the target frame image and the target display effect to be added are determined, the target display effect can be added to the target frame image and the playing time of the target frame image extended, generating a video segment that fully displays the corresponding display effect.
In one embodiment of the present disclosure, generating a corresponding effect video segment according to the target frame image and the target display effect may be achieved by: transmitting the target frame image to a server, wherein the server is configured to generate a corresponding effect video segment according to the target frame image and display information of the target display effect; and receiving the effect video segment returned by the server. In this embodiment, the process of generating the effect video segment is performed on the server, which reduces the hardware requirements on the terminal device.
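A minimal client-side sketch of this server-assisted embodiment is given below; the endpoint URL, field names, and response format are assumptions, since the disclosure does not specify a transport protocol.

```python
import requests

def request_effect_clip(frame_png: bytes, effect_id: str,
                        url: str = "https://example.com/effect-clips") -> bytes:
    """Send the target frame image and effect id to a server and receive the
    rendered effect video segment (hypothetical API)."""
    resp = requests.post(url,
                         files={"frame": ("frame.png", frame_png)},
                         data={"effect": effect_id},
                         timeout=30)
    resp.raise_for_status()
    return resp.content  # encoded effect video segment returned by the server
```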
In another embodiment of the present disclosure, generating a corresponding effect video segment according to the target frame image and the target display effect may be achieved by: acquiring display information of the target display effect; and generating the corresponding effect video segment according to the target frame image and the display information of the target display effect. In this embodiment, the effect video segment is generated locally on the terminal, so it can be generated quickly.
In one embodiment of the present disclosure, after determining the target display effect to be added in response to a user's selection operation on the target effect material, the video generating method provided in the embodiments of the present disclosure may further include the steps of: acquiring a first playing duration of the target effect material; and generating an effect video segment with the first playing duration according to the target frame image and the target effect material. In this embodiment, an effect video segment with the playing duration of the effect material is generated automatically.
In another embodiment of the present disclosure, after determining the target display effect to be added in response to a user's selection operation on the target effect material, the video generating method provided in the embodiments of the present disclosure may further include the steps of: acquiring a second playing duration custom-configured by the user; and generating an effect video segment with the second playing duration according to the target frame image and the target effect material. In this embodiment, the playing duration of the effect video segment can be customized by the user, meeting more user needs.
In some embodiments, after generating the corresponding effect video segment according to the target frame image and the target display effect, the video generating method provided in the embodiments of the present disclosure may further include the step of: adding corresponding audio and/or text to the effect video segment in response to the user's audio and/or text adding operation. For display effects that consist only of a picture effect, with no audio and/or text, the video generation method provided in the embodiments of the present disclosure lets the user add audio and/or text to the effect video segment, so as to generate an effect video segment that better meets the user's needs.
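By way of illustration, audio and a text overlay could be attached to a frozen-frame effect segment with the moviepy 1.x API as sketched below; the disclosure does not name a library, and TextClip additionally requires ImageMagick.

```python
from moviepy.editor import AudioFileClip, CompositeVideoClip, ImageClip, TextClip

def add_audio_and_text(frame_path, audio_path, caption, out_path, secs=3.0):
    """Overlay a caption and attach audio to a frozen-frame effect segment."""
    base = ImageClip(frame_path).set_duration(secs)
    txt = (TextClip(caption, fontsize=48, color="white")
           .set_position(("center", "bottom"))
           .set_duration(secs))
    clip = CompositeVideoClip([base, txt])
    clip = clip.set_audio(AudioFileClip(audio_path).subclip(0, secs))
    clip.write_videofile(out_path, fps=30)
```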
S808, inserting the generated effect video segment into the first video at the position corresponding to the target frame image to obtain the second video.
It should be noted that, after the effect video segment for the target frame image is generated, it may be inserted at any position of the first video; in some embodiments it may even be inserted at several positions, for example at the position corresponding to the target frame image and at the starting position of the first video, so as to highlight the part of the first video to be emphasized.
S810, displaying the second video.
It should be noted that, in some embodiments, the second video with the added effect may be displayed in the video preview area during video editing; in other embodiments, it may be displayed after the video editing is completed and the video is published; in still other embodiments, if an effect is added directly to the first video on a video playing interface, the second video may be displayed immediately after the effect is added. The embodiments of the disclosure do not limit the display mode of the second video, and different display modes can be adopted for different application scenarios.
In some embodiments, as shown in fig. 9, the video generating method provided in the embodiments of the present disclosure may specifically determine the target display effect to be added by:
s902, one or more effect materials are displayed, wherein each effect material corresponds to a display effect;
s904, determining a target display effect to be added in response to a selection operation of a user on target effect materials, wherein the target effect materials are any one of the displayed effect materials.
As shown in fig. 7, a display area for effect materials may be added to the video effect adding page, so that the user can select any effect material to add the corresponding display effect.
Through the above embodiment, effect materials corresponding to different display effects are displayed in advance, so that the user can quickly add the corresponding display effect simply by selecting an effect material; by providing rich effect materials, video producers can quickly add a variety of display effects to a video.
In some embodiments, in the case that the target effect material includes text information, the video generating method provided in the embodiments of the present disclosure may further include the following steps:
s906, replacing the text information contained in the target effect material in response to the replacement operation of the text information in the target effect material by the user.
The above-mentioned operation of replacing the text information in the target effect material may be a direct click on the text information displayed in the image after the display effect of the target effect material has been added to the target frame image, or a click on a dedicated "change title" button provided on the interface. With either replacement operation, once it is performed, a corresponding virtual keyboard may be displayed so that the user can input the replacement text. As shown in fig. 10, after the title-changing button 605 is clicked, a keyboard 606 pops up for the user to change the original text "brain pain" of the "red-eye effect" material to "no language".
Through the above embodiment, the text information in an effect material can be replaced, so that the user can customize the text displayed in the effect video segment.
In some embodiments, after replacing text information included in the target effect material in response to a user's replacement operation of text information in the target effect material, as shown in fig. 9, the video generation method provided in the embodiments of the present disclosure may further include the steps of:
s908, caching the text information after the replacement of the target effect material;
s910, responding to the secondary selection operation of the user on the target effect material, and generating an effect video segment of the replaced text information, wherein the secondary selection operation is performed again on the target effect material after the user replaces the text information contained in the target effect material in the primary video editing process. As shown in fig. 11, the user may not want to use the "red eye effect" and change the "red eye effect" to the "wobble effect" after changing the text information of the "red eye effect" material. Because the user has replaced the text information of the "red eye effect" material, the cached text information may be used directly when the user selects the "red eye effect" material again in other portions of the first video. As shown in fig. 12, when the user selects the "red-eye effect" material again, the text information "no-seed" after the replacement of the "red-eye effect" material may be directly applied.
Through the above embodiment, after a video producer changes the text information contained in an effect material, the changed text is cached, so that the user can directly reuse it when selecting the effect material a second time, which further enhances the user experience.
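The caching behavior of S908-S910 amounts to a per-material lookup table scoped to one editing session; a minimal sketch follows, with illustrative class and method names.

```python
class EffectTextCache:
    """Remembers user-replaced text per effect material within one editing session."""

    def __init__(self):
        self._replaced = {}  # material_id -> replaced text information

    def on_text_replaced(self, material_id: str, new_text: str) -> None:
        self._replaced[material_id] = new_text

    def text_for(self, material_id: str, default_text: str) -> str:
        # On a secondary selection, reuse the cached replacement (the
        # "no language" text in fig. 12); otherwise use the material default.
        return self._replaced.get(material_id, default_text)
```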
In some embodiments, in a case where the target display effect corresponding to the target effect material includes a face effect, as shown in fig. 9, the video generating method provided in the embodiment of the disclosure may further include the following steps:
s912, recognizing a face image contained in the target frame image;
s914, based on the face image recognition result, generating corresponding effect video clips according to the target frame image and the target effect materials.
It should be noted that the embodiments of the present disclosure may support not only human face recognition but also animal face recognition; for example, an animal-face effect may be added to a human face image, or an effect designed for human faces may be added to an animal face.
The above embodiment enables more accurate effect addition. For example, the "red-eye effect" needs to be added to the eyes, so the face contained in the target frame image must be recognized first; the "red-eye effect" can then be accurately added to the eye region of the corresponding object in the target frame image.
It should be noted that, in the embodiments of the present disclosure, the recognition of facial images is merely an example; in other embodiments, other information about the target object contained in the image may also be recognized, such as its category or dynamic posture.
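For the red-eye example, the effect must be anchored to detected eye positions. A rough OpenCV sketch follows, using the Haar cascades shipped with OpenCV; the placement logic is illustrative, not the disclosed recognition method.

```python
import cv2

def locate_eyes(frame):
    """Detect eye regions so an eye-level effect can be anchored correctly."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(x + w // 2, y + h // 2) for (x, y, w, h) in eyes]  # eye centers

def apply_red_eye(frame):
    """Return a copy of the frame with a red disc drawn over each detected eye."""
    styled = frame.copy()
    for cx, cy in locate_eyes(frame):
        cv2.circle(styled, (cx, cy), 12, (0, 0, 255), -1)  # red in BGR
    return styled
```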
In some embodiments, after inserting the generated effect video clip into the first video at the position corresponding to the target frame image to obtain the second video, as shown in fig. 13, the video generating method provided in the embodiment of the disclosure may further include the following steps:
s1302, displaying multi-frame images contained in the second video on a video time axis;
s1304, in response to the operation of sliding the time cursor to the target time period on the video time axis by the user, displaying the target effect material in a selected state, wherein the target time period is a time period for displaying a target display effect corresponding to the target effect material in the second video;
s1306, in response to the user sliding the time cursor outside the target period on the video time axis, the target effect material is displayed in a non-selected state.
According to the above embodiment, as the user slides the time cursor along the video time axis, the target effect material is displayed in the selected state when the cursor is within the video segment to which the target display effect is applied, and in the unselected state when the cursor moves outside that segment, so that the user can clearly see which display effect is applied to each part of the video, greatly enhancing the user experience.
Further, in some embodiments, as shown in fig. 13, the video generating method provided in the embodiments of the present disclosure may further include the following steps:
s1308, if the time cursor is positioned in a target time period on the video time axis, responding to the selection operation of the user on other effect materials, canceling the selected state of the target effect materials, and replacing the display effect in the target time period with the display effect corresponding to the other effect materials;
s1310, if the time cursor is located outside the target time period on the video time axis, responding to the selection operation of the user on any effect material, and determining the display effect to be added for the image at the corresponding moment.
Still taking figs. 11 and 12 as an example, when the time cursor is within the video segment to which the "red-eye effect" is applied, if the user clicks the "wobble effect", the selected state of the "red-eye effect" material is automatically canceled and the display effect in that video segment is replaced with the "wobble effect". When the user then slides the time cursor outside the video segment to which the "wobble effect" is applied, clicking the "red-eye effect" material adds the corresponding display effect automatically.
Through the above embodiment, according to the user's sliding of the time cursor and selection of effect materials, the display effect of a segment to which an effect material has already been applied can be replaced with that of another effect material, while for a segment to which no effect material has been applied, the display effect of any effect material can be added.
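The cursor-driven selection and replacement logic of S1304-S1310 can be summarized by a lookup over the spans of applied effects; the span representation below is an assumption for illustration.

```python
def material_at_cursor(cursor_secs, spans):
    """spans: list of (start_secs, end_secs, material_id) for applied effects.
    Returns the material to show as selected, or None when the cursor is
    outside every effect segment (S1306)."""
    for start, end, material_id in spans:
        if start <= cursor_secs < end:
            return material_id
    return None

def on_material_clicked(cursor_secs, spans, clicked_id):
    """Replace the effect under the cursor (S1308), or signal that a new
    effect should be added at the cursor's frame (S1310)."""
    for i, (start, end, current_id) in enumerate(spans):
        if start <= cursor_secs < end:
            if current_id != clicked_id:
                spans[i] = (start, end, clicked_id)  # swap in the other effect
            return "replaced"
    return "add_at_cursor"
```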
Based on the same inventive concept, the embodiments of the disclosure also provide a video playing method, which can be executed by any electronic device with computing and processing capabilities. In some embodiments, the video playing method provided in the embodiments of the present disclosure may be performed by a terminal device in the above system architecture; in other embodiments, it may be implemented by the terminal device and the server in the system architecture in an interactive manner. The present disclosure is not limited in this regard.
Fig. 14 is a flowchart illustrating a video playing method according to an exemplary embodiment, and as shown in fig. 14, the video playing method provided in the embodiment of the present disclosure may include the steps of:
s1402, displaying a video, wherein the video comprises an effect video segment, and the effect video segment is generated based on a target frame image selected from the video, which is continuously preset for a play time period and added with a target display effect.
In some embodiments, the video played by the video playing method may be a video generated by any one of the video generating methods described above.
Based on the same inventive concept, a video generating apparatus is also provided in the embodiments of the present disclosure, as described in the following embodiments. Since the apparatus embodiments solve the problem on a principle similar to that of the method embodiments, their implementation can refer to that of the method embodiments, and repeated details are omitted.
Fig. 15 is a schematic diagram of a video generating apparatus according to an exemplary embodiment. Referring to fig. 15, the video generating apparatus includes: a first video presentation module 151 and a second video presentation module 152.
The first video display module 151 is configured to display a first video, where the first video includes a plurality of frames of images. The second video display module 152 is configured to display a second video in response to a preset operation performed by the user on the first video, where the second video is obtained by inserting an effect video clip into the first video, the preset operation is used to trigger the selection of a target frame image in the first video and the addition of a target display effect, and the effect video clip is a video clip that is generated according to the target frame image, displays the target display effect, and lasts for a preset playing duration.
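To make the insertion step concrete, a hedged sketch follows, treating videos as plain frame lists (real implementations operate on decoded streams; the function name is an assumption for illustration):

```python
def insert_effect_clip(first_video: list, effect_clip: list, target_index: int) -> list:
    """Return the second video: the first video with the effect video clip
    spliced in immediately after the target frame image."""
    return first_video[:target_index + 1] + effect_clip + first_video[target_index + 1:]

frames = ["f0", "f1", "f2", "f3"]             # stand-ins for decoded frames
clip = ["e0", "e1", "e2"]                     # effect video clip built from "f1"
second = insert_effect_clip(frames, clip, 1)  # -> f0 f1 e0 e1 e2 f2 f3
```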
Referring to fig. 15, in some embodiments, the second video display module 152 may specifically include: a frame image selection module 1521 configured to determine the target frame image in response to the user's selection of any frame image in the first video; an effect adding module 1522 configured to determine the target display effect to be added in response to an effect adding operation by the user; an effect video clip generation module 1523 configured to generate a corresponding effect video clip from the target frame image and the target display effect; a video generating module 1524 configured to insert the generated effect video clip into the first video at the position corresponding to the target frame image to obtain the second video; and a display module 1525 configured to display the second video.
In some embodiments, the frame image selection module 1521 may specifically include: a video time axis display module configured to display the multi-frame images contained in the first video on a video time axis; and a video time axis selection module configured to determine the target frame image in response to the user's selection of a target moment on the video time axis.
In some embodiments, the frame image selection module 1521 may further include: a time cursor display module configured to display a time cursor on the video time axis; the video time axis selection module is further configured to determine the target frame image in response to the user sliding the time cursor on the video time axis.
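A small sketch of how the cursor's position on the time axis could resolve to a target frame image; the frame-rate model is an assumption, since the patent does not prescribe one:

```python
def frame_index_at(cursor_seconds: float, fps: float, total_frames: int) -> int:
    """Map the time cursor's position to the nearest frame index, clamped to the video."""
    idx = round(cursor_seconds * fps)
    return max(0, min(idx, total_frames - 1))

# For a 30 fps video, sliding the cursor to 2.5 s selects frame 75:
assert frame_index_at(2.5, fps=30, total_frames=300) == 75
```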
In some embodiments, the effect adding module 1522 may specifically include:
the effect material display module is configured to display one or more effect materials, wherein each effect material corresponds to one display effect;
and the effect material selection module is configured to determine the target display effect to be added in response to the user's selection of a target effect material, where the target effect material is any one of the displayed effect materials.
In some embodiments, the display module 1525 may specifically include:
a first display sub-module configured to display the multi-frame images contained in the second video on a video time axis;
a second display sub-module configured to display the target effect material in a selected state in response to the user sliding the time cursor into a target time period on the video time axis, where the target time period is the time period during which the display effect corresponding to the target effect material is displayed in the second video;
and a third display sub-module configured to display the target effect material in a non-selected state in response to the user sliding the time cursor out of the target time period on the video time axis (a sketch follows below).
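The sketch below illustrates the second and third display sub-modules (the segment tuples and material names are assumptions): as the cursor slides, the material whose segment covers the cursor is highlighted, and none is highlighted otherwise.

```python
def selected_material(segments: list[tuple[float, float, str]], cursor: float) -> str | None:
    """Return the effect material to show in a selected state, or None when the
    cursor lies outside every effect segment (all materials non-selected)."""
    for start, end, material in segments:
        if start <= cursor < end:
            return material
    return None

segments = [(3.0, 5.0, "wobble effect")]
assert selected_material(segments, 4.0) == "wobble effect"  # cursor inside -> selected
assert selected_material(segments, 6.0) is None             # cursor outside -> non-selected
```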
Further, in some embodiments, the display module may further include:
a fourth display sub-module configured to, if the time cursor is located within the target time period on the video time axis, cancel the selected state of the target effect material in response to the user's selection of another effect material and replace the display effect in the target time period with the display effect corresponding to the other effect material;
and a fifth display sub-module configured to, if the time cursor is located outside the target time period on the video time axis, determine the display effect to be added to the image at the corresponding moment in response to the user's selection of any effect material.
In some embodiments, the aforementioned effect video clip generation module 1523 may specifically include:
the effect video segment duration acquisition module is configured to acquire a first playing duration of the target effect material;
and the first effect video segment generation module is configured to generate an effect video segment with a first playing duration according to the target frame image and the target effect material.
In other embodiments, the aforementioned effect video clip generation module 1523 may specifically include:
the effect video clip duration configuration module is configured to acquire a second playing duration custom-configured by the user;
and the second effect video clip generation module is configured to generate an effect video clip of the second playing duration according to the target frame image and the target effect material.
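Both duration embodiments reduce to rendering the frozen target frame for duration x fps output frames. A minimal sketch using OpenCV follows; the apply_effect callback is a hypothetical stand-in for the effect material's renderer, and the same duration_s parameter accommodates either the material's own first playing duration or the user-configured second playing duration:

```python
import cv2
import numpy as np

def make_effect_clip(target_frame: np.ndarray, duration_s: float, fps: int,
                     apply_effect, out_path: str) -> None:
    """Freeze the target frame for duration_s seconds, rendering the display
    effect onto each output frame via the (assumed) apply_effect callback."""
    h, w = target_frame.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    n = int(duration_s * fps)
    for i in range(n):
        t = i / max(n - 1, 1)  # effect progress in [0, 1]
        writer.write(apply_effect(target_frame.copy(), t))
    writer.release()

# e.g. a 2-second clip at 30 fps that fades the frozen frame toward black:
fade = lambda frame, t: (frame * (1.0 - 0.5 * t)).astype(np.uint8)
make_effect_clip(np.full((720, 1280, 3), 200, np.uint8), 2.0, 30, fade, "clip.mp4")
```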
In some embodiments, in a case where the target effect material includes text information, the video generating apparatus in the embodiments of the present disclosure may further include:
and the effect material text information replacing module is configured to replace the text information contained in the target effect material in response to the user's operation of replacing that text information.
In some embodiments, in a case where the target effect material includes text information, the video generating apparatus in the embodiments of the present disclosure may further include:
the effect material text information caching module is configured to cache the replaced text information of the target effect material;
and the effect video clip generation module is further configured to generate, in response to a secondary selection of the target effect material by the user, an effect video clip using the replaced text information, where the secondary selection is a selection performed again on the target effect material after the user has replaced the text information contained in it within a single video editing session.
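A sketch of the caching behaviour just described (class and identifier names are assumptions): the replaced text is cached per effect material for the current editing session, so a secondary selection reuses it instead of the material's default text.

```python
class TextEffectCache:
    def __init__(self) -> None:
        self._cache: dict[str, str] = {}  # effect material id -> replaced text

    def replace_text(self, material_id: str, new_text: str) -> None:
        self._cache[material_id] = new_text  # cache upon replacement

    def text_for(self, material_id: str, default_text: str) -> str:
        # On a secondary selection within the same editing session, the cached
        # replacement takes precedence over the material's default text.
        return self._cache.get(material_id, default_text)

cache = TextEffectCache()
cache.replace_text("banner-01", "Happy birthday!")
assert cache.text_for("banner-01", "Sample text") == "Happy birthday!"
```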
In some embodiments, the video generating apparatus in the embodiments of the present disclosure may further include:
an image recognition module configured to recognize a face image contained in the target frame image; the aforementioned effect video clip generation module 1523 is further configured to generate a corresponding effect video clip according to the target frame image and the target effect material based on the face image recognition result.
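As an illustration only, the recognition step could be backed by any face detector; the sketch below uses OpenCV's bundled Haar cascade as a stand-in, with make_clip and make_plain_clip as hypothetical rendering callbacks (the patent does not prescribe a detector):

```python
import cv2

def detect_faces(frame):
    """Return detected face regions as (x, y, w, h) rectangles."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def generate_face_effect_clip(frame, make_clip, make_plain_clip):
    faces = detect_faces(frame)
    # Render the face effect only when at least one face is found; the detected
    # regions tell the renderer where to anchor the effect.
    return make_clip(frame, faces) if len(faces) else make_plain_clip(frame)
```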
In some embodiments, the aforementioned effect video clip generation module 1523 is further configured to: send the target frame image to a server, where the server is configured to generate a corresponding effect video clip according to the target frame image and the display information of the target display effect; and receive the effect video clip returned by the server.
In other embodiments, the aforementioned effect video clip generation module is further configured to: acquire display information of the target display effect; and generate a corresponding effect video clip according to the target frame image and the display information of the target display effect.
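The two generation paths above can be sketched as follows; the endpoint URL, the request shape, and the renderer callback are assumptions for illustration:

```python
import requests

def generate_clip_remote(frame_png: bytes, effect_id: str) -> bytes:
    """Server path: upload the target frame and the chosen effect; the server
    renders the effect video clip and returns the encoded result."""
    resp = requests.post(
        "https://example.com/api/effect-clips",  # hypothetical endpoint
        files={"frame": ("frame.png", frame_png, "image/png")},
        data={"effect_id": effect_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content

def generate_clip_local(frame, display_info, renderer):
    """Client path: render on-device from the effect's display information;
    renderer is an assumed callback implementing the display effect."""
    return renderer(frame, display_info)
```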
In some embodiments, the video generating apparatus in the embodiments of the present disclosure may further include:
and the effect video clip editing module is configured to add corresponding audio and/or text to the effect video clip in response to the user's operation of adding audio and/or text.
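For instance, muxing a user-chosen audio track into the generated clip can be done with ffmpeg; a minimal sketch (file paths are illustrative) is:

```python
import subprocess

def add_audio(clip_path: str, audio_path: str, out_path: str) -> None:
    """Mux an audio track into the effect video clip without re-encoding video."""
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", clip_path, "-i", audio_path,
         "-c:v", "copy",                  # keep the rendered effect frames as-is
         "-map", "0:v:0", "-map", "1:a:0",
         "-shortest",                     # trim the audio to the clip's duration
         out_path],
        check=True,
    )
```

Text could be overlaid analogously at render time.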
Based on the same inventive concept, an embodiment of the present disclosure further provides a video playing device, as described in the following embodiments. Since the principle by which the apparatus embodiment solves the problem is similar to that of the method embodiment, the implementation of the apparatus embodiment may refer to that of the method embodiment, and repeated details are omitted.
Fig. 16 is a schematic diagram of a video playback device according to an exemplary embodiment. Referring to fig. 16, the video playback apparatus includes: a video playing module 161 configured to display a video, where the video includes an effect video clip, and the effect video clip is a video clip that is generated based on a target frame image selected from the video, displays an added target display effect, and lasts for a preset playing duration.
In some embodiments, the video played by the video playing device may be a video generated by any one of the video generating apparatuses described above.
The specific manner in which the various modules perform operations in the apparatuses of the above embodiments has been described in detail in the method embodiments, and will not be elaborated here.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein generally as a "circuit," "module," or "system."
An electronic device 1700 according to such an embodiment of the present disclosure is described below with reference to fig. 17. The electronic device 1700 shown in fig. 17 is merely an example and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 17, the electronic device 1700 is in the form of a general purpose computing device. The components of electronic device 1700 may include, but are not limited to: at least one processing unit 1710, at least one storage unit 1720, and a bus 1730 connecting different system components (including the storage unit 1720 and the processing unit 1710).
The storage unit stores program code executable by the processing unit 1710, such that the processing unit 1710 performs the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Method" section of this specification above. For example, in some embodiments, when the electronic device is a video generating apparatus, the processing unit 1710 may perform the following steps of the above method embodiments: displaying a first video, where the first video includes a plurality of frames of images; and displaying a second video in response to a preset operation performed by the user on the first video, where the second video is obtained by inserting an effect video clip into the first video, the preset operation is used to trigger the selection of a target frame image in the first video and the addition of a target display effect, and the effect video clip is a video clip that is generated according to the target frame image, displays the target display effect, and lasts for a preset playing duration.
In other embodiments, when the electronic device is a video playing apparatus, the processing unit 1710 may perform the following step of the above method embodiments: displaying a video, where the video includes an effect video clip, and the effect video clip is generated based on a target frame image selected from the video, lasts for a preset playing duration, and has a target display effect added.
The storage unit 1720 may include a readable medium in the form of a volatile storage unit, such as a random access memory unit (RAM) 17201 and/or a cache memory unit 17202, and may further include a read only memory unit (ROM) 17203.
The storage unit 1720 may also include a program/utility 17204 having a set (at least one) of program modules 17205, such program modules 17205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1730 may be one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device 1700 may also communicate with one or more external devices 1740 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 1700, and/or any device (e.g., router, modem, etc.) that enables the electronic device 1700 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1750. Also, electronic device 1700 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, for example, the Internet, through network adapter 1760. As shown, network adapter 1760 communicates with other modules of electronic device 1700 via bus 1730. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 1700, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (such as a personal computer, a server, a terminal device, or a network device) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium is also provided, which may be a readable signal medium or a readable storage medium, and on which a program product capable of implementing the above-described methods of the present disclosure is stored. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure described in the "Exemplary Methods" section of this specification, when the program product is run on the terminal device.
More specific examples of the computer readable storage medium in the present disclosure may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In this disclosure, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Alternatively, the program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In particular implementations, the program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" language. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In an exemplary embodiment of the present disclosure, there is also provided a computer program product comprising a computer program/instruction which, when executed by a processor, implements the video generation method or video playback method described above.
It should be noted that although several modules or units of a device for action execution are mentioned in the above detailed description, such division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided among multiple modules or units.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

1. A video generation method, comprising:
displaying a first video, wherein the first video comprises a plurality of frames of images;
and responding to a preset operation of a user on the first video, displaying a second video, wherein the second video is a video obtained after inserting an effect video segment into the first video, the preset operation is used for triggering the selection of a target frame image in the first video and the addition of a target display effect, and the effect video segment is a video segment which is generated according to the target frame image, displays the target display effect and lasts for a preset playing duration.
2. The video generation method according to claim 1, wherein presenting a second video in response to a preset operation performed by a user on the first video comprises:
determining the target frame image in response to a user selection operation of any frame image in the first video;
determining a target display effect to be added in response to an effect adding operation of a user;
generating a corresponding effect video clip according to the target frame image and the target display effect;
inserting the generated effect video clips into the positions corresponding to the target frame images in the first video to obtain the second video;
the second video is shown.
3. The video generation method according to claim 2, wherein determining the target frame image in response to a user selection operation of any one frame image in the first video, comprises:
displaying multi-frame images contained in the first video on a video time axis;
and determining the target frame image in response to a user selection operation of the target moment on the video time axis.
4. A video generation method according to claim 3, wherein the method further comprises:
Displaying a time cursor on the video timeline;
wherein determining the target frame image in response to a user selection operation of a target time on the video time axis includes: the target frame image is determined in response to a sliding operation of a time cursor by a user on a video time axis.
5. The video generation method according to claim 3, wherein determining a target display effect to be added in response to an effect adding operation by a user comprises:
displaying one or more effect materials, wherein each effect material corresponds to one display effect;
and determining a target display effect to be added in response to a selection operation of a user on target effect materials, wherein the target effect materials are any one of the displayed effect materials.
6. The video generation method according to claim 5, wherein after inserting the generated effect video clip into the first video at a position corresponding to the target frame image, the method further comprises:
displaying multi-frame images contained in the second video on the video time axis;
responding to the operation that a user slides a time cursor to a target time period on a video time axis, and displaying the target effect material in a selected state, wherein the target time period is a time period in which a target display effect corresponding to the target effect material is displayed in a second video;
and in response to the user sliding the time cursor outside the target time period on the video time axis, displaying the target effect material in a non-selected state.
7. The video generation method of claim 6, wherein the method further comprises:
if the time cursor is positioned in a target time period on the video time axis, responding to the selection operation of a user on other effect materials, canceling the selected state of the target effect materials, and replacing the display effect in the target time period with the display effect corresponding to the other effect materials;
if the time cursor is positioned outside the target time period on the video time axis, responding to the selection operation of the user on any effect material, and determining the display effect to be added for the image at the corresponding moment.
8. The video generation method according to claim 5, wherein after determining the target display effect to be added in response to a user selection operation of the target effect material, the method further comprises:
acquiring a first playing time length of the target effect material;
and generating the effect video clip of the first playing duration according to the target frame image and the target effect material.
9. The video generation method according to claim 5, wherein after determining the target display effect to be added in response to a user selection operation of the target effect material, the method further comprises:
acquiring a second playing duration custom-configured by the user;
and generating the effect video clips of the second playing duration according to the target frame images and the target effect materials.
10. The video generation method according to claim 5, wherein in the case where text information is contained in the target effect material, the method further comprises:
and responding to the replacement operation of the text information in the target effect material by the user, and replacing the text information contained in the target effect material.
11. The video generation method according to claim 10, wherein after replacing text information contained in the target effect material in response to a user's replacement operation of text information in the target effect material, the method further comprises:
caching the text information after the target effect material is replaced;
and generating, in response to a secondary selection operation of the user on the target effect material, an effect video clip using the replaced text information, wherein the secondary selection operation is a selection operation performed again on the target effect material after the user replaces the text information contained in the target effect material within a single video editing process.
12. The video generation method according to claim 5, wherein in a case where the target display effect corresponding to the target effect material includes a face effect, the method further comprises:
identifying a face image contained in the target frame image;
and generating corresponding effect video clips according to the target frame images and the target effect materials based on the face image recognition results.
13. The video generation method according to claim 2, wherein generating a corresponding effect video clip from the target frame image and the target display effect comprises:
sending the target frame image to a server, wherein the server is configured to generate a corresponding effect video clip according to the target frame image and the display information of the target display effect;
and receiving the effect video clips returned by the server.
14. The video generation method according to claim 2, wherein generating a corresponding effect video clip from the target frame image and the target display effect comprises:
acquiring display information of the target display effect;
and generating corresponding effect video clips according to the target frame images and the display information of the target display effect.
15. The video generation method according to any one of claims 2 to 8, wherein after generating the respective effect video clip from the target frame image and the target display effect, the method further comprises:
and responding to the adding operation of the user on the audio and/or the text, and adding the corresponding audio and/or the text in the effect video clip.
16. A video playing method, comprising:
displaying a video, wherein the video comprises an effect video clip, and the effect video clip is generated based on a target frame image selected from the video, lasts for a preset playing duration, and has a target display effect added.
17. A video generating apparatus, comprising:
a first video presentation module configured to present a first video, wherein the first video comprises a plurality of frames of images;
the second video display module is configured to display a second video in response to a preset operation of a user on the first video, wherein the second video is a video obtained after an effect video segment is inserted into the first video, the preset operation is used for triggering selection of a target frame image in the first video and addition of a target display effect, and the effect video segment is a video segment which is generated according to the target frame image, displays the target display effect and lasts for a preset playing duration.
18. A video playback device, comprising:
the video playing module is configured to display a video, wherein the video comprises an effect video clip, and the effect video clip is generated based on a target frame image selected from the video, lasts for a preset playing duration, and has a target display effect added.
19. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video generation method of any one of claims 1 to 15, or the video playback method of claim 16.
20. A computer readable storage medium having stored thereon instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the video generation method of any one of claims 1 to 15, or the video playback method of claim 16.
CN202310101241.4A 2023-01-28 2023-01-28 Video generation method, video playing method and related equipment Pending CN116095388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310101241.4A CN116095388A (en) 2023-01-28 2023-01-28 Video generation method, video playing method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310101241.4A CN116095388A (en) 2023-01-28 2023-01-28 Video generation method, video playing method and related equipment

Publications (1)

Publication Number Publication Date
CN116095388A (en)

Family

ID=86208152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310101241.4A Pending CN116095388A (en) 2023-01-28 2023-01-28 Video generation method, video playing method and related equipment

Country Status (1)

Country Link
CN (1) CN116095388A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117714774A (en) * 2024-02-06 2024-03-15 北京美摄网络科技有限公司 Method and device for manufacturing video special effect cover, electronic equipment and storage medium
CN117714774B (en) * 2024-02-06 2024-04-19 北京美摄网络科技有限公司 Method and device for manufacturing video special effect cover, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11943486B2 (en) Live video broadcast method, live broadcast device and storage medium
WO2021238597A1 (en) Virtual scene interaction method and apparatus, device, and storage medium
CN112291627B (en) Video editing method and device, mobile terminal and storage medium
US20230144094A1 (en) Multimedia data processing method, multimedia data generation method, and related device
EP3448048B1 (en) Enhancing video content with extrinsic data
US9747951B2 (en) Timeline interface for video content
CN106303723B (en) Video processing method and device
CN105979339B (en) Window display method and client
US8811797B2 (en) Switching between time order and popularity order sending of video segments
US20230300403A1 (en) Video processing method and apparatus, device, and storage medium
WO2022007722A1 (en) Display method and apparatus, and device and storage medium
CN104065979A (en) Method for dynamically displaying information related with video content and system thereof
US11928152B2 (en) Search result display method, readable medium, and terminal device
CN110784753B (en) Interactive video playing method and device, storage medium and electronic equipment
WO2023104102A1 (en) Live broadcasting comment presentation method and apparatus, and device, program product and medium
CN111866550A (en) Method and device for shielding video clip
JP2023539815A (en) Minutes interaction methods, devices, equipment and media
CN113014985A (en) Interactive multimedia content processing method and device, electronic equipment and storage medium
CN116095388A (en) Video generation method, video playing method and related equipment
US20180048937A1 (en) Enhancing video content with personalized extrinsic data
CN113553466A (en) Page display method, device, medium and computing equipment
CN113259708A (en) Method, computer device and medium for introducing commodities based on short video
CN113099309A (en) Video processing method and device
CN106792219B (en) It is a kind of that the method and device reviewed is broadcast live
CN110209870B (en) Music log generation method, device, medium and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination