CN117714774B - Method and device for manufacturing video special effect cover, electronic equipment and storage medium


Info

Publication number: CN117714774B
Application number: CN202410170469.3A
Authority: CN (China)
Prior art keywords: video, picture, original video, client, special effect
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN117714774A
Inventor: 王轶
Assignee: Beijing Meishe Network Technology Co., Ltd.


Abstract

The application discloses a method, a device, an electronic device and a computer-readable storage medium for manufacturing a video special effect cover. The method comprises the following steps: acquiring an original video, and displaying the original video and a time axis corresponding to the original video through a play control of a client, the time axis sequentially representing the playing time of each picture frame in the original video; in response to a selection operation, through the client, on the playing time on the time axis, generating an initial moving picture according to at least one picture frame corresponding to the selected at least one target playing time; in response to a special effect selection operation through the client, rendering the initial moving picture according to the selected target special effect to obtain a preview moving picture; and displaying the preview moving picture through the play control, and exporting the preview moving picture as a target file in response to an export operation through the client. The method solves the problem that manufacturing a video cover otherwise requires editing the video through a variety of operation interfaces.

Description

Method and device for manufacturing video special effect cover, electronic equipment and storage medium
Technical Field
The application belongs to the field of video production, and particularly relates to a method and a device for producing a video special effect cover, electronic equipment and a computer readable storage medium.
Background
Video special effect covers refer to adding some dynamic visual effects at the beginning or end of a video to enhance the appeal and expressive force of the video. The video special effect cover is widely applied to various video platforms, can improve the watching rate and sharing rate of videos, and can also display the subjects and styles of the videos.
At present, a common way of manufacturing a video special effect cover is to use different tools to cut, process and transcode the video. For example, a user may first clip the video with video editing software, adding filters, subtitles and the like, and then use video conversion software to convert the video into a format and size suitable for uploading.
However, under this scheme, the many disparate and poorly compatible tools make the user's operation flow complicated. Moreover, because the user cannot view the effect of the video special effect cover in real time and can only view it after uploading, adjustment and optimization are inconvenient.
Disclosure of Invention
The application aims to provide a method, a device, an electronic device and a computer-readable storage medium for manufacturing a video special effect cover, so as to at least solve the problem that, when a video cover is made, the video must be edited through a variety of operation interfaces and previewing is inconvenient.
In a first aspect, an embodiment of the present application discloses a method for manufacturing a video special effect cover, including:
acquiring an original video, and displaying the original video and a time axis corresponding to the original video through a play control of a client; the time axis is used for sequentially representing the playing time of each picture frame in the original video;
responding to the selection operation of the playing time on the time axis by the client, and generating an initial moving picture according to at least one picture frame corresponding to the selected at least one target playing time;
responding to a special effect selection operation through the client, and rendering the initial moving picture according to the selected target special effect so as to obtain a preview moving picture;
and displaying the preview moving picture through the play control, and exporting the preview moving picture as a target file in response to an export operation through the client.
In a second aspect, the embodiment of the application also discloses a device for manufacturing the video special effect cover, which comprises:
the video preview module is used for acquiring an original video and displaying the original video and a time axis corresponding to the original video through a play control of the client; the time axis is used for sequentially representing the playing time of each picture frame in the original video;
the picture preview module is used for responding to the selection operation of the playing time on the time axis through the client and generating an initial moving picture according to at least one picture frame corresponding to the selected at least one target playing time;
the special effect rendering module is used for responding to a special effect selection operation through the client, and rendering the initial moving picture according to the selected target special effect so as to obtain a preview moving picture;
and the picture export module is used for displaying the preview moving picture through the play control and exporting the preview moving picture as a target file in response to an export operation.
In a third aspect, an embodiment of the present application further discloses an electronic device, including a processor and a memory, where the memory stores a program or instructions executable on the processor, where the program or instructions implement the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application also disclose a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the method as described in the first aspect.
In summary, in the embodiment of the application, the original video and the time axis are displayed through the play control of the client, so that the user can intuitively select any section of the video as the material of the special effect cover without using a separate video cutting tool, which simplifies the operation flow and saves time and resources. An initial moving picture is generated in response to the client's selection of playing times on the time axis, so the user can view the dynamic effect of the selected video segment in real time without waiting for video conversion or uploading, improving user experience and efficiency. The initial moving picture is rendered according to the special effect selected through the client, so the user can apply special effects to the video segment on the same interface without switching between different video editing programs, enhancing creative freedom and flexibility. The preview moving picture is displayed through the play control and exported as a target file in response to the client's export operation, so the user conveniently obtains the final video special effect cover without further format conversion or compression, guaranteeing its quality and consistency. Therefore, manufacturing a video special effect cover with the method of the embodiment of the application requires neither multiple video cutting tools nor video conversion or editing software, and it solves the problems in the related art that a multiplicity of poorly compatible tools complicates the user's workflow, and that the cover's effect can only be viewed after uploading, which hinders adjustment and optimization.
Drawings
In the drawings:
FIG. 1 is a flowchart illustrating steps of a method for manufacturing a video special effect cover according to an embodiment of the present application;
FIG. 2 is a client interface diagram of another method for manufacturing a video special effect cover according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating steps of another method for manufacturing a video special effect cover according to an embodiment of the present application;
FIG. 4 is a data flow diagram of a method for manufacturing a video special effect cover according to an embodiment of the present application;
FIG. 5 is a block diagram of a device for manufacturing a video special effect cover according to an embodiment of the present application;
FIG. 6 is a block diagram of an electronic device according to an embodiment of the present application;
FIG. 7 is a block diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein; the objects identified by "first", "second", etc. are generally of one type, with no limit on the number of objects, e.g., the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
FIG. 1 shows a method for manufacturing a video special effect cover according to an embodiment of the present application.
The method may comprise the steps of:
Step 101, obtaining an original video, and displaying the original video and a time axis corresponding to the original video through a play control of a client.
The time axis is used for sequentially representing the playing time of each picture frame in the original video.
The purpose of this step is to allow the user to view and select any piece of the original video on the client as material for the special effects cover without using other video cropping tools.
For example, as shown in FIG. 2, the user opens an original video file on the client, where the file is a 4-second video of a cat. The play control on the client lets the user play, pause, fast forward, fast backward, drag the progress bar and the like, so that the user can conveniently watch the content of the video. At the same time, a time axis is displayed on the client, running from 0 seconds to 4 seconds; each second corresponds to a scale, and each scale corresponds to a picture frame of the original video. The user can select the start and end moments of the video by clicking or dragging the scales on the time axis, thereby determining a section of the video as the material of the special effect cover. For example, the user may select the video segment from the 2nd to the 4th second, which contains 60 picture frames of the original video (assuming a video frame rate of 30 frames/second).
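By way of illustration only — the patent does not prescribe any implementation or library — the per-frame timeline described above could be derived from the video's metadata roughly as in the following Python sketch, which assumes the OpenCV library and a hypothetical file name:

```python
import cv2

def build_timeline(video_path):
    """Map each picture frame of the video to its playing time, in seconds."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)                       # e.g. 30.0 frames/second
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))  # e.g. 120 for a 4 s clip
    cap.release()
    return [frame_index / fps for frame_index in range(frame_count)]

timeline = build_timeline("cat.mp4")   # hypothetical 4-second clip at 30 fps
print(len(timeline), timeline[:3])     # 120 [0.0, 0.0333..., 0.0666...]
```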
Step 102, in response to the selection operation of the playing time on the time axis by the client, generating an initial moving picture according to at least one picture frame corresponding to the selected at least one target playing time.
The purpose of this step is to allow the user to view the dynamic effects of the selected video segment in real time without waiting for video conversion or upload.
For example, as shown in FIG. 2, the user selects, as the material of the special effect cover, the video segment from the 2nd to the 4th second, which contains 60 picture frames of the original video, in the example of step 101. After receiving the user's selection operation, the client extracts the corresponding picture frames (roughly the 60th to the 120th frame) from the original video according to the selected target playing times (the 2nd to the 4th second), and splices them in sequence into an initial moving picture. The client stores the initial moving picture in a local cache and displays it on the play control, so that the user can watch its dynamic effect. The duration of the initial moving picture is 2 seconds, the same as the duration of the selected video segment.
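One plausible realization of this step — a sketch under stated assumptions, not the claimed implementation — is to seek to the selected playing times, decode the frames in between, and splice them into an animated image. The fragment below assumes OpenCV for decoding and Pillow for writing a GIF, with hypothetical file names:

```python
import cv2
from PIL import Image

def extract_segment(video_path, start_s, end_s):
    """Extract the picture frames between two target playing times."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    first, last = int(start_s * fps), int(end_s * fps)
    cap.set(cv2.CAP_PROP_POS_FRAMES, first)   # seek to the first selected frame
    frames = []
    for _ in range(last - first):
        ok, bgr = cap.read()
        if not ok:
            break
        # OpenCV decodes to BGR; Pillow expects RGB.
        frames.append(Image.fromarray(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)))
    cap.release()
    return frames

frames = extract_segment("cat.mp4", 2.0, 4.0)        # 60 frames at 30 fps
frames[0].save("initial.gif", save_all=True, append_images=frames[1:],
               duration=33, loop=0)                  # ~30 fps playback, loop forever
```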
Step 103, in response to a special effect selection operation through the client, rendering the initial moving picture according to the selected target special effect so as to obtain a preview moving picture.
This step aims at enabling the user to perform special effect processing on the video segment on the same interface without switching between different video editing programs, enhancing the user's creative freedom and flexibility.
For example, as shown in FIG. 2, the user has generated an initial moving picture in the example of step 102: a 2-second moving picture of a cat. The client also provides a plurality of preset special effects, which can be displayed through different configuration areas, such as filter effects, and the user can select one and apply it to the initial moving picture by clicking, dragging or typing. For example, the user may select a filter effect that makes the colors of the initial moving picture more vivid. After receiving the user's special effect selection operation, the client renders the initial moving picture according to the selected target special effect to obtain a preview moving picture. The client stores the preview moving picture in a local cache and displays it on the play control, so that the user can watch its dynamic effect.
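For the filter example above, a minimal rendering pass might re-encode every frame of the initial moving picture with a color boost. The sketch below uses Pillow's ImageEnhance module; the saturation factor and file names are arbitrary assumptions:

```python
from PIL import Image, ImageEnhance, ImageSequence

def apply_vivid_filter(gif_path, out_path, saturation=1.6):
    """Render every frame of the initial moving picture with a color-boost filter."""
    src = Image.open(gif_path)
    rendered = [ImageEnhance.Color(frame.convert("RGB")).enhance(saturation)
                for frame in ImageSequence.Iterator(src)]
    rendered[0].save(out_path, save_all=True, append_images=rendered[1:],
                     duration=src.info.get("duration", 33), loop=0)

apply_vivid_filter("initial.gif", "preview.gif")
```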
Step 104, displaying the preview moving picture through the play control, and exporting the preview moving picture as a target file in response to the export operation through the client.
For example, as shown in FIG. 2, in an embodiment of the application, the specific export format of the target file may be a still picture format, such as Joint Photographic Experts Group (JPEG/JPG) or Portable Network Graphics (PNG); a video format, such as MPEG-4 Part 14 (MP4) or Audio Video Interleave (AVI); or a moving picture format, such as WebP or Graphics Interchange Format (GIF). The user makes the corresponding selection through the export configuration. The aim is to let the user conveniently obtain the final video special effect cover without further format conversion or compression, guaranteeing its quality and consistency.
For example, as shown in FIG. 2, the user has generated a preview moving picture in the example of step 103: a 2-second moving picture of a cat with a filter effect added. The play control on the client lets the user play, pause, repeat and so on, so that the user can conveniently watch the dynamic effect of the preview moving picture. At the same time, the client provides an export option, which can be set in the export configuration area; by confirming this option the user exports the preview moving picture as a target file. After receiving the user's export operation, the client exports the preview moving picture into a target file, such as a dynamic or static picture, according to default or user-specified parameters. The client stores the exported target file locally or in the cloud and prompts the user that the export succeeded, so that the user can view or share it through a file manager or other applications.
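A toy dispatcher for the export step could pick the writer from the target file's suffix. This sketch (Pillow only, file names hypothetical) covers the still and moving picture formats listed above; video formats such as MP4 or AVI would need a separate encoder:

```python
from PIL import Image, ImageSequence

def export_preview(preview_path, out_path):
    """Export the preview moving picture to the format implied by the file suffix."""
    src = Image.open(preview_path)
    frames = [f.convert("RGB") for f in ImageSequence.Iterator(src)]
    if out_path.endswith((".jpg", ".png")):
        frames[0].save(out_path)                      # still cover: first frame only
    elif out_path.endswith((".gif", ".webp")):
        frames[0].save(out_path, save_all=True, append_images=frames[1:],
                       duration=src.info.get("duration", 33), loop=0)
    else:
        raise ValueError("unsupported target format: " + out_path)

export_preview("preview.gif", "cover.webp")
```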
In summary, in this embodiment the client displays the original video and its time axis in the play control, generates the initial moving picture from the playing times selected on the time axis, renders it with the selected target special effect, and directly exports the resulting preview moving picture, so that a video special effect cover can be made on a single interface, with a real-time preview and without separate video cutting, conversion or editing tools.
FIG. 3 shows another method for manufacturing a video special effect cover according to an embodiment of the present application. Referring to FIG. 3, the method may include the following steps:
step 201, an original video is obtained, and the original video and a time axis corresponding to the original video are displayed through a play control of a client.
The time axis is used for sequentially representing the playing time of each picture frame in the original video.
The method shown in this step is already described in step 101, and will not be described here again.
Optionally, the playing control includes a first window and a second window, and in order to display the original video and a time axis corresponding to the original video through the playing control of the client, step 201 may include the following substeps:
Sub-step 2011, displaying the original video through the first window.
The purpose of this sub-step is to allow the user to view the content of the original video on the client so that the user selects a section of the video as the material of the special effects cover.
For example, as shown in fig. 2, the user opens an original video file on the client, where the file is a video in which a cat is recorded for 4 seconds. The first window on the client is a video player. The user can control the playback of the original video through the first window. The user may also adjust the size and position of the first window by clicking or dragging the edge of the first window.
Sub-step 2012, displaying the time axis of the original video through the second window, and sequentially displaying, based on the time axis, thumbnails of the picture frames included in the original video.
The sub-step aims to enable a user to intuitively select one section of video on a client without using other video cropping tools, so that the operation flow is simplified, and the time and the resources are saved.
For example, as shown in FIG. 2, the user opens an original video file on the client, where the file is a 4-second video of a cat. The second window on the client is a time axis control, which lets the user play, pause, fast forward, stop, drag a progress bar and the like, so as to facilitate watching the content of the video, and also lets the user view and select any section of the original video as the material of the special effect cover. A time axis is displayed on the second window, running from 0 seconds to 4 seconds; each second corresponds to a scale, and each scale corresponds to a picture frame of the original video. The second window also displays thumbnails of the picture frames included in the original video, arranged in the order of the time axis. The user can select the start and end moments of the video by clicking or dragging the scales on the time axis, thereby determining a section of the video as the material of the special effect cover. For example, the user may select the video segment from the 2nd to the 4th second, which contains 60 picture frames of the original video (assuming a video frame rate of 30 frames/second). When the user selects the video segment, the time axis on the second window displays a selection area identifying the range of the selected segment. The user may also adjust the size and location of the selection area by clicking or dragging its edge, to change the selected range. After the user selects the video segment, the thumbnails of the picture frames on the second window are highlighted or blurred according to the selected range, to distinguish the selected segment from the unselected parts. The user can view the details of a picture frame by clicking or dragging its thumbnail, so as to choose a suitable video segment as the material of the special effect cover.
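The thumbnail strip of the second window can be sketched as one decoded, down-scaled frame per time-axis scale. The fragment below (OpenCV; interval and thumbnail size are assumed parameters) is one such illustration:

```python
import cv2

def timeline_thumbnails(video_path, interval_s=1.0, size=(96, 54)):
    """One thumbnail per time-axis scale, for the second window's frame strip."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    thumbs = []
    t = 0.0
    while int(t * fps) < total:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(t * fps))  # seek to the scale's frame
        ok, frame = cap.read()
        if not ok:
            break
        thumbs.append((t, cv2.resize(frame, size)))     # (playing time, thumbnail)
        t += interval_s
    cap.release()
    return thumbs

strip = timeline_thumbnails("cat.mp4")   # hypothetical file: 4 thumbnails at 1 s spacing
```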
Optionally, the playing control includes a first window and a second window, the original video is stored in video streaming media data, in order to display the original video and a time axis corresponding to the original video through the playing control of the client, step 201 may include the following substeps:
Sub-step 2013, inputting the streaming media data into the play control to obtain the environment parameters of the streaming media data, and obtaining the original video from the streaming media data through the environment parameters; the environment parameter is used for representing a storage mode of the original video when the original video is stored in the streaming media data.
In the embodiment of the present application, the environmental parameter refers to a parameter that characterizes a storage manner of the original video when the original video is stored in the streaming media data, such as a video coding format, a video resolution, a video frame rate, a video code rate, and the like. The purpose of this sub-step is to allow the user to view and select any piece of the original video in the streaming media data as material for the special effects cover on the client without downloading the entire video file.
For example, as shown in FIG. 2, the user opens a link to streaming media data on the client, where the link is a network address pointing to a 4-second video of a cat. The play control on the client is a video streaming media player, which can receive and play streaming media data in real time over a network connection. After opening the link, the client inputs the streaming media data into the play control to obtain the environment parameters of the streaming media data, such as a video coding format of H.264, a video resolution of 1080p, a video frame rate of 30 frames/second, and a video code rate of 5 Mbps. Through these environment parameters the client acquires the original video from the streaming media data, namely a 4-second video of a cat. The client stores the original video in a local cache and displays it through the play control, so that the user can watch its content.
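Probing a stream's environment parameters might look like the following sketch, assuming OpenCV can open the stream URL. Only the properties OpenCV exposes directly are shown; the coding format and code rate would need a demuxer-level probe, and the frame count is unreliable for live streams:

```python
import cv2

def probe_environment(stream_url):
    """Read the environment parameters describing how the original video is stored."""
    cap = cv2.VideoCapture(stream_url)   # works for local files and many stream URLs
    params = {
        "width":  int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),   # e.g. 1920
        "height": int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)),  # e.g. 1080
        "fps":    cap.get(cv2.CAP_PROP_FPS),                # e.g. 30.0
        "frames": int(cap.get(cv2.CAP_PROP_FRAME_COUNT)),   # may be 0 for live streams
    }
    cap.release()
    return params

print(probe_environment("https://example.com/cat_stream"))  # hypothetical URL
```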
Sub-step 2014, extracting all picture frames from the original video together with the playing time of each picture frame, and generating the original video time axis from the numerical range of the playing times.
The sub-step aims to enable a user to view and select any section of the original video on the client as the material of the special effect cover without using other video cutting tools, so that the operation flow is simplified, and the time and the resource are saved.
For example, the user acquired the original video in the example of sub-step 2013. The client extracts all picture frames from the original video, namely 120 picture frames, where each picture frame is 1920×1080 pixels and has a data volume of about 0.2 MB. The client also extracts the playing time of each picture frame in the original video, from 0 seconds to 4 seconds; the playing time of each picture frame is an integer multiple of 0.033 seconds, so the numerical range of the playing times is 0 to 3.967 seconds. From this numerical range the client generates the original video time axis, a linear graph from 0 seconds to 4 seconds in which each second corresponds to a scale and each scale corresponds to a picture frame of the original video. The client stores the original video time axis in a local cache and displays it through the play control, so that the user can view and select any section of the original video as the material of the special effect cover.
Sub-step 2015, displaying the original video through the first window, and displaying the time axis of the original video through the second window.
The purpose of this sub-step is to allow the user to view the content of the original video and the time axis information at the same time on the client, so that the user selects the appropriate video segment as the material of the special effect cover.
For example, as shown in FIG. 2, the user obtains, in the example of sub-step 2014, an original video (a 4-second video of a cat) and an original video time axis (a linear graph from 0 seconds to 4 seconds, each second corresponding to a scale, each scale corresponding to a picture frame of the original video). The client displays the original video through the first window, so that the user can play, pause, fast forward, fast backward, drag the progress bar and the like, conveniently watching the content of the video. The client displays the original video time axis through the second window, so that the user can view and select any section of the original video as the material of the special effect cover. The user can select the start and end moments of the video by clicking or dragging the scales on the time axis, thereby determining a section of the video as the material of the special effect cover. When the user selects the video segment, the time axis on the second window displays a selection area identifying the range of the selected segment. The user may also adjust the size and location of the selection area by clicking or dragging its edge, to change the selected range. After the user selects the video segment, the thumbnails of the picture frames on the second window are highlighted or blurred according to the selected range, to distinguish the selected segment from the unselected parts. The user can view the details of a picture frame by clicking or dragging its thumbnail, so as to choose a suitable video segment as the material of the special effect cover.
Optionally, in order to present the original video through the play control of the client, step 201 may include the following sub-steps:
Sub-step 2016, adjusting the picture resolution at which the original video is presented, in response to the resolution value entered by the client.
In the embodiment of the present application, the resolution value refers to a pixel value of a width and a height of a picture when an original video designated by a user through a client is presented, such as 1920×1080, 1280×720, and the like. The purpose of this substep is to allow the user to choose the appropriate resolution value according to his own needs and the display capabilities of the device, so as to optimize the viewing effect of the original video.
For example, as shown in fig. 2, the user acquires an original video, which is a video in which a cat is recorded, for 4 seconds. The client displays the original video, so that a user can watch the content of the video. The user can also input a resolution value through the client, and the input of the value can be realized through the play configuration area so as to adjust the picture resolution when the original video is displayed. For example, the user may input a resolution value of 720p, i.e., 1280×720 pixels. After receiving the resolution value of the user, the client performs scaling processing on the original video according to the resolution value so as to adjust the picture resolution when the original video is displayed. The client stores the scaled original video in a local cache, and displays the scaled original video, so that a user can watch the content of the scaled original video. The picture resolution of the scaled original video is 720p, which is the same as the resolution value input by the user.
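The scaling itself reduces to a per-frame resize. A minimal sketch with Pillow, assuming the frames have already been decoded (the placeholder frame below stands in for real decoded images):

```python
from PIL import Image

def rescale_frames(frames, resolution=(1280, 720)):
    """Scale decoded picture frames to the resolution value entered by the user."""
    return [frame.resize(resolution, Image.LANCZOS) for frame in frames]

frames = [Image.new("RGB", (1920, 1080))]   # stand-in for decoded 1080p frames
preview_frames = rescale_frames(frames)     # presented at 720p, as in the example
```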
Based on sub-step 2016, to present the original video and the timeline corresponding to the original video through the client's play control, step 201 may include the sub-steps of:
Sub-step 2017, adjusting the video frame rate at which the original video is presented, and the precision of the time axis, in response to the video frame rate value entered by the client.
In the embodiment of the present application, the video frame rate value refers to the number of frames per second displayed when the original video specified by the user through the client is presented, such as 30 frames/second, 15 frames/second, etc. The accuracy of the time axis refers to a time interval corresponding to each scale on the time axis, such as 1 second, 0.5 second, etc. The aim of this substep is to allow the user to select the appropriate video frame rate value according to his own needs and the performance of the device, so as to optimize the playing effect of the original video and the displaying effect of the time axis.
For example, as shown in FIG. 2, the user obtains an original video (a 4-second video of a cat with a video frame rate of 30 frames/second) and an original video time axis (a linear graph from 0 seconds to 4 seconds, each second corresponding to a scale, each scale corresponding to a picture frame of the original video). The client displays the original video so that the user can watch its content. The user can also input a video frame rate value through the client (the value can be entered in the play configuration area) to adjust the video frame rate at which the original video is presented and the precision of the time axis. For example, the user may input a video frame rate value of 15 frames/second, i.e., 15 picture frames are displayed per second. After receiving the user's video frame rate value, the client drops frames from the original video accordingly, stores the reduced-frame-rate video in a local cache, and displays it, so that the user can watch its content; its video frame rate is 15 frames/second, the same as the value entered by the user. The client also adjusts the original video time axis according to the video frame rate value to change the precision of the time axis, stores the adjusted time axis in a local cache, and displays it, so that the user can view and select any section of the original video as the material of the special effect cover. The adjusted original video time axis is a linear graph from 0 seconds to 4 seconds in which each 0.5 seconds corresponds to a scale, so the precision of the adjusted time axis is 0.5 seconds; the client derives this precision from the video frame rate value entered by the user.
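Frame dropping to a user-entered frame rate can be sketched as simple subsampling of the decoded frame list; the ratio below assumes the target rate divides the source rate evenly:

```python
def drop_frames(frames, source_fps=30.0, target_fps=15.0):
    """Thin the decoded frame list so playback matches the user's frame rate value."""
    step = max(1, round(source_fps / target_fps))  # 30 -> 15 fps keeps every 2nd frame
    return frames[::step]

frames = list(range(120))         # stand-in for the 120 decoded frames of a 4 s clip
reduced = drop_frames(frames)     # 120 frames -> 60 frames
# The client would then regenerate the time axis from the reduced list, so the
# scale spacing (precision) follows the new frame rate as well.
```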
Step 202, in response to the selection operation of the playing time on the time axis by the client, generating an initial moving picture according to at least one picture frame corresponding to the selected at least one target playing time.
The method shown in this step is already described in step 102, and will not be described here again.
Optionally, the playing control further includes a fourth window, in order to generate, in response to a selection operation of a playing time on the time axis by the client, an initial moving picture according to at least one frame corresponding to the selected at least one target playing time, step 202 includes the following substeps:
Sub-step 2021, displaying, in the fourth window, the picture frame corresponding to the target playing time.
The sub-step aims to enable a user to accurately select a certain frame of a video on a client to serve as a material of a special effect cover without using other video screenshot tools, so that the selection accuracy and flexibility of the user are improved.
For example, as shown in FIG. 2, the user obtains an original video (a 4-second video of a cat) and an original video time axis (a linear graph from 0 seconds to 4 seconds, each second corresponding to a scale, each scale corresponding to a picture frame of the original video). The client displays, through the fourth window, the picture frame corresponding to the target playing time, so that the user can view and select any picture frame of the original video as the material of the special effect cover. The user can select a target playing time by clicking or dragging a scale on the time axis, thereby determining a target picture frame. For example, the user may select the 3rd-second scale, i.e., the 90th frame, as the target playing time, thereby determining the 90th frame as the target picture frame. After the user selects the target playing time, the fourth window displays the corresponding picture frame, namely the 90th frame, so that the user can view its content. The user can select suitable target playing times and target picture frames as the material of the special effect cover according to his or her own needs and preferences.
Sub-step 2022, generating the initial moving picture according to the order in which the target playing times corresponding to the picture frames were selected on the time axis.
Wherein the picture frames in the initial moving picture are sequentially arranged according to the selection order.
The aim of this substep is to allow the user to freely select multiple frames of video on the client to compose a dynamic special effect cover without using other video stitching tools, increasing the user's freedom and flexibility of creation.
For example, as shown in FIG. 2, the user has selected three picture frames of the original video as target picture frames in the example of sub-step 2021: the 60th, 90th and 120th frames, corresponding to the 2nd, 3rd and 4th seconds of the original video respectively (at 30 frames/second). When selecting the target picture frames, the user picks the 60th frame, then the 120th frame, then the 90th frame, i.e., first the 2nd-second frame, then the 4th-second frame, and finally the 3rd-second frame. The fourth window displays the selected frames so that the user can view their content. The client then generates the initial moving picture according to the order in which the user selected the target playing times on the time axis. The client stores the initial moving picture in a local cache and displays it through the play control, so that the user can watch its dynamic effect. The picture frames in the initial moving picture are arranged in the user's selection order, namely the 60th, 120th and 90th frames, corresponding to the 1st, 2nd and 3rd frames of the initial moving picture respectively.
Optionally, prior to step 2022, step 202 further comprises the following additional sub-steps:
Sub-step 2023, adjusting the selection order in response to an adjustment operation on the selection order through the fourth window.
This sub-step aims to enable the user to modify the selection order on the client at any time, so as to change the dynamic effect of the initial moving picture, increasing the user's creative freedom and flexibility.
For example, in the example of sub-step 2022, the picture frames in the initial moving picture are arranged in the user's selection order, i.e., the 60th, 120th and 90th frames, corresponding to the 1st, 2nd and 3rd frames of the initial moving picture respectively. The user may then adjust the selection order through the fourth window to change the dynamic effect of the initial moving picture. For example, the user may interchange the positions of the 60th and 90th frames, i.e., move the 60th frame to the third position and the 90th frame to the first position. After receiving the user's adjustment operation, the client regenerates the initial moving picture according to the adjusted selection order. The picture frames in the regenerated initial moving picture are arranged in the adjusted order, namely the 90th, 120th and 60th frames, corresponding to the 1st, 2nd and 3rd frames of the initial moving picture respectively.
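Both the initial ordering and its later adjustment are pure list operations. The following sketch (all names hypothetical, placeholder strings standing in for decoded frames) mirrors the example above:

```python
# Placeholder frames keyed by target playing time (decoded images in practice).
frames_by_time = {2.0: "frame60", 3.0: "frame90", 4.0: "frame120"}

def assemble_initial_picture(frames_by_time, selection_order):
    """Arrange the selected picture frames in the order the user picked them."""
    # selection_order lists the playing times as clicked, e.g. [2.0, 4.0, 3.0].
    return [frames_by_time[t] for t in selection_order]

def apply_order_adjustment(initial_frames, new_positions):
    """Re-apply an adjusted selection order from the fourth window."""
    # new_positions gives, for each output slot, the index of the source frame;
    # [2, 1, 0] swaps the first and third frames, as in the example above.
    return [initial_frames[i] for i in new_positions]

initial = assemble_initial_picture(frames_by_time, [2.0, 4.0, 3.0])
adjusted = apply_order_adjustment(initial, [2, 1, 0])
print(initial, adjusted)   # ['frame60', 'frame120', 'frame90'] ['frame90', 'frame120', 'frame60']
```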
Step 203, in response to the selection of the target special effect from the preset special effects and the selection of the target picture frame in the initial picture, rendering the target picture frame according to the target special effect to obtain the preview picture.
The special effect comprises one or more of a filter effect, a subtitle special effect, an animation sticker effect and a human face prop effect.
The aim of this step is to allow the user to perform specific special effect processing on certain frames of the initial moving picture, so as to increase the personalization and interest of the video special effect cover.
For example, the user has generated an initial moving picture in the example of step 202: a 2-second moving picture of a cat. The client provides a plurality of preset special effects, including filter effects, subtitle special effects, animation sticker effects, face prop effects and the like, and the user can select one and apply it to the initial moving picture by clicking or dragging. For example, the user may select a subtitle special effect and add an amusing line to the initial moving picture, such as "I am a lovely cat"; the user may select an animation sticker effect and put a pair of sunglasses on the cat; or the user may select a face prop effect and give the cat a smiling face. The user may also select certain frames of the initial moving picture, such as the 30th and 45th frames, corresponding to the 1st and 1.5th seconds of the initial moving picture respectively. After receiving the user's special effect selection operation and picture frame selection operation, the client renders the selected target picture frames according to the selected target special effect to obtain a preview moving picture. The client stores the preview moving picture in a local cache and displays it on the play control, so that the user can watch its dynamic effect. The duration of the preview moving picture is the same as that of the initial moving picture, 2 seconds. In the preview moving picture, only the 30th and 45th frames show the effect selected by the user; the other picture frames keep the original effect of the initial moving picture.
Optionally, in the case that the special effects include subtitle special effects, the method further includes an optional step 2030:
Step 2030, acquiring the special effect caption in response to a caption input operation through the client.
In the embodiment of the application, the caption input operation means that the user inputs or selects a piece of text through the client as the content of the special effect caption. The special effect caption is a piece of text displayed in the video special effect cover, which can increase the information content and interest of the cover; its input content can include the specific text as well as text-related elements in the computer field, such as the font information of the text. The purpose of this step is to allow the user to freely input or select the content of the special effect caption on the client, so as to personalize and customize the video special effect cover.
For example, as shown in FIG. 2, the user has generated an initial moving picture and selected a subtitle special effect, which can add a piece of text to the initial moving picture. The user can enter the caption through a caption input operation on the client (the text can be typed or selected in the caption input area) to set the content of the special effect caption. For example, the user may input "I am a lovely cat" as the content of the special effect caption. After receiving the user's caption input operation, the client acquires the special effect caption, namely "I am a lovely cat". The client stores the special effect caption in a local cache and displays it through the play control, so that the user can view its content. The user can input or change the content of the special effect caption at any time according to his or her own needs and preferences, so as to personalize and customize the video special effect cover.
Based on step 2030, in order to render the target picture frame in accordance with the target special effect to obtain the preview moving picture, step 203 comprises the following sub-step:
Sub-step 2031, adding the special effect caption to the target picture frame according to the subtitle special effect, to obtain the preview moving picture.
In the embodiment of the application, the subtitle special effect refers to visual effects added to the special effect caption, including the style of the caption, such as color, font, size, position, animation and filter, as well as the style of composite subtitles. The aim of this sub-step is to allow the user to freely select and adjust the subtitle special effect on the client, to beautify and enrich the presentation of the special effect caption.
For example, as shown in FIG. 2, in the example of step 2030 the user has typed the content of the special effect caption, i.e., "I am a lovely cat". The user also selects the subtitle special effect, such as a pink color, a cartoon font, a large size, a centered position, a jumping animation, and a blur filter. The user can select and adjust the subtitle special effect through the special effect configuration area, for example by clicking or dragging buttons or sliders to change the caption's color, font, size, position, animation, filters, and so on. After receiving the user's subtitle special effect operation, the client adds the special effect caption to the target picture frame according to the subtitle special effect, to obtain the preview moving picture. The client stores the preview moving picture in a local cache and displays it through the play control, so that the user can watch its dynamic effect. The special effect caption in the preview moving picture is displayed according to the subtitle special effect selected by the user: pink, cartoon font, large size, centered, jumping, with a blur filter. The user can select and adjust the subtitle special effect at any time according to his or her own needs and preferences, so as to beautify and enrich the presentation of the special effect caption.
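Burning the caption into a target picture frame can be sketched with Pillow's drawing API. The position, color and default font below are stand-ins for the user's configured subtitle style, and the blank frame is a placeholder for a real decoded frame:

```python
from PIL import Image, ImageDraw, ImageFont

def add_subtitle(frame, text="I am a lovely cat", color=(255, 105, 180)):
    """Draw the special effect caption onto one target picture frame (pink, centered low)."""
    img = frame.convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()   # a real client would load the user's chosen font file
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    text_w, text_h = right - left, bottom - top
    w, h = img.size
    draw.text(((w - text_w) / 2, h - text_h - 20), text, font=font, fill=color)
    return img

frame = Image.new("RGB", (1280, 720))      # placeholder target picture frame
captioned = add_subtitle(frame)            # only the selected frames are rendered
```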
Step 204, displaying the preview moving picture through the play control.
The purpose of this step is to allow the user to view and adjust the dynamic effect of the preview moving picture on the client, to meet the user's needs and preferences.
For example, as shown in FIG. 2, the user has generated a preview moving picture in the example of step 203: a 2-second moving picture of a cat with a filter effect, a subtitle special effect, an animation sticker effect, a face prop effect and the like added. The play control on the client lets the user play, pause, repeat and so on, conveniently watching the dynamic effect of the preview moving picture. The user can also adjust parameters of the preview moving picture, such as picture frame rate, picture quality, compression mode, number of loops, single picture resolution, and whether to reserve the end frame, through the parameter configuration area on the play control. The user can also add or delete special effects of the preview moving picture, such as a filter effect, a subtitle special effect, an animation sticker effect, or a face prop effect, by clicking or dragging icons or menus on the play control. After receiving the user's adjustment operation, the client updates the dynamic effect of the preview moving picture in real time and displays it on the play control, so that the user can watch the latest effect.
Optionally, the play control includes a third window, and in order to present the preview moving picture through the play control, step 204 includes the following sub-step:
Sub-step 2041, displaying the preview moving picture through the third window.
The purpose of this sub-step is to allow the user to view the dynamic effect of the preview moving picture on the client, so that the user can preview and evaluate the video special effect cover.
For example, as shown in FIG. 2, the user obtains a preview moving picture, a moving picture of a cat. The special effect caption in the preview moving picture is displayed according to the subtitle special effect selected by the user: pink, cartoon font, large size, centered position, jumping animation, a blur filter, and the text "I am a lovely cat". The client displays the preview moving picture through the third window, so that the user can watch its dynamic effect.
Step 205, calculating the data volume of the exported target file according to the preview moving picture shown in the play control and the export operation, and showing the data volume of the exported target file.
The purpose of this step is to let the user know the size of the target file before exporting, so that the user selects the appropriate export parameters and formats according to his own needs and the storage space of the device.
For example, as shown in FIG. 2, the user has adjusted parameters of the preview moving picture, such as picture frame rate, picture quality, compression mode, number of loops, single picture resolution, and whether to reserve the end frame, in the example of step 204. The play control on the client also provides an export option with which the user can export the preview moving picture as a target file. After receiving the user's export operation, the client calculates the data volume of the exported target file, such as the file size of the dynamic or static picture, according to the parameters of the preview moving picture. The client shows the calculated data volume on the play control (it may be shown in the parameter configuration area or the export configuration area), so that the user can see the size of the target file. The user may also adjust the export parameters and format by clicking or dragging buttons or sliders on the play control to change the data volume of the target file; after receiving the adjustment, the client updates the data volume in real time and shows it on the play control. The user can thus choose suitable export parameters and a suitable format according to his or her own needs and the device's storage space before exporting the target file.
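The size shown before export can only be an estimate, since real encoders vary. A crude heuristic like the sketch below (all coefficients assumed, not from the patent) is enough to give the user an indication:

```python
def estimate_export_size(frame_count, width, height, bits_per_pixel=0.3):
    """Crude pre-export size estimate for an animated target file.

    bits_per_pixel is an assumed empirical figure for the chosen codec and
    quality setting; real encoders vary widely, so this is an indication only.
    """
    bytes_per_frame = width * height * bits_per_pixel / 8
    return int(frame_count * bytes_per_frame)

size = estimate_export_size(frame_count=60, width=1280, height=720)
print(f"approx. {size / 1_048_576:.1f} MB")   # ~2.0 MB with these assumptions
```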
Step 206, in response to a parameter configuration operation on the export configuration through the client, obtaining the export configuration after parameter configuration.
Wherein the export configuration comprises one or more of picture frame rate, picture quality, compression mode, number of loops, single picture resolution (DPI), and reservation of the end frame.
In the embodiment of the present application, the picture quality refers to picture adjustment parameters, such as a file frame rate and a quality coefficient, of a target file to be finally output. The compression mode refers to specific parameters for compressing the file before the file is finally output as the target file, such as compression ratio of compression, specific algorithm of compression, and the like. The number of loops refers to the specific number of loops of playing the same section of picture in the file. The purpose of this step is to allow the user to select appropriate export parameters and formats to export the target file according to his own needs and the storage space of the device.
For example, as shown in FIG. 2, the user has viewed the data volume of the target file in the example of step 205. The user can also adjust the export parameters and format through the parameter configuration area on the play control, to change the data volume and presentation form of the target file. For example, the user may raise the picture frame rate from 15 frames/second to 30 frames/second to increase the smoothness of the target file, or lower it from 15 frames/second to 8 frames/second to reduce the file size; lower the picture quality from high definition to standard definition to reduce the file size; switch the compression mode from lossless to lossy compression to reduce the file size; change the number of loops from infinite to a finite count to reduce the file size; lower the single picture resolution from 1080p to 720p to reduce the file size; or change the reserve-end-frame option from reserved to unreserved to reduce the file size. After receiving the user's parameter configuration operation, the client obtains the export configuration after parameter configuration and shows it on the play control, so that the user can see the changes to the export parameters and format. The user can thus choose suitable export parameters and a suitable format according to his or her own needs and the device's storage space.
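The export configuration of step 206 maps naturally onto a small record type; the field names and defaults below are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class ExportConfig:
    """Illustrative export configuration, mirroring the options listed above."""
    frame_rate: int = 15          # frames/second of the target file
    quality: float = 0.8          # quality coefficient, 0..1
    compression: str = "lossy"    # "lossless" or "lossy"
    loop_count: int = 0           # 0 = infinite loop
    dpi: int = 72                 # single picture resolution (DPI)
    keep_end_frame: bool = True   # whether to reserve the end frame

config = ExportConfig(frame_rate=8, quality=0.6, compression="lossy",
                      loop_count=3, keep_end_frame=False)
```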
Step 207, exporting the preview moving picture as a target file according to the export configuration.
The purpose of this step is to allow the user to save the dynamic effect of the preview moving picture as a file, so that the user can view or share the video special effect cover on other applications or platforms.
For example, the user has selected the export parameters and format in the example of step 206, with WebP chosen as the export format. After receiving the user's export configuration, the client exports the preview moving picture as a target file, i.e., a WebP file, according to that configuration. The client stores the exported WebP file locally or in the cloud and prompts the user that the export succeeded. The user may view or share the exported WebP file through a file manager or other applications. The exported WebP file is consistent with the dynamic effect of the preview moving picture.
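For the WebP example, Pillow can write an animated WebP directly. The sketch below consumes the illustrative ExportConfig from the step 206 sketch and interprets "reserve end frame" as keeping or dropping the last frame, which is an assumption, not a definition from the patent:

```python
from PIL import Image, ImageSequence

def export_as_webp(preview_path, out_path, config):
    """Write the preview moving picture as an animated WebP per the export configuration."""
    # config: the illustrative ExportConfig record from the step 206 sketch above.
    src = Image.open(preview_path)
    frames = [f.convert("RGB") for f in ImageSequence.Iterator(src)]
    if not config.keep_end_frame:
        frames = frames[:-1]                          # assumed meaning of the option
    frames[0].save(
        out_path, save_all=True, append_images=frames[1:],
        duration=int(1000 / config.frame_rate),       # milliseconds per frame
        loop=config.loop_count,                       # 0 = loop forever
        quality=int(config.quality * 100),
        lossless=(config.compression == "lossless"),
    )

export_as_webp("preview.gif", "cover.webp", config)   # file names hypothetical
```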
Step 208, packaging the exported preview moving picture and the original video into one video file, and setting the icon of the video file to the preview moving picture when the file is displayed by a file manager.
The purpose of this step is to allow the user to quickly identify and access the video special effects covers in the file manager without having to open the video file to view the content.
For example, the user exports a preview moving picture as a WebP file: a 2-second moving picture of a cat with a filter effect, a subtitle special effect, an animation sticker effect, a face prop effect and the like added. The user also has the original video, an MP4 file, a 4-second video of a cat. After exporting the preview moving picture, the client encapsulates it together with the original video into one video file, such as a QuickTime movie (MOV) file. The client stores the encapsulated video file locally or in the cloud and prompts the user that the encapsulation succeeded. The user may view the encapsulated video file through the file manager, where its icon is set to the preview moving picture, i.e., the moving picture of the cat, so that the user can quickly identify and access the video special effect cover. The user may open or share the encapsulated video file by double-clicking or right-clicking it. After opening the video file, the user sees its content, including the preview moving picture and the original video: the preview moving picture forms the first 2 seconds of the file, and the original video the following 4 seconds.
As shown in fig. 4, for the client shown in fig. 2, the data flow is as follows:
S1: the streaming media data of the original video is input into the playing interface of the client; after the data is parsed, the play control displays the original video and the corresponding time axis;
S2: while the original video is displayed, the playing resolution and the playing frame rate of the original video are adjusted through the playing configuration interface, which is displayed via the playing configuration area;
S3: when the cover is produced, interaction is realized through the add-special-effect interface: a specific special effect is configured through the special effect configuration area, and the subtitle content is set through the subtitle input area, the main items being the font and the text content of the subtitle; when the special effect is a subtitle, the subtitle style and the configuration of composite subtitles must additionally be set for the subtitle;
S4: before the cover is exported, the target file can be adjusted through the export setting interface of the parameter configuration area. Since the parameter configuration area provides a window for previewing the size of the target file, the user can adjust specific parameters of the exported target file, such as the frame rate, quality coefficient, compression ratio, compression algorithm, loop count and single-picture resolution, according to actual requirements, while previewing the effect of the exported cover throughout via the play control;
S5: when the cover is exported, the user can select the format of the exported file according to his or her own requirements.
In summary, in the embodiment of the application, the original video and the time axis are displayed through the play control of the client, so that a user can intuitively select any segment of the video as the material of the special effect cover without resorting to other video cutting tools, which simplifies the operation flow and saves time and resources; the initial moving picture is generated through the client's selection operation on the playing times on the time axis, so that the user can view the dynamic effect of the selected video segment in real time without waiting for video conversion or uploading, which improves user experience and efficiency; the initial moving picture is rendered through the client's special effect selection operation, so that the user can apply special effect processing to the video segment on the same interface without switching between different video editing software, which enhances the user's creative freedom and flexibility; and the preview moving picture is displayed through the play control and exported as a target file in response to the export operation through the client, so that the user can conveniently obtain the final result of the video special effect cover without additional format conversion or compression, which guarantees the quality and consistency of the video special effect cover. Therefore, when the method of the embodiment of the application is used to produce a video special effect cover, neither multiple different video cutting tools nor video conversion or video editing software is needed, which solves the problems in the related art that the user's operation flow is complicated because many poorly compatible tools are used, and that the user cannot view the effect of the video special effect cover in real time but only after uploading, which is unfavorable for adjustment and optimization.
Referring to fig. 5, an apparatus 30 for manufacturing a video special effect cover according to an embodiment of the present application is shown, including:
the video preview module 301 is configured to obtain an original video, and display the original video and a time axis corresponding to the original video through a play control of a client; the time axis is used for sequentially representing the playing time of each picture frame in the original video;
The picture preview module 302 is configured to generate an initial moving picture according to at least one picture frame corresponding to the selected at least one target playing time, in response to a selection operation of the playing time on the time axis by the client;
The special effect rendering module 303 is configured to render the initial motion picture according to the selected target special effect in response to a special effect selection operation on the special effect by the client, so as to obtain a preview motion picture;
And the picture export module 304 is configured to display the preview moving picture through the play control, and export the preview moving picture as a target file in response to an export operation through the client.
Optionally, the play control includes a first window, a second window and a third window, and the video preview module 301 includes:
The first display sub-module is used for displaying the original video through the first window;
the second display sub-module is used for displaying the time axis of the original video through the second window and sequentially displaying the thumbnails of the picture frames included in the original video based on the time axis;
The picture export module 304 includes:
And the third display sub-module is used for displaying the preview moving picture through the third window.
Optionally, the playing control includes a first window and a second window, the original video is stored in the video streaming media data, and the video preview module 301 includes:
The environment parameter extraction sub-module is used for inputting the streaming media data into the play control to acquire environment parameters of the streaming media data, and acquiring the original video from the streaming media data through the environment parameters; the environment parameter is used for representing a storage mode of the original video when the original video is stored in the streaming media data;
the picture extraction sub-module is used for extracting all picture frames from the original video and the playing time of each picture frame in the original video, and generating the time axis of the original video from the numerical range of the playing times;
and the fourth display sub-module is used for displaying the original video through the first window and displaying the time axis of the original video through the second window.
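For illustration, assuming the client can decode the original video with OpenCV, the picture extraction sub-module's behavior might look like the following sketch; extract_frames_with_times is a hypothetical helper, and each playing time is whatever timestamp the decoder reports for the frame.

```python
import cv2

def extract_frames_with_times(path):
    """Read every picture frame and its playing time (in seconds) from the
    original video; the numerical range of the times defines the time axis."""
    cap = cv2.VideoCapture(path)
    frames, times = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        times.append(cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0)
        frames.append(frame)
    cap.release()
    time_axis = (min(times), max(times)) if times else (0.0, 0.0)
    return frames, times, time_axis
```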
Optionally, the play control further includes a fourth window, and the picture preview module 302 includes:
the material preview sub-module is used for displaying the picture frame corresponding to the target playing moment in the fourth window;
A material time sub-module, configured to generate the initial moving picture according to the selection order of the target playing times corresponding to the picture frames on the time axis; the picture frames in the initial moving picture are sequentially arranged according to the selection order.
Optionally, the material time sub-module further includes:
And a sequence adjusting unit for adjusting the selection sequence in response to an adjustment operation of the selection sequence through the fourth window.
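A minimal sketch of how the material time sub-module and the sequence adjusting unit could behave, assuming frames and playing times were extracted as above; both helpers are hypothetical and ignore edge cases such as an empty selection.

```python
def build_initial_moving_picture(frames, times, selected_times):
    """For each target playing time, in the order the user selected them,
    pick the picture frame whose playing time is nearest."""
    picked = []
    for t in selected_times:  # selection order is preserved
        idx = min(range(len(times)), key=lambda i: abs(times[i] - t))
        picked.append(frames[idx])
    return picked

def adjust_selection_order(selection, new_order):
    """Apply an adjustment operation from the fourth window:
    new_order is a permutation of indices into the current selection."""
    return [selection[i] for i in new_order]
```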
Optionally, the special effect rendering module 303 includes:
The special effect rendering sub-module is used for responding to the selection of a target special effect from preset special effects and the selection of a target picture frame in the initial moving picture, and rendering the target picture frame according to the target special effect, so as to obtain the preview moving picture; the special effects include one or more of a filter effect, a subtitle special effect, an animation sticker effect and a face prop effect.
Optionally, in the case that the special effects include subtitle special effects, the apparatus 30 further includes:
the subtitle input module is used for responding to the subtitle input operation through the client to acquire special effect subtitles;
The special effects rendering module 303 includes:
and the subtitle rendering sub-module is used for adding the special effect subtitle to the target picture frame according to the subtitle special effect, so as to obtain the preview moving picture.
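By way of illustration, a filter effect and a subtitle special effect could be rendered onto a target picture frame with Pillow as below; the saturation factor, the caption position and the use of the default font are assumptions made for the example.

```python
from PIL import ImageDraw, ImageEnhance

def render_frame(frame, caption=None):
    """Apply a simple filter effect (here a saturation boost) and, for a
    subtitle special effect, draw the special effect subtitle on the frame."""
    out = ImageEnhance.Color(frame).enhance(1.3)   # filter effect
    if caption:
        draw = ImageDraw.Draw(out)
        w, h = out.size
        draw.text((w // 20, h - h // 10), caption, fill="white")  # subtitle
    return out

# Hypothetical usage over the frames of the initial moving picture:
# preview = [render_frame(f, caption="my cat") for f in initial_frames]
```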
Optionally, the picture export module 304 includes:
The parameter configuration sub-module is used for acquiring the export configuration after parameter configuration, in response to a parameter configuration operation on the export configuration by the client;
A first export sub-module, configured to export the preview moving picture as a target file according to the export configuration; the export configuration includes one or more of picture frame rate, picture quality, compression mode, loop count, single-picture resolution, and whether end frames are retained.
Optionally, the apparatus 30 further includes:
And the data quantity display module is used for calculating the data quantity of the exported target file according to the preview moving picture displayed on the play control and the export operation, and displaying the data quantity of the exported target file.
Optionally, the apparatus 30 further includes:
And the file association module is used for packaging the exported preview moving picture and the original video into a video file, and setting the icon as the preview moving picture when the icon of the video file is displayed through a file manager.
Optionally, the video preview module 301 includes:
A resolution adjustment sub-module, configured to adjust the picture resolution when the original video is presented, in response to a resolution value input through the client;
And the frame rate adjustment sub-module is used for adjusting the video frame rate when the original video is displayed and the precision of the time axis, in response to a video frame rate value input through the client.
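As a rough illustration of the frame rate adjustment, assuming the client simply decimates the decoded frames for display, the hypothetical helper below keeps every n-th frame; a production player would instead resample against the decoding timeline.

```python
def decimate(frames, src_fps, dst_fps):
    """Lower the presentation frame rate by keeping every n-th frame; the
    precision of the time axis changes accordingly (one tick per kept frame)."""
    step = max(1, round(src_fps / dst_fps))
    return frames[::step]

# e.g. decimate(frames, src_fps=30, dst_fps=15) keeps every 2nd frame
```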
In summary, in the embodiment of the application, the original video and the time axis are displayed through the play control of the client, so that a user can intuitively select any segment of the video as the material of the special effect cover without resorting to other video cutting tools, which simplifies the operation flow and saves time and resources; the initial moving picture is generated through the client's selection operation on the playing times on the time axis, so that the user can view the dynamic effect of the selected video segment in real time without waiting for video conversion or uploading, which improves user experience and efficiency; the initial moving picture is rendered through the client's special effect selection operation, so that the user can apply special effect processing to the video segment on the same interface without switching between different video editing software, which enhances the user's creative freedom and flexibility; and the preview moving picture is displayed through the play control and exported as a target file in response to the export operation through the client, so that the user can conveniently obtain the final result of the video special effect cover without additional format conversion or compression, which guarantees the quality and consistency of the video special effect cover. Therefore, when the method of the embodiment of the application is used to produce a video special effect cover, neither multiple different video cutting tools nor video conversion or video editing software is needed, which solves the problems in the related art that the user's operation flow is complicated because many poorly compatible tools are used, and that the user cannot view the effect of the video special effect cover in real time but only after uploading, which is unfavorable for adjustment and optimization.
Referring to fig. 6, an electronic device 500 may include one or more of the following components: a processing component 502, a memory 504, a power supply component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the electronic device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is used to store various types of data to support operations at the electronic device 500. Examples of such data include instructions for any application or method operating on the electronic device 500, contact data, phonebook data, messages, pictures, multimedia, and so forth. The memory 504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 506 provides power to the various components of the electronic device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 500.
The multimedia component 508 includes a screen that provides an output interface between the electronic device 500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera. When the electronic device 500 is in an operation mode, such as a shooting mode or a multimedia mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 510 is for outputting and/or inputting audio signals. For example, the audio component 510 includes a Microphone (MIC) for receiving external audio signals when the electronic device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further comprises a speaker for outputting audio signals.
The input/output (I/O) interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing status assessments of various aspects of the electronic device 500. For example, the sensor assembly 514 may detect an on/off state of the electronic device 500 and the relative positioning of components, such as the display and keypad of the electronic device 500. The sensor assembly 514 may also detect a change in position of the electronic device 500 or a component of the electronic device 500, the presence or absence of user contact with the electronic device 500, the orientation or acceleration/deceleration of the electronic device 500, and a change in temperature of the electronic device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is employed to facilitate communication between the electronic device 500 and other devices, either in a wired or wireless manner. The electronic device 500 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for implementing the method for manufacturing a video special effect cover provided by the embodiments of the application.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 504, including instructions executable by processor 520 of electronic device 500 to perform the above-described method. For example, the non-transitory storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Fig. 7 is a block diagram of an electronic device 600 according to another embodiment of the application. For example, the electronic device 600 may be provided as a server. Referring to fig. 7, the electronic device 600 includes a processing component 622, which further includes one or more processors, and memory resources represented by a memory 632 for storing instructions executable by the processing component 622, such as application programs. The application programs stored in the memory 632 may include one or more modules, each corresponding to a set of instructions. The processing component 622 is configured to execute the instructions to perform the method for manufacturing a video special effect cover provided by the embodiments of the application.
The electronic device 600 may also include a power component 626 configured to perform power management of the electronic device 600, a wired or wireless network interface 650 configured to connect the electronic device 600 to a network, and an input/output (I/O) interface 658. The electronic device 600 may operate based on an operating system stored in the memory 632, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (11)

1. A method for making a video special effect cover, comprising:
acquiring an original video, and displaying the original video and a time axis corresponding to the original video through a play control of a client; the time axis is used for sequentially representing the playing time of each picture frame in the original video;
Responding to the selection operation of the playing time on the time axis by the client, and generating an initial moving picture according to at least one picture frame corresponding to the selected at least one target playing time;
responding to the special effect selection operation of the special effect through the client, and rendering the initial moving picture according to the selected target special effect so as to obtain a preview moving picture;
displaying the preview moving picture through the play control, and exporting the preview moving picture as a target file in response to an export operation through the client;
Calculating the data volume of the exported target file according to the preview moving picture displayed on the play control and the export operation, and displaying the data volume of the exported target file;
packaging the exported preview moving picture and the original video into a video file, and setting the icon to the preview moving picture when the icon of the video file is displayed through a file manager;
The play control further includes a fourth window, and the generating, in response to a selection operation of the playing time on the time axis by the client, an initial moving picture according to at least one picture frame corresponding to the selected at least one target playing time includes:
Displaying the picture frame corresponding to the target playing time in the fourth window;
Generating the initial moving picture according to the selection order of the target playing times corresponding to the picture frames on the time axis; the picture frames in the initial moving picture are sequentially arranged according to the selection order.
2. The method of claim 1, wherein the play control comprises a first window, a second window, and a third window, the presenting, by the play control of the client, the original video and a timeline corresponding to the original video comprising:
displaying the original video through the first window;
Displaying a time axis of the original video through the second window, and sequentially displaying thumbnails of picture frames included in the original video based on the time axis;
The displaying the preview moving picture through the play control comprises:
And displaying the preview moving picture through the third window.
3. The method of claim 1, wherein the play control includes a first window and a second window, the original video is stored in video streaming media data, the displaying the original video and a timeline corresponding to the original video by the play control of the client includes:
inputting the streaming media data into the play control to acquire the environment parameters of the streaming media data, and acquiring the original video from the streaming media data through the environment parameters; the environment parameter is used for representing a storage mode of the original video when the original video is stored in the streaming media data;
Extracting all picture frames from the original video and the playing time of each picture frame in the original video, and generating the time axis of the original video from the numerical range of the playing times;
And displaying the original video through the first window, and displaying the time axis of the original video through the second window.
4. The method of claim 1, wherein before the generating the initial moving picture according to the selection order of the target playing times corresponding to the picture frames, the generating an initial moving picture according to at least one picture frame corresponding to the selected at least one target playing time in response to a selection operation of the playing time on the time axis by the client further comprises:
and adjusting the selection order in response to an adjustment operation of the selection order through the fourth window.
5. The method of claim 1, wherein the rendering the initial moving picture according to the selected target special effect in response to the special effect selection operation of the special effect through the client, so as to obtain the preview moving picture, comprises:
responding to the selection of a target special effect from preset special effects and the selection of a target picture frame in the initial moving picture, and rendering the target picture frame according to the target special effect, so as to obtain the preview moving picture;
the special effect comprises one or more of a filter effect, a subtitle special effect, an animation sticker effect and a face prop effect.
6. The method of claim 5, wherein in the event that the special effects comprise subtitle special effects, the method further comprises:
responding to a subtitle input operation through the client to obtain a special effect subtitle;
The rendering the target picture frame according to the target special effect to obtain the preview moving picture comprises:
and adding the special effect subtitle to the target picture frame according to the subtitle special effect, so as to obtain the preview moving picture.
7. The method of claim 1, wherein the exporting the preview moving picture as a target file in response to an export operation through the client comprises:
Responding to a parameter configuration operation on the export configuration through the client, and acquiring the export configuration after parameter configuration;
exporting the preview moving picture as a target file according to the export configuration;
the export configuration comprises one or more of picture frame rate, picture quality, compression mode, loop count, single-picture resolution, and whether end frames are retained.
8. The method of claim 1, wherein the displaying the original video through the play control of the client comprises:
Adjusting the picture resolution when the original video is presented, in response to a resolution value input through the client;
The displaying the original video and the time axis corresponding to the original video through the play control of the client comprises:
Adjusting the video frame rate at which the original video is presented and the precision of the time axis, in response to a video frame rate value input through the client.
9. A device for making a video special effect cover, comprising:
The video preview module is used for acquiring an original video and displaying the original video and a time axis corresponding to the original video through a play control of the client; the time axis is used for sequentially representing the playing time of each picture frame in the original video;
the picture preview module is used for responding to the selection operation of the playing time on the time axis through the client and generating an initial moving picture according to at least one picture frame corresponding to the selected at least one target playing time;
The special effect rendering module is used for responding to special effect selection operation of the special effect through the client, and rendering the initial moving picture according to the selected target special effect so as to obtain a preview moving picture;
The picture export module is used for displaying the preview moving picture through the play control and exporting the preview moving picture as a target file in response to export operation;
The data quantity display module is used for calculating the data quantity of the exported target file according to the preview moving picture displayed on the play control and the export operation, and displaying the data quantity of the exported target file;
the file association module is used for packaging the exported preview moving picture and the original video into a video file, and setting the icon to the preview moving picture when the icon of the video file is displayed through a file manager;
the play control further comprises a fourth window, and the picture preview module comprises:
the material preview sub-module is used for displaying the picture frame corresponding to the target playing moment in the fourth window;
A material time sub-module, configured to generate the initial moving picture according to the selection order of the target playing times corresponding to the picture frames on the time axis; the picture frames in the initial moving picture are sequentially arranged according to the selection order.
10. An electronic device, comprising: a processor, a memory for storing instructions executable by the processor;
Wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 8.
11. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of claims 1 to 8.
CN202410170469.3A 2024-02-06 2024-02-06 Method and device for manufacturing video special effect cover, electronic equipment and storage medium Active CN117714774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410170469.3A CN117714774B (en) 2024-02-06 2024-02-06 Method and device for manufacturing video special effect cover, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410170469.3A CN117714774B (en) 2024-02-06 2024-02-06 Method and device for manufacturing video special effect cover, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117714774A (en) 2024-03-15
CN117714774B (en) 2024-04-19

Family

ID=90150209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410170469.3A Active CN117714774B (en) 2024-02-06 2024-02-06 Method and device for manufacturing video special effect cover, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117714774B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104540028A (en) * 2014-12-24 2015-04-22 Shanghai Yingzhuo Information Technology Co., Ltd. Mobile platform based video beautifying interactive experience system
CN111756952A (en) * 2020-07-23 2020-10-09 Beijing ByteDance Network Technology Co., Ltd. Preview method, device, equipment and storage medium of effect application
CN113018867A (en) * 2021-03-31 2021-06-25 Suzhou Qinyou Network Technology Co., Ltd. Special effect file generating and playing method, electronic equipment and storage medium
CN113099287A (en) * 2021-03-31 2021-07-09 Shanghai Bilibili Technology Co., Ltd. Video production method and device
CN116095388A (en) * 2023-01-28 2023-05-09 Beijing Dajia Internet Information Technology Co., Ltd. Video generation method, video playing method and related equipment
WO2023231235A1 (en) * 2022-05-30 2023-12-07 NetEase (Hangzhou) Network Co., Ltd. Method and apparatus for editing dynamic image, and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111669623B * 2020-06-28 2023-10-13 Tencent Technology (Shenzhen) Co., Ltd. Video special effect processing method and device and electronic equipment


Also Published As

Publication number Publication date
CN117714774A (en) 2024-03-15

Similar Documents

Publication Publication Date Title
EP3758364B1 (en) Dynamic emoticon-generating method, computer-readable storage medium and computer device
CN109151537B (en) Video processing method and device, electronic equipment and storage medium
KR20210092220A (en) Real-time video special effects systems and methods
US20160283097A1 (en) Gesture based interactive graphical user interface for video editing on smartphone/camera with touchscreen
US20170024110A1 (en) Video editing on mobile platform
US20110170008A1 (en) Chroma-key image animation tool
CN112218154B (en) Video acquisition method and device, storage medium and electronic device
US20220223181A1 (en) Method for synthesizing videos and electronic device therefor
CN111479158B (en) Video display method and device, electronic equipment and storage medium
CN113099287A (en) Video production method and device
CN113115097B (en) Video playing method, device, electronic equipment and storage medium
CN111832539A (en) Video processing method and device and storage medium
JP2016537918A (en) Method and apparatus for parallax of captions on images when scrolling
KR20210118428A (en) Systems and methods for providing personalized video
CN113727140A (en) Audio and video processing method and device and electronic equipment
CN114520876A (en) Time-delay shooting video recording method and device and electronic equipment
CN113099288A (en) Video production method and device
CN112764636A (en) Video processing method, video processing device, electronic equipment and computer-readable storage medium
CN117714774B (en) Method and device for manufacturing video special effect cover, electronic equipment and storage medium
US20160202882A1 (en) Method and apparatus for animating digital pictures
CN113711575A (en) System and method for instantly assembling video clips based on presentation
CN116095388A (en) Video generation method, video playing method and related equipment
CN113852757B (en) Video processing method, device, equipment and storage medium
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
CN113489899A (en) Special effect video recording method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant