CN111541936A - Video and image processing method and device, electronic equipment and storage medium - Google Patents

Video and image processing method and device, electronic equipment and storage medium

Info

Publication number
CN111541936A
Authority
CN
China
Prior art keywords
special effect
video
original
template
component
Prior art date
Legal status
Pending
Application number
CN202010255390.2A
Other languages
Chinese (zh)
Inventor
刘瑶
陈仁健
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010255390.2A priority Critical patent/CN111541936A/en
Publication of CN111541936A publication Critical patent/CN111541936A/en
Priority to PCT/CN2021/075500 priority patent/WO2021196890A1/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors

Abstract

Embodiments of the present application disclose a video processing method, an image processing method, corresponding apparatuses, an electronic device and a computer-readable storage medium. The video processing method comprises the following steps: acquiring an original video for which a video special effect is to be generated; performing content recognition on the original video and obtaining at least one special effect template matched with the video content of the original video, wherein the special effect template comprises a plurality of special effect components, each used to describe a video special effect; and, according to the target special effect components contained in a selected special effect template, generating in the original video the video special effects described by those components. The image processing method follows the same principle. With the technical solutions of the embodiments of the present application, special effects can be conveniently generated in a video or an image.

Description

Video and image processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image application technologies, and in particular, to a video processing method and apparatus, an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of internet technology, various video application programs are emerging. These video applications typically have special effects addition functionality to provide a better user experience for the user by generating video special effects in the user's video. However, how to improve the convenience in the video special effect generation process is a technical problem to be solved in the prior art.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application provide, in multiple aspects, a video processing method and apparatus, and further provide an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
The embodiment of the application adopts the technical scheme that:
a method of video processing, the method comprising: acquiring an original video of a video special effect to be generated; identifying the video content of the original video, and obtaining at least one special effect template matched with the video content of the original video, wherein the special effect template comprises a plurality of special effect components, and the special effect components are used for describing the special effect of the video; and generating a video special effect described by the target special effect component in the original video according to the target special effect component contained in the selected special effect template.
Based on another aspect of the present application, there is also provided a video processing method, including: acquiring an original video of a video special effect to be generated; performing video content identification on the original video to obtain at least one special effect template matched with the video content of the original video, wherein the special effect template comprises a plurality of special effect components, and the special effect components are used for describing video special effects; and sending the special effect template to a designated device, so that the designated device generates a video special effect described by the special effect component in the original video according to a target special effect component contained in the selected special effect template.
Based on another aspect of the present application, there is also provided a video processing method, including: displaying an original video of a video special effect to be generated; displaying at least one special effect template matched with the video content of the original video, wherein the special effect template is generated according to a special effect component matched with the video content of the original video, and the special effect component is used for describing video special effects; detecting a target special effect template selected from the at least one special effect template, wherein the target special effect template comprises a target special effect component; rendering the video effect described by the target effect component in the displayed original video.
Based on another aspect of the present application, there is also provided an image processing method, including: acquiring an original image of a special effect to be generated; carrying out image content identification on the original image, and acquiring at least one special effect template matched with the image content of the original image, wherein the special effect template comprises a plurality of special effect components which are used for describing the special effect of the image; and generating an image special effect described by the target special effect component in the original image according to the target special effect component contained in the selected special effect template.
A video processing apparatus comprising: the original video acquisition module is used for acquiring an original video of a video special effect to be generated; the system comprises a special effect template acquisition module, a video content identification module and a video content identification module, wherein the special effect template acquisition module is used for identifying the video content of an original video and acquiring at least one special effect template matched with the video content of the original video, the special effect template comprises a plurality of special effect components, and the special effect components are used for describing the special effect of the video; and the video special effect generation module is used for generating a video special effect described by the target special effect component in the original video according to the target special effect component contained in the selected special effect template.
An image processing apparatus comprising: the original image acquisition module is used for acquiring an original image of a special effect to be generated; the template acquisition module is used for carrying out image content identification on the original image and acquiring at least one special effect template matched with the image content of the original image, wherein the special effect template comprises a plurality of special effect components, and the special effect components are used for describing the special effect of the image; and the image special effect generating module is used for generating the image special effect described by the target special effect component in the original image according to the target special effect component contained in the selected special effect template.
An electronic device comprising a processor and a memory, the memory having stored thereon computer readable instructions which, when executed by the processor, implement a video processing method or an image processing method as described above.
A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor of a computer, cause the computer to execute a video processing method or an image processing method as described above.
According to the above technical solutions, special effects are applied to the original video through a special effect template, so the user does not need to pick and add effects from a large library of special effect materials, which greatly improves convenience in the video special effect generation process.
Moreover, the special effect template for the original video is obtained by recognizing the video content of the original video and automatically matching templates to that content, so the template actively adapts to the original video for which a video special effect is to be generated. The video special effects described by the target special effect components of the selected template are then generated in the original video, so the original video does not need to be edited before the special effects are generated. This greatly improves the efficiency of video special effect generation and places no restriction on the original video during the process.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic illustration of an implementation environment to which the present application relates;
FIG. 2 is a flow diagram illustrating a video processing method according to an exemplary embodiment;
FIG. 3 is a flow chart of one embodiment of step 130 in the embodiment shown in FIG. 2;
FIG. 4 is a flow chart of one embodiment of step 133 in the embodiment of FIG. 3;
FIG. 5 is a flow chart of one embodiment of step 150 in the embodiment shown in FIG. 2;
FIG. 6 is a diagram of an effect template shown in an exemplary embodiment;
FIG. 7 is a flow chart of step 130 in another embodiment of the embodiment shown in FIG. 2;
FIG. 8 is a flow chart illustrating a method of video processing according to another exemplary embodiment;
FIG. 9 is a flow diagram illustrating a video processing method in accordance with another illustrative embodiment;
FIG. 10 is a schematic diagram illustrating a video effects generation process in accordance with one illustrative embodiment;
FIG. 11 is a flow diagram illustrating a method of video processing in accordance with another illustrative embodiment;
FIG. 12 is an interface diagram of a terminal device shown in accordance with an exemplary embodiment;
FIG. 13 is a block diagram illustrating a video processing device according to an example embodiment;
fig. 14 is a block diagram illustrating a video processing apparatus according to another exemplary embodiment;
fig. 15 is a block diagram illustrating a video processing apparatus according to another exemplary embodiment;
fig. 16 is a schematic diagram illustrating a structure of a video processing apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment according to the present application, which includes a terminal 100 and a server 200.
Here, a wired or wireless communication connection is established in advance between the terminal 100 and the server 200, so that data transmission between the terminal 100 and the server 200 is possible.
The terminal 100 runs a video application that has a user interaction interface and provides video interaction functions, such as generating and publishing video special effects, for the user through that interface, while the server 200 provides the data services required for the normal running of the video application.
It should be noted that, in this implementation environment, the terminal 100 may be any electronic device capable of running the video application, such as a smart phone, a tablet computer, a notebook computer, and the like, the video application run by the terminal 100 may be a client application or a web application, the server 200 may be an individual server or a server cluster formed by a plurality of servers, and this is not limited here.
Fig. 2 is a flow chart illustrating a video processing method that may be applied to the terminal 100 in the implementation environment shown in fig. 1 and specifically executed by a video application running in the terminal 100, according to an example embodiment.
As shown in fig. 2, in an exemplary embodiment, the video processing method at least includes the following steps:
step 110, obtaining an original video of a video special effect to be generated.
First, it should be noted that in the existing video special effect generation scheme, the user must select a target special effect template from preset special effect templates and then edit the user video so that it meets the requirements of the target special effect template; only then can the video special effects contained in the target special effect template be generated in the edited user video. The existing scheme therefore places heavy restrictions on the user video, and video special effects cannot be conveniently generated in a user video from a preset special effect template.
In order to solve the technical problem, the embodiment provides a video processing method, in the video processing method, a special effect template for generating a video special effect for a user video can be adaptive to video content included in the user video, a user does not need to edit the user video in advance, and limitation on the user video in a video special effect generation process is completely eliminated.
It should be further noted that, in this embodiment, the original video refers to a user video with a video special effect to be generated, where the original video may be obtained by a user triggering a camera to shoot, or may be obtained by a user selecting from a storage module (e.g., an album), and this is not limited here.
Step 130, performing video content identification on the original video, and obtaining at least one special effect template matched with the video content of the original video, wherein the special effect template comprises a plurality of special effect components, and the special effect components are used for describing video special effects.
Video content identification of the original video is the process of identifying the video objects it contains. A video object may be a person, an object or anything else appearing in the original video, which this embodiment does not limit. Through video content identification, the video content of the original video is obtained.
A special effect template is a preset combination of several special effect components. One special effect component may describe one or more video special effects, so a special effect template can be understood as a set of video special effects.
In this embodiment, obtaining at least one special effect template matched with the video content of the original video is a process of adaptively generating templates for the original video according to its recognized video content, where each template is formed by combining several special effect components matched with that content.
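For concreteness, the template-and-component structure just described might be modeled as follows. This is a minimal Python sketch; the class and field names (EffectComponent, EffectTemplate and their attributes) are illustrative assumptions, not anything named in the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EffectComponent:
    effect_type: str                 # type parameter, e.g. "filter", "sticker"
    labels: List[str]                # special effect labels, e.g. ["Chinese style"]
    start_offset_ms: int = 0         # time parameter: start relative to the video
    duration_ms: int = 0             # time parameter: display duration
    parameters: Dict[str, str] = field(default_factory=dict)  # component parameters

@dataclass
class EffectTemplate:
    # a special effect template is simply a preset combination of components
    components: List[EffectComponent]
```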
Step 150, generating a video special effect described by the target special effect component in the original video according to the target special effect component contained in the selected special effect template.
In the obtained at least one special effect template matched with the video content of the original video, the operation of selecting the special effect template may be triggered by a user through an interaction manner such as clicking, double clicking, long pressing, and the like, which is not limited in this embodiment.
For the target special effect components contained in the selected special effect template, the video special effects described by those components are correspondingly generated in the original video.
In this embodiment, therefore, special effect components matched with the video content of the original video are selected from the preset components and combined into at least one special effect template, and these templates adapt themselves to the original video. According to the selected template, the video special effects described by its target special effect components can then be generated in the original video. The original video never needs to be edited during the whole generation process, which avoids the restrictions that preset special effect templates impose on the original video in the existing scheme.
In addition, because the video special effects are added to the original video in the form of a special effect template, the user no longer has to pick materials out of a large library of special effect materials; the generation process becomes more convenient, and high-quality video special effects can be generated quickly in the original video.
FIG. 3 is a flow chart of one embodiment of step 130 in the embodiment shown in FIG. 2. As shown in fig. 3, in an exemplary embodiment, performing video content recognition on an original video and obtaining at least one special effect template matching the video content of the original video includes at least the following steps:
Step 131, identifying the video content of the original video to obtain video tags corresponding to the video content of the original video.
As described above, the video content identification of the original video is a process of identifying the video object contained in the original video, and therefore the video tag corresponding to the video content of the original video is the tag corresponding to the video object contained in the original video. For example, if a picture containing a child in the original video is identified, the video label "baby" can be acquired correspondingly.
It should be noted that, since the original video usually contains different types of video objects, the video tags obtained by video content identification of the original video should also be multi-dimensional.
In one embodiment, the video tag may be a tag corresponding to a video object contained in a video image in the original video. For example, a plurality of frames of video images may be captured from an original video according to the video duration of the original video, and then content identification may be performed on the captured video images to obtain content tags of the video images, where the content tags identify video objects contained in the video images, so that the content tags of the video images are used as video tags corresponding to the video content of the original video.
It should be noted that the number of frames captured from the original video can be set according to specific requirements. If the original video is long, more frames can be captured for content identification so that the resulting video tags accurately identify its content; likewise, even if the original video is short, enough frames should still be captured for content identification so that the obtained video tags accurately reflect the video content.
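By way of illustration only, step 131 could be realized roughly as below. OpenCV is used here as one possible frame-capture backend, and recognize_objects is a placeholder for whatever recognition model produces content tags; both are assumptions, not part of the patent.

```python
import cv2  # OpenCV, assumed here only as a frame-capture backend

def recognize_objects(frame) -> set:
    """Placeholder for an image-recognition model returning tags like {"baby"}."""
    raise NotImplementedError

def video_tags(path: str, frames_to_sample: int = 8) -> set:
    """Sample frames evenly across the video and union their content tags."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    tags = set()
    for i in range(frames_to_sample):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // frames_to_sample)
        ok, frame = cap.read()
        if ok:
            tags |= recognize_objects(frame)
    cap.release()
    return tags
```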
Step 133, acquiring a plurality of special effect components matched with the video tags from a preset special effect component set according to the video tags.
The special effect component set comprises all preset special effect components, and the special effect components respectively describe different video special effects. According to the video label of the original video, a plurality of special effect components matched with the video label are selected from the special effect component set, so that the obtained special effect components can be matched with the video content of the original video.
Because the video tags corresponding to different original videos should be different, the embodiment can select different special effect components to generate video special effects for different video contents, thereby forming a diversified video special effect generation scheme.
Step 135, generating at least one special effect template according to the special effect components matched with the video tags.
As previously described, since several special effects components obtained from the special effects component set according to the video tags in step 133 match the video content of the original video, the special effects templates generated according to these special effects components can also match the video content of the original video.
In this way, according to the video tags obtained by recognizing the video content of the original video, special effect components matched with those tags are taken from the preset special effect component set and combined into at least one special effect template matched with the video content. Rich special effect templates can thus be produced from combinations of preset special effect materials, and every generated template adapts to the video content of the original video, which gives the video special effect generation scheme of this embodiment strong adaptivity.
FIG. 4 is a flow chart of one embodiment of step 133 in the embodiment of FIG. 3. As shown in fig. 4, according to the video tag, a plurality of special effect components matched with the video tag are obtained from a preset special effect component set, which at least includes the following steps:
step 1331, determining the weight of the special effect label of each special effect component in the special effect component set relative to the video label according to a preset label weight judgment rule.
First, it should be noted that the special effect label of each special effect component in the set identifies the component's special effect style; for example, a label may be "European-American retro" or "Chinese style". Each component has at least one special effect label, and different components may share the same label.
The preset label weight judgment rule is a rule prepared in advance for judging the weight of a special effect label relative to the video tags of the original video. In one embodiment, since the video tags that content recognition can produce and the special effect labels of the components are all preset and therefore known, the weight of every special effect label relative to every video tag can be determined beforehand, and the obtained weights stored in association with their judgment conditions, for example in the form of a list; this stored mapping constitutes the label weight judgment rule.
Therefore, according to the label weight judgment rule, the weight of the special effect label of each component in the set relative to the video tags recognized in step 131 can be determined. This weight reflects the degree of matching between the corresponding component and the video content of the original video.
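One plausible realization of the label weight judgment rule is a table precomputed over (special effect label, video tag) pairs, as in the sketch below. The weights are invented for illustration, and EffectComponent is the structure sketched earlier.

```python
# Precomputed rule: weight of each special effect label relative to each
# possible video tag (values here are made up for illustration).
LABEL_WEIGHTS = {
    ("cute", "baby"): 0.9,
    ("European-American retro", "baby"): 0.2,
}

def component_weight(component, video_tags) -> float:
    # a component's weight is its best label-to-tag match
    return max(
        (LABEL_WEIGHTS.get((lbl, tag), 0.0)
         for lbl in component.labels for tag in video_tags),
        default=0.0,
    )
```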
Step 1333, selecting a plurality of special effect components with the special effect labels matched with the video labels from the special effect component set according to the weight of the special effect label of each special effect component relative to the video label.
As described above, the degree of matching between a special effect component and the video content of the original video is embodied by the weight of the component's special effect label relative to the video tags. Hence, from the weights of all components in the set, several components matched with the video content can be determined, and at least one matching special effect template generated from the selected components.
In one embodiment, the special effect components with weights greater than a preset weight threshold in the special effect component set can be used as special effect components matched with the video tags of the original video, so that the special effect components are randomly combined according to different special effect types, and at least one special effect template matched with the video content of the original video is obtained.
The special effect type of a component represents the kind of video special effect it describes; for example, components may fall under types such as leader, trailer, time effect, sticker, atmosphere, filter and transition, and each type may contain a plurality of components.
It should be noted that one special effect component may correspond to a single video special effect, and several components may also be combined into one composite component, so that a single component corresponds to a plurality of video special effects. For example, the component under the leader-and-trailer type is composed of a leader component and a trailer component, and a freeze component and a color-change component can be combined into a fixed-point color-change component that carries both a freeze effect and a color-change effect.
A single special effect component contains at least three special effect parameters: a type parameter, component parameters and a time parameter. The type parameter describes the special effect type of the component, such as sticker or filter. The component parameters describe the concrete video special effect content, from which that content can be drawn: for a component containing special effect material, the corresponding material is drawn according to the component parameters; for a component without material, such as a time effect, the effect is drawn directly from the component parameters. The time parameter describes the relative position and display duration of the component in the original video.
Therefore, according to the type parameter of the special effect component, the special effect type corresponding to the special effect component can be determined. By randomly combining the special effect components with the weights larger than the preset weight threshold value according to different special effect types, each special effect template can be ensured to contain the special effect components with different special effect types, and therefore rich video special effects can be generated in the original video through each special effect template.
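This first strategy might be sketched as follows: keep the components whose weight exceeds the threshold, group them by their type parameter, and draw one component per type at random for each template. The threshold and template count are illustrative, and the helpers come from the earlier sketches.

```python
import random
from collections import defaultdict

def build_templates(components, video_tags, threshold=0.5, n_templates=8):
    # keep only components that match the video content well enough
    by_type = defaultdict(list)
    for c in components:
        if component_weight(c, video_tags) > threshold:
            by_type[c.effect_type].append(c)
    # each template draws one component from every special effect type
    return [
        EffectTemplate([random.choice(cands) for cands in by_type.values()])
        for _ in range(n_templates)
    ]
```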
In another embodiment, the components in the set are first grouped by special effect type according to their labels. Within each type, components are selected in descending order of the weight of their special effect labels relative to the video tags, the number selected per type being equal to the preset number of special effect templates. Components of equal rank across the different types are then combined into one special effect template, so that at least one template is obtained in order of matching degree.
In other words, the components matched with the video tags of the original video are classified by special effect type, the components within each type are sorted by decreasing weight, the top components are picked from each type up to the preset number of templates, and components with the same rank are combined to form one template. The preset number of templates corresponds to the number of templates displayed in the video application interface.
For example, assuming the number of preset special effect templates is 8, the components with the eight largest weights under the leader-and-trailer type may be selected as A1~A8, those under the filter type as B1~B8, and those under the sticker type as C1~C8. Combining A1, B1 and C1 forms one special effect template, combining A2, B2 and C2 forms another, and the remaining templates are obtained in the same way, yielding 8 special effect templates.
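The rank-wise combination could be sketched as below, reusing component_weight from earlier and taking the per-type grouping as input. The sketch assumes every special effect type offers at least n_templates candidates, which the patent does not guarantee.

```python
def build_ranked_templates(by_type, video_tags, n_templates=8):
    # sort each type's candidates by weight, keep the top n_templates
    ranked = {
        t: sorted(cands, key=lambda c: component_weight(c, video_tags),
                  reverse=True)[:n_templates]
        for t, cands in by_type.items()
    }
    # combine components of equal rank: (A1, B1, C1), (A2, B2, C2), ...
    return [EffectTemplate([ranked[t][i] for t in ranked])
            for i in range(n_templates)]
```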
When the special effect templates are displayed in the video application interface, they can be arranged according to the weight order of their components, so that the templates are sorted from the highest to the lowest degree of matching with the video content of the original video, which makes selecting a template easier.
It should be noted that the above two embodiments are only examples of the selection manner of the special effect component matched with the video content of the original video, and in practical application, the selection manner of the special effect component may be set according to actual requirements.
Therefore, the special effect component is labeled, and the matching degree between the special effect component and the video content of the original video is reflected according to the weight of the special effect label of the special effect component relative to the video label of the original video, so that the obtained special effect template is ensured to be matched with the video content of the original video.
FIG. 5 is a flow diagram of one embodiment of step 150 in the embodiment shown in FIG. 2. As shown in fig. 5, in an exemplary embodiment, according to a target special effect component included in a selected special effect template, a video special effect described by the target special effect component is generated in an original video to obtain a special effect video corresponding to the original video, which includes at least the following steps:
Step 151, analyzing the special effect parameters of each special effect component contained in the selected special effect template, wherein the special effect parameters include time parameters.
As described above, the special effect parameters of the special effect component at least include a type parameter, a component parameter, and a time parameter, and the time parameter is used to describe a relative position and a display duration of the video special effect described by the special effect component in the original video.
Illustratively, the time parameters include the start position of the special effect component relative to the original video (hereinafter startOffset), the end position (hereinafter endOffset), and the display duration (hereinafter duration).
FIG. 6 is a diagram of a selected special effect template shown in an exemplary embodiment. As shown in fig. 6, in this exemplary template the leader-and-trailer component contains a leader effect and a trailer effect: the leader effect is displayed from the 0 ms position of the original video for 2000 ms, and the trailer effect from the 8000 ms position for 2000 ms. The atmosphere component contains a light-spot atmosphere effect displayed from the 2000 ms position for 8000 ms. It should be appreciated that the end position of a special effect component relative to the original video can be derived from its start position and duration.
The leader effect shown in fig. 6 can be described as: {"effectType": "Pag", "startOffset": 0, "duration": 2000, "parameter": {"filterPath": "leader.pag", "type": "filter"}}; the trailer effect as: {"effectType": "Pag", "endOffset": 0, "duration": 2000, "parameter": {"filterPath": "trailer.pag", "type": "filter"}}; and the light-spot atmosphere effect as: {"effectType": "Pag", "startOffset": 2000, "endOffset": 0, "parameter": {"filterPath": "atmosphere.pag", "type": "filter"}}.
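Reading these descriptors, one way to resolve a component's display interval from startOffset, endOffset and duration is sketched below; how the three fields combine is our interpretation of the fig. 6 example (a 10000 ms video), not something the patent spells out.

```python
def resolve_timing(desc: dict, video_ms: int):
    """Return (start, end) of a component's display interval, in ms."""
    start = desc.get("startOffset")
    end_off = desc.get("endOffset")
    dur = desc.get("duration")
    if dur is None:               # e.g. the atmosphere effect: no duration given
        dur = video_ms - (start or 0) - (end_off or 0)
    if start is None:             # anchored to the video end, e.g. the trailer
        start = video_ms - (end_off or 0) - dur
    return start, start + dur

# With video_ms = 10000: leader -> (0, 2000), trailer -> (8000, 10000),
# light-spot atmosphere -> (2000, 10000)
```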
Step 153, according to the time parameter of each special effect component, generating the video special effect content corresponding to each component at its relative position in the original video, and keeping that content displayed for the duration included in the time parameter.
According to the time parameters obtained by parsing each special effect component of the selected template, the video special effect content corresponding to each component is generated at its relative position in the original video and kept displayed for the duration contained in the time parameter. The various video special effects corresponding to the selected template are thus displayed in the original video, realizing the generation of video special effects in the original video and producing the special effect video corresponding to it.
As shown in fig. 6, the resulting special effect video is formed by combining the original video with the video special effects described by the respective special effect components of the selected template.
Therefore, the video special effect corresponding to the selected special effect template is automatically generated in the original video based on the time parameter contained in the special effect component, the process of generating the video special effect is very simple and convenient for the user, and no additional operation is required for the user.
In another exemplary embodiment, the video processing method further includes the steps of:
according to the recognition result of the video content recognition of the original video, selecting background music matched with the video content of the original video from a preset background music set, and fusing the background music into the special-effect video.
In this embodiment, each piece of background music in the background music set is tagged in advance: music tags such as "children's song" or "lovely" are set for each piece according to dimensions such as its musical style.
According to the video tags obtained by recognizing the video content of the original video, background music whose music tags match the video tags can be selected from the preset background music set, for example the piece whose music tag has the highest degree of matching, and the selected background music is fused into the original video so that the resulting special effect video contains it.
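A minimal sketch of this selection, reusing the weight-table idea from earlier; track objects carrying a labels attribute are an assumption of the sketch.

```python
def pick_music(music_set, video_tags):
    """Pick the track whose music tags best match the recognized video tags."""
    def score(track):
        return max((LABEL_WEIGHTS.get((m, v), 0.0)
                    for m in track.labels for v in video_tags), default=0.0)
    return max(music_set, key=score)
```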
In this embodiment, therefore, because the background music is handled separately from the special effect templates and is determined directly by the video content of the original video, the background music in the final special effect video stays the same no matter which of the matched special effect templates is selected.
In other embodiments, background music may itself be treated as one type of special effect, and a background music component combined with other types of special effect components to form a special effect template. Specifically, the background music set is a set of background music components included in the special effect component set. In step 133, the components matched with the video tags of the original video, including a background music component and other special effect components, are obtained from the component set, and at least one special effect template matched with the video content is formed according to the weights of the music tags of each background music component and of the special effect labels of each other component relative to the video tags.
Therefore, in the embodiment, the background music contained in at least one special effect template matched with the original video can be different, so that the diversity of the special effect templates is further increased, and richer video special effect experience is provided for users.
In another exemplary embodiment, the original video for which a video special effect is to be generated includes at least two video segments. In the existing video special effect generation scheme, a special effect template must be selected for each video segment in advance, and each segment must then be edited so that it meets the requirements of its corresponding template.
It can be seen that, when generating video special effects for at least two video segments, a user needs to perform more additional operations, so that the process of generating the video special effects is more complicated, and the present embodiment provides a solution for this situation. As shown in fig. 7, in this embodiment, performing video content recognition on an original video to obtain at least one special effect template matching with the video content of the original video includes at least the following steps:
step 132, respectively identifying the content of each video clip;
step 134, respectively acquiring special effect templates matched with the video content of each video clip according to the identified video content of each video clip.
As described in the foregoing embodiments, identifying the content of each video segment means identifying the video objects each segment contains, which yields video tags matching the video content of each segment.
According to the video tags corresponding to the video clips, the special effect templates matching with the video contents of the video clips can be obtained according to the method described in the foregoing embodiment. When the generation process of the video special effects is specifically executed, the special effect templates are respectively selected for each video clip, and then the video special effects described by the target special effect components are generated in each video clip according to the target special effect components contained in the selected special effect templates.
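Under the earlier sketches' assumptions, the per-segment matching might look like this, reusing video_tags and build_templates from above:

```python
def templates_per_segment(segment_paths, component_set):
    # each clip gets its own recognition pass and its own candidate templates
    results = []
    for path in segment_paths:
        tags = video_tags(path)
        results.append(build_templates(component_set, tags))
    return results
```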
Thus, in the multi-segment case, the method of this embodiment adaptively obtains special effect templates for the different video segments according to their respective video contents. The user only needs to choose a template for each segment, and video special effects matched with the content of each segment are generated in it; since the user never edits any segment during the whole process, the method places no restriction on the video segments for which special effects are to be generated.
In another exemplary embodiment, as shown in fig. 8, after step 150, the video processing method further includes the following steps:
step 210, calling each special effect component contained in the special effect video according to the special effect editing instruction, wherein the special effect video is formed by combining the original video and the video special effects described by the target special effect component.
After the video special effect is generated for the original video according to the selected special effect template and the special effect video corresponding to the original video is obtained, the embodiment also provides the editing operation for the special effect video so as to further improve the user experience.
The special effect editing instruction is used for instructing to edit a video special effect contained in the special effect video, so that each special effect component contained in the special effect video needs to be called. It should be noted that the special effect component included in the special effect video is the target special effect component included in the special effect template selected in step 150, and the special effect editing instruction may be obtained by detecting that a specific button in the video application interface is triggered.
And step 230, updating each special effect component contained in the special effect video according to the component updating instruction.
The component update instructions are used for indicating that the special effect components contained in the special effect video are updated, wherein the updating operation of the special effect components comprises but is not limited to adding, deleting and replacing the special effect components. The component update instruction may also be obtained by detecting that a corresponding button in the video application interface is triggered.
Illustratively, assume that a special effect video contains special effect components A1, B1 and C1. If the component update instruction indicates that the filter-type component B1 is to be replaced by another filter-type component B2, the replacement of the special effect component is performed as indicated by the instruction.
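On the EffectTemplate structure assumed earlier, such a replacement is a one-line update, sketched as:

```python
def replace_component(template, old, new):
    # e.g. swap filter component B1 for another filter component B2
    template.components = [new if c is old else c for c in template.components]
```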
It should be understood that in the special effect video corresponding to the original video, the update of the special effect component indicates that the video special effect generated in the original video is updated accordingly.
Fig. 9 is a flow diagram illustrating a video processing method that may be applied to the server 200 in the implementation environment shown in fig. 1, according to another example embodiment. As shown in fig. 9, in an exemplary embodiment, the video processing method at least includes the following steps:
step 310, acquiring an original video of a video special effect to be generated;
step 330, identifying video content of the original video to obtain at least one special effect template matched with the video content of the original video, wherein the special effect template comprises a plurality of special effect components, and the special effect components are used for describing video special effects;
step 350, sending the special effect template to the specified device, so that the specified device generates the video special effect described by the special effect component in the original video according to the target special effect component contained in the selected special effect template.
In this embodiment, the designated device is an electronic device running a video application, for example the terminal 100 in the implementation environment shown in fig. 1, and is configured to generate the corresponding video special effects in the original video according to the selected special effect template.
The original video for which the video special effect is to be generated is provided by the designated device, which sends it to the server in order to obtain from the server at least one special effect template matched with its video content.
It should be noted that, for the process of performing video content identification on the original video and obtaining the special effect template matched with the video content of the original video, reference is made to corresponding content obtained by the special effect template described in the foregoing embodiment, which is not described in detail in this embodiment.
After the server acquires the special effect template matched with the video content of the original video, the acquired special effect template is sent to the appointed equipment, so that the appointed equipment generates a corresponding video special effect in the original video according to the selected special effect template.
As shown in fig. 10, the designated device sends the original video to the server. The server recognizes the video content of the original video to obtain the corresponding video tags, selects background music matched with the original video according to the music tags, selects special effect components matched with the original video according to the special effect labels of the other components, combines the selected background music and the other components into special effect templates matched with the original video, and sends the templates to the designated device. The designated device then parses the selected special effect template and generates the video special effects it contains in the original video.
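The server side of fig. 10 might be condensed into a single endpoint as below. Flask is used purely as a stand-in web framework, COMPONENT_SET and MUSIC_SET are assumed preset collections, and the helpers come from the earlier sketches; none of these names appear in the patent.

```python
from dataclasses import asdict
from flask import Flask, request, jsonify

app = Flask(__name__)
COMPONENT_SET = []   # assumed: the preset special effect components
MUSIC_SET = []       # assumed: the preset background music set

@app.post("/match_templates")
def match_templates():
    upload = request.files["video"]
    upload.save("/tmp/original.mp4")                  # receive the original video
    tags = sorted(video_tags("/tmp/original.mp4"))    # video content recognition
    templates = build_templates(COMPONENT_SET, tags)  # match special effect templates
    music = pick_music(MUSIC_SET, tags) if MUSIC_SET else None
    # return the matched templates to the designated device
    return jsonify({
        "tags": tags,
        "music": music.labels if music else None,     # matched background music
        "templates": [asdict(t) for t in templates],
    })
```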
Fig. 11 is a flow chart illustrating a video processing method according to another exemplary embodiment, which is applicable to the terminal 100 in the implementation environment shown in fig. 1 and the designated device in the embodiment shown in fig. 10. As shown in fig. 11, in an exemplary embodiment, the video processing method at least includes the following steps:
step 410, displaying an original video of a video special effect to be generated;
step 430, displaying at least one special effect template matched with the video content of the original video, wherein the special effect template is generated according to a special effect component matched with the video content of the original video, and the special effect component is used for describing the special effect of the video;
step 450, detecting a target special effect template selected from at least one special effect template, wherein the target special effect template comprises a target special effect component;
step 470, the video effect described by the target effect component is presented in the displayed original video.
In this embodiment, an original video of a video special effect to be generated is displayed, at least one special effect template matched with video content of the original video is displayed, and after a target special effect template selected from the at least one special effect template is detected, a video special effect described by a target special effect component contained in the target special effect template is presented in the displayed original video.
This visualizes the process of generating a video special effect for the original video. It can be seen that, during the process, the user only needs to select the original video and a special effect template, with no additional operations such as editing the original video, so the video special effect generation scheme provided by this embodiment offers an excellent user experience.
To facilitate understanding of the video processing method disclosed in the present embodiment, the following describes the video processing method in detail by taking a specific application scenario as an example.
In an exemplary application scenario, the video processing method is applied to a terminal device running a video application; the terminal device may specifically be the terminal 100 in the implementation environment shown in fig. 1, or the designated device in the embodiment shown in fig. 10. Fig. 12 is an interface schematic diagram of the terminal device.
As shown in fig. 12, after the user obtains an original video with a special video effect to be generated by shooting with a camera of the terminal device, or selects an original video with a special video effect to be generated from an album of the terminal device, the terminal device correspondingly displays the original video.
When detecting that a 'one-key film-out' button in the equipment interface is triggered, the terminal equipment displays at least one special effect template matched with the video content of the original video. As shown in fig. 12, the terminal device displays the matched special effect templates of the original video in the form of a list of special effect templates.
Each such special effect template is formed by identifying the video content of the original video to obtain a corresponding video tag, selecting special effect components matched with the video content according to that tag, and combining the selected components. It should also be understood that the special effect templates contained in the list may differ for different original videos selected by the user.
If the user selects one of the special effect templates in the list, the selected target special effect template is detected, and the video special effects described by the target special effect components contained in it are presented in the displayed original video; that is, the resulting special effect video is displayed. As shown in fig. 12, if it is detected that special effect template 4 is selected, the video special effects contained in special effect template 4 are displayed in the original video, i.e. the selected special effect template is previewed on the original video. When the user confirms the selection of the target special effect template, the resulting special effect video is displayed; for example, the special effect video shown in fig. 12 contains a correspondingly displayed text sticker.
After the special effect video is obtained, the user can further update its video special effects by triggering the background music, material adjustment, filter and similar buttons displayed in the interface, until a satisfactory special effect video is obtained. The user then clicks the 'done' button displayed in the interface, and the final special effect video can be stored locally, uploaded to a server for storage, or published to the internet, which is not limited herein.
Therefore, in practical use, after the user selects one or more video segments, the method provided by this embodiment of the application adaptively matches at least one special effect template to the user video. Each special effect template has its own special effect style and contains a plurality of video special effects, so the resulting special effect videos are diverse, and no restriction is imposed on the user video.
Since the video special effect generation principle disclosed in the video processing method of the above embodiments also applies to image special effect generation, another aspect of the present application provides an image processing method to facilitate special effect generation for user images. The image processing method may likewise be applied to the terminal 100 in the implementation environment shown in fig. 1, and is specifically executed by a video application running in the terminal 100.
In an exemplary embodiment, the image processing method includes at least the steps of:
acquiring an original image of a special effect to be generated;
carrying out image content identification on an original image, and acquiring at least one special effect template matched with the image content of the original image, wherein the special effect template comprises a plurality of special effect components which are used for describing the special effect of the image;
and generating an image special effect described by the target special effect component in the original image according to the target special effect component contained in the selected special effect template.
It should be noted that, similar to the manner of acquiring the original video, the original image may be acquired by a user triggering a camera to shoot or may be acquired by a user selecting from a storage module (e.g. an album), which is not limited herein.
The process of identifying the image content of the original image, that is, of identifying the persons, objects or other subjects contained in the original image, is not limited herein.
Acquiring at least one special effect template matched with the image content of the original image means adaptively generating the content of at least one special effect template for the original image according to its image content, each special effect template being formed by combining a plurality of special effect components matched with the image content of the original image.
The user selects a preferred special effect template from the special effect templates matched with the image content of the original image, and the image special effects respectively described by the target special effect components contained in the selected template are then generated in the original image.
In this way, image special effects are added to the original image in the form of a special effect template, so users do not need to pick special effect materials from a large library of materials. The generation of image special effects is therefore more convenient, and since the special effect templates generated in this embodiment all match the image content of the original image, the method can quickly generate high-quality image special effects in the original image.
It should be noted that the special effect template matching performed on the image content of the original image in this embodiment may also be implemented based on an image label of the original image. The image label is obtained by identifying the image content of the original image; based on the image label, a plurality of special effect components matched with it are obtained from a preset special effect component set and combined into at least one special effect template.
In one embodiment, a plurality of special effect components whose special effect labels match the image label may be selected from the special effect component set according to the weights of the special effect labels of the special effect components relative to the image label, and then combined into special effect templates. It should be mentioned that some special effect components in the set may be unsuitable for image special effect generation, such as background music components, so the special effect labels of these components should carry lower weights relative to the image label.
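One way this weighting rule might be realized is sketched below; the 0.1 penalty factor, the threshold and the field names are assumptions made purely for illustration:

def label_weight(component, image_labels):
    # base weight: overlap between the component's special effect labels
    # and the image labels of the original image
    overlap = len(set(component["labels"]) & set(image_labels))
    # components unsuitable for still images, e.g. background music,
    # are down-weighted relative to the image labels
    penalty = 0.1 if component["type"] == "background_music" else 1.0
    return overlap * penalty

def match_components(component_set, image_labels, threshold=1.0):
    return [c for c in component_set
            if label_weight(c, image_labels) >= threshold]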
The special effect templates may be combined in either of two ways: the special effect components matched with the image label are randomly combined across the different special effect types, or, ranking the components of each special effect type by their matching degree relative to the image label, the components with the same rank under the different special effect types are combined into one special effect template; either way, at least one special effect template is obtained.
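Both combination modes admit a short sketch (non-authoritative; each component is assumed to be a dictionary with "type" and "weight" fields):

import random
from collections import defaultdict

def group_by_type(components):
    groups = defaultdict(list)
    for c in components:
        groups[c["type"]].append(c)
    return groups

def random_templates(components, n_templates):
    # mode 1: randomly combine one matched component of each special effect type
    groups = group_by_type(components)
    return [{t: random.choice(cs) for t, cs in groups.items()}
            for _ in range(n_templates)]

def ranked_templates(components, n_templates):
    # mode 2: rank each type by matching degree; the i-th template takes
    # the i-th ranked component of every special effect type
    groups = {t: sorted(cs, key=lambda c: c["weight"], reverse=True)
              for t, cs in group_by_type(components).items()}
    return [{t: cs[i] for t, cs in groups.items() if i < len(cs)}
            for i in range(n_templates)]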
After the image special effect described by the target special effect component is generated in the original image according to the target special effect component contained in the selected special effect template, various special effect components contained in the special effect image can be updated.
It should be noted that, for details of the above process, reference may be made to the description of the video special effect generation process in the foregoing embodiment, and details are not repeated here.
In another embodiment, the display duration of the original image for which a special effect is to be generated may be configured as a set duration; for example, with the display duration configured as 2 seconds, a video generated from the original image is obtained. A matched special effect template can then be generated for this video according to the video processing method of the foregoing embodiments, and a video special effect generated in the video according to the selected special effect template.
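As an illustration (a sketch assuming a simple frame-list representation of video; the 2-second figure comes from the example above):

def image_to_video(original_image, display_ms=2000, fps=25):
    # repeat the still image for display_ms, so that the video processing
    # method of the foregoing embodiments applies to the result unchanged
    n_frames = display_ms * fps // 1000
    return [original_image] * n_frames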
In addition, in a video generated from an original image, since every frame of the video is the original image, identifying the video content is equivalent to identifying the content of the original image.
If there are a plurality of original images for which special effects are to be generated, they can be combined into a video, and a matched special effect template generated for that video based on the video processing method described in the foregoing embodiments. The display duration of each original image in the video may be configured as a fixed duration or any duration, which is not limited in this embodiment. Identifying the video content then amounts to identifying the content of at least one of the original images.
It should be noted that the generated special effect template matched with the video content may include a special effect component for describing a transition special effect, and the relative position at which such a transition component is displayed in the video is the switching position between adjacent original images. For example, if a video is composed of 4 original images and the display duration of each original image is 2000 ms, the total duration of the video is 8000 ms, and the transition components are displayed at the 2000 ms, 4000 ms and 6000 ms positions of the video. The display duration of the transition component may be preset.
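The switching positions in this example follow directly from the per-image display durations, as the following hypothetical helper shows:

def transition_positions(display_durations_ms):
    # transitions sit at the boundaries between adjacent original images,
    # i.e. at every cumulative duration except the last one
    positions, elapsed = [], 0
    for duration in display_durations_ms[:-1]:
        elapsed += duration
        positions.append(elapsed)
    return positions

# four original images of 2000 ms each form an 8000 ms video
print(transition_positions([2000, 2000, 2000, 2000]))  # [2000, 4000, 6000]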
In other embodiments, the transition special effect component may also be an ordinary special effect component, that is, obtained, like the other special effect components in the foregoing embodiments, by matching the video tag against the special effect tags; this embodiment imposes no limitation here. Because the requirement for transition special effects is high when a video is formed by combining original images (for example, adding transition special effects between different original images greatly reduces the visual jump caused by image switching), the weight of the special effect label of the transition special effect component relative to the video label should be high. The relative position at which the transition special effect component is displayed in the video remains tied to the positions at which the original images are switched.
In this way, one or more original images for which special effects are to be generated are combined into a video, and the special effect generation operation on the images is converted into a special effect generation operation on the video. Special effect components applicable to videos thereby become applicable to image special effect processing as well, so the special effect template corresponding to the original images can contain richer special effect content, which greatly improves the user experience.
Fig. 13 is a block diagram illustrating a video processing apparatus according to an example embodiment. As shown in fig. 13, the video processing apparatus includes an original video acquisition module 510, an effect template acquisition module 530, and a video effect generation module 550.
The original video obtaining module 510 is configured to obtain an original video of a video special effect to be generated. The special effect template obtaining module 530 is configured to perform video content identification on an original video, and obtain at least one special effect template matched with video content of the original video, where the special effect template includes a plurality of special effect components, and the special effect components are used to describe video special effects. The video special effect generating module 550 is configured to generate a video special effect described by the target special effect component in the original video according to the target special effect component included in the selected special effect template.
In another exemplary embodiment, the effect template acquiring module 530 includes a video tag acquiring unit, an effect component acquiring unit, and an effect component matching unit. The video tag obtaining unit is used for identifying the video content of the original video and obtaining a video tag corresponding to the video content of the original video. The special effect component acquisition unit is used for acquiring a plurality of special effect components matched with the video tags from a preset special effect component set according to the video tags. The special effect component matching unit is used for generating at least one special effect template according to the special effect component matched with the video label.
In another exemplary embodiment, the video tag acquiring unit includes a video image intercepting subunit and a video image identifying subunit. The video image intercepting subunit is used for intercepting a plurality of frames of video images from the original video according to the video time length of the original video. The video image identification subunit is used for identifying the content of the video image to obtain a content tag of the video image, and taking the content tag of the video image as a video tag corresponding to the video content of the original video.
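A minimal sketch of such duration-dependent frame interception follows; the 2000 ms sampling interval is an assumption, since this disclosure does not fix one:

def sample_frame_times(video_duration_ms, interval_ms=2000):
    # longer videos yield more intercepted frames; each intercepted frame
    # is passed to content recognition and its content tag collected
    return list(range(0, video_duration_ms, interval_ms))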
In another exemplary embodiment, the special effects component acquiring unit includes a weight determining subunit and a weight matching subunit. The weight determining subunit is configured to determine, according to a preset label weight determination rule, a weight of a special effect label of each special effect component in the special effect component set with respect to the video label. The weight matching subunit is used for selecting a plurality of special effect components with the special effect labels matched with the video labels from the special effect component set according to the weight of the special effect label of each special effect component relative to the video label.
In another exemplary embodiment, the weight matching subunit is configured to take the special effect components in the special effect component set whose weight is greater than a preset weight threshold as the special effect components matched with the video tag; or to determine, according to the special effect labels of the special effect components, which special effect components in the set belong to the same special effect type, and then to select special effect components under each special effect type in descending order of the weight of their special effect labels relative to the video tag, the number of components selected under each type being equal to the preset number of special effect templates.
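The two alternatives may be sketched as follows (illustrative only; the weights are assumed to have been computed beforehand and keyed by component id):

def select_by_threshold(components, weights, threshold):
    # alternative 1: keep every component whose weight relative to the
    # video tag exceeds the preset weight threshold
    return [c for c in components if weights[c["id"]] > threshold]

def select_top_per_type(components, weights, n_templates):
    # alternative 2: within each special effect type, keep the n_templates
    # components with the largest weights
    by_type = {}
    for c in components:
        by_type.setdefault(c["type"], []).append(c)
    selected = []
    for group in by_type.values():
        group.sort(key=lambda c: weights[c["id"]], reverse=True)
        selected.extend(group[:n_templates])
    return selected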
In another exemplary embodiment, the special effect component matching unit is configured to randomly combine the special effect components matched with the video tag across different special effect types to obtain at least one special effect template; or, ranking the components of each special effect type by matching degree relative to the video tag, to combine the components with the same rank under different special effect types into one special effect template, thereby obtaining at least one special effect template.
In another exemplary embodiment, the video special effect generation module 550 includes a special effect parameter parsing unit and a special effect content generation unit. The special effect parameter parsing unit is used for parsing the special effect parameters of each special effect component contained in the selected special effect template; the special effect parameters include a time parameter, which describes the relative position and the display duration of the special effect component in the original video. The special effect content generation unit is used for generating, according to the time parameter of each special effect component, the video special effect content corresponding to that component at the relative position in the original video, and for displaying that content continuously for the display duration.
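A sketch of how such a time parameter might drive rendering (the field names position_ms and duration_ms are illustrative assumptions):

def schedule_effects(template, video_duration_ms):
    # each component's time parameter gives its relative position in the
    # original video and its display duration, both in milliseconds
    events = []
    for component in template["components"]:
        start = component["position_ms"]
        end = min(start + component["duration_ms"], video_duration_ms)
        events.append((start, end, component))  # render over [start, end)
    return events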
In another exemplary embodiment, the video processing apparatus further includes a background music selection module, where the background music selection module is configured to select, according to a recognition result of video content recognition on the original video, background music that matches video content of the original video from a preset background music set, and fuse the background music into the special-effect video.
In another exemplary embodiment, the original video includes at least two video clips, and the special effect template obtaining module 530 includes a video clip identifying unit and a clip template obtaining unit. The video clip identification unit is used for respectively identifying the video content of each video clip. The fragment template obtaining unit is used for respectively obtaining special effect templates matched with the video content of each video fragment according to the video content of each video fragment obtained through identification.
In another exemplary embodiment, the video processing apparatus further includes a special effects component calling module and a special effects component updating module. The special effect component calling module is used for calling various special effect components contained in the special effect video according to the special effect editing instruction, and the special effect video is formed by combining the original video and the video special effects described by the target special effect components. The special effect component updating module is used for updating each special effect component contained in the special effect video according to the component updating instruction.
Fig. 14 is a block diagram illustrating a video processing apparatus according to another exemplary embodiment. As shown in fig. 14, the video processing apparatus includes a second original video acquiring module 610, a second effect template acquiring module 630, and an effect template transmitting module 650.
The second original video obtaining module 610 is configured to obtain an original video of a video special effect to be generated. The second special effect template obtaining module 630 is configured to perform video content identification on the original video, and obtain at least one special effect template matched with the video content of the original video, where the special effect template includes a plurality of special effect components, and the special effect components are used to describe video special effects. The special effect template sending module 650 is configured to send the special effect template to the designated device, so that the designated device generates the video special effect described by the target special effect component in the original video according to the target special effect component included in the selected special effect template.
Fig. 15 is a block diagram illustrating a video processing apparatus according to another exemplary embodiment. As shown in fig. 15, the video processing apparatus includes an original video display module 710, an effect template display module 730, a template selection detection module 750, and a video effect presentation module 770.
The original video display module 710 is used for displaying an original video of a video special effect to be generated. The special effect template display module 730 is configured to display at least one special effect template matched with the video content of the original video, where the special effect template is generated according to a special effect component matched with the video content of the original video, and the special effect component is used to describe a special effect of the video. The template selection detection module 750 is configured to detect a target special effect template selected from at least one special effect template, where the target special effect template includes a target special effect component. The video effects presentation module 770 is configured to present the video effects described by the target effects component in the displayed original video.
An image processing apparatus comprising: an original image acquisition module, used for acquiring an original image of a special effect to be generated; a template acquisition module, used for performing image content identification on the original image and acquiring at least one special effect template matched with the image content of the original image, wherein the special effect template comprises a plurality of special effect components, and the special effect components are used for describing image special effects; and an image special effect generation module, used for generating the image special effect described by the target special effect component in the original image according to the target special effect component contained in the selected special effect template.
It should be noted that the apparatus provided in the foregoing embodiment and the method provided in the foregoing embodiment belong to the same concept, and the specific manner in which each module and unit execute operations has been described in detail in the method embodiment, and is not described again here.
Embodiments of the present application also provide an electronic device comprising a processor and a memory, where the memory stores computer readable instructions which, when executed by the processor, implement the video processing method or the image processing method described above.
Fig. 16 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
It should be noted that the electronic device is only an example adapted to the present application and should not be considered as limiting the scope of use of the application in any way. Nor should the electronic device be construed as depending on or requiring one or more components of the exemplary electronic device illustrated in fig. 16.
As shown in fig. 16, in an exemplary embodiment, the electronic device includes a processing component 801, a memory 802, a power component 803, a multimedia component 804, an audio component 805, a sensor component 807, and a communication component 808. Not all of these components are necessary, and the electronic device may add further components or omit some of them according to its own functional requirements, which is not limited in this embodiment.
The processing component 801 generally controls overall operation of the electronic device, such as operations associated with display, data communication, and log data processing. The processing component 801 may include one or more processors 809 to execute instructions to perform all or a portion of the above-described operations. Further, the processing component 801 may include one or more modules that facilitate interaction between the processing component 801 and other components. For example, the processing component 801 may include a multimedia module to facilitate interaction between the multimedia component 804 and the processing component 801.
The memory 802 is configured to store various types of data to support operation at the electronic device, examples of which include instructions for any application or method operating on the electronic device. The memory 802 stores one or more modules configured to be executed by the one or more processors 809 to perform all or part of the steps of the video processing method or the image processing method described in the above embodiments.
The power supply component 803 provides power to the various components of the electronic device. The power components 803 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for an electronic device.
The multimedia component 804 includes a screen that provides an output interface between the electronic device and the user. In some embodiments, the screen may include a TP (Touch Panel) and an LCD (Liquid Crystal Display). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 805 is configured to output and/or input audio signals. For example, the audio component 805 includes a microphone configured to receive external audio signals when the electronic device is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. In some embodiments, the audio component 805 also includes a speaker for outputting audio signals.
The sensor assembly 807 includes one or more sensors for providing various aspects of status assessment for the electronic device. For example, the sensor assembly 807 may detect an open/closed state of the electronic device, and may also detect a temperature change of the electronic device.
The communication component 808 is configured to facilitate wired or wireless communication between the electronic device and other devices. The electronic device may access a Wireless network based on a communication standard, such as Wi-Fi (Wireless-Fidelity, Wireless network).
It will be appreciated that the configuration shown in fig. 16 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 16 or have different components than shown in fig. 16. Each of the components shown in fig. 16 may be implemented in hardware, software, or a combination thereof.
Another aspect of the present application also provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements a video processing method or an image processing method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist separately without being incorporated in the electronic device.
The above description presents only preferred exemplary embodiments of the present application and is not intended to limit the embodiments of the present application. Those skilled in the art can easily make various changes and modifications according to the main concept and spirit of the present application, so the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A video processing method, comprising:
acquiring an original video of a video special effect to be generated;
identifying the video content of the original video, and obtaining at least one special effect template matched with the video content of the original video, wherein the special effect template comprises a plurality of special effect components, and the special effect components are used for describing the special effect of the video;
and generating a video special effect described by the target special effect component in the original video according to the target special effect component contained in the selected special effect template.
2. The method of claim 1, wherein performing video content recognition on the original video and obtaining at least one special effect template matching the video content of the original video comprises:
identifying the video content of the original video to obtain a video label corresponding to the video content of the original video;
according to the video label, obtaining a plurality of special effect components matched with the video label from a preset special effect component set;
and generating at least one special effect template according to the special effect component matched with the video label.
3. The method of claim 2, wherein performing video content recognition on the original video to obtain a video tag corresponding to the video content of the original video comprises:
intercepting a plurality of frames of video images from the original video according to the video duration of the original video;
and identifying the content of the video image to obtain a content label of the video image, and taking the content label of the video image as a video label corresponding to the video content of the original video.
4. The method of claim 2, wherein obtaining, according to the video tag, a plurality of special effect components matching the video tag from a preset special effect component set comprises:
determining the weight of the special effect label of each special effect component in the special effect component set relative to the video label according to a preset label weight judgment rule;
and selecting a plurality of special effect components of which the special effect labels are matched with the video labels from the special effect component set according to the weights of the special effect labels of the special effect components relative to the video labels.
5. The method of claim 4, wherein selecting a plurality of special effect components from the set of special effect components whose special effect labels match the video labels based on the weights of the special effect labels of the respective special effect components relative to the video labels comprises:
taking the special effect component with the weight larger than a preset weight threshold value in the special effect component set as the special effect component matched with the video label; or
determining, according to the special effect labels of the special effect components, the special effect components belonging to the same special effect type in the special effect component set, and selecting special effect components from those under each special effect type in descending order of the weight of their special effect labels relative to the video label, wherein the number of special effect components selected under each type is the same as the preset number of special effect templates.
6. The method of claim 2, wherein generating at least one special effect template according to the special effect components matched with the video label comprises:
randomly combining the special effect components matched with the video label according to different special effect types to obtain at least one special effect template; or
ranking the special effect components of different special effect types by their matching degree relative to the video label, and combining the special effect components with the same rank under different special effect types into one special effect template, thereby obtaining at least one special effect template.
7. The method according to claim 1, wherein generating the video special effect described by the target special effect component in the original video according to the target special effect component contained in the selected special effect template, to obtain a special effect video corresponding to the original video, comprises:
analyzing special effect parameters of each special effect component contained in the selected special effect template, wherein the special effect parameters comprise time parameters, and the time parameters are used for describing the relative position and the display duration of the special effect component displayed in the original video;
and generating, according to the time parameter of each special effect component, the video special effect content corresponding to that special effect component at the relative position in the original video, and displaying the video special effect content continuously for the display duration.
8. The method of claim 1, wherein the original video comprises at least two video segments; performing video content identification on the original video to obtain at least one special effect template matched with the video content of the original video, wherein the method comprises the following steps:
respectively carrying out video content identification on each video clip;
and respectively acquiring a special effect template matched with the video content of each video clip according to the video content of each video clip obtained by identification.
10. The method of claim 1, wherein after generating the video special effect described by the target special effect component in the original video according to the target special effect component contained in the selected special effect template, the method further comprises:
calling various special effect components contained in a special effect video according to a special effect editing instruction, wherein the special effect video is formed by combining the original video and the video special effects described by the target special effect components;
and updating each special effect component contained in the special effect video according to the component updating instruction.
10. A video processing method, comprising:
acquiring an original video of a video special effect to be generated;
performing video content identification on the original video to obtain at least one special effect template matched with the video content of the original video, wherein the special effect template comprises a plurality of special effect components, and the special effect components are used for describing video special effects;
and sending the special effect template to a designated device, so that the designated device generates a video special effect described by the special effect component in the original video according to a target special effect component contained in the selected special effect template.
11. A video processing method, comprising:
displaying an original video of a video special effect to be generated;
displaying at least one special effect template matched with the video content of the original video, wherein the special effect template is generated according to a special effect component matched with the video content of the original video, and the special effect component is used for describing video special effects;
detecting a target special effect template selected from the at least one special effect template, wherein the target special effect template comprises a target special effect component;
presenting the video special effect described by the target special effect component in the displayed original video.
12. An image processing method, comprising:
acquiring an original image of a special effect to be generated;
carrying out image content identification on the original image, and acquiring at least one special effect template matched with the image content of the original image, wherein the special effect template comprises a plurality of special effect components which are used for describing the special effect of the image;
and generating an image special effect described by the target special effect component in the original image according to the target special effect component contained in the selected special effect template.
13. A video processing apparatus, comprising:
the original video acquisition module is used for acquiring an original video of a video special effect to be generated;
the special effect template acquisition module is used for performing video content identification on the original video and acquiring at least one special effect template matched with the video content of the original video, wherein the special effect template comprises a plurality of special effect components, and the special effect components are used for describing video special effects;
and the video special effect generation module is used for generating a video special effect described by the target special effect component in the original video according to the target special effect component contained in the selected special effect template.
14. An electronic device, comprising:
a memory storing computer readable instructions;
a processor to read computer readable instructions stored by the memory to perform the method of any of claims 1-12.
15. A computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1-12.
CN202010255390.2A 2020-04-02 2020-04-02 Video and image processing method and device, electronic equipment and storage medium Pending CN111541936A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010255390.2A CN111541936A (en) 2020-04-02 2020-04-02 Video and image processing method and device, electronic equipment and storage medium
PCT/CN2021/075500 WO2021196890A1 (en) 2020-04-02 2021-02-05 Method and device for multimedia processing, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010255390.2A CN111541936A (en) 2020-04-02 2020-04-02 Video and image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111541936A true CN111541936A (en) 2020-08-14

Family

ID=71976949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010255390.2A Pending CN111541936A (en) 2020-04-02 2020-04-02 Video and image processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111541936A (en)
WO (1) WO2021196890A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114896001A (en) * 2022-04-08 2022-08-12 北京达佳互联信息技术有限公司 Component display method, device, electronic equipment, medium and program product
CN115297359A (en) * 2022-07-29 2022-11-04 北京字跳网络技术有限公司 Multimedia data transmission method and device, electronic equipment and storage medium
CN116456131B (en) * 2023-03-13 2023-12-19 北京达佳互联信息技术有限公司 Special effect rendering method and device, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002027319A (en) * 2000-07-12 2002-01-25 Sony Corp Control system of special picture effect device
CN105049959B (en) * 2015-07-08 2019-09-06 广州酷狗计算机科技有限公司 Method for broadcasting multimedia file and device
CN108495058A (en) * 2018-01-30 2018-09-04 光锐恒宇(北京)科技有限公司 Image processing method, device and computer readable storage medium
CN108696699A (en) * 2018-04-10 2018-10-23 光锐恒宇(北京)科技有限公司 A kind of method and apparatus of video processing
CN110708596A (en) * 2019-09-29 2020-01-17 北京达佳互联信息技术有限公司 Method and device for generating video, electronic equipment and readable storage medium
CN110865754B (en) * 2019-11-11 2020-09-22 北京达佳互联信息技术有限公司 Information display method and device and terminal
CN111541936A (en) * 2020-04-02 2020-08-14 腾讯科技(深圳)有限公司 Video and image processing method and device, electronic equipment and storage medium

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103795897A (en) * 2014-01-21 2014-05-14 深圳市中兴移动通信有限公司 Method and device for automatically generating background music
CN106488017A (en) * 2016-10-09 2017-03-08 上海斐讯数据通信技术有限公司 A kind of mobile terminal and its method that the image shooting is dubbed in background music
CN107493440A (en) * 2017-09-14 2017-12-19 光锐恒宇(北京)科技有限公司 A kind of method and apparatus of display image in the application
CN107888843A (en) * 2017-10-13 2018-04-06 深圳市迅雷网络技术有限公司 Sound mixing method, device, storage medium and the terminal device of user's original content
US20200082850A1 (en) * 2017-12-15 2020-03-12 Tencent Technology (Shenzhen) Company Limited Method and apparatus for presenting media information, storage medium, and electronic apparatus
CN108174099A (en) * 2017-12-29 2018-06-15 光锐恒宇(北京)科技有限公司 Method for displaying image, device and computer readable storage medium
CN110163050A (en) * 2018-07-23 2019-08-23 腾讯科技(深圳)有限公司 A kind of method for processing video frequency and device, terminal device, server and storage medium
CN109040615A (en) * 2018-08-10 2018-12-18 北京微播视界科技有限公司 Special video effect adding method, device, terminal device and computer storage medium
CN109462776A (en) * 2018-11-29 2019-03-12 北京字节跳动网络技术有限公司 A kind of special video effect adding method, device, terminal device and storage medium
CN109819179A (en) * 2019-03-21 2019-05-28 腾讯科技(深圳)有限公司 A kind of video clipping method and device
CN110298283A (en) * 2019-06-21 2019-10-01 北京百度网讯科技有限公司 Matching process, device, equipment and the storage medium of picture material
CN110177219A (en) * 2019-07-01 2019-08-27 百度在线网络技术(北京)有限公司 The template recommended method and device of video
CN110381371A (en) * 2019-07-30 2019-10-25 维沃移动通信有限公司 A kind of video clipping method and electronic equipment
CN110532426A (en) * 2019-08-27 2019-12-03 新华智云科技有限公司 It is a kind of to extract the method and system that Multi-media Material generates video based on template
CN110740262A (en) * 2019-10-31 2020-01-31 维沃移动通信有限公司 Background music adding method and device and electronic equipment
CN110769313A (en) * 2019-11-19 2020-02-07 广州酷狗计算机科技有限公司 Video processing method and device and storage medium

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021196890A1 (en) * 2020-04-02 2021-10-07 腾讯科技(深圳)有限公司 Method and device for multimedia processing, electronic device, and storage medium
WO2022063124A1 (en) * 2020-09-25 2022-03-31 连尚(北京)网络科技有限公司 Video fusion method and device
CN112511750A (en) * 2020-11-30 2021-03-16 维沃移动通信有限公司 Video shooting method, device, equipment and medium
CN112689200A (en) * 2020-12-15 2021-04-20 万兴科技集团股份有限公司 Video editing method, electronic device and storage medium
CN112800263A (en) * 2021-02-03 2021-05-14 上海艾麒信息科技股份有限公司 Video synthesis system, method and medium based on artificial intelligence
CN115269889A (en) * 2021-04-30 2022-11-01 北京字跳网络技术有限公司 Clipping template searching method and device
WO2022228557A1 (en) * 2021-04-30 2022-11-03 北京字跳网络技术有限公司 Method and apparatus for searching for clipping template
CN115484423A (en) * 2021-06-16 2022-12-16 荣耀终端有限公司 Transition special effect adding method and electronic equipment
CN115484397B (en) * 2021-06-16 2023-11-10 荣耀终端有限公司 Multimedia resource sharing method and electronic equipment
CN115484400B (en) * 2021-06-16 2024-04-05 荣耀终端有限公司 Video data processing method and electronic equipment
CN115484399B (en) * 2021-06-16 2023-12-12 荣耀终端有限公司 Video processing method and electronic equipment
WO2022262536A1 (en) * 2021-06-16 2022-12-22 荣耀终端有限公司 Video processing method and electronic device
CN115484399A (en) * 2021-06-16 2022-12-16 荣耀终端有限公司 Video processing method and electronic equipment
CN115484397A (en) * 2021-06-16 2022-12-16 荣耀终端有限公司 Multimedia resource sharing method and electronic equipment
CN115484400A (en) * 2021-06-16 2022-12-16 荣耀终端有限公司 Video data processing method and electronic equipment
CN113542855A (en) * 2021-07-21 2021-10-22 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and readable storage medium
CN114339360B (en) * 2021-09-09 2023-05-02 腾讯科技(深圳)有限公司 Video processing method, related device and equipment
CN114339360A (en) * 2021-09-09 2022-04-12 腾讯科技(深圳)有限公司 Video processing method, related device and equipment
CN114173067A (en) * 2021-12-21 2022-03-11 科大讯飞股份有限公司 Video generation method, device, equipment and storage medium
WO2023160515A1 (en) * 2022-02-25 2023-08-31 北京字跳网络技术有限公司 Video processing method and apparatus, device and medium
CN116700846A (en) * 2022-02-28 2023-09-05 荣耀终端有限公司 Picture display method and related electronic equipment
CN116700846B (en) * 2022-02-28 2024-04-02 荣耀终端有限公司 Picture display method and related electronic equipment
CN115150661A (en) * 2022-06-23 2022-10-04 深圳市大头兄弟科技有限公司 Method and related device for packaging video key fragments
CN115150661B (en) * 2022-06-23 2024-04-09 深圳市闪剪智能科技有限公司 Method and related device for packaging video key fragments
CN115442519A (en) * 2022-08-08 2022-12-06 珠海普罗米修斯视觉技术有限公司 Video processing method, device and computer readable storage medium
CN115442519B (en) * 2022-08-08 2023-12-15 珠海普罗米修斯视觉技术有限公司 Video processing method, apparatus and computer readable storage medium

Also Published As

Publication number Publication date
WO2021196890A1 (en) 2021-10-07

Similar Documents

Publication Publication Date Title
CN111541936A (en) Video and image processing method and device, electronic equipment and storage medium
KR101454950B1 (en) Deep tag cloud associated with streaming media
CN111970577B (en) Subtitle editing method and device and electronic equipment
US20180293088A1 (en) Interactive comment interaction method and apparatus
CN106844705B (en) Method and apparatus for displaying multimedia content
CN111930994A (en) Video editing processing method and device, electronic equipment and storage medium
EP2939132A1 (en) Creating and sharing inline media commentary within a network
CN104822077B (en) The operation method and client of client
US11348587B2 (en) Review system for online communication, method, and computer program
US10674183B2 (en) System and method for perspective switching during video access
CN108833991A (en) Video caption display methods and device
CN112188267B (en) Video playing method, device and equipment and computer storage medium
CN113852767B (en) Video editing method, device, equipment and medium
US20170161871A1 (en) Method and electronic device for previewing picture on intelligent terminal
CN114785977A (en) Controlling video data content using computer vision
US9407864B2 (en) Data processing method and electronic device
CN114449327B (en) Video clip sharing method and device, electronic equipment and readable storage medium
CN115238125A (en) Data processing method and device, computer equipment and readable storage medium
KR20220132393A (en) Method, Apparatus and System of managing contents in Multi-channel Network
US10126821B2 (en) Information processing method and information processing device
JP2021077131A (en) Composition advice system, composition advice method, user terminal, and program
CN114546229B (en) Information processing method, screen capturing method and electronic equipment
CN111638845B (en) Animation element obtaining method and device and electronic equipment
CN112118410B (en) Service processing method, device, terminal and storage medium
CN114816599B (en) Image display method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40029148

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination