CN111491205A - Video processing method and device and electronic equipment - Google Patents

Video processing method and device and electronic equipment

Info

Publication number
CN111491205A
Authority
CN
China
Prior art keywords
video
suite
target
feature information
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010306978.6A
Other languages
Chinese (zh)
Other versions
CN111491205B (en)
Inventor
付玉迪
李巧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010306978.6A
Publication of CN111491205A
Application granted
Publication of CN111491205B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/439 Processing of audio elementary streams
    • H04N21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/458 Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules; time-related management operations
    • H04N21/4586 Content update operation triggered locally, e.g. by comparing the version of software modules in a DVB carousel to the version stored locally
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a video processing method, a video processing apparatus and an electronic device, and relates to the field of communications technologies. The method includes: displaying N suite identifiers in a target area associated with a first video, where each suite identifier indicates one video suite, and a video suite contains at least one item of video feature information; receiving a first input from a user on a target suite identifier among the N suite identifiers; and, in response to the first input, updating the video feature information of the first video according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier, where N is a positive integer. With the scheme of the invention, the at least one item of video feature information contained in the video suite indicated by a pre-generated target suite identifier can be applied with a single key press, no application program needs to be downloaded, the operation is more convenient, and efficiency is improved.

Description

Video processing method and device and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a video processing method and apparatus, and an electronic device.
Background
Video is a good way to record and share life, and is one of the functions users rely on most at present. However, for many users the threshold for video production is high and the operations are complex. A user may see a well-made video shared by a friend and want to make a similar one, but not know which application programs, functions or resources were used (such as the filter effects or background music the video adopts), so the learning and operating costs are high. Moreover, if a user wants to create a related video, the user may need to download multiple application programs and produce the video across each of them, which makes the operation cumbersome.
Disclosure of Invention
The embodiments of the present invention provide a video processing method, a video processing apparatus, and an electronic device, which can solve the prior-art problem that producing a video requires downloading multiple application programs, making the operation cumbersome.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a video processing method applied to an electronic device, including:
displaying N suite identifiers in a target area associated with a first video, wherein each suite identifier indicates one video suite, and the video suite contains at least one item of video feature information;
receiving a first input from a user on a target suite identifier among the N suite identifiers;
in response to the first input, updating the video feature information of the first video according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier;
wherein N is a positive integer.
In a second aspect, an embodiment of the present invention further provides a video processing apparatus, including:
a first display module, configured to display N suite identifiers in a target area associated with a first video, wherein each suite identifier indicates one video suite, and the video suite contains at least one item of video feature information;
a first receiving module, configured to receive a first input from a user on a target suite identifier among the N suite identifiers;
a first response module, configured to update, in response to the first input, the video feature information of the first video according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier;
wherein N is a positive integer.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the video processing method described above.
In this way, in the embodiments of the present invention, N suite identifiers are displayed in a target area associated with a first video, and through the user's first input on a target suite identifier among the N suite identifiers, the video feature information of the first video is updated according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier. The at least one item of video feature information contained in the video suite indicated by the pre-generated target suite identifier can thus be applied with a single key press, without downloading any application program, so the operation is more convenient and efficiency is improved.
Drawings
FIG. 1 is a flow diagram of a video processing method according to an embodiment of the invention;
FIG. 2 is a first display diagram of video processing according to an embodiment of the invention;
FIG. 3 is a second display diagram of video processing according to an embodiment of the invention;
FIG. 4 is a third display diagram of video processing according to an embodiment of the invention;
FIG. 5 is a fourth display diagram of video processing according to an embodiment of the invention;
FIG. 6 is a fifth display diagram of video processing according to an embodiment of the invention;
FIG. 7 is a sixth display diagram of video processing according to an embodiment of the invention;
FIG. 8 is a seventh display diagram of video processing according to an embodiment of the invention;
FIG. 9 is an eighth display diagram of video processing according to an embodiment of the invention;
FIG. 10 is a ninth display diagram of video processing according to an embodiment of the invention;
FIG. 11 is a block diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, many video production applications provide, in addition to rich video editing functions, resources of different effects and styles such as filters, stickers, dynamic effects and transitions, so that a user can produce impressive video files to save and share. However, different applications cannot share video editing resources, and a user must download the corresponding application to use them. To obtain a good video effect, users therefore usually need to download several video production applications, which makes the operation cumbersome.
Therefore, embodiments of the present invention provide a video processing method and apparatus and an electronic device, with which the at least one item of video feature information contained in the video suite indicated by a pre-generated target suite identifier can be applied with a single key press; no application program needs to be downloaded, the operation is more convenient, and efficiency is improved.
Specifically, as shown in fig. 1, an embodiment of the present invention provides a video processing method applied to an electronic device, including:
step 101, displaying N suite identifications in a target area associated with a first video, wherein one suite identification is used for indicating one video suite, the video suite contains at least one item of video feature information, and N is a positive integer.
Optionally, the video feature information includes, but is not limited to, at least one of the following: background music, video subtitles, video dynamic effects, the initial display time of a dynamic sticker, the display duration of a dynamic sticker, and the motion trajectory of a dynamic sticker.
In step 101, the suite identifiers are selection keys displayed in the target area for the user to select a desired video suite, and the user can customize the name of each suite identifier as desired. A video suite is a set containing one or more items of video feature information, and may take the form of, for example, a folder or a compressed package. When the video suite takes the form of a folder, the at least one item of video feature information is stored directly in the folder; when it takes the form of a compressed package, the at least one item of video feature information is compressed and stored in the package.
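As a rough, non-authoritative illustration of such a set (not part of the disclosed implementation; every class and field name here is a hypothetical label), a video suite could be modeled as follows:

    # Hypothetical sketch of a video suite as a set of feature items.
    # "VideoSuite", "FeatureItem" and all field names are illustrative
    # assumptions, not names from the patent.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FeatureItem:
        kind: str             # e.g. "background_music", "subtitle", "transition", "sticker"
        payload_path: str     # file stored inside the folder / compressed package
        metadata: dict = field(default_factory=dict)

    @dataclass
    class VideoSuite:
        name: str                                         # user-defined suite label
        items: List[FeatureItem] = field(default_factory=list)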
It should be noted that the N suite identifiers may be displayed in either of the following ways: they may be displayed directly in the target area when the user browses the first video, or they may be displayed in the target area in response to the user's operation (such as a click input, a double-click input or a slide input) at a first position of the first video, which is not specifically limited herein. The target area can be customized according to the user's needs, for example the lower right corner or the upper left corner of the first video.
For example, as shown in fig. 2, the N suite identifiers are displayed in the target area in response to the user's pressing operation at the first position of the first video 21. If the user wants to edit the first video 21, the user presses the edit key 22 displayed in the lower right corner (or another position) of the first video 21, and the N suite identifiers are then displayed in the target area associated with the first video.
For example, as shown in fig. 3, when N is 2, the target area associated with the first video 31 may be set below the first video 31, and the two displayed suite identifiers are a first video suite 32 and a second video suite 33, each indicating one video suite. The video suite indicated by the first video suite 32 may contain background music and a dynamic sticker, and the video suite indicated by the second video suite 33 may contain a video dynamic effect and a dynamic sticker.
Step 102, receiving a first input of a target suite identification in the N suite identifications from a user.
In step 102, the first input may be a click input, a double-click input, a slide input or the like performed by the user on the target suite identifier among the N suite identifiers; the first input may also be a first operation, which is not specifically limited herein.
Step 103, in response to the first input, updating the video feature information of the first video according to at least one item of video feature information contained in the video suite indicated by the target suite identifier.
In step 103, the user may apply the video suite indicated by the target suite identifier with a single key press, directly applying all of the video feature information contained in that suite to the first video so as to update the video feature information of the first video. Alternatively, the user may select one or more items of video feature information from the video suite indicated by the target suite identifier as needed, so as to update the video feature information of the first video.
For example, as shown in fig. 3, if the user wants to apply the video suite indicated by the target suite identifier (e.g., the first video suite 32) to the first video to update its video feature information, the user can click the first video suite 32 (i.e., the first input), and the video feature information contained in the first video suite 32 is directly applied to the first video. For instance, if the first video suite 32 contains background music, the background music of the first video may be directly replaced with the background music contained in the first video suite 32.
In the embodiment of the present invention, N suite identifiers are displayed in the target area associated with the first video, and through the user's first input on a target suite identifier among the N suite identifiers, the video feature information of the first video is updated according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier. The at least one item of video feature information contained in the video suite indicated by the pre-generated target suite identifier can thus be applied with a single key press, without downloading any application program, so the operation is more convenient and efficiency is improved.
Optionally, before step 101, the method may further include:
receiving a second input of the user to the second video;
in response to the second input, displaying M video feature identifiers, one video feature identifier indicating at least one item of video feature information contained in the second video;
receiving a third input of the user to T video feature identifications in the M video feature identifications;
in response to the third input, storing the video feature information indicated by the T video feature identifications as a first video suite;
wherein T, M are positive integers, and T is less than or equal to M.
In the above embodiment, if the user wants to identify and extract the video feature information in a second video, the video feature information contained in the second video is extracted in response to the second input and displayed through M video feature identifiers, where each video feature identifier indicates one or more items of video feature information. Through the user's third input on T of the M video feature identifiers, the video feature information indicated by those T identifiers is stored as a first video suite, which can later be presented through a suite identifier in the target area associated with the first video for the user to select. The user therefore does not need to download multiple application programs and pick the needed video feature information from each of them in order to update the video feature information of the first video; the operation is simple and fast, and video production efficiency is high. In addition, the user can share the stored first video suite with other users, achieving information sharing among multiple electronic devices. A video feature identifier is an identifier indicating video feature information.
It should be noted that identifying and extracting the video feature information contained in the second video may be performed by an Artificial Intelligence (AI) technique, and during identification the user may be prompted with information such as a request to wait. The second input may be a click input, a double-click input, a slide input or the like performed by the user on the second video; the second input may also be a second operation, which is not specifically limited herein. The third input may be a click input, a double-click input, a slide input or the like performed by the user on the T video feature identifiers among the M video feature identifiers; the third input may also be a third operation, which is not specifically limited herein.
For example, as shown in fig. 4, when the user wants to identify the video feature information in the second video 41, the user may press the second video 41 so that an "extract video suite" key 42 is displayed at the lower right of the second video 41 (or at another position), and then click the key 42 to enter the interface for identifying the resources used in the second video (i.e., the video feature information used in the second video), as shown in fig. 5. As shown in fig. 6, suppose the identified video feature information of the second video includes a first video feature identifier 61 (e.g., indicating background music), a second video feature identifier 62 (e.g., indicating video subtitles), and a third video feature identifier 63 (e.g., indicating a video dynamic effect). The display interface after identification is as shown in fig. 6: each video feature identifier displays the indicated video feature information, and the specific content of each item is displayed below it, such as the music title under the background music, the caption content under the video subtitles, and the number of video dynamic effects under the video dynamic effect. As shown in fig. 7, if the first video feature identifier 71 indicates background music, the second video feature identifier 72 indicates video subtitles, and the third video feature identifier 73 indicates a video dynamic effect, the user clicks the square box at the upper right of the first video feature identifier 71, a check mark is displayed in the box to show that the identifier is selected, and the user then clicks the save key 74 to save the background music indicated by the first video feature identifier 71 into the first video suite. The user can thus select exactly the video feature information he or she needs and, without downloading any application program, customize a video suite by identifying and saving the video feature information of the second video, then update the video feature information of the first video with the customized suite; the operation is simple and fast, and video production efficiency is high. Here the second input includes the user pressing the second video 41 and clicking the "extract video suite" key 42, and the third input includes the user clicking the box in the upper right corner of the first video feature identifier 71 and clicking the save key 74.
Specifically, in the case that the video feature information includes background music, the method for identifying the background music may be:
if a user likes the background music of the second video when watching the second video but does not know the source of the music, the user can identify and extract the background music of the second video (namely the benchmark video), the background music identifying the second video can adopt the existing multi-channel blind source separation technology to separate the background sound and the original sound in the benchmark video, only the background music part is stored after the original sound is removed, and the background music part is stored in the first video suite as one piece of video characteristic information, so that the time for the user to search and apply the background music of the second video is saved.
Specifically, in the case that the video feature information includes a video subtitle, the method for identifying a video subtitle may be:
when a user watches a second video (or a web course, a song and the like with subtitles), the second video can be identified through a character identification and segmentation technology, firstly, characters are identified, then, characters around the second video are extracted through the segmentation technology, the characters (namely, video subtitles) are stored in a first video suite as one piece of video characteristic information, and the time for the user to manually edit the characters is saved.
Specifically, in the case that the video feature information includes a video animation, the method for identifying the video animation may be:
the video clips can be divided by using a shot segmentation technology, video moving effects (or transition) between different adjacent video clips are identified, a transition mode is identified, then the transition mode is extracted, the transition is stored in the first video suite as one piece of video characteristic information, and time for a user to search for and apply the video moving effects of the second video is saved.
Specifically, in a case where the video feature information includes a dynamic sticker (at least one of a start display time of the dynamic sticker, a display duration of the dynamic sticker, and a motion trajectory of the dynamic sticker), the identification method of the dynamic sticker may be:
the dynamic sticker and the content in the second video background are divided by utilizing a dividing technology, the motion track of the dynamic sticker is determined by combining the optical flow information of the second video, the dynamic sticker is stored in the first video suite as one or more pieces of video characteristic information, and the time for a user to search and apply the dynamic sticker of the second video is saved.
Optionally, the step of storing the video feature information indicated by the T video feature identifiers as a first video suite may specifically include:
storing the video feature information indicated by the T video feature identifiers into a first video suite in the data form of a compressed package;
wherein the data type of the video feature information indicated by each video feature identifier is the data type matched to that video feature information.
In the above embodiment, when the video feature information indicated by the T video feature identifiers is saved, it may be compressed and saved in the form of a compressed package, which saves storage space and, when the video suite is shared, transmission traffic. The data types of the items of video feature information may differ; the data type of each item is the data type matched to that item. For example, the data types may include xml, json, png, mp3 and the like: the data type matched to background music may be mp3, the data type matched to video subtitles may be xml or json, and the data type matched to a dynamic sticker may be png.
It should be noted that the data format in which the video feature information indicated by the T video feature identifiers is stored includes not only a compressed package but also a folder, which is not limited herein.
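A minimal sketch of the compressed-package option, writing each feature item in the data type the text matches to it (the file names inside the package are illustrative assumptions):

    # Pack selected feature items into one compressed package:
    # mp3 for music, json for subtitles, png for a sticker.
    import json
    import zipfile

    def save_suite(path: str, music_mp3: bytes, subtitles: list,
                   sticker_png: bytes) -> None:
        with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
            zf.writestr("background_music.mp3", music_mp3)
            zf.writestr("subtitles.json", json.dumps(subtitles, ensure_ascii=False))
            zf.writestr("sticker.png", sticker_png)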
Optionally, before the step of storing the video feature information indicated by the T video feature identifiers as the first video suite, the method may further include:
receiving a fourth input of a user to a target video feature identifier in the M video feature identifiers;
updating video feature information indicated by the target video feature identification in response to the fourth input.
In the above embodiment, the user can enter an editing interface through operations such as clicking or pressing, edit the target video feature identifier among the M video feature identifiers, and save the edited feature information, that is, update the video feature information indicated by the target video feature identifier. The user thus edits the video feature information indicated by the target video feature identifier as needed and generates the video suite the user requires, which can then be applied and shared with a single key press.
For example, the video feature information indicated by the target video feature identifier may include background music, or background music and video subtitles. If it includes background music and the user wants to keep only part of the music, the user can edit it. As shown in fig. 6, taking the first video feature identifier as the target video feature identifier, the user may enter the background-music editing interface by clicking the three dots at the lower left of the background music indicated by the first video feature identifier 61. As shown in fig. 8, the user may then cut out part of the first music 83 as needed and click the save key 82 to keep the edited background music; to cancel the edit, the user clicks the cancel key 81 to discard the current edit. If the video feature information indicated by the target video feature identifier includes a dynamic sticker, the user can edit the sticker, changing its motion trajectory, start time, display duration, display size and so on, or copying, pasting and rotating it. The video feature information is thus edited and selected according to the user's needs, and the selected video feature information is stored in the first video suite, which improves the user's video production efficiency.
It should be noted that the fourth input may be a click input, a double click input, a slide input, and the like of the user on the target video feature identifier in the M video feature identifiers, and the fourth input may also be a fourth operation, which is not specifically limited herein.
Optionally, the video suite indicated by the target suite identifier includes video dynamic effects, and step 103 may specifically include:
acquiring a first number of video dynamic effects contained in the video suite indicated by the target suite identifier and a second number of video segments in the first video;
and updating the video dynamic effects in the first video according to a first preset correspondence between the first number and the second number and the video dynamic effects contained in the video suite indicated by the target suite identifier.
In the above embodiment, the video dynamic effects in the first video are updated according to the first preset correspondence between the first number of video dynamic effects contained in the video suite indicated by the target suite identifier and the second number of video segments in the first video. The first preset correspondence can be set according to the user's needs, which not only reduces the time the user spends searching for and applying the video dynamic effects of the second video, but also lets the user apply them to the first video in a customized way, saving time and improving video production efficiency.
For example, if the number of video segments of the second video matches the number of video segments in the first video, the first preset correspondence may simply map each dynamic effect of the second video to the same position in the first video. If the numbers do not match, the first preset correspondence maps the number of video dynamic effects of the second video (i.e., the first number) to the number of video segments in the first video (i.e., the second number).
Suppose the second video has 4 video segments with a video dynamic effect between every 2 adjacent segments, i.e., 3 video dynamic effects in order: dynamic effect A, dynamic effect B, dynamic effect C. If the first video has only one video segment, no dynamic effect is displayed. If it has two video segments, the first dynamic effect of the second video (dynamic effect A) is displayed between them, and dynamic effects B and C are not applied. If the first video has 3 video segments, dynamic effect A is displayed between the first 2 segments and dynamic effect B between the last 2. If the first video has 5 video segments, dynamic effects A, B and C are displayed between adjacent segments among the first 4, and there is no dynamic effect between the 4th and 5th segments. This reduces the time the user spends searching for and applying the video dynamic effects of the second video, lets the user apply them to the first video in a customized way, saves time, and improves video production efficiency.
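The mapping in this example can be written as a small sketch (an illustration of the example above, not the disclosed implementation):

    # Assign the suite's transitions, in order, to the clip boundaries of
    # the target video; extra boundaries get no transition.
    def map_transitions(transitions: list, n_clips: int) -> list:
        boundaries = max(n_clips - 1, 0)
        return [transitions[i] if i < len(transitions) else None
                for i in range(boundaries)]

    # 3 suite transitions, 5 clips -> the 4th boundary gets none:
    # map_transitions(["A", "B", "C"], 5) == ["A", "B", "C", None]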
Optionally, the video suite indicated by the target suite identifier includes an initial display time of the dynamic sticker, and step 103 may specifically include:
acquiring the playing duration of the first video;
and adding the dynamic sticker to the first video according to a second preset correspondence between the initial display time of the dynamic sticker and the playing duration of the first video.
In the above embodiment, the second preset correspondence can be set according to the user's needs, which not only reduces the time the user spends searching for and adding the dynamic sticker of the second video, but also lets the user add it to the first video in a customized way, saving time and improving video production efficiency.
For example, if the initial display time of the dynamic sticker (measured from the start of the second video) exceeds the total playing duration of the first video, the dynamic sticker is displayed from the first video frame of the first video. For instance, if the dynamic sticker appears at the 5th second of the second video but the total playing duration of the first video is only 4 s, the dynamic sticker is displayed starting from the first video frame of the first video.
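This rule reduces to a one-line clamp (a sketch of the example above, not the disclosed correspondence):

    # If the suite's sticker start time lies beyond the end of the first
    # video, show the sticker from the first frame (time 0) instead.
    def sticker_start(suite_start_s: float, video_duration_s: float) -> float:
        return 0.0 if suite_start_s >= video_duration_s else suite_start_s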
Optionally, the video suite indicated by the target suite identifier includes a display duration of the dynamic sticker, and step 103 may specifically include:
acquiring the playing duration of the first video;
and adding the dynamic sticker to the first video according to a third preset correspondence between the display duration of the dynamic sticker and the playing duration of the first video.
In the above embodiment, the third preset correspondence can be set according to the user's needs, which not only reduces the time the user spends searching for and adding the dynamic sticker of the second video, but also lets the user add it to the first video in a customized way, saving time and improving video production efficiency.
For example, if the display duration of the dynamic sticker of the second video exceeds the time for which the first video can still display it (i.e., runs past the end of the first video), the dynamic sticker is displayed only up to the last video frame of the first video. For instance, if the display duration of the sticker is 3 s, the first video lasts 5 s, and the sticker is added at the 4th second, the sticker is displayed only from 4 s to the last video frame of the first video.
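This rule likewise reduces to truncating the display window at the video's end (a sketch of the example above, not the disclosed correspondence):

    # Clip the sticker's display window [start, start + duration] to the
    # first video's playing duration.
    def sticker_window(start_s: float, duration_s: float,
                       video_duration_s: float):
        end_s = min(start_s + duration_s, video_duration_s)
        return start_s, end_s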
Optionally, the video suite indicated by the target suite identifier includes a motion trajectory of the dynamic sticker, and step 103 may specifically include:
displaying the motion trajectory of the dynamic sticker, wherein the motion trajectory consists of the display positions of the dynamic sticker in P video frames;
receiving a fifth input from the user on Q of the P video frames;
in response to the fifth input, updating the display positions of the dynamic sticker in the Q video frames, and updating the motion trajectory of the dynamic sticker in the first video according to the updated display positions of the dynamic sticker in the P video frames;
wherein Q, P are positive integers, and Q is less than or equal to P.
In the above embodiment, the motion trajectory of the dynamic sticker is displayed and consists of the display positions of the dynamic sticker in P video frames. If the user performs a fifth input on Q of the P video frames, the display positions of the dynamic sticker in those Q video frames are updated, and the motion trajectory of the dynamic sticker in the first video is updated according to the updated display positions in the P video frames (i.e., the updated motion trajectory). This reduces the time the user spends searching for and adding the dynamic sticker of the second video, lets the user add the sticker to the first video and update its motion trajectory in a customized way, saves time, and improves video production efficiency.
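A minimal sketch of the trajectory update (one position per frame, with the user's Q edits overriding the originals; the data layout is an assumption for illustration):

    # positions: display position of the sticker for each of the P frames;
    # edits: frame index -> new position for the Q frames the user changed.
    def update_trajectory(positions: list, edits: dict) -> list:
        updated = list(positions)
        for frame_idx, pos in edits.items():
            updated[frame_idx] = pos
        return updated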
Optionally, the video suite indicated by the target suite identifier includes a dynamic sticker, and before step 103, the method further includes:
displaying a sticker track, the sticker track including S video frame thumbnails of a first video and a sticker slide bar;
receiving a sixth input of the user to the sticker slider bar;
in response to the sixth input, sequentially moving the sticker slide bar to a first position and a second position, and adding the dynamic sticker to target video frames, wherein the starting video frame of the target video frames is the video frame corresponding to the video frame thumbnail at the first position, and the ending video frame is the video frame corresponding to the video frame thumbnail at the second position;
wherein S is a positive integer.
In the above embodiment, the dynamic sticker, the S video frame thumbnails of the first video, and the sticker slide bar are displayed, and by a sixth input on the sticker slide bar the user can move its position, thereby updating the start display time and the end display time of the dynamic sticker in the first video.
The sixth input may be a click input, a double click input, a slide input, and the like of the sticker slide bar by the user, and the sixth input may also be a sixth operation, which is not specifically limited herein.
For example, as shown in fig. 9, if the user moves the sticker slide bar 93 to the first position, the first video frame thumbnail 96 corresponding to that position is determined, i.e., the starting video frame of the dynamic sticker 95 in the first video 91 (the video frame corresponding to the first video frame thumbnail 96) begins to be displayed. As shown in fig. 10, if the user moves the sticker slide bar 93 to the second position, the second video frame thumbnail 92 corresponding to that position is determined, i.e., display of the dynamic sticker 95 ends at the ending video frame in the first video 91 (the video frame corresponding to the second video frame thumbnail 92). As shown in figs. 9 and 10, the sticker slide bar 93 is a movable control: as the user moves it, the thumbnails it passes select the video frames in which the dynamic sticker is displayed. In the movement from the first video frame thumbnail 96 to the second video frame thumbnail 92, the thumbnails the slide bar passes (the first video frame thumbnail 96, the second video frame thumbnail 92, and the thumbnails between them) correspond to the video frames in which the dynamic sticker is displayed.
Alternatively, the starting position of the sticker slide bar 93 may serve as the first position. Then, as the user moves the slide bar 93 from the starting position to the second position, the video frames corresponding to all the thumbnails it passes display the dynamic sticker 95; that is, the video frames corresponding to the thumbnails between the first video frame thumbnail 96 and the second video frame thumbnail 92 all display the dynamic sticker 95.
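The thumbnail-to-frame mapping implied by the slider can be sketched as follows (the frames-per-thumbnail granularity is an assumption, not a disclosed parameter):

    # Map the thumbnails between the slider's first and second positions
    # (inclusive) to the range of frames in which the sticker is shown.
    def thumb_range_to_frames(first_thumb: int, second_thumb: int,
                              frames_per_thumb: int) -> range:
        start = first_thumb * frames_per_thumb
        end = (second_thumb + 1) * frames_per_thumb
        return range(start, end)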
Optionally, before step 103, the method further includes:
displaying a target control;
receiving a seventh input of the target control by the user;
in response to the seventh input, copying the dynamic sticker to at least one video frame in the first video.
In the above embodiment, a target control may be displayed, and the dynamic sticker may be copied to at least one video frame in the first video through the user's seventh input on the target control, so that the user can use the dynamic sticker multiple times in the same video.
It should be noted that the seventh input may be a click input, a double-click input, a slide input, and the like of the target control by the user, and the seventh input may also be a seventh operation, which is not specifically limited herein.
For example, as shown in fig. 9, when a video suite containing a dynamic sticker is applied, the sticker is by default added to the first video at the same screen position, video start position, display duration and so on as in the second video, and the user can manually edit the sticker's position in the first video (where it is placed, enlargement or reduction, moving direction, etc.) as well as its start and end display positions. The user may also copy the dynamic sticker by clicking the target control 94 and paste it multiple times to add it at different locations of the first video.
In summary, in the above embodiments of the present invention, AI techniques are used to identify the video feature information in the second video and store the extracted background music, video subtitles, video dynamic effects, dynamic stickers, filters and the like as a video suite, either directly or after re-editing. When a user needs to edit a video, the user directly applies the stored video suite with a single key press, reusing the video feature information of other well-made videos. This reduces the complexity of video editing operations; the operation is simple and fast, user time is saved, and video production efficiency is improved.
As shown in fig. 11, an embodiment of the present invention further provides a video processing apparatus 900, including:
a first display module 901, configured to display N suite identifiers in a target area associated with a first video, where one suite identifier is used to indicate one video suite, and the video suite includes at least one item of video feature information;
a first receiving module 902, configured to receive a first input from a user on a target suite identifier among the N suite identifiers;
a first response module 903, configured to respond to the first input, and update video feature information of the first video according to at least one item of video feature information included in a video suite indicated by the target suite identifier;
wherein N is a positive integer.
Optionally, the video feature information includes at least one of: background music, video subtitles, video dynamic effects, the initial display time of a dynamic sticker, the display duration of a dynamic sticker, and the motion trajectory of a dynamic sticker.
Optionally, the video processing apparatus 900 further includes:
the second receiving module is used for receiving a second input of the user to the second video;
a second response module, configured to display M video feature identifiers in response to the second input, where one video feature identifier indicates at least one item of video feature information included in the second video;
a third receiving module, configured to receive a third input of the T video feature identifiers from the M video feature identifiers from the user;
a third response module, configured to store, in response to the third input, the video feature information indicated by the T video feature identifiers as a first video suite;
wherein T, M are positive integers, and T is less than or equal to M.
Optionally, the video processing apparatus 900 further includes:
a fourth receiving module, configured to receive a fourth input of a target video feature identifier in the M video feature identifiers from a user;
a fourth response module, configured to update the video feature information indicated by the target video feature identifier in response to the fourth input.
Optionally, the third response module includes:
a first processing unit, configured to store the video feature information indicated by the T video feature identifiers into a first video suite in the data form of a compressed package;
wherein the data type of the video feature information indicated by each video feature identifier is the data type matched to that video feature information.
Optionally, the video suite indicated by the target suite identifier includes video dynamic effects, and the first response module 903 includes:
a first obtaining unit, configured to obtain a first number of video dynamic effects contained in the video suite indicated by the target suite identifier and a second number of video segments in the first video;
and a first updating unit, configured to update the video dynamic effects in the first video according to the first preset correspondence between the first number and the second number and the video dynamic effects contained in the video suite indicated by the target suite identifier.
Optionally, the video suite indicated by the target suite identifier includes a starting display time of the dynamic sticker, and the first response module 903 includes:
a second obtaining unit, configured to obtain the playing duration of the first video;
and a second updating unit, configured to add the dynamic sticker to the first video according to the second preset correspondence between the initial display time of the dynamic sticker and the playing duration of the first video.
Optionally, the video suite indicated by the target suite identifier includes a display duration of the dynamic sticker, and the first response module 903 includes:
a third obtaining unit, configured to obtain the playing duration of the first video;
and a third updating unit, configured to add the dynamic sticker to the first video according to the third preset correspondence between the display duration of the dynamic sticker and the playing duration of the first video.
Optionally, the video suite indicated by the target suite identifier includes a motion track of a dynamic sticker, and the first response module 903 includes:
a first display unit, configured to display the motion trajectory of the dynamic sticker, wherein the motion trajectory consists of the display positions of the dynamic sticker in P video frames;
a first receiving unit, configured to receive a fifth input from the user on Q of the P video frames;
a first response unit, configured to update, in response to the fifth input, the display positions of the dynamic sticker in the Q video frames, and to update the motion trajectory of the dynamic sticker in the first video according to the updated display positions of the dynamic sticker in the P video frames;
wherein Q, P are positive integers, and Q is less than or equal to P.
Optionally, the video suite indicated by the target suite identifier includes a dynamic sticker, and the video processing apparatus 900 further includes:
the display device comprises a first display module, a second display module and a display module, wherein the first display module is used for displaying a sticker track, and the sticker track comprises S video frame thumbnails of a first video and a sticker slide bar;
the fifth receiving module is used for receiving a sixth input of the user to the paster sliding bar;
a fifth response module, configured to, in response to the sixth input, sequentially move the sticker slide bar to a first position and a second position and add the dynamic sticker to target video frames, wherein the starting video frame of the target video frames is the video frame corresponding to the video frame thumbnail at the first position, and the ending video frame is the video frame corresponding to the video frame thumbnail at the second position;
wherein S is a positive integer.
Optionally, the video processing apparatus 900 further includes:
the second display module is used for displaying the target control;
a sixth receiving module, configured to receive a seventh input to the target control by the user;
a sixth response module to copy the dynamic sticker to at least one video frame in the first video in response to the seventh input.
The video processing apparatus 900 can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 10, and is not described herein again to avoid repetition.
In the embodiment of the present invention, N suite identifiers are displayed in the target area associated with the first video, and through the user's first input on a target suite identifier among the N suite identifiers, the video feature information of the first video is updated according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier. The at least one item of video feature information contained in the video suite indicated by the pre-generated target suite identifier can thus be applied with a single key press, without downloading any application program; the operation is more convenient and efficiency is improved.
Fig. 12 is a schematic diagram of a hardware structure of an electronic device for implementing various embodiments of the present invention, where the electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and a power supply 1011. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 12 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The display unit 1006 is configured to display N suite identifiers in a target area associated with a first video, wherein one suite identifier is used for indicating one video suite, and the video suite contains at least one item of video feature information;
the processor 1010 is configured to receive a first input of a user on a target suite identifier among the N suite identifiers;
the processor 1010 is further configured to, in response to the first input, update the video feature information of the first video according to at least one item of video feature information contained in the video suite indicated by the target suite identifier;
wherein N is a positive integer.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 10, and details are not described here to avoid repetition.
Therefore, the electronic device displays the N suite identifiers in the target area associated with the first video and, in response to a first input of the user on a target suite identifier among the N suite identifiers, updates the video feature information of the first video according to at least one item of video feature information contained in the video suite indicated by the target suite identifier. The at least one item of video feature information contained in the pre-generated video suite can thus be applied with one key, no additional application program needs to be downloaded, the operation is more convenient, and the efficiency is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1001 may be used for receiving and sending signals during message transmission or a call; specifically, it receives downlink data from a base station and forwards the data to the processor 1010 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 1001 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 1002, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 1003 may convert audio data received by the radio frequency unit 1001 or the network module 1002, or stored in the memory 1009, into an audio signal and output it as sound. The audio output unit 1003 may also provide audio output related to a specific function performed by the electronic device 1000 (e.g., a call signal reception sound or a message reception sound). The audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1004 is used to receive audio or video signals. The input unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042. The graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode, and the processed image frames may be displayed on the display unit 1006. The image frames processed by the graphics processor 10041 may be stored in the memory 1009 (or other storage medium) or transmitted via the radio frequency unit 1001 or the network module 1002. The microphone 10042 can receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 1001, and then output.
The electronic device 1000 also includes at least one sensor 1005, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 10061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 10061 and/or the backlight when the electronic device 1000 is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used to identify the posture of the electronic device (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tapping). The sensor 1005 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 1007 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or attachment). The touch panel 10071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1010, and receives and executes commands sent by the processor 1010. The touch panel 10071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 10071, the user input unit 1007 may include other input devices 10072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a track ball, a mouse, and a joystick, and are not described here again.
Further, the touch panel 10071 can be overlaid on the display panel 10061, and when the touch panel 10071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1010 to determine the type of the touch event, and then the processor 1010 provides a corresponding visual output on the display panel 10061 according to the type of the touch event. Although in fig. 12, the touch panel 10071 and the display panel 10061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 10071 and the display panel 10061 may be integrated to implement the input and output functions of the electronic device, and the implementation is not limited herein.
The interface unit 1008 is an interface for connecting an external device to the electronic apparatus 1000. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1008 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within the electronic device 1000 or may be used to transmit data between the electronic device 1000 and the external devices.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device (such as audio data and a phonebook), and the like. Further, the memory 1009 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1010 is the control center of the electronic device. It connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 1009 and calling data stored in the memory 1009, thereby monitoring the electronic device as a whole. The processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1010.
The electronic device 1000 may further include a power supply 1011 (e.g., a battery) for supplying power to the various components. Preferably, the power supply 1011 may be logically connected to the processor 1010 through a power management system, so as to manage charging, discharging, and power consumption through the power management system.
In addition, the electronic device 1000 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 1010, a memory 1009, and a computer program stored in the memory 1009 and capable of running on the processor 1010. When executed by the processor 1010, the computer program implements each process of the above video processing method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the video processing method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (13)

1. A video processing method applied to an electronic device is characterized by comprising the following steps:
displaying N suite identifiers in a target area associated with a first video, wherein one suite identifier is used for indicating one video suite, and the video suite contains at least one item of video feature information;
receiving a first input of a user on a target suite identifier among the N suite identifiers;
in response to the first input, updating the video feature information of the first video according to at least one item of video feature information contained in the video suite indicated by the target suite identifier;
wherein N is a positive integer.
2. The method of claim 1, wherein the video feature information comprises at least one of: background music, video subtitles, a video animation effect, a starting display time of a dynamic sticker, a display duration of a dynamic sticker, and a motion trail of a dynamic sticker.
3. The method of claim 1, wherein, before the displaying N suite identifiers in the target area associated with the first video, the method further comprises:
receiving a second input of the user on a second video;
in response to the second input, displaying M video feature identifiers, wherein one video feature identifier indicates at least one item of video feature information contained in the second video;
receiving a third input of the user on T video feature identifiers among the M video feature identifiers;
in response to the third input, storing the video feature information indicated by the T video feature identifiers as a first video suite;
wherein T, M are positive integers, and T is less than or equal to M.
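As a rough illustration of this suite-creation flow, the sketch below filters the M displayed feature identifiers down to the T selected ones and stores them as a suite; every name in it is hypothetical.

```kotlin
// Hedged sketch of claim 3: extract M feature identifiers from a second
// video, let the user select T of them (T <= M), and store the selected
// feature information as the first video suite. Names are illustrative.

data class VideoFeature(val id: Int, val label: String, val payload: ByteArray)

data class VideoSuite(val name: String, val features: List<VideoFeature>)

fun buildFirstSuite(
    extracted: List<VideoFeature>, // the M identifiers displayed after the second input
    selectedIds: Set<Int>          // the T identifiers chosen by the third input
): VideoSuite {
    require(selectedIds.isNotEmpty() && selectedIds.size <= extracted.size) { "need 1 <= T <= M" }
    return VideoSuite("first video suite", extracted.filter { it.id in selectedIds })
}

fun main() {
    val m = listOf(                                        // M = 3
        VideoFeature(1, "background music", ByteArray(0)),
        VideoFeature(2, "video subtitles", ByteArray(0)),
        VideoFeature(3, "dynamic sticker", ByteArray(0))
    )
    val suite = buildFirstSuite(m, selectedIds = setOf(1, 3)) // T = 2
    println(suite.features.map { it.label })
}
```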
4. The method of claim 3, wherein before storing the video feature information indicated by the T video feature identifiers as the first video suite, the method further comprises:
receiving a fourth input of the user on a target video feature identifier among the M video feature identifiers;
in response to the fourth input, updating the video feature information indicated by the target video feature identifier.
5. The method according to claim 3, wherein the storing the video feature information indicated by the T video feature identifiers as a first video suite comprises:
storing the video feature information indicated by the T video feature identifiers into the first video suite in the data form of a compressed package;
wherein the data type of the video feature information indicated by each video feature identifier is a data type matched to that item of video feature information.
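One way such compressed-package storage could look, sketched under stated assumptions: each feature item becomes one archive entry whose file name stands in for the matched data type. The entry names and the packSuite helper are invented; java.util.zip is a standard JVM/Android package.

```kotlin
// Hedged sketch of claim 5: serialize each selected feature item and pack
// the whole suite as a single compressed package (a zip archive here).
// The entry names below are invented examples of type-matched data forms.

import java.io.ByteArrayOutputStream
import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

fun packSuite(featureFiles: Map<String, ByteArray>): ByteArray {
    val buffer = ByteArrayOutputStream()
    ZipOutputStream(buffer).use { zip ->
        for ((name, bytes) in featureFiles) {
            zip.putNextEntry(ZipEntry(name)) // one entry per feature item
            zip.write(bytes)
            zip.closeEntry()
        }
    }
    return buffer.toByteArray()
}

fun main() {
    val suite = packSuite(
        mapOf(
            "background_music.mp3" to ByteArray(16), // placeholder bytes
            "subtitles.srt" to "1\n00:00:00,000 --> 00:00:02,000\nHi".toByteArray()
        )
    )
    println("suite packed: ${suite.size} bytes")
}
```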
6. The method according to claim 2, wherein the video suite indicated by the target suite identifier contains a video animation effect, and the updating the video feature information of the first video according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier comprises:
acquiring a first number of video animation effects contained in the video suite indicated by the target suite identifier and a second number of video clips in the first video;
and updating the video animation effects in the first video according to a first preset correspondence between the first number and the second number and the video animation effects contained in the video suite indicated by the target suite identifier.
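The claim leaves the "first preset correspondence" open. One plausible reading, sketched below under that assumption, cycles the suite's animation effects over the clips when the second number exceeds the first.

```kotlin
// Hedged sketch: assign the suite's animation effects to the video clips.
// Round-robin assignment is an assumed correspondence, not the claimed one.

fun assignEffects(effectCount: Int, clipCount: Int): List<Int> {
    require(effectCount > 0 && clipCount > 0) { "both numbers must be positive" }
    // Clip i receives effect (i mod effectCount), so effects repeat in order
    // when the second number (clips) exceeds the first number (effects).
    return List(clipCount) { clipIndex -> clipIndex % effectCount }
}

fun main() {
    // first number = 2 animation effects, second number = 5 clips
    println(assignEffects(effectCount = 2, clipCount = 5)) // prints [0, 1, 0, 1, 0]
}
```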
7. The method of claim 2, wherein the video suite indicated by the target suite identifier contains a starting display time of a dynamic sticker, and the updating the video feature information of the first video according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier comprises:
acquiring the playing duration of the first video;
and adding the dynamic sticker to the first video according to a second preset correspondence between the starting display time of the dynamic sticker and the playing duration of the first video.
8. The method of claim 2, wherein the video suite indicated by the target suite identifier contains a display duration of a dynamic sticker, and the updating the video feature information of the first video according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier comprises:
acquiring the playing duration of the first video;
and adding the dynamic sticker to the first video according to a third preset correspondence between the display duration of the dynamic sticker and the playing duration of the first video.
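Claims 7 and 8 likewise leave their "preset correspondences" unspecified. The sketch below assumes one simple reading, namely scaling the stored starting time and display duration in proportion to the first video's playing duration, and additionally assumes the suite records the duration of the video it was created from; all names are invented.

```kotlin
// Hedged sketch for claims 7 and 8: scale the sticker's stored starting
// time and display duration to the first video's playing duration.
// suiteVideoMs (the source video's duration) is an extra assumption.

data class StickerPlacement(val startMs: Long, val durationMs: Long)

fun placeSticker(
    suiteStartMs: Long,    // starting display time stored in the suite
    suiteDurationMs: Long, // display duration stored in the suite
    suiteVideoMs: Long,    // assumed: duration of the video the suite came from
    targetVideoMs: Long    // playing duration of the first video
): StickerPlacement {
    require(suiteVideoMs > 0 && targetVideoMs > 0)
    // Keep the sticker at the same relative position and relative length.
    val scale = targetVideoMs.toDouble() / suiteVideoMs
    val start = (suiteStartMs * scale).toLong().coerceIn(0L, targetVideoMs)
    val duration = (suiteDurationMs * scale).toLong().coerceAtMost(targetVideoMs - start)
    return StickerPlacement(start, duration)
}

fun main() {
    // A 3 s sticker starting at 2 s in a 10 s source maps to 6 s at 4 s in a 20 s video.
    println(placeSticker(2_000, 3_000, 10_000, 20_000))
}
```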
9. The method according to claim 2, wherein the video suite indicated by the target suite identifier contains a motion trail of a dynamic sticker, and the updating the video feature information of the first video according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier comprises:
displaying the motion trail of the dynamic sticker, wherein the motion trail of the dynamic sticker consists of the display positions of the dynamic sticker in P video frames;
receiving a fifth input of the user on Q video frames among the P video frames;
in response to the fifth input, updating the display positions of the dynamic sticker in the Q video frames, and updating the motion trail of the dynamic sticker in the first video according to the updated display positions of the dynamic sticker in the P video frames;
wherein Q, P are positive integers, and Q is less than or equal to P.
10. The method according to claim 2, wherein the video suite indicated by the target suite identifier contains a dynamic sticker, and before the updating the video feature information of the first video according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier, the method further comprises:
displaying a sticker track, the sticker track including S video frame thumbnails of the first video and a sticker slide bar;
receiving a sixth input of the user on the sticker slide bar;
in response to the sixth input, sequentially moving the sticker slide bar to a first position and a second position, and adding the dynamic sticker to target video frames, wherein the starting video frame of the target video frames is the video frame corresponding to the video frame thumbnail at the first position, and the ending video frame of the target video frames is the video frame corresponding to the video frame thumbnail at the second position;
wherein S is a positive integer.
11. The method according to claim 10, wherein, before the updating the video feature information of the first video according to the at least one item of video feature information contained in the video suite indicated by the target suite identifier, the method further comprises:
displaying a target control;
receiving a seventh input of the user on the target control;
in response to the seventh input, copying the dynamic sticker to at least one video frame in the first video.
12. A video processing apparatus, comprising:
a first display module, configured to display N suite identifiers in a target area associated with a first video, wherein one suite identifier is used for indicating one video suite, and the video suite contains at least one item of video feature information;
a first receiving module, configured to receive a first input of a user on a target suite identifier among the N suite identifiers;
a first response module, configured to, in response to the first input, update the video feature information of the first video according to at least one item of video feature information contained in the video suite indicated by the target suite identifier;
wherein N is a positive integer.
13. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video processing method according to any one of claims 1 to 11.
CN202010306978.6A 2020-04-17 2020-04-17 Video processing method and device and electronic equipment Active CN111491205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010306978.6A CN111491205B (en) 2020-04-17 2020-04-17 Video processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111491205A (en) 2020-08-04
CN111491205B (en) 2023-04-25

Family

ID=71797967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010306978.6A Active CN111491205B (en) 2020-04-17 2020-04-17 Video processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111491205B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111954076A (en) * 2020-08-27 2020-11-17 维沃移动通信有限公司 Resource display method and device and electronic equipment
CN114338954A (en) * 2021-12-28 2022-04-12 维沃移动通信有限公司 Video generation circuit, method and electronic equipment
CN116600168A (en) * 2023-04-10 2023-08-15 深圳市赛凌伟业科技有限公司 Multimedia data processing method and device, electronic equipment and storage medium

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103928039A (en) * 2014-04-15 2014-07-16 北京奇艺世纪科技有限公司 Video compositing method and device
GB201409580D0 (en) * 2013-05-31 2014-07-16 Adobe Systems Inc Placing unobtrusive overlays in video content
CN105898619A (en) * 2015-12-08 2016-08-24 乐视网信息技术(北京)股份有限公司 Video caption recommending method, system, terminal and server
WO2016177296A1 (en) * 2015-05-04 2016-11-10 腾讯科技(深圳)有限公司 Video generation method and apparatus
CN106339201A (en) * 2016-09-14 2017-01-18 北京金山安全软件有限公司 Map processing method and device and electronic equipment
CN106572395A (en) * 2016-11-08 2017-04-19 广东小天才科技有限公司 Video processing method and device
CN106792071A (en) * 2016-12-19 2017-05-31 北京小米移动软件有限公司 Method for processing caption and device
CN107402985A (en) * 2017-07-14 2017-11-28 广州爱拍网络科技有限公司 Special video effect output control method, device and computer-readable recording medium
WO2018072652A1 (en) * 2016-10-17 2018-04-26 腾讯科技(深圳)有限公司 Video processing method, video processing device, and storage medium
CN108234825A (en) * 2018-01-12 2018-06-29 广州市百果园信息技术有限公司 Method for processing video frequency and computer storage media, terminal
CN108769731A (en) * 2018-05-25 2018-11-06 北京奇艺世纪科技有限公司 The method, apparatus and electronic equipment of target video segment in a kind of detection video
CN108900902A (en) * 2018-07-06 2018-11-27 北京微播视界科技有限公司 Determine method, apparatus, terminal device and the storage medium of video background music
CN109391826A (en) * 2018-08-07 2019-02-26 上海奇邑文化传播有限公司 A kind of video generates system and its generation method online
CN110177219A (en) * 2019-07-01 2019-08-27 百度在线网络技术(北京)有限公司 The template recommended method and device of video
CN110675310A (en) * 2019-07-02 2020-01-10 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN110677734A (en) * 2019-09-30 2020-01-10 北京达佳互联信息技术有限公司 Video synthesis method and device, electronic equipment and storage medium
CN110933487A (en) * 2019-12-18 2020-03-27 北京百度网讯科技有限公司 Method, device and equipment for generating click video and storage medium

Also Published As

Publication number Publication date
CN111491205B (en) 2023-04-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant