WO2019105438A1 - Video special effect adding method and apparatus, and smart mobile terminal - Google Patents

Video special effect adding method and apparatus, and smart mobile terminal Download PDF

Info

Publication number
WO2019105438A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
special effect
instruction
user
frame
Prior art date
Application number
PCT/CN2018/118370
Other languages
French (fr)
Chinese (zh)
Inventor
周宇涛
Original Assignee
广州市百果园信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 广州市百果园信息技术有限公司 filed Critical 广州市百果园信息技术有限公司
Publication of WO2019105438A1 publication Critical patent/WO2019105438A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/80: 2D [Two Dimensional] animation, e.g. using sprites
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/60: Editing figures and text; Combining figures or text

Definitions

  • the embodiment of the invention relates to the field of live broadcast, in particular to a video special effect adding method and device, and an intelligent mobile terminal.
  • Video editing traditionally refers to recording the desired footage with a camera and then using video editing software on a computer to produce the footage into a disc.
  • As the processing power of smart mobile terminals improves, real-time video editing has become a development demand, and editing short videos shot on a smart mobile terminal has become a new requirement.
  • In the prior art, video editing on a smart mobile terminal is still limited to relatively simple operations, such as cutting and splicing videos, or changing a video's color by adjusting its color and brightness or by adding a colored mask over it.
  • The inventor found in research that the video editing methods of smart mobile terminals in the prior art can only achieve simple splicing and color-grading functions, and video splicing merely places multiple videos in sequence on the same timeline for playback. Therefore, when the user uses the editing function, the operation space is limited, and editing operations with a high degree of freedom cannot be performed according to the user's editing needs.
  • As a result, the editing function of a smart mobile terminal gives a poor user experience, and the application is difficult to promote.
  • Embodiments of the present invention provide a high-degree-of-freedom video editing method, apparatus, and intelligent mobile terminal capable of determining a key frame picture landing point coordinate in a special effect animation according to a user instruction.
  • a technical solution adopted by the embodiment of the present invention is to provide a video special effect adding method, which includes the following steps:
  • the edit video and the special effects animation are combined such that the key frame picture is overlaid at a user-specified position coordinate in the edit frame picture.
  • an anchor point for calibrating the coordinates of the drop point is generated in the edit frame picture in the video editing state
  • Before the step of acquiring the user's click instruction or slide instruction in the video editing state, the method further includes the following steps:
  • the anchor point is updated with the user's sliding instruction to update the drop point coordinate.
  • the editing area in the video editing state includes: a first editing area and a frame progress bar; and the first editing area displays a frame picture image that is represented by the editing video at the stop time of the frame progress bar;
  • Before the step of acquiring the user's click instruction or slide instruction in the video editing state, the method further includes the following steps:
  • a frame picture image characterized by the stop of the frame progress bar is retrieved as the edit frame picture.
  • the frame progress bar is provided with a sliding bar marked with the duration of the special effect animation, and the sliding bar is provided with a command bar for indicating the location of the key frame picture;
  • Before the step of acquiring the user's click instruction or slide instruction in the video editing state, the method further includes the following steps:
  • the first editing area displays a frame picture image characterized by a stop of the frame progress bar pointed by the instruction bar.
  • After the step of acquiring a position coordinate specified by the user in the edit frame picture according to the click instruction or the slide instruction, and using the position coordinate as the drop point coordinate of a key frame picture in the special effect animation, the method further includes the following steps:
  • the special effects animation is played synchronously and the special effects animation is displayed on the upper layer of the edited video.
  • After the step of synchronously playing the special effect animation, with the special effect animation displayed on the upper layer of the edited video, when the edited video is played to the start time of the special effect animation, the method further includes the following steps:
  • the temporarily stored special effect animation is deleted in a stack manner.
  • After the step of acquiring a position coordinate specified by the user in the edit frame picture according to the click instruction or the slide instruction, and using the position coordinate as the drop point coordinate of a key frame picture in the special effect animation, the method further includes the following steps:
  • an embodiment of the present invention further provides a video special effect adding apparatus, including:
  • the obtaining module is configured to obtain a click command or a slide instruction of the user in a video editing state
  • a processing module configured to acquire, according to the click instruction or the sliding instruction, a position coordinate specified by the user in the edit frame screen, and use the position coordinate as a drop point coordinate of a key frame picture in the special effect animation;
  • a synthesizing module configured to synthesize the edit video and the special effect animation, so that the key frame picture is overlaid at a user-specified position coordinate in the edit frame picture.
  • an anchor point for calibrating the coordinates of the drop point is generated in the edit frame picture in the video editing state
  • a first obtaining submodule configured to acquire a first click instruction of the user, and calculate coordinates of the first click instruction
  • a first comparison submodule configured to compare whether coordinates of the first click instruction are within a coordinate area of the anchor point
  • a first update submodule configured to: when the coordinates of the first click instruction are within a coordinate area of the anchor point, the anchor point is updated with a sliding instruction of the user to update the drop point coordinate.
  • the editing area in the video editing state includes: a first editing area and a frame progress bar; and the first editing area displays a frame picture image that is represented by the editing video at the stop time of the frame progress bar;
  • a second obtaining submodule configured to obtain a click or slide instruction of the user within a range of a frame progress bar
  • a first calculation submodule configured to determine a stop time of the frame progress bar according to a click or slide instruction in a range of the frame progress bar
  • the first calling submodule is configured to retrieve a frame picture image represented by the stop time of the frame progress bar as the edit frame picture.
  • the frame progress bar is provided with a sliding bar marked with the duration of the special effect animation, and the sliding bar is provided with a command bar for indicating the location of the key frame picture;
  • a third obtaining sub-module configured to acquire a sliding instruction that the user acts within the range of the sliding bar, so that the sliding bar slides along the frame progress bar according to the sliding instruction;
  • a second calculating submodule configured to determine, according to a sliding instruction that the user acts within the range of the sliding bar, a frame progress bar stopping time pointed by the instruction bar;
  • a first display submodule configured to display, by the first editing area, a frame picture image represented by a stop time of a frame progress bar pointed by the instruction bar.
  • the video special effect adding device further includes:
  • a first setting submodule configured to separately place the edited video and the special effect animation on two parallel time tracks
  • the first preview sub-module is configured to, when the edited video is played to the start time of the special effect animation, synchronously play the special effect animation with the special effect animation displayed on the upper layer of the edited video.
  • the video special effect adding device further includes:
  • a fourth obtaining submodule configured to acquire a revocation instruction of the user
  • the first undo sub-module is configured to delete the temporarily stored special effect animation in a stack manner.
  • the video special effect adding device further includes:
  • a fifth obtaining sub-module configured to acquire preset positional relationship information of each frame of the special effect animation and the key frame picture
  • a third calculation sub-module configured to calculate, according to the falling point coordinates and the position relationship information, a coverage coordinate of each frame of the special effect animation
  • a first determining submodule configured to determine, according to the coverage coordinate, a coverage position of each frame of the special effect animation.
  • an embodiment of the present invention further provides an intelligent mobile terminal, which includes:
  • one or more processors; a memory;
  • one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the above video special effect adding method.
  • The beneficial effects of the embodiments of the present invention are: in dual-video editing and synthesis, a key frame picture is preset in the special effect animation, and the drop point position of this picture determines where the entire special effect animation is overlaid on the edited video during video synthesis.
  • By setting the drop point coordinates, the user is free to set the picture position of the special effect animation within the edited video. In this way, the user freely controls where the special effect animation appears in the synthesized video, which improves the user's freedom in the video editing process, makes video editing more entertaining, and offers a better user experience and market prospect.
  • FIG. 1 is a schematic flowchart of a method for adding a video special effect according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of determining a position of another frame picture in a special effect animation according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a display manner for generating an anchor point in an edit frame picture according to an embodiment of the present invention
  • FIG. 4 is a schematic flow chart of determining anchor coordinates of an anchor point according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a first editing area and a frame progress bar display area according to an embodiment of the present invention
  • FIG. 6 is a schematic flowchart of an embodiment of selecting an edit frame picture according to an embodiment of the present invention.
  • FIG. 7 is a schematic view showing a display area provided with a slide bar and a command bar according to an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart diagram of another implementation manner of selecting an edit frame screen according to an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of a method for previewing a video result according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of an editing effect cancellation process according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a video special effect adding apparatus according to an embodiment of the present invention.
  • FIG. 12 is a block diagram showing the basic structure of an intelligent mobile terminal according to an embodiment of the present invention.
  • FIG. 1 is a schematic diagram of a basic flow of a video special effect adding method according to an embodiment of the present invention.
  • a video special effect adding method includes the following steps:
  • S1100 acquiring a user's click command or a slide instruction in a video editing state
  • The user uses the smart mobile terminal to edit a captured or locally stored video; after entering the editing state, the terminal accepts click or slide commands issued by the user with a finger or a stylus.
  • The position coordinates of the user's click or slide command are then obtained. When the user issues a click command, the coordinate position the user touches in the display area of the smart mobile terminal is obtained, and that coordinate position is used as the position coordinate specified by the user.
  • When the command issued by the user is a slide command, the coordinate position of the last point of the user's sliding track is obtained, and that coordinate position is taken as the position coordinate specified by the user.
  • The display area of the smart mobile terminal displays one frame of the edited video selected by the user as the edit frame picture. The user's video editing takes place on this edit frame picture, and after editing, part of the editing operation can be replicated on the other frame pictures of the video.
  • The coordinate position that the user specifies in the display area of the smart mobile terminal is therefore the coordinate position specified on the edit frame picture.
  • The position coordinate is used as the drop point coordinate of the key frame picture of the special effect animation. It should be noted that the area of the key frame picture is smaller than the area of the edit frame picture, so during editing a coordinate in the edit frame picture must be specified as the drop point of the key frame picture, and that drop point is the coordinate position designated by the user.
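  • To make the gesture handling concrete, the following is a minimal Kotlin/Android sketch of resolving the user-specified position coordinate from a click or a slide; it assumes a hypothetical touch handler on the edit frame picture view, and the class and field names are illustrative rather than taken from the patent.

```kotlin
// Minimal sketch (not the patent's code): resolving the user-specified position
// coordinate from a click or slide gesture on the edit frame picture view.
import android.view.MotionEvent

class DropPointPicker {
    // Drop point coordinate of the key frame picture, once the user has specified it.
    var dropPoint: Pair<Float, Float>? = null
        private set

    private var lastX = 0f
    private var lastY = 0f

    // Intended to be called from a View.onTouchEvent() override.
    fun onTouch(event: MotionEvent): Boolean {
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN, MotionEvent.ACTION_MOVE -> {
                // Track the latest point of the sliding trajectory.
                lastX = event.x
                lastY = event.y
            }
            MotionEvent.ACTION_UP -> {
                // For a click this is the tapped point; for a slide it is the
                // last point of the sliding track, as described above.
                dropPoint = lastX to lastY
            }
        }
        return true
    }
}
```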
  • In this context, video editing means adding the special effect animation onto the video.
  • The special effect animation refers to a short video clip with a certain action scene (such as a falling meteorite or an exploding shell), or animated subtitles with a certain motion change.
  • The present invention is not limited to this, and the special effect animation in this embodiment can be any video material in a video format.
  • A key frame picture is set in the special effect animation. The key frame picture is pre-selected, and the frame picture where the special effect animation is most dramatic or where the plot turns is usually chosen (for example, when the special effect animation is a shelling, the frame picture where the shell lands and explodes; when the special effect animation is a meteorite impact, the frame picture at the moment of impact; or when the special effect animation is flying multi-word subtitles, the frame picture where the subtitles line up in a straight row).
  • the key frame picture is not limited to this. According to the application scenario, the key frame picture can be any frame specified in the special effect animation.
  • After the drop point coordinates of the key frame picture in the edit frame picture are determined, the key frame picture is overlaid at the position coordinates specified by the user in the edit frame picture, thereby completing the synthesis of the edited video and the special effect animation.
  • In this embodiment, a key frame picture is preset in the special effect animation, and the drop point position of this key frame picture determines where the entire special effect animation is superimposed on the edited video during video synthesis. By setting the drop point coordinates, the user can freely set the picture position of the special effect animation within the edited video. In this way, the user freely controls where the special effect animation appears in the synthesized video, which improves the user's freedom in the video editing process, makes video editing more entertaining, and offers a better user experience and market prospect.
  • FIG. 2 is a schematic flowchart of determining the position of other frame pictures in the special effect animation of the embodiment.
  • After step S1200, the following steps are further included:
  • the effect animation is composed of multi-frame pictures.
  • the center point of the selected key frame picture is taken as the coordinate origin.
  • The positional relationship of the other frame pictures in the special effect animation with respect to this coordinate origin is then calculated, that is, the coordinates of the other frame pictures relative to the origin. For example, if the coordinates of the frame picture adjacent to the key frame picture are [2, 2], moving the key frame picture two units to the left and two units upward gives the position of that frame picture.
  • The pre-stored positional relationship information can be acquired by accessing the position information storage area in the special effect animation.
  • S1212 Calculate, according to the coordinates of the falling point and the positional relationship information, a coverage coordinate of each frame of the special effect animation
  • The coverage coordinates of each frame of the special effect animation are calculated from the drop point coordinates and the positional relationship information. For example, suppose the coordinates of the frame picture adjacent to the key frame picture are [2, 2] relative to the key frame picture at [0, 0], and the determined drop point coordinates of the key frame picture are [100, 200]; then the corresponding coverage coordinate of that adjacent frame picture is [102, 202].
  • Each frame of the special effect animation has a corresponding frame in the edited video, and every frame picture of the edited video has the same area. Therefore, the calculated coverage coordinates can be used directly to place each frame of the special effect animation in its corresponding frame picture of the edited video.
  • the coverage position of each frame of the effect animation is determined according to the overlay coordinates and the area of the screen.
  • In this way, the coverage position of every frame of the special effect animation is calculated from the drop point coordinates of the key frame picture, which achieves the purpose of controlling the coverage position of the entire special effect animation through a single drop point coordinate, reduces the complexity of frame-by-frame position editing, and is user-friendly.
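  • As a concrete illustration of the coverage coordinate calculation (step S1212), the following is a minimal Kotlin sketch; it assumes each frame of the special effect animation stores an offset relative to the key frame picture, so a frame's coverage coordinate is the drop point plus that offset, and all names are illustrative.

```kotlin
// Minimal sketch of the coverage-coordinate calculation (step S1212), assuming each
// frame of the special effect animation stores an offset relative to the key frame picture.
data class Point(val x: Int, val y: Int)

fun coverageCoordinates(
    dropPoint: Point,          // drop point of the key frame picture, e.g. Point(100, 200)
    frameOffsets: List<Point>  // per-frame offsets relative to the key frame, e.g. Point(2, 2)
): List<Point> =
    frameOffsets.map { offset -> Point(dropPoint.x + offset.x, dropPoint.y + offset.y) }

fun main() {
    // Example from the description: drop point [100, 200] and an adjacent-frame
    // offset of [2, 2] give a coverage coordinate of [102, 202].
    println(coverageCoordinates(Point(100, 200), listOf(Point(0, 0), Point(2, 2))))
}
```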
  • an anchor point for calibrating the coordinates of the landing point is generated in the edit frame image in the video editing state, as shown in FIG. 3 and FIG. 4 .
  • FIG. 3 is a schematic diagram of a display manner for generating an anchor point in an edit frame picture according to the embodiment
  • FIG. 4 is a schematic flowchart of determining anchor coordinates of an anchor point according to the embodiment.
  • As shown in FIG. 4, the following steps are further included before step S1100:
  • S1011 Acquire a first click instruction of the user, and calculate coordinates of the first click instruction
  • An anchor point for calibrating the drop point coordinates is generated in the edit frame picture. The anchor point is specifically designed as a sniper-scope style anchor point, that is, it has an outer ring representing the range of the anchor point and an origin point located exactly at its center.
  • The shape of the anchor point is not limited in this embodiment.
  • The anchor point can be designed as (but not limited to) different patterns such as a circle, a ring, a triangle, or other polygons, and depending on the application scenario it can also be replaced by a cartoon pattern or other silhouette patterns.
  • the smart mobile terminal acquires the user's first click command and calculates the coordinates specified by the user's click command.
  • The coordinates of the anchor point's shape are the set of all coordinates located within the anchor point's outer ring. After the coordinates of the user's click instruction are obtained, it is determined by comparison whether the coordinates specified by the user are within the anchor point's coordinate set. If not, it means the user has not issued an instruction to change the drop point coordinates; if so, it means the user has issued an instruction to adjust the drop point coordinates, and the process proceeds to step S1023.
  • the anchor point is updated with a sliding instruction of the user to update the drop point coordinate.
  • The anchor point moves along the user's sliding track, and after the user's sliding instruction ends, the coordinate position of the anchor point's center at its new position is acquired; that center coordinate position is the updated drop point coordinate.
  • In this way, the user can adjust the anchor point more intuitively. At the same time, a user instruction confirmation procedure is set, which prevents other editing activities from interfering while the drop point coordinates are being set: since the anchor point can be adjusted only by first clicking within its coordinate area, the user can still perform other operations on the video whenever the anchor point is not clicked, which facilitates editing.
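  • The following is a minimal Kotlin sketch of the anchor hit test and drag update described above, under the assumption that the anchor's coordinate area is a circle of a given radius around its center; the class and method names are illustrative.

```kotlin
// Minimal sketch of the anchor hit test and drag update, assuming the anchor's
// coordinate area is a circle of radius r around its center.
import kotlin.math.hypot

class Anchor(var centerX: Float, var centerY: Float, val radius: Float) {

    // Compare whether the first click falls inside the anchor's coordinate area.
    fun contains(x: Float, y: Float): Boolean =
        hypot(x - centerX, y - centerY) <= radius

    // The anchor follows the user's sliding track; when the slide ends, the anchor
    // center at its new position becomes the updated drop point coordinate.
    fun follow(x: Float, y: Float) {
        centerX = x
        centerY = y
    }

    fun dropPoint(): Pair<Float, Float> = centerX to centerY
}
```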
  • FIG. 5 is a schematic diagram of a first editing area and a frame progress bar display area according to the embodiment.
  • FIG. 6 is a schematic flowchart of an embodiment of selecting an edit frame picture according to an embodiment.
  • the editing area in the video editing state includes: a first editing area and a frame progress bar; and the first editing area displays a frame picture image represented by the editing video at the stop time of the frame progress bar.
  • The first editing area is located above the frame progress bar, and the first editing area is a proportionally scaled frame of the display area.
  • The frame progress bar is the timeline of the edited video, composed of a number of frame picture thumbnails arranged along the timeline.
  • The first editing area displays the frame picture image of the edited video at the stop time of the frame progress bar. For example, if the frame progress bar stops at the 03:35 position, the first editing area displays the frame picture of the edited video at that moment as the edit frame picture.
  • As shown in FIG. 6, the following steps are also included before step S1100:
  • S1021 Acquire a click or slide instruction of a user within a range of a frame progress bar
  • the smart mobile terminal acquires a user's click or slide instruction.
  • S1022 Determine a stop time of the frame progress bar according to a click or slide instruction in a range of the frame progress bar;
  • the coordinates of the range of the frame progress bar are a collection of all the coordinates located within the frame progress bar area. After obtaining the coordinates of the user's click command or the slide instruction, it is determined by comparison whether the coordinates specified by the user are within the set of frame progress bar coordinates. If not, the user has not issued an instruction to change the edit frame screen, and if so, the user has issued an instruction to adjust the edit frame screen.
  • After the user's click or slide instruction on the frame progress bar is received, the stop time on the progress bar is determined according to the instruction, and the frame picture of the edited video represented by that time is the edit frame picture selected by the user.
  • S1023 Retrieve the frame picture image represented by the stop time of the frame progress bar as the edit frame picture.
  • an anchor point can be displayed in the first editing area while the frame progress bar is being set.
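  • A minimal Kotlin sketch of steps S1021 through S1023 is given below; it assumes the frame progress bar maps a touched x position linearly to a stop time, and uses Android's MediaMetadataRetriever as one possible way to fetch the frame picture at that time. The function names are illustrative.

```kotlin
// Minimal sketch of steps S1021 to S1023: map a touch on the frame progress bar to a
// stop time, then fetch the frame picture at that time.
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

fun stopTimeForTouch(touchX: Float, barWidthPx: Float, videoDurationMs: Long): Long {
    val fraction = (touchX / barWidthPx).coerceIn(0f, 1f)
    return (fraction * videoDurationMs).toLong()
}

fun editFrameAt(videoPath: String, stopTimeMs: Long): Bitmap? {
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(videoPath)
        // getFrameAtTime expects microseconds; fetch the frame closest to the stop time.
        retriever.getFrameAtTime(stopTimeMs * 1000, MediaMetadataRetriever.OPTION_CLOSEST)
    } finally {
        retriever.release()
    }
}
```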
  • FIG. 7 is a schematic diagram of a display area provided with a slide bar and a command bar according to the embodiment.
  • FIG. 8 is a schematic flow chart of another embodiment of selecting an edit frame picture according to the embodiment.
  • the frame progress bar is provided with a slider bar marked with the duration of the special effect animation, and the slider bar is provided with a command bar for indicating the position of the key frame picture.
  • The slider bar is a frame that represents the duration of the special effect animation.
  • The length of the slider corresponds to the proportion of the special effect animation on the edited video's frame progress bar. For example, if the special effect animation lasts 5 s and the edited video lasts 20 s, the length of the slider on the frame progress bar is one quarter of the progress bar's total length; if the special effect animation lasts 5 s and the edited video lasts 45 s, the length of the slider on the frame progress bar is one ninth of the progress bar's total length.
  • The instruction bar is disposed on the slider bar to indicate the position of the key frame picture in the special effect animation, and it is designed as an indicative mark such as (but not limited to) a sniper-scope anchor point or a triangular arrow.
  • Before step S1100, the following steps are further included:
  • S1013 Acquire a sliding instruction that the user acts in the range of the sliding bar, so that the sliding bar slides along the frame progress bar according to the sliding instruction;
  • a sliding instruction that the user acts within the range of the slider is obtained to enable the slider to slide following the user's sliding command.
  • After the user's click or slide instruction applied on the frame progress bar is received, the stop time on the frame progress bar is determined according to the instruction, and the frame picture of the edited video represented by that time is the edit frame picture selected by the user.
  • the first editing area displays a frame picture image represented by a stop time of a frame progress bar pointed by the instruction bar.
  • The frame picture represented at the frame progress bar time currently pointed to by the instruction bar is retrieved as the edit frame picture; that is, the first editing area always displays the frame picture image of the time the instruction bar is aligned with on the frame progress bar.
  • In this way, the user can intuitively adjust the position of the entire special effect animation on the frame progress bar, and can also visually see the play position of the key frame picture on the frame progress bar, making the operation more intuitive.
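  • The slider bar geometry described above can be sketched as follows in Kotlin, assuming the slider's pixel length is proportional to the ratio of the special effect animation's duration to the edited video's duration, and that the instruction bar sits at the key frame picture's offset inside the slider; all names are illustrative.

```kotlin
// Minimal sketch of the slider bar geometry: slider length is proportional to
// effectDurationMs / videoDurationMs, and the instruction bar marks the key frame.
data class SliderLayout(val sliderLengthPx: Float, val instructionBarOffsetPx: Float)

fun layoutSlider(
    barLengthPx: Float,      // total pixel length of the frame progress bar
    videoDurationMs: Long,   // duration of the edited video, e.g. 20_000 ms
    effectDurationMs: Long,  // duration of the special effect animation, e.g. 5_000 ms
    keyFrameTimeMs: Long     // key frame picture time within the effect animation
): SliderLayout {
    val lengthPx = barLengthPx * effectDurationMs / videoDurationMs  // e.g. 1/4 of the bar
    val offsetPx = lengthPx * keyFrameTimeMs / effectDurationMs      // instruction bar position
    return SliderLayout(lengthPx, offsetPx)
}

// Time pointed to by the instruction bar after the user slides the slider,
// given the slider's current left edge on the progress bar.
fun instructionBarTimeMs(
    sliderStartPx: Float,
    layout: SliderLayout,
    barLengthPx: Float,
    videoDurationMs: Long
): Long = ((sliderStartPx + layout.instructionBarOffsetPx) / barLengthPx * videoDurationMs).toLong()
```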
  • FIG. 9 is a schematic flowchart of a method for previewing a video result according to an embodiment.
  • After step S1200, the following steps are further included:
  • During preview, the edited video and the special effect animation are placed on two parallel time tracks, and the time track where the special effect animation is located is always above the edited video's time track, so that the special effect animation is always displayed on top of the edited video.
  • During playback, the frame images of the edited video and the special effect animation are read simultaneously, the two frame images are rendered together, and the result is placed in the video memory of the smart mobile terminal. When the display is refreshed, the superimposed rendered frame picture is retrieved for display, completing the presentation of the two overlaid videos.
  • In this way, the user can preview the editing effect, which is convenient for review and helps improve the quality of the edited video.
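  • A minimal Kotlin/Android sketch of the two-track preview compositing is shown below, assuming each track supplies one Bitmap per frame and the special effect frame is drawn on the upper layer at its coverage coordinate; the function name is illustrative.

```kotlin
// Minimal sketch of the two-track preview compositing: the effect frame is drawn
// on the upper layer over the edited-video frame at its coverage coordinate.
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Paint

fun composeFrame(videoFrame: Bitmap, effectFrame: Bitmap?, coverX: Float, coverY: Float): Bitmap {
    // Copy the edited-video frame so the original frame stays untouched.
    val output = videoFrame.copy(Bitmap.Config.ARGB_8888, true)
    val canvas = Canvas(output)
    // While the special effect animation is active, its frame is rendered on top.
    effectFrame?.let { canvas.drawBitmap(it, coverX, coverY, Paint()) }
    return output
}
```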
  • the editing effect can be quickly deleted in the preview state.
  • FIG. 10 is a schematic diagram of an editing effect revocation process according to the embodiment.
  • After step S1222, the following steps are further included:
  • the user's revocation instruction is acquired, and the user issues a revocation instruction by clicking on a specific location (undo button) area of the display area of the smart mobile terminal.
  • S1232 The temporarily stored special effect animation is deleted in a stack manner.
  • When the smart mobile terminal stores the special effect animations added to the edited video, it saves them in a stack, which is characterized by first-in, last-out behavior. Since multiple special effect animations can be set on the same edited video, they are pushed onto the stack when stored; when undoing, the temporarily stored special effect animations are likewise removed in stack order, that is, the special effect animation that last entered the temporary space is deleted first, and the one that first entered the temporary space is deleted last.
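  • The stack-based undo described above can be sketched in Kotlin as follows; EffectAnimation is a placeholder type for whatever the application stores per added effect, and the class is illustrative.

```kotlin
// Minimal sketch of the stack-based undo: effects are pushed when added and popped
// (last in, first out) when the user taps undo.
import java.util.ArrayDeque

class EffectAnimation(val name: String)

class EffectStack {
    private val stack = ArrayDeque<EffectAnimation>()

    // Push when the effect is applied to the edited video.
    fun add(effect: EffectAnimation) = stack.push(effect)

    // Undo removes the most recently added effect first.
    fun undo(): EffectAnimation? = if (stack.isEmpty()) null else stack.pop()
}
```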
  • FIG. 11 is a block diagram showing the basic structure of a video special effect adding apparatus according to this embodiment.
  • a video special effect adding apparatus includes: an obtaining module 2100, a processing module 2200, and a synthesizing module 2300.
  • the obtaining module 2100 is configured to acquire a click instruction or a slide instruction of the user in a video editing state;
  • The processing module 2200 is configured to acquire a position coordinate specified by the user in the edit frame picture according to the click instruction or the slide instruction, and use the position coordinate as the drop point coordinate of a key frame picture in the special effect animation.
  • the synthesizing module 2300 is configured to synthesize the edit video and the special effect animation so that the key frame picture is overlaid at the position coordinates specified by the user in the edit frame picture.
  • In this apparatus, a key frame picture is preset in the special effect animation, and the drop point position of this picture determines where the entire special effect animation is superimposed on the edited video during video synthesis. By setting the drop point coordinates, the user can freely set the picture position of the special effect animation within the edited video. In this way, the user freely controls where the special effect animation appears in the synthesized video, which improves the user's freedom in the video editing process, makes video editing more entertaining, and offers a better user experience and market prospect.
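  • The three-module structure of the apparatus (obtaining module 2100, processing module 2200, synthesizing module 2300) can be sketched as Kotlin interfaces as follows; the signatures are illustrative assumptions used only to show how the modules hand data to one another, not the patent's actual interfaces.

```kotlin
// Illustrative interfaces for the three-module structure of the apparatus.
interface ObtainingModule {
    // Acquire the user's click or slide instruction in the video editing state.
    fun acquireGesture(): Pair<Float, Float>
}

interface ProcessingModule {
    // Turn the user-specified position coordinate into the key frame drop point.
    fun toDropPoint(userCoordinate: Pair<Float, Float>): Pair<Float, Float>
}

interface SynthesizingModule {
    // Synthesize the edited video with the special effect animation so that the key
    // frame picture is overlaid at the drop point; returns the output video path.
    fun synthesize(videoPath: String, effectPath: String, dropPoint: Pair<Float, Float>): String
}
```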
  • an anchor point for calibrating the coordinates of the drop point is generated in the edit frame picture in the video editing state.
  • the video special effect adding device further includes: a first acquiring submodule, a first comparing submodule, and a first updating submodule.
  • the first obtaining sub-module is configured to acquire a first click instruction of the user, and calculate coordinates of the first click instruction;
  • the first comparison sub-module is configured to compare whether the coordinates of the first click instruction are in a coordinate area of the anchor point.
  • the first update submodule is configured to: when the coordinates of the first click instruction are within the coordinate area of the anchor point, the anchor point is updated with the sliding instruction of the user to update the coordinates of the drop point.
  • the editing region in the video editing state includes: a first editing region and a frame progress bar; and the first editing region displays a frame image that is characterized by the editing video at the stop of the frame progress bar.
  • the video special effect adding device further includes: a second obtaining submodule, a first calculating submodule, and a first calling submodule.
  • The second obtaining sub-module is configured to obtain a click or slide instruction of the user within the range of the frame progress bar; the first calculating sub-module is configured to determine the stop time of the frame progress bar according to the click or slide instruction within the range of the frame progress bar; the first calling sub-module is used to retrieve the frame picture image represented at the stop time of the frame progress bar as the edit frame picture.
  • the frame progress bar is provided with a slider bar marked with the duration of the effect animation
  • the slider bar is provided with a command bar for indicating the position of the key frame picture.
  • the video special effect adding device further includes: a third obtaining submodule, a second calculating submodule, and a first display submodule.
  • The third obtaining sub-module is configured to obtain a sliding instruction that the user applies within the range of the slider bar, so that the slider bar slides along the frame progress bar following the sliding instruction; the second calculating sub-module is configured to determine, according to the sliding instruction that the user applies within the range of the slider bar, the frame progress bar stop time pointed to by the instruction bar;
  • the first display sub-module is configured to display a frame picture image represented by the stop of the frame progress bar pointed by the instruction bar in the first edit area.
  • the video special effect adding device further includes: a first setting submodule and a first preview submodule.
  • The first setting sub-module is used to place the edited video and the special effect animation on two parallel time tracks respectively; the first preview sub-module is used to, when the edited video is played to the start time of the special effect animation, synchronously play the special effect animation with the special effect animation displayed on the upper layer of the edited video.
  • the video special effect adding apparatus further includes: a fourth obtaining submodule and a first revocation submodule.
  • the fourth obtaining sub-module is configured to acquire a revocation instruction of the user; and the first revocation sub-module is configured to delete the temporarily-applied special effect animation in a stack manner.
  • the video special effect adding apparatus further includes: a fifth obtaining submodule, a third calculating submodule, and a first determining submodule.
  • the fifth obtaining sub-module is configured to obtain position relationship information of each frame picture and key frame picture in the preset special effect animation;
  • the third calculating sub-module is configured to calculate the coverage coordinates of each frame of the special effect animation according to the drop point coordinates and the positional relationship information;
  • the first determining sub-module is configured to determine a coverage position of each frame of the special effect animation according to the overlay coordinate.
  • FIG. 12 is a schematic diagram of a basic structure of an intelligent mobile terminal according to an embodiment of the present invention.
  • All the programs of the video special effect adding method in this embodiment are stored in the memory 1520 of the smart mobile terminal, and the processor 1580 can call the programs in the memory 1520 to perform all the functions listed in the above video special effect adding method.
  • The functions implemented by the smart mobile terminal are described in detail in the video special effect adding method of this embodiment, and details are not described herein again.
  • A key frame picture is preset in the special effect animation, and the drop point position of this picture determines where the entire special effect animation is superimposed on the edited video during video synthesis; by setting the drop point coordinates, the user can freely set where the special effect animation appears in the edited video.
  • the embodiment of the present invention further provides an intelligent mobile terminal.
  • The terminal may be any terminal device, including a smart mobile terminal, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, or an in-vehicle computer; the following description takes a smart mobile terminal as an example:
  • FIG. 12 is a block diagram showing a partial structure of an intelligent mobile terminal related to a terminal provided by an embodiment of the present invention.
  • the smart mobile terminal includes: a radio frequency (RF) circuit 1510, a memory 1520, an input unit 1530, a display unit 1540, a sensor 1550, an audio circuit 1560, and a wireless fidelity (Wi-Fi) module 1570. , processor 1580, and power supply 1590 and other components.
  • It can be understood that the smart mobile terminal structure shown in FIG. 12 does not constitute a limitation on the smart mobile terminal, and the terminal may include more or fewer components than those illustrated, combine some components, or use a different arrangement of components.
  • The RF circuit 1510 can be used for receiving and transmitting signals during information transmission and reception or during a call. Specifically, downlink information received from the base station is handed to the processor 1580 for processing, and uplink data is sent to the base station.
  • RF circuitry 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuitry 1510 can also communicate with the network and other devices via wireless communication.
  • the above wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code Division). Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), E-mail, Short Messaging Service (SMS), and the like.
  • the memory 1520 can be used to store software programs and modules, and the processor 1580 executes various functional applications and data processing of the smart mobile terminal by running software programs and modules stored in the memory 1520.
  • the memory 1520 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a voiceprint playing function, an image playing function, etc.), and the like; the storage data area may be stored. Data created according to the use of the smart mobile terminal (such as audio data, phone book, etc.).
  • memory 1520 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the input unit 1530 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the smart mobile terminal.
  • the input unit 1530 may include a touch panel 1531 and other input devices 1532.
  • The touch panel 1531, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 1531 with a finger, a stylus, or any other suitable object), and drive the corresponding connection device according to a preset program.
  • the touch panel 1531 may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 1580, and it can also receive commands from the processor 1580 and execute them.
  • the touch panel 1531 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 1530 may also include other input devices 1532.
  • other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the display unit 1540 can be used to display information input by the user or information provided to the user as well as various menus of the smart mobile terminal.
  • the display unit 1540 can include a display panel 1541.
  • the display panel 1541 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • The touch panel 1531 may cover the display panel 1541. After the touch panel 1531 detects a touch operation on or near it, it transmits the operation to the processor 1580 to determine the type of the touch event, and the processor 1580 then provides a corresponding visual output on the display panel 1541 according to the type of the touch event.
  • Although the touch panel 1531 and the display panel 1541 are shown as two independent components implementing the input and output functions of the smart mobile terminal, in some embodiments the touch panel 1531 and the display panel 1541 may be integrated to realize the input and output functions of the smart mobile terminal.
  • the smart mobile terminal may also include at least one type of sensor 1550, such as a light sensor, motion sensor, and other sensors.
  • The light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 1541 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 1541 and/or the backlight when the smart mobile terminal moves close to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity. It can be used to identify the posture of smart mobile terminals (such as horizontal and vertical screen switching).
  • An audio circuit 1560, a speaker 1561, and a microphone 1562 can provide an audio interface between the user and the smart mobile terminal.
  • The audio circuit 1560 can convert received audio data into an electrical signal and transmit it to the speaker 1561, and the speaker 1561 converts it into a voiceprint signal for output.
  • Conversely, the microphone 1562 converts a collected voiceprint signal into an electrical signal, which the audio circuit 1560 receives and converts into audio data; after the audio data is processed by the processor 1580, it is transmitted via the RF circuit 1510 to, for example, another smart mobile terminal, or the audio data is output to the memory 1520 for further processing.
  • Wi-Fi is a short-range wireless transmission technology.
  • the smart mobile terminal can help users to send and receive emails, browse web pages and access streaming media through the Wi-Fi module 1570. It provides users with wireless broadband Internet access.
  • Although FIG. 12 shows the Wi-Fi module 1570, it can be understood that the module is not an essential part of the smart mobile terminal and can be omitted as needed without changing the essence of the invention.
  • The processor 1580 is the control center of the smart mobile terminal; it connects the various parts of the entire smart mobile terminal using various interfaces and lines, and performs the various functions of the smart mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 1520 and by calling the data stored in the memory 1520.
  • the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 1580.
  • the smart mobile terminal also includes a power source 1590 (such as a battery) for supplying power to various components.
  • The power source can be logically connected to the processor 1580 through a power management system, so that functions such as charging, discharging, and power consumption management are managed through the power management system.
  • the smart mobile terminal may further include a camera, a Bluetooth module, and the like, and details are not described herein again.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

Disclosed in embodiments of the present invention are a video special effect adding method and apparatus, and a smart mobile terminal. The method comprises the following steps: obtaining a user's click instruction or slide instruction in a video editing state; obtaining position coordinates specified by the user in an edit frame picture according to the click instruction or the slide instruction, and using the position coordinates as the drop point coordinates of a key frame picture in a special effect animation; and synthesizing the edited video and the special effect animation such that the key frame picture covers the position coordinates specified by the user in the edit frame picture. During dual-video editing and synthesis, a key frame picture is preset in the special effect animation, the drop point position of this picture decides where the whole special effect animation is superimposed on the edited video during video synthesis, and the user freely sets, by setting the drop point coordinates, the picture position of the special effect animation within the edited video.

Description

Video special effect adding method and apparatus, and smart mobile terminal
Technical Field
The embodiments of the present invention relate to the field of live broadcasting, and in particular to a video special effect adding method and apparatus, and a smart mobile terminal.
Background Art
Video editing traditionally refers to recording the desired footage with a camera and then using video editing software on a computer to produce the footage into a disc. However, as the processing power of smart mobile terminals improves, real-time video editing has become a development demand, and editing short videos shot on a smart mobile terminal has become a new requirement.
In the prior art, video editing on a smart mobile terminal is still limited to relatively simple operations, such as cutting and splicing videos, or changing a video's color by adjusting its color and brightness or by adding a colored mask over it.
The inventor found in research that the video editing methods of smart mobile terminals in the prior art can only achieve simple splicing and color-grading functions, and video splicing merely places multiple videos in sequence on the same timeline for playback. Therefore, when the user uses the editing function, the operation space is limited, and editing operations with a high degree of freedom cannot be performed according to the user's editing needs. The editing function of a smart mobile terminal thus gives a poor user experience, and the application is difficult to promote.
Summary of the Invention
Embodiments of the present invention provide a high-degree-of-freedom video editing method, apparatus, and smart mobile terminal capable of determining the drop point coordinates of a key frame picture in a special effect animation according to a user instruction.
To solve the above technical problem, one technical solution adopted by the embodiments of the present invention is to provide a video special effect adding method, which includes the following steps:
acquiring a user's click instruction or slide instruction in a video editing state;
acquiring, according to the click instruction or the slide instruction, a position coordinate specified by the user in an edit frame picture, and using the position coordinate as the drop point coordinate of a key frame picture in a special effect animation;
synthesizing the edited video and the special effect animation, so that the key frame picture is overlaid at the user-specified position coordinate in the edit frame picture.
Optionally, an anchor point for calibrating the drop point coordinates is generated in the edit frame picture in the video editing state;
before the step of acquiring the user's click instruction or slide instruction in the video editing state, the method further includes the following steps:
acquiring a first click instruction of the user, and calculating the coordinates of the first click instruction;
comparing whether the coordinates of the first click instruction are within the coordinate area of the anchor point;
when the coordinates of the first click instruction are within the coordinate area of the anchor point, updating the position of the anchor point following the user's slide instruction, so as to update the drop point coordinates.
Optionally, the editing area in the video editing state includes a first editing area and a frame progress bar; the first editing area displays the frame picture image of the edited video at the stop time of the frame progress bar;
before the step of acquiring the user's click instruction or slide instruction in the video editing state, the method further includes the following steps:
acquiring a click or slide instruction of the user within the range of the frame progress bar;
determining the stop time of the frame progress bar according to the click or slide instruction within the range of the frame progress bar;
retrieving the frame picture image represented at the stop time of the frame progress bar as the edit frame picture.
Optionally, the frame progress bar is provided with a slider bar marking the duration of the special effect animation, and the slider bar is provided with an instruction bar indicating the position of the key frame picture;
before the step of acquiring the user's click instruction or slide instruction in the video editing state, the method further includes the following steps:
acquiring a slide instruction that the user applies within the range of the slider bar, so that the slider bar slides along the frame progress bar following the slide instruction;
determining, according to the slide instruction that the user applies within the range of the slider bar, the frame progress bar stop time pointed to by the instruction bar;
displaying, in the first editing area, the frame picture image represented at the frame progress bar stop time pointed to by the instruction bar.
Optionally, after the step of acquiring, according to the click instruction or the slide instruction, the position coordinate specified by the user in the edit frame picture and using the position coordinate as the drop point coordinate of a key frame picture in the special effect animation, the method further includes the following steps:
placing the edited video and the special effect animation on two parallel time tracks respectively;
when the edited video is played to the start time of the special effect animation, synchronously playing the special effect animation with the special effect animation displayed on the upper layer of the edited video.
Optionally, after the step of synchronously playing the special effect animation, with the special effect animation displayed on the upper layer of the edited video, when the edited video is played to the start time of the special effect animation, the method further includes the following steps:
acquiring a revocation instruction of the user;
deleting the temporarily stored special effect animation in a stack manner.
Optionally, after the step of acquiring, according to the click instruction or the slide instruction, the position coordinate specified by the user in the edit frame picture and using the position coordinate as the drop point coordinate of a key frame picture in the special effect animation, the method further includes the following steps:
acquiring preset positional relationship information between each frame picture of the special effect animation and the key frame picture;
calculating the coverage coordinates of each frame picture of the special effect animation according to the drop point coordinates and the positional relationship information;
determining the coverage position of each frame picture of the special effect animation according to the coverage coordinates.
To solve the above technical problem, an embodiment of the present invention further provides a video special effect adding apparatus, including:
an acquisition module, configured to acquire a click instruction or a slide instruction from the user in a video editing state;
a processing module, configured to acquire, according to the click instruction or slide instruction, the position coordinates specified by the user in an edit frame picture, and use the position coordinates as the drop point coordinates of a key frame picture in a special effect animation;
a synthesis module, configured to synthesize the edited video and the special effect animation so that the key frame picture is overlaid at the position coordinates specified by the user in the edit frame picture.
Optionally, in the video editing state, an anchor point for marking the drop point coordinates is generated in the edit frame picture.
The video special effect adding apparatus further includes:
a first acquisition submodule, configured to acquire a first click instruction from the user and calculate the coordinates of the first click instruction;
a first comparison submodule, configured to check whether the coordinates of the first click instruction fall within the coordinate area of the anchor point;
a first update submodule, configured to, when the coordinates of the first click instruction fall within the coordinate area of the anchor point, update the position of the anchor point following the user's slide instruction so as to update the drop point coordinates.
Optionally, in the video editing state, the editing area includes a first editing area and a frame progress bar; the first editing area displays the frame picture of the edited video represented at the stop time of the frame progress bar.
The video special effect adding apparatus further includes:
a second acquisition submodule, configured to acquire a click or slide instruction issued by the user within the range of the frame progress bar;
a first calculation submodule, configured to determine the stop time of the frame progress bar according to the click or slide instruction within the range of the frame progress bar;
a first retrieval submodule, configured to retrieve the frame picture represented by the stop time of the frame progress bar as the edit frame picture.
Optionally, the frame progress bar is provided with a slider bar indicating the duration of the special effect animation, and the slider bar is provided with an instruction bar indicating the position of the key frame picture.
The video special effect adding apparatus further includes:
a third acquisition submodule, configured to acquire a slide instruction applied by the user within the range of the slider bar, so that the slider bar slides along the frame progress bar with the slide instruction;
a second calculation submodule, configured to determine, according to the slide instruction applied by the user within the range of the slider bar, the stop time of the frame progress bar pointed to by the instruction bar;
a first display submodule, configured to display, in the first editing area, the frame picture represented by the stop time of the frame progress bar pointed to by the instruction bar.
Optionally, the video special effect adding apparatus further includes:
a first setting submodule, configured to place the edited video and the special effect animation on two parallel time tracks respectively;
a first preview submodule, configured to, when the edited video is played to the start time of the special effect animation, play the special effect animation synchronously, the special effect animation being displayed on the upper layer of the edited video.
Optionally, the video special effect adding apparatus further includes:
a fourth acquisition submodule, configured to acquire an undo instruction from the user;
a first undo submodule, configured to delete the temporarily stored special effect animations in a stack manner.
Optionally, the video special effect adding apparatus further includes:
a fifth acquisition submodule, configured to acquire preset positional relationship information between each frame picture of the special effect animation and the key frame picture;
a third calculation submodule, configured to calculate the coverage coordinates of each frame picture of the special effect animation according to the drop point coordinates and the positional relationship information;
a first determination submodule, configured to determine the coverage position of each frame picture of the special effect animation according to the coverage coordinates.
To solve the above technical problem, an embodiment of the present invention further provides a smart mobile terminal, including:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the video special effect adding method described above.
The beneficial effects of the embodiments of the present invention are as follows: in dual-video editing and synthesis, a key frame picture is preset in the special effect animation, and the drop point position of this picture determines where the entire special effect animation is overlaid on the edited video during synthesis; by setting the drop point coordinates, the user freely sets the picture position at which the special effect animation appears in the edited video. In this way, the user freely controls the view position of the special effect animation in the synthesized video, which increases the user's freedom during video editing, makes video editing more entertaining, and offers a better user experience and market prospects.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of the basic flow of a video special effect adding method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of determining the positions of the other frame pictures in a special effect animation according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of one way of displaying an anchor point generated in an edit frame picture according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of determining the drop point coordinates by means of the anchor point according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a first editing area and a frame progress bar display area according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of one implementation of selecting an edit frame picture according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a display area provided with a slider bar and an instruction bar according to an embodiment of the present invention;
FIG. 8 is a schematic flowchart of another implementation of selecting an edit frame picture according to an embodiment of the present invention;
FIG. 9 is a schematic flowchart of a method for previewing the edited video result according to an embodiment of the present invention;
FIG. 10 is a schematic flowchart of undoing an editing effect according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of the basic structure of a video special effect adding apparatus according to an embodiment of the present invention;
FIG. 12 is a block diagram of the basic structure of a smart mobile terminal according to an embodiment of the present invention.
DETAILED DESCRIPTION
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.
Some of the flows described in the specification, the claims, and the above drawings of the present invention include multiple operations that appear in a particular order. It should be clearly understood, however, that these operations may be executed out of the order in which they appear herein or executed in parallel. The operation numbers, such as 101 and 102, are only used to distinguish different operations; the numbers themselves do not represent any order of execution. In addition, these flows may include more or fewer operations, and these operations may be executed sequentially or in parallel. It should be noted that the terms "first", "second", and the like herein are used to distinguish different messages, devices, modules, and so on; they do not represent any order, nor do they require that the "first" and the "second" be of different types.
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiments
Referring to FIG. 1, FIG. 1 is a schematic flowchart of the basic flow of the video special effect adding method according to this embodiment.
As shown in FIG. 1, a video special effect adding method includes the following steps:
S1100: acquiring a click instruction or a slide instruction from the user in a video editing state;
The user uses the smart mobile terminal to edit a captured or locally stored video. After the terminal enters the editing state, it receives the click or slide instruction issued by the user with a finger or a stylus.
S1200: acquiring, according to the click instruction or slide instruction, the position coordinates specified by the user in the edit frame picture, and using the position coordinates as the drop point coordinates of a key frame picture in the special effect animation;
The position coordinates of the user's click instruction or slide instruction are acquired. When the user issues a click instruction, the coordinate position at which the user taps the display area of the smart mobile terminal is acquired and used as the position coordinates specified by the user. When the instruction issued by the user is a slide instruction, the coordinate position of the last point of the user's sliding track is acquired and used as the position coordinates specified by the user.
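For illustration only, the following Kotlin sketch shows one way this coordinate resolution could be implemented; the types and the function name (Point, Gesture, resolveSpecifiedPosition) are hypothetical and are not part of the claimed method.

```kotlin
// Hypothetical sketch: resolving the user-specified position coordinates
// from a click instruction or a slide instruction, as described above.

data class Point(val x: Float, val y: Float)

sealed class Gesture {
    /** A single tap at one screen position. */
    data class Click(val position: Point) : Gesture()
    /** A slide described by the ordered points of its track. */
    data class Slide(val track: List<Point>) : Gesture()
}

/**
 * Returns the position coordinates specified by the user: the tap
 * position for a click, or the last point of the sliding track for a slide.
 */
fun resolveSpecifiedPosition(gesture: Gesture): Point = when (gesture) {
    is Gesture.Click -> gesture.position
    is Gesture.Slide -> gesture.track.last()
}
```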
In the editing state, the display area of the smart mobile terminal displays one frame of the edited video selected by the user as the edit frame picture. When the user edits the video, the editing operations all take place on the edit frame picture; after editing of the edit frame picture is completed, some of the editing operations can be replicated on the other frame pictures of the video. The coordinate position specified by the user in the display area of the smart mobile terminal is therefore the coordinate position on the edit frame picture.
After the position coordinates specified by the user are acquired, they are used as the drop point coordinates of the key frame picture of the special effect animation. It should be noted that the area of the key frame picture is smaller than that of the edit frame picture, so a coordinate in the edit frame picture needs to be specified during editing as the drop point of the key frame picture within the edit frame picture; this drop point is the coordinate position specified by the user.
In this embodiment, editing the video means editing a special effect animation onto the video. In this embodiment, a special effect animation refers to a short video clip with a certain action scene (such as a falling meteorite or an exploding shell), or an animated caption with a certain motion, but is not limited thereto; in this embodiment the special effect animation can be any video material in a video format.
In this embodiment, a key frame picture is set in the special effect animation. The key frame picture is selected in advance, and is usually the frame with the greatest tension or the turning point of the plot in the special effect animation (for example, when the special effect animation is a bombardment, the frame at which the shell lands and explodes; when the special effect animation is a meteorite impact, the frame at the moment of impact; or when the special effect animation is a multi-character flying caption, the frame at which the characters line up in a straight line). However, the key frame picture is not limited thereto; depending on the application scenario, the key frame picture can be any designated frame of the special effect animation.
S1300: synthesizing the edited video and the special effect animation so that the key frame picture is overlaid at the position coordinates specified by the user in the edit frame picture.
After the drop point coordinates of the key frame picture in the edit frame picture are determined, the key frame picture is overlaid at the position coordinates specified by the user in the edit frame picture, thereby completing the synthesis of the edited video and the special effect animation.
In the above embodiment, during dual-video editing and synthesis, a key frame picture is preset in the special effect animation, and the drop point position of this key frame picture determines where the entire special effect animation is overlaid on the edited video during synthesis; by setting the drop point coordinates, the user freely sets the picture position at which the special effect animation appears in the edited video. In this way, the user freely controls the view position of the special effect animation in the synthesized video, which increases the user's freedom during video editing, makes video editing more entertaining, and offers a better user experience and market prospects.
The drop point coordinates can be propagated to the other frames: the special effect animation consists of multiple frame pictures, and the position of the key frame picture can determine the coverage positions of the other frame pictures of the special effect animation in the edited video. For details, refer to FIG. 2, which is a schematic flowchart of determining the positions of the other frame pictures in the special effect animation according to this embodiment.
As shown in FIG. 2, the following steps are further included after step S1200:
S1211: acquiring preset positional relationship information between each frame picture of the special effect animation and the key frame picture;
The special effect animation consists of multiple frame pictures. The center point of the selected key frame picture is taken as the coordinate origin; when the special effect animation is built, the positional relationship of the other frame pictures of the special effect animation relative to the coordinate origin, that is, the relationship between the coordinates of the other frame pictures and the origin coordinates, is calculated. For example, if the coordinates of a frame picture adjacent to the key frame picture are [2, 2], the position of that frame picture is obtained by offsetting the key frame picture by two units along each axis.
After the drop point coordinates of the key frame picture are determined, the pre-stored positional relationship information can be acquired by accessing the position-information storage location of the special effect animation.
S1212: calculating the coverage coordinates of each frame picture of the special effect animation according to the drop point coordinates and the positional relationship information;
Since every frame picture of the edited video has the same size, the coordinates of the other frame pictures calculated from the drop point coordinates can be applied directly to the other frame pictures of the edited video.
Therefore, the coverage coordinates of each frame picture of the special effect animation are calculated according to the drop point coordinates and the positional relationship information. For example, if the coordinates of a frame picture adjacent to the key frame picture are [2, 2], the coordinates of the key frame picture in the original setting are [0, 0], and the determined drop point coordinates of the key frame picture are [100, 200], then the corresponding coverage coordinates of that frame picture are [102, 202].
S1213: determining the coverage position of each frame picture of the special effect animation according to the coverage coordinates.
During video editing, at the positions covered by the special effect animation, each frame of the special effect animation corresponds to one frame of the edited video, and every frame of the edited video has the same area. Therefore, the calculated coverage coordinates can be used directly in the edited-video frame picture corresponding to each frame picture of the special effect animation. The coverage position of each frame picture of the special effect animation is determined according to the coverage coordinates and the area of the picture.
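The following Kotlin fragment is a minimal sketch of steps S1211 to S1213, under the assumption that the positional relationship information is stored as a per-frame offset from the key frame picture taken as origin; the names (Point, Offset, computeCoverageCoordinates) are illustrative and not part of the specification.

```kotlin
// Hypothetical sketch of S1211-S1213: each frame of the special effect
// animation stores its offset from the key frame picture, and the key
// frame itself has offset [0, 0].

data class Point(val x: Float, val y: Float)
data class Offset(val dx: Float, val dy: Float)

/**
 * Given the drop point coordinates chosen by the user and the preset
 * offsets of every frame relative to the key frame picture, returns the
 * coverage coordinates of every frame of the special effect animation.
 */
fun computeCoverageCoordinates(dropPoint: Point, offsets: List<Offset>): List<Point> =
    offsets.map { Point(dropPoint.x + it.dx, dropPoint.y + it.dy) }

// Example matching the text: a frame with offset [2, 2] relative to the key
// frame and a drop point of [100, 200] is covered at [102, 202].
fun main() {
    val coverage = computeCoverageCoordinates(
        dropPoint = Point(100f, 200f),
        offsets = listOf(Offset(0f, 0f), Offset(2f, 2f))
    )
    println(coverage)  // [Point(x=100.0, y=200.0), Point(x=102.0, y=202.0)]
}
```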
In this embodiment, the coverage position of every frame picture of the entire special effect animation is calculated from the drop point coordinates of the key frame picture, so that the coverage position of the entire special effect animation is controlled through the drop point coordinates of the key frame picture, while the complexity of editing is reduced and operation is made more convenient for the user.
In some embodiments, in order to enable the user to determine the drop point coordinates more intuitively, an anchor point for marking the drop point coordinates is generated in the edit frame picture in the video editing state. For details, refer to FIG. 3 and FIG. 4: FIG. 3 is a schematic diagram of one way of displaying an anchor point generated in the edit frame picture according to this embodiment, and FIG. 4 is a schematic flowchart of determining the drop point coordinates by means of the anchor point according to this embodiment.
As shown in FIG. 4, the following steps are further included before step S1100:
S1011: acquiring a first click instruction from the user, and calculating the coordinates of the first click instruction;
As shown in FIG. 3, in the video editing state, an anchor point for marking the drop point coordinates is generated in the edit frame picture. The anchor point is specifically designed as a sniper-scope style anchor point, that is, with an outer ring indicating the range of the anchor point and an origin point located at its exact center. The form of the anchor point is not limited thereto, however; depending on the application scenario, the anchor point can be designed as (but not limited to) a circle, a ring, a triangle, another polygon, or another pattern, and can also be replaced by a cartoon pattern or another silhouette pattern depending on the application scenario.
In the anchor point generation state, the smart mobile terminal acquires the user's first click instruction and calculates the coordinates specified by the user's click instruction.
S1012: checking whether the coordinates of the first click instruction fall within the coordinate area of the anchor point;
The coordinate area of the anchor point shape is the set of all coordinates located within the outer ring of the anchor point. After the coordinates of the user's click instruction are acquired, whether the coordinates specified by the user fall within the coordinate set of the anchor point is determined by comparison. If not, the user has not issued an instruction to change the drop point coordinates; if so, the user has issued an instruction to adjust the drop point coordinates, and step S1023 is executed.
S1023: when the coordinates of the first click instruction fall within the coordinate area of the anchor point, updating the position of the anchor point following the user's slide instruction so as to update the drop point coordinates.
When the coordinates of the first click instruction fall within the coordinate area of the anchor point, it is determined that the user has instructed an adjustment of the drop point coordinates. At this time, the anchor point moves with the user's sliding track, and after the user's slide instruction ends, the coordinate position of the anchor point center at the new position is acquired; this coordinate position of the anchor point center is the updated drop point coordinates.
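A minimal Kotlin sketch of this hit test and drag update is given below, assuming a circular anchor defined by its center and outer-ring radius; the Anchor class and its members are illustrative only.

```kotlin
data class Point(val x: Float, val y: Float)

/** A circular anchor whose outer ring bounds the tappable coordinate area. */
class Anchor(var center: Point, val radius: Float) {

    /** S1012: does the first click fall within the anchor's coordinate area? */
    fun contains(p: Point): Boolean {
        val dx = p.x - center.x
        val dy = p.y - center.y
        return dx * dx + dy * dy <= radius * radius
    }

    /**
     * S1023: move the anchor with the slide track; the drop point
     * coordinates are the anchor center when the slide ends.
     */
    fun dragTo(lastTrackPoint: Point): Point {
        center = lastTrackPoint
        return center
    }
}
```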
With the above implementation, the user can adjust the anchor point more intuitively. At the same time, the user-instruction confirmation procedure avoids the problem of the user being unable to perform other editing activities while setting the drop point coordinates: because the drop point coordinates can only be adjusted when the tap falls within the coordinate area of the anchor point, the user can perform other operations on the video when not tapping the anchor point, which makes editing more convenient.
In some embodiments, the user needs to determine one of the multiple frame pictures of the edited video as the edit frame picture. For details, refer to FIG. 5 and FIG. 6: FIG. 5 is a schematic diagram of the first editing area and the frame progress bar display area according to this embodiment, and FIG. 6 is a schematic flowchart of one implementation of selecting the edit frame picture according to this embodiment.
As shown in FIG. 5, the editing area in the video editing state includes a first editing area and a frame progress bar; the first editing area displays the frame picture of the edited video represented at the stop time of the frame progress bar.
Specifically, the first editing area is located above the frame progress bar, and the first editing area is a frame scaled proportionally to the display area. The frame progress bar is the timeline of the edited video, which consists of a number of frame picture thumbnails arranged along the timeline. The first editing area displays the frame picture of the edited video represented at the stop time of the frame progress bar. For example, if the frame progress bar stops at the 03:35 position, the first editing area displays the frame picture of the edited video at that moment as the edit frame picture.
As shown in FIG. 6, the following steps are further included before step S1100:
S1021: acquiring a click or slide instruction issued by the user within the range of the frame progress bar;
In the anchor point display state, the smart mobile terminal acquires the user's click or slide instruction.
S1022: determining the stop time of the frame progress bar according to the click or slide instruction within the range of the frame progress bar;
The coordinate range of the frame progress bar is the set of all coordinates located within the frame progress bar area. After the coordinates of the user's click instruction or slide instruction are acquired, whether the coordinates specified by the user fall within the coordinate set of the frame progress bar is determined by comparison. If not, the user has not issued an instruction to change the edit frame picture; if so, the user has issued an instruction to change the edit frame picture.
After the click or slide instruction applied by the user on the frame progress bar is received, the user's stop time on the progress bar is determined according to the instruction; the frame picture of the edited video represented at that time is the edit frame picture selected by the user.
S1023: retrieving the frame picture represented by the stop time of the frame progress bar as the edit frame picture.
The stop time of the user's instruction on the frame progress bar is confirmed, the frame picture represented at that stop time is retrieved, and that frame picture is displayed in the first editing area.
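As an illustrative sketch only, the Kotlin fragment below maps a touch position on a horizontally laid-out frame progress bar to a stop time and a frame index; the bar geometry parameters (barStartX, barWidth) and the function names are assumptions, not part of the specification.

```kotlin
/**
 * Hypothetical sketch of S1021-S1023: mapping a tap or slide end-point on a
 * horizontal frame progress bar to a stop time, and from the stop time to
 * the frame used as the edit frame picture.
 */
fun stopTimeSeconds(touchX: Float, barStartX: Float, barWidth: Float,
                    videoDurationSec: Float): Float {
    val fraction = ((touchX - barStartX) / barWidth).coerceIn(0f, 1f)
    return fraction * videoDurationSec
}

/** Index of the frame represented at the stop time, given the frame rate. */
fun editFrameIndex(stopTimeSec: Float, frameRate: Float): Int =
    (stopTimeSec * frameRate).toInt()
```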
As shown in FIG. 5, in some optional implementations, the anchor point can be displayed in the first editing area while the frame progress bar is provided.
In some embodiments, after a special effect animation is added, a slider bar representing the duration of the special effect animation is added on the frame progress bar, and an instruction bar indicating the position of the key frame picture is set on the slider bar. For details, refer to FIG. 7 and FIG. 8: FIG. 7 is a schematic diagram of a display area provided with the slider bar and the instruction bar according to this embodiment, and FIG. 8 is a schematic flowchart of another implementation of selecting the edit frame picture according to this embodiment.
As shown in FIG. 7, the frame progress bar is provided with a slider bar indicating the duration of the special effect animation, and the slider bar is provided with an instruction bar indicating the position of the key frame picture.
The slider bar is a box representing the duration of the special effect animation; its length corresponds to the proportion the special effect animation occupies on the frame progress bar of the edited video. For example, if the special effect animation lasts 5 s and the edited video lasts 20 s, the length of the slider bar on the frame progress bar is one quarter of the total length of the frame progress bar; if the special effect animation lasts 5 s and the edited video lasts 45 s, the length of the slider bar is one ninth of the total length of the frame progress bar.
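The proportional relationship above can be expressed in one line; the following Kotlin helper is illustrative only and its name and parameters are assumptions.

```kotlin
/**
 * Illustrative only: the slider bar's on-screen length is the effect
 * duration's share of the edited video's duration, applied to the total
 * length of the frame progress bar. 5 s of effect over a 20 s video gives
 * a quarter of the bar; over a 45 s video, a ninth.
 */
fun sliderLengthPx(effectDurationSec: Float, videoDurationSec: Float,
                   progressBarLengthPx: Float): Float =
    progressBarLengthPx * (effectDurationSec / videoDurationSec)
```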
The instruction bar is set on the slider bar and is used to indicate the position of the key frame picture within the special effect animation. The instruction bar is designed with an indicating mark such as an arrow, for example (but not limited to) a sniper-scope style mark or a triangular arrow.
As shown in FIG. 8, the following steps are further included before step S1100:
S1031: acquiring a slide instruction applied by the user within the range of the slider bar, so that the slider bar slides along the frame progress bar with the slide instruction;
The slide instruction applied by the user within the range of the slider bar is acquired so that the slider bar can slide following the user's slide instruction.
S1032: determining, according to the slide instruction applied by the user within the range of the slider bar, the stop time of the frame progress bar pointed to by the instruction bar;
After the click or slide instruction applied by the user on the frame progress bar is received, the user's stop time on the frame progress bar is determined according to the instruction; the frame picture of the edited video represented at that time is the edit frame picture selected by the user.
S1033: displaying, in the first editing area, the frame picture represented by the stop time of the frame progress bar pointed to by the instruction bar.
After the user stops sliding the slider bar, the frame picture represented at the time on the frame progress bar to which the instruction bar points at that moment is retrieved as the edit frame picture; that is, the first editing area always displays the image represented by the frame progress bar at the time aligned with the instruction bar.
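A hedged Kotlin sketch of S1031 to S1033 follows, under the assumption that the instruction bar sits at a fixed fraction of the slider bar corresponding to the key frame's position inside the effect animation; the EffectSlider class and all its parameters are hypothetical.

```kotlin
/**
 * Hypothetical sketch of S1031-S1033: dragging the slider bar along the
 * frame progress bar and deriving the stop time to which the instruction
 * bar points. keyFrameFraction is the key frame's position inside the
 * effect animation (0.0 = first frame, 1.0 = last frame).
 */
class EffectSlider(
    private val barLengthPx: Float,
    private val sliderLengthPx: Float,
    private val keyFrameFraction: Float,
    private val videoDurationSec: Float
) {
    /** Left edge of the slider bar on the frame progress bar, in pixels. */
    var sliderStartPx: Float = 0f
        private set

    /** S1031: follow the user's slide, clamped to the frame progress bar. */
    fun dragTo(px: Float) {
        sliderStartPx = px.coerceIn(0f, barLengthPx - sliderLengthPx)
    }

    /** S1032: the stop time pointed to by the instruction bar. */
    fun instructionBarTimeSec(): Float {
        val instructionPx = sliderStartPx + keyFrameFraction * sliderLengthPx
        return instructionPx / barLengthPx * videoDurationSec
    }
    // S1033: the first editing area then displays the frame at this time.
}
```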
With the above method, the user can intuitively adjust the position of the entire special effect animation on the frame progress bar and can also intuitively see the playback position of the key frame picture on the frame progress bar, which makes the operation more intuitive.
In some embodiments, the editing effect needs to be previewed after editing is completed. For details, refer to FIG. 9, which is a schematic flowchart of the method for previewing the edited video result according to this embodiment.
As shown in FIG. 9, the following steps are further included after step S1200:
S1221: placing the edited video and the special effect animation on two parallel time tracks respectively;
During preview, the edited video and the special effect animation are placed on two parallel time tracks, and the time track of the special effect animation is always above the time track of the edited video, so that the special effect animation is always displayed on the upper layer of the edited video.
S1222: when the edited video is played to the start time of the special effect animation, playing the special effect animation synchronously, the special effect animation being displayed on the upper layer of the edited video.
During playback, when both the edited video and the special effect animation need to be played, the frame pictures of the edited video and of the special effect animation at the same moment are read at the same time; the two frame pictures are rendered together and then placed in the video memory of the smart mobile terminal. For display, the overlaid, rendered frame picture is retrieved and displayed, thereby completing the presentation of the two overlaid videos.
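The following Kotlin sketch illustrates this preview path under stated assumptions; Frame, Track, decodeFrameAt, and composite are hypothetical placeholders for the terminal's decoding and rendering facilities, not APIs defined by the specification.

```kotlin
/**
 * Illustrative sketch: the edited video and the effect animation sit on two
 * parallel time tracks, and whenever the playhead is inside the effect's
 * time range, the two frames at the same moment are composited with the
 * effect on the upper layer.
 */
class Frame  // placeholder for a decoded frame picture

class Track(val startSec: Float, val durationSec: Float,
            val decodeFrameAt: (Float) -> Frame)

fun renderPreviewFrame(
    videoTrack: Track,
    effectTrack: Track,
    playheadSec: Float,
    composite: (lower: Frame, upper: Frame) -> Frame
): Frame {
    val videoFrame = videoTrack.decodeFrameAt(playheadSec)
    val inEffect = playheadSec >= effectTrack.startSec &&
            playheadSec < effectTrack.startSec + effectTrack.durationSec
    if (!inEffect) return videoFrame
    // Read the effect frame at the same moment and draw it on the upper layer.
    val effectFrame = effectTrack.decodeFrameAt(playheadSec - effectTrack.startSec)
    return composite(videoFrame, effectFrame)
}
```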
By playing back to the start time of the special effect animation during preview and simultaneously retrieving the frame pictures of the two videos at the same moment for overlaid rendering, the user can preview the editing effect, which facilitates the user's review and helps improve the editing result of the video.
In some embodiments, the editing effect can be quickly deleted in the preview state. For details, refer to FIG. 10, which is a schematic flowchart of undoing the editing effect according to this embodiment.
As shown in FIG. 10, the following steps are further included after step S1222:
S1231: acquiring an undo instruction from the user;
In the above preview state, an undo instruction from the user is acquired; the user issues the undo instruction by tapping a specific location (an undo button) in the display area of the smart mobile terminal.
S1232: deleting the temporarily stored special effect animations in a stack manner.
When the smart mobile terminal stores the special effect animations added to the edited video, they are saved in a stack, which is characterized by first in, last out. Since multiple special effect animations can be set on the same edited video, they are stored by being pushed onto the stack; when undoing, the temporarily stored special effect animations are also deleted in stack order, that is, the special effect animation that entered the temporary storage space last is deleted first, and the special effect animation that entered the temporary storage space first is deleted last.
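A minimal sketch of this stack-based undo, using the Kotlin standard library ArrayDeque, is given below; the class name EffectStack is illustrative only.

```kotlin
/**
 * Minimal sketch of the stack-based undo described above: effects added to
 * the edited video are pushed in order, and each undo instruction removes
 * the most recently added effect (first in, last out).
 */
class EffectStack<T> {
    private val effects = ArrayDeque<T>()

    /** Store a newly applied special effect animation. */
    fun push(effect: T) = effects.addLast(effect)

    /** S1231/S1232: undo removes the effect that was added last. */
    fun undo(): T? = effects.removeLastOrNull()
}
```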
To solve the above technical problem, an embodiment of the present invention further provides a video special effect adding apparatus. For details, refer to FIG. 11, which is a block diagram of the basic structure of the video special effect adding apparatus according to this embodiment.
As shown in FIG. 11, a video special effect adding apparatus includes an acquisition module 2100, a processing module 2200, and a synthesis module 2300. The acquisition module 2100 is configured to acquire a click instruction or a slide instruction from the user in a video editing state; the processing module 2200 is configured to acquire, according to the click instruction or slide instruction, the position coordinates specified by the user in the edit frame picture and use the position coordinates as the drop point coordinates of a key frame picture in the special effect animation; the synthesis module 2300 is configured to synthesize the edited video and the special effect animation so that the key frame picture is overlaid at the position coordinates specified by the user in the edit frame picture.
During dual-video editing and synthesis, the video special effect adding apparatus uses a key frame picture preset in the special effect animation; the drop point position of this picture determines where the entire special effect animation is overlaid on the edited video during synthesis, and by setting the drop point coordinates the user freely sets the picture position at which the special effect animation appears in the edited video. In this way, the user freely controls the view position of the special effect animation in the synthesized video, which increases the user's freedom during video editing, makes video editing more entertaining, and offers a better user experience and market prospects.
In some embodiments, an anchor point for marking the drop point coordinates is generated in the edit frame picture in the video editing state. The video special effect adding apparatus further includes a first acquisition submodule, a first comparison submodule, and a first update submodule. The first acquisition submodule is configured to acquire a first click instruction from the user and calculate the coordinates of the first click instruction; the first comparison submodule is configured to check whether the coordinates of the first click instruction fall within the coordinate area of the anchor point; the first update submodule is configured to, when the coordinates of the first click instruction fall within the coordinate area of the anchor point, update the position of the anchor point following the user's slide instruction so as to update the drop point coordinates.
In some embodiments, the editing area in the video editing state includes a first editing area and a frame progress bar; the first editing area displays the frame picture of the edited video represented at the stop time of the frame progress bar. The video special effect adding apparatus further includes a second acquisition submodule, a first calculation submodule, and a first retrieval submodule. The second acquisition submodule is configured to acquire a click or slide instruction issued by the user within the range of the frame progress bar; the first calculation submodule is configured to determine the stop time of the frame progress bar according to the click or slide instruction within the range of the frame progress bar; the first retrieval submodule is configured to retrieve the frame picture represented by the stop time of the frame progress bar as the edit frame picture.
In some embodiments, the frame progress bar is provided with a slider bar indicating the duration of the special effect animation, and the slider bar is provided with an instruction bar indicating the position of the key frame picture. The video special effect adding apparatus further includes a third acquisition submodule, a second calculation submodule, and a first display submodule. The third acquisition submodule is configured to acquire a slide instruction applied by the user within the range of the slider bar, so that the slider bar slides along the frame progress bar with the slide instruction; the second calculation submodule is configured to determine, according to the slide instruction applied by the user within the range of the slider bar, the stop time of the frame progress bar pointed to by the instruction bar; the first display submodule is configured to display, in the first editing area, the frame picture represented by the stop time of the frame progress bar pointed to by the instruction bar.
In some embodiments, the video special effect adding apparatus further includes a first setting submodule and a first preview submodule. The first setting submodule is configured to place the edited video and the special effect animation on two parallel time tracks respectively; the first preview submodule is configured to, when the edited video is played to the start time of the special effect animation, play the special effect animation synchronously, the special effect animation being displayed on the upper layer of the edited video.
In some embodiments, the video special effect adding apparatus further includes a fourth acquisition submodule and a first undo submodule. The fourth acquisition submodule is configured to acquire an undo instruction from the user; the first undo submodule is configured to delete the temporarily stored special effect animations in a stack manner.
In some embodiments, the video special effect adding apparatus further includes a fifth acquisition submodule, a third calculation submodule, and a first determination submodule. The fifth acquisition submodule is configured to acquire preset positional relationship information between each frame picture of the special effect animation and the key frame picture; the third calculation submodule is configured to calculate the coverage coordinates of each frame picture of the special effect animation according to the drop point coordinates and the positional relationship information; the first determination submodule is configured to determine the coverage position of each frame picture of the special effect animation according to the coverage coordinates.
This embodiment further provides a smart mobile terminal. For details, refer to FIG. 12, which is a schematic diagram of the basic structure of the smart mobile terminal according to this embodiment.
It should be noted that in this embodiment, the memory 1520 of the smart mobile terminal stores all the programs for implementing the video special effect adding method of this embodiment, and the processor 1580 can call the programs in the memory 1520 to perform all the functions listed for the above video special effect adding method. Since the functions implemented by the smart mobile terminal have been described in detail in the video special effect adding method of this embodiment, they are not described again here.
During dual-video editing and synthesis, the smart mobile terminal uses a key frame picture preset in the special effect animation; the drop point position of this picture determines where the entire special effect animation is overlaid on the edited video during synthesis, and by setting the drop point coordinates the user freely sets the picture position at which the special effect animation appears in the edited video. In this way, the user freely controls the view position of the special effect animation in the synthesized video, which increases the user's freedom during video editing, makes video editing more entertaining, and offers a better user experience and market prospects.
An embodiment of the present invention further provides a smart mobile terminal. As shown in FIG. 12, for ease of description, only the parts related to the embodiment of the present invention are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present invention. The terminal may be any terminal device, including a smart mobile terminal, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, and the like; the case in which the terminal is a smart mobile terminal is taken as an example:
FIG. 12 is a block diagram of a partial structure of a smart mobile terminal related to the terminal provided by an embodiment of the present invention. Referring to FIG. 12, the smart mobile terminal includes components such as a radio frequency (RF) circuit 1510, a memory 1520, an input unit 1530, a display unit 1540, a sensor 1550, an audio circuit 1560, a wireless fidelity (Wi-Fi) module 1570, a processor 1580, and a power supply 1590. Those skilled in the art can understand that the smart mobile terminal structure shown in FIG. 12 does not constitute a limitation on the smart mobile terminal, which may include more or fewer components than those illustrated, combine certain components, or have a different arrangement of components.
The components of the smart mobile terminal are described in detail below with reference to FIG. 12:
The RF circuit 1510 can be used for receiving and transmitting signals while sending and receiving information or during a call. In particular, after receiving downlink information from a base station, the RF circuit 1510 passes it to the processor 1580 for processing; in addition, it sends uplink data to the base station. Generally, the RF circuit 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1510 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1520 can be used to store software programs and modules, and the processor 1580 executes the various functional applications and data processing of the smart mobile terminal by running the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playback function or an image playback function), and the like, and the data storage area may store data created according to the use of the smart mobile terminal (such as audio data or a phone book). In addition, the memory 1520 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 1530 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the smart mobile terminal. Specifically, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1531 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch panel 1531 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1580, and can receive and execute commands sent by the processor 1580. In addition, the touch panel 1531 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1531, the input unit 1530 may further include other input devices 1532. Specifically, the other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, a joystick, and the like.
The display unit 1540 can be used to display information input by the user or information provided to the user, as well as the various menus of the smart mobile terminal. The display unit 1540 may include a display panel 1541; optionally, the display panel 1541 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1531 may cover the display panel 1541; after detecting a touch operation on or near it, the touch panel 1531 transmits the operation to the processor 1580 to determine the type of the touch event, and the processor 1580 then provides a corresponding visual output on the display panel 1541 according to the type of the touch event. Although in FIG. 12 the touch panel 1531 and the display panel 1541 implement the input and output functions of the smart mobile terminal as two independent components, in some embodiments the touch panel 1531 and the display panel 1541 may be integrated to implement the input and output functions of the smart mobile terminal.
智能移动终端还可包括至少一种传感器1550,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板1541的亮度,接近传感器可在智能移动终端移动到耳边时,关闭显示面板1541和/或背光。作为运动传感器的一种,加速计传感器 可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别智能移动终端姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于智能移动终端还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。The smart mobile terminal may also include at least one type of sensor 1550, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1541 according to the brightness of the ambient light, and the proximity sensor may close the display panel 1541 when the smart mobile terminal moves to the ear. And / or backlight. As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity. It can be used to identify the posture of smart mobile terminals (such as horizontal and vertical screen switching). , related games, magnetometer attitude calibration), vibration recognition related functions (such as pedometer, tapping), etc.; as well as other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that smart mobile terminals can also configure, I will not repeat them here.
The audio circuit 1560, a speaker 1561, and a microphone 1562 may provide an audio interface between the user and the smart mobile terminal. The audio circuit 1560 may transmit the electrical signal converted from the received audio data to the speaker 1561, which converts it into a sound signal for output; on the other hand, the microphone 1562 converts the collected sound signal into an electrical signal, which is received by the audio circuit 1560 and converted into audio data. After the audio data is processed by the processor 1580, it is sent via the RF circuit 1510 to, for example, another smart mobile terminal, or output to the memory 1520 for further processing.
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 1570, the smart mobile terminal can help the user send and receive e-mail, browse web pages, and access streaming media, providing the user with wireless broadband Internet access. Although FIG. 12 shows the Wi-Fi module 1570, it can be understood that it is not an essential part of the smart mobile terminal and may be omitted as needed without changing the essence of the invention.
The processor 1580 is the control center of the smart mobile terminal. It connects the various parts of the entire smart mobile terminal through various interfaces and lines, and performs the various functions of the smart mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 1520 and invoking the data stored in the memory 1520, thereby monitoring the smart mobile terminal as a whole. Optionally, the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1580.
The smart mobile terminal also includes a power supply 1590 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the processor 1580 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
Although not shown, the smart mobile terminal may further include a camera, a Bluetooth module, and the like, which are not described here again.
It should be noted that the specification of the present invention and its accompanying drawings present preferred embodiments of the invention; however, the invention can be implemented in many different forms and is not limited to the embodiments described in this specification. These embodiments are not intended as additional limitations on the content of the invention; they are provided so that the disclosure of the invention will be understood more thoroughly and comprehensively. Moreover, the technical features described above may be further combined with one another to form various embodiments not enumerated above, all of which are regarded as falling within the scope of the description of the present invention. Further, a person of ordinary skill in the art can make improvements or variations according to the above description, and all such improvements and variations shall fall within the protection scope of the appended claims of the present invention.

Claims (10)

  1. A video special effect adding method, comprising the following steps:
    acquiring a click instruction or a slide instruction of a user in a video editing state;
    acquiring, according to the click instruction or the slide instruction, a position coordinate specified by the user in an edit frame picture, and using the position coordinate as a drop-point coordinate of a key frame picture in a special effect animation;
    compositing an edit video and the special effect animation, so that the key frame picture is overlaid at the position coordinate specified by the user in the edit frame picture.
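As a non-limiting illustration only, the following Kotlin sketch outlines the three steps of claim 1: the user's click or slide supplies a position in the edit frame picture, that position is stored as the drop-point coordinate of the key frame of the effect animation, and compositing overlays the key frame at that coordinate. The type and function names (Point, Frame, EffectAnimation, overlayAt) are hypothetical.

```kotlin
data class Point(val x: Float, val y: Float)
data class Frame(val id: Int)                       // stand-in for real image data

class EffectAnimation(val keyFrame: Frame) {
    var dropPoint: Point? = null                    // drop-point coordinate of the key frame
}

// Steps 1-2: the position specified by the click/slide becomes the drop point.
fun onUserGesture(positionInEditFrame: Point, effect: EffectAnimation) {
    effect.dropPoint = positionInEditFrame
}

// Step 3: composite so the key frame is overlaid at the user-specified coordinate.
fun compose(
    editFrame: Frame,
    effect: EffectAnimation,
    overlayAt: (base: Frame, top: Frame, at: Point) -> Frame
): Frame {
    val at = effect.dropPoint ?: return editFrame   // no effect placed yet
    return overlayAt(editFrame, effect.keyFrame, at)
}
```

For example, calling onUserGesture(Point(120f, 340f), effect) and then compose(currentFrame, effect, blend) would overlay the key frame at the tapped position, where blend stands for whatever compositing routine an implementation provides (a hypothetical name).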
  2. The video special effect adding method according to claim 1, wherein an anchor point for marking the drop-point coordinate is generated in the edit frame picture in the video editing state;
    before the step of acquiring a click instruction or a slide instruction of the user in the video editing state, the method further comprises the following steps:
    acquiring a first click instruction of the user, and calculating coordinates of the first click instruction;
    comparing whether the coordinates of the first click instruction fall within the coordinate area of the anchor point;
    when the coordinates of the first click instruction fall within the coordinate area of the anchor point, updating the position of the anchor point along with a slide instruction of the user, so as to update the drop-point coordinate.
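A possible realization of the anchor-point handling of claim 2 is sketched below in Kotlin: the first click is hit-tested against the anchor's coordinate area, and only if it falls inside does a subsequent slide move the anchor and thereby update the drop-point coordinate. The square hit area and all names are assumptions made for illustration.

```kotlin
data class Anchor(var x: Float, var y: Float, val hitRadius: Float = 40f)

class AnchorController(private val anchor: Anchor) {
    private var dragging = false

    // First click: check whether it lands inside the anchor's coordinate area.
    fun onFirstClick(x: Float, y: Float) {
        dragging = kotlin.math.abs(x - anchor.x) <= anchor.hitRadius &&
                   kotlin.math.abs(y - anchor.y) <= anchor.hitRadius
    }

    // Slide: the anchor follows the gesture, which updates the drop-point coordinate.
    fun onSlide(x: Float, y: Float): Pair<Float, Float>? {
        if (!dragging) return null          // the click landed outside the anchor area
        anchor.x = x
        anchor.y = y
        return x to y                       // the updated drop-point coordinate
    }

    fun onRelease() { dragging = false }
}
```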
  3. The video special effect adding method according to claim 1, wherein, in the video editing state, an editing area comprises a first editing area and a frame progress bar; the first editing area displays the frame picture image of the edit video represented at the stop time of the frame progress bar;
    before the step of acquiring a click instruction or a slide instruction of the user in the video editing state, the method further comprises the following steps:
    acquiring a click or slide instruction of the user within the range of the frame progress bar;
    determining the stop time of the frame progress bar according to the click or slide instruction within the range of the frame progress bar;
    retrieving the frame picture image represented at the stop time of the frame progress bar as the edit frame picture.
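The frame-selection behaviour of claim 3 may be sketched as follows, assuming a simple linear mapping between the pixel position on the frame progress bar and the playback time; frameAt is a hypothetical function that retrieves the frame represented at a given time.

```kotlin
class FrameProgressBar(private val widthPx: Float, private val videoDurationMs: Long) {
    var stopTimeMs: Long = 0
        private set

    // A click or slide at horizontal position xPx selects the stop time.
    fun onPointer(xPx: Float) {
        val clamped = xPx.coerceIn(0f, widthPx)
        stopTimeMs = (clamped / widthPx * videoDurationMs).toLong()
    }
}

// Retrieve the frame represented at the stop time as the edit frame picture.
fun <F> editFrameFor(bar: FrameProgressBar, frameAt: (timeMs: Long) -> F): F =
    frameAt(bar.stopTimeMs)
```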
  4. The video special effect adding method according to claim 3, wherein the frame progress bar is provided with a slider bar marking the duration of the special effect animation, and the slider bar is provided with an instruction bar indicating the position of the key frame picture;
    before the step of acquiring a click instruction or a slide instruction of the user in the video editing state, the method further comprises the following steps:
    acquiring a slide instruction applied by the user within the range of the slider bar, so that the slider bar slides along the frame progress bar with the slide instruction;
    determining, according to the slide instruction applied by the user within the range of the slider bar, the stop time of the frame progress bar pointed to by the instruction bar;
    displaying, in the first editing area, the frame picture image represented at the stop time of the frame progress bar pointed to by the instruction bar.
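The slider and instruction-bar behaviour of claim 4 may be sketched as follows; the linear pixel-to-time mapping, the millisecond timeline, and the field names are assumptions made only for illustration.

```kotlin
class EffectSlider(
    private val progressBarWidthPx: Float,
    private val videoDurationMs: Long,
    private val effectDurationMs: Long,
    private val keyFrameOffsetMs: Long              // key-frame position inside the effect
) {
    private var effectStartMs: Long = 0

    // Dragging the slider moves the effect window along the frame progress bar.
    fun onDrag(sliderLeftPx: Float) {
        val maxLeftPx = (progressBarWidthPx *
            (1f - effectDurationMs.toFloat() / videoDurationMs)).coerceAtLeast(0f)
        val clamped = sliderLeftPx.coerceIn(0f, maxLeftPx)
        effectStartMs = (clamped / progressBarWidthPx * videoDurationMs).toLong()
    }

    // Stop time of the progress bar pointed to by the instruction bar.
    fun instructionBarTimeMs(): Long = effectStartMs + keyFrameOffsetMs
}
```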
  5. The video special effect adding method according to claim 1, wherein, after the step of acquiring, according to the click instruction or the slide instruction, the position coordinate specified by the user in the edit frame picture and using the position coordinate as the drop-point coordinate of a key frame picture in the special effect animation, the method further comprises the following steps:
    placing the edit video and the special effect animation on two parallel time tracks respectively;
    when the edit video is played to the start time of the special effect animation, playing the special effect animation synchronously, with the special effect animation displayed on the layer above the edit video.
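A minimal sketch of the parallel-track playback of claim 5, assuming a millisecond timeline and a hypothetical Track type: once the playhead reaches the effect's start time, a frame from the effect track is returned in addition to the video frame, in bottom-to-top draw order so that the effect appears on the upper layer.

```kotlin
class Track<F>(val startMs: Long, val durationMs: Long, val frameAt: (localMs: Long) -> F)

// Returns the layers to draw at the given playhead position, bottom-to-top.
fun <F> framesAt(playheadMs: Long, video: Track<F>, effect: Track<F>): List<F> {
    val layers = mutableListOf(video.frameAt(playheadMs - video.startMs))
    val local = playheadMs - effect.startMs
    if (local >= 0 && local < effect.durationMs) {
        layers += effect.frameAt(local)   // effect drawn last, i.e. on the upper layer
    }
    return layers
}
```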
  6. The video special effect adding method according to claim 5, wherein, after the step of playing the special effect animation synchronously when the edit video is played to the start time of the special effect animation, with the special effect animation displayed on the layer above the edit video, the method further comprises the following steps:
    acquiring an undo instruction of the user;
    deleting the temporarily stored special effect animations in a stack manner.
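The undo behaviour of claim 6 may be sketched with an ordinary stack, so that the most recently added effect is removed first; ArrayDeque is used here purely as a convenient stack implementation, and the names are illustrative.

```kotlin
class EffectUndoStack<E> {
    private val applied = ArrayDeque<E>()

    fun push(effect: E) = applied.addLast(effect)      // record each newly added effect

    fun undo(): E? = applied.removeLastOrNull()        // delete the latest effect, if any
}
```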
  7. The video special effect adding method according to claim 1, wherein, after the step of acquiring, according to the click instruction or the slide instruction, the position coordinate specified by the user in the edit frame picture and using the position coordinate as the drop-point coordinate of a key frame picture in the special effect animation, the method further comprises the following steps:
    acquiring preset positional relationship information between each frame picture of the special effect animation and the key frame picture;
    calculating coverage coordinates of each frame picture of the special effect animation according to the drop-point coordinate and the positional relationship information;
    determining the coverage position of each frame picture of the special effect animation according to the coverage coordinates.
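A minimal sketch of claim 7, assuming the preset positional relationship is stored as a per-frame offset relative to the key frame; adding each offset to the drop-point coordinate yields the coverage coordinate of the corresponding frame. The names and the additive relationship are assumptions for illustration.

```kotlin
data class Point(val x: Float, val y: Float)
data class Offset(val dx: Float, val dy: Float)

// One coverage coordinate per frame of the effect animation.
fun coverageCoordinates(dropPoint: Point, offsetsPerFrame: List<Offset>): List<Point> =
    offsetsPerFrame.map { o -> Point(dropPoint.x + o.dx, dropPoint.y + o.dy) }
```

For example, with a drop point of (200, 300) and offsets (0, 0), (5, -3) and (10, -6), the three frames would be covered at (200, 300), (205, 297) and (210, 294) respectively.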
  8. A video special effect adding apparatus, comprising:
    an acquisition module, configured to acquire a click instruction or a slide instruction of a user in a video editing state;
    a processing module, configured to acquire, according to the click instruction or the slide instruction, a position coordinate specified by the user in an edit frame picture, and to use the position coordinate as a drop-point coordinate of a key frame picture in a special effect animation;
    a compositing module, configured to composite an edit video and the special effect animation, so that the key frame picture is overlaid at the position coordinate specified by the user in the edit frame picture.
  9. The video special effect adding apparatus according to claim 8, wherein an anchor point for marking the drop-point coordinate is generated in the edit frame picture in the video editing state;
    the video special effect adding apparatus further comprises:
    a first acquisition submodule, configured to acquire a first click instruction of the user and to calculate coordinates of the first click instruction;
    a first comparison submodule, configured to compare whether the coordinates of the first click instruction fall within the coordinate area of the anchor point;
    a first update submodule, configured to update the position of the anchor point along with a slide instruction of the user when the coordinates of the first click instruction fall within the coordinate area of the anchor point, so as to update the drop-point coordinate.
  10. A smart mobile terminal, comprising:
    one or more processors;
    a memory; and
    one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the video special effect adding method according to any one of claims 1 to 7.
PCT/CN2018/118370 2017-11-30 2018-11-30 Video special effect adding method and apparatus, and smart mobile terminal WO2019105438A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711242163.0 2017-11-30
CN201711242163.0A CN108022279B (en) 2017-11-30 2017-11-30 Video special effect adding method and device and intelligent mobile terminal

Publications (1)

Publication Number Publication Date
WO2019105438A1 true WO2019105438A1 (en) 2019-06-06

Family

ID=62077714

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/118370 WO2019105438A1 (en) 2017-11-30 2018-11-30 Video special effect adding method and apparatus, and smart mobile terminal

Country Status (2)

Country Link
CN (1) CN108022279B (en)
WO (1) WO2019105438A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022279B (en) * 2017-11-30 2021-07-06 广州市百果园信息技术有限公司 Video special effect adding method and device and intelligent mobile terminal
CN108734756B (en) * 2018-05-15 2022-03-25 深圳市腾讯网络信息技术有限公司 Animation production method and device, storage medium and electronic device
CN108712661B (en) * 2018-05-28 2022-02-25 广州虎牙信息科技有限公司 Live video processing method, device, equipment and storage medium
CN108958610A (en) * 2018-07-27 2018-12-07 北京微播视界科技有限公司 Special efficacy generation method, device and electronic equipment based on face
CN109040615A (en) * 2018-08-10 2018-12-18 北京微播视界科技有限公司 Special video effect adding method, device, terminal device and computer storage medium
CN110166842B (en) * 2018-11-19 2020-10-16 深圳市腾讯信息技术有限公司 Video file operation method and device and storage medium
CN109379631B (en) * 2018-12-13 2020-11-24 广州艾美网络科技有限公司 Method for editing video captions through mobile terminal
CN110213638B (en) * 2019-06-05 2021-10-08 北京达佳互联信息技术有限公司 Animation display method, device, terminal and storage medium
CN110493630B (en) * 2019-09-11 2020-12-01 广州华多网络科技有限公司 Processing method and device for special effect of virtual gift and live broadcast system
CN111050203B (en) * 2019-12-06 2022-06-14 腾讯科技(深圳)有限公司 Video processing method and device, video processing equipment and storage medium
CN113452929B (en) * 2020-03-24 2022-10-04 北京达佳互联信息技术有限公司 Video rendering method and device, electronic equipment and storage medium
CN111739127A (en) * 2020-06-09 2020-10-02 广联达科技股份有限公司 Method and device for simulating associated motion in mechanical linkage process
CN111756952A (en) * 2020-07-23 2020-10-09 北京字节跳动网络技术有限公司 Preview method, device, equipment and storage medium of effect application
CN111897483A (en) * 2020-08-11 2020-11-06 网易(杭州)网络有限公司 Live broadcast interaction processing method, device, equipment and storage medium
CN114257775B (en) * 2020-09-25 2023-04-07 荣耀终端有限公司 Video special effect adding method and device and terminal equipment
CN112199016B (en) * 2020-09-30 2023-02-21 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113038228B (en) * 2021-02-25 2023-05-30 广州方硅信息技术有限公司 Virtual gift transmission and request method, device, equipment and medium thereof
CN116033181A (en) * 2021-10-26 2023-04-28 脸萌有限公司 Video processing method, device, equipment and storage medium
CN114125555B (en) * 2021-11-12 2024-02-09 深圳麦风科技有限公司 Editing data preview method, terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104423779A (en) * 2013-08-26 2015-03-18 鸿合科技有限公司 Interactive display implementation method and device
CN104780338A (en) * 2015-04-16 2015-07-15 美国掌赢信息科技有限公司 Method and electronic equipment for loading expression effect animation in instant video
CN105844987A (en) * 2016-05-30 2016-08-10 深圳科润视讯技术有限公司 Multimedia teaching interaction operating method and device
WO2016177296A1 (en) * 2015-05-04 2016-11-10 腾讯科技(深圳)有限公司 Video generation method and apparatus
CN108022279A (en) * 2017-11-30 2018-05-11 广州市百果园信息技术有限公司 Special video effect adding method, device and intelligent mobile terminal

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4001882A (en) * 1975-03-12 1977-01-04 Spectra-Vision Corporation Magnetic tape editing, previewing and animating method and system
US20040071453A1 (en) * 2002-10-08 2004-04-15 Valderas Harold M. Method and system for producing interactive DVD video slides
US7434155B2 (en) * 2005-04-04 2008-10-07 Leitch Technology, Inc. Icon bar display for video editing system
US8473846B2 (en) * 2006-12-22 2013-06-25 Apple Inc. Anchor point in media
CN101217638B (en) * 2007-12-28 2012-10-24 深圳市迅雷网络技术有限公司 Downloading method, system and device of video file fragmentation
CN102385613A (en) * 2011-09-30 2012-03-21 广州市动景计算机科技有限公司 Web page positioning method and system
CN103220490A (en) * 2013-03-15 2013-07-24 广东欧珀移动通信有限公司 Special effect implementation method in video communication and video user terminal
CN105609121B (en) * 2014-11-20 2019-03-12 广州酷狗计算机科技有限公司 Multimedia progress monitoring method and device
KR101707203B1 (en) * 2015-09-04 2017-02-15 주식회사 씨지픽셀스튜디오 Transforming method of computer graphic animation file by applying joint rotating value
CN106792078A (en) * 2016-07-12 2017-05-31 乐视控股(北京)有限公司 Method for processing video frequency and device
CN106385591B (en) * 2016-10-17 2020-05-15 腾讯科技(上海)有限公司 Video processing method and video processing device

Also Published As

Publication number Publication date
CN108022279A (en) 2018-05-11
CN108022279B (en) 2021-07-06

Similar Documents

Publication Publication Date Title
WO2019105438A1 (en) Video special effect adding method and apparatus, and smart mobile terminal
WO2021104365A1 (en) Object sharing method and electronic device
WO2016177296A1 (en) Video generation method and apparatus
CN111010510B (en) Shooting control method and device and electronic equipment
WO2021104236A1 (en) Method for sharing photographing parameter, and electronic apparatus
WO2016124095A1 (en) Video generation method, apparatus and terminal
CN107707828B (en) A kind of method for processing video frequency and mobile terminal
JP7393541B2 (en) Video display methods, electronic equipment and media
WO2019037040A1 (en) Method for recording video on the basis of a virtual reality application, terminal device, and storage medium
CN110248251B (en) Multimedia playing method and terminal equipment
WO2019105446A1 (en) Video editing method and device, and smart mobile terminal
US11568899B2 (en) Method, apparatus and smart mobile terminal for editing video
WO2021254429A1 (en) Video recording method and apparatus, electronic device, and storage medium
CN109151546A (en) A kind of method for processing video frequency, terminal and computer readable storage medium
WO2019196929A1 (en) Video data processing method and mobile terminal
JP2021505092A (en) Methods and devices for playing audio data
CN108920239A (en) A kind of long screenshotss method and mobile terminal
CN108769374A (en) A kind of image management method and mobile terminal
US20230015943A1 (en) Scratchpad creation method and electronic device
CN110913261A (en) Multimedia file generation method and electronic equipment
CN109618218A (en) A kind of method for processing video frequency and mobile terminal
CN109445589B (en) Multimedia file playing control method and terminal equipment
CN110941378A (en) Video content display method and electronic equipment
CN113936699B (en) Audio processing method, device, equipment and storage medium
CN111049977B (en) Alarm clock reminding method and electronic equipment

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 18883682
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 EP: PCT application non-entry in European phase
    Ref document number: 18883682
    Country of ref document: EP
    Kind code of ref document: A1